Bitcoin Forum
  Show Posts
1  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 27, 2019, 11:33:03 PM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for a node which does partial state validation would be much lower than if Bitcoin scaled in its current state. That would mean that although there may be fewer people validating full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide on the fate of the protocol rather than a few large pools would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished through on-chain scaling.


More participants partially validating, who won't be part of the whole network, and fewer participants fully validating is centralizing, making the network smaller. It is anti-scaling.

I think you should consider the meaning of centralization more holistically. If I can't go to 7-11 and buy a Coke with Bitcoin, it is not fully decentralized. If I need to have 3rd parties involved in a transaction, it is not fully decentralized. If I need to use centralized exchanges to trade with good liquidity, it is not fully decentralized. If it costs $200 to make a transaction, it is pricing out network participants and small transactions, which is not fully decentralized.

The number of people that use Bitcoin, not just the number of people running nodes, is critical in answering the question of whether it is decentralized. Additionally, to have the largest network with the most participants (most decentralized...?), I would argue that Bitcoin needs to scale on-chain.


Growing node requirements/costs would only make the node count go down, not up. BlockReduce might increase transaction throughput, but it's centralizing.


If there are benefits such as a greater number of users and increased utility at a lower cost, the marginal degree of centralization (fewer fully validating nodes) may very well be worth it. However, I would contend that with a larger user base, even if the cost of running a fully validating node increases, the absolute number of full nodes would likely go up, not down, even if the relative number shrinks.
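To put rough numbers on that last point (all of these figures are made up, purely for illustration):

Code:
# Hypothetical: the absolute full-node count can rise even as the
# fraction of users running one falls.
users_now, ratio_now = 1_000_000, 0.01          # assume 1% run full nodes today
users_scaled, ratio_scaled = 50_000_000, 0.001  # assume 0.1% after on-chain scaling

print(int(users_now * ratio_now))          # 10000 full nodes
print(int(users_scaled * ratio_scaled))    # 50000 full nodes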
2  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 10, 2019, 12:31:42 AM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for a node which does partial state validation would be much lower than if Bitcoin scaled in its current state. That would mean that although there may be fewer people validating full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide on the fate of the protocol rather than a few large pools would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished through on-chain scaling. The requirement that all market participants be fully validating nodes is a flaw, not a virtue. BlockReduce allows a larger number of incrementally more expensive ways of participating in the network while also scaling. I think this is better than an all-or-nothing approach. Additionally, when counting market participants you should consider Bitcoin users, in addition to nodes and miners, as a metric of success.



3  Bitcoin / Development & Technical Discussion / Re: Blockrope - Internal Parrallel Intertwined Blockchains - scaleability related on: December 10, 2019, 12:14:24 AM
A more thought-out version of this idea is BlockReduce.

https://bitcointalk.org/index.php?topic=5060909.0

Overview paper:

https://arxiv.org/pdf/1811.00125.pdf
4  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 07, 2019, 12:07:37 AM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for a node which does partial state validation would be much lower than if Bitcoin scaled in its current state. That would mean that although there may be fewer people validating full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide on the fate of the protocol rather than a few large pools would be positive for the ecosystem.


You don't believe that that will centralize Bitcoin toward the miners? Or you don't believe that users/economic majority should have the ability to run their own full nodes?


I think that people oftentimes fall into tired narratives about the majority of users, fairness, et cetera, without fully considering what any of it really means, or why it might be good or bad. I would argue that if Bitcoin is meant to be censorship resistant and decentralized, it must allow the greatest number of people to use it with the fewest intermediaries possible. Making low-resource validation the primary focus of decentralization misses the point. If even 20% of a population self-custodied Bitcoin which they regularly used for transactions, it would be effectively impossible to censor or outlaw. When we discuss decentralization, the power of the network as it scales should also be a consideration, not just how easily it is validated.
5  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 06, 2019, 11:49:25 PM
Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in the region blocks, they could also achieve greater certainty; however, they would get lower rewards. Running the alternate zone state allows them to have greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.

Thanks for your response! I see now that miners are incentivized to run all of their nodes in the least latent way possible. However, miners might not physically be able to do so without moving their mining operation outside of the zone that they operate in, unless they want to pay a cloud provider to host it for them - which may not work if the cloud provider does not offer server hosting close enough or with proper precision to the geographic location of the zone. Perhaps a business could evolve to host servers in close proximity to every zone and move them around when necessary, kind of like high frequency trading does with the stock market, but even then you'd have the business be a centralizing factor.

In any case, my question is more general. Do you consider it an issue if some of the nodes in a zone are more latent than others? Are there bandwidth concerns with users or miners who run latent nodes? What if I just have a really shitty internet connection - could I be causing bandwidth issues for the network, or am I just causing issues for myself?

Thank you for your responses!

That is a pretty insightful question. However, the answer is pretty simple. With all distributed networks, from things like Napster to Bitcoin, you always have a seed and leech problem. In the case of Napster it is driven by storage space more than bandwidth. In the context of Bitcoin it is driven by bandwidth more than storage. Therefore, if you are a Bitcoin node that has low bandwidth you are slowing the overall network down (leech), whereas if you have high bandwidth you are speeding it up (seed). BlockReduce is no different from Bitcoin in this regard. However, BlockReduce rewards mining in the zone chains, creating an economic incentive for participants who are mining to optimize for latency.
6  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 02, 2019, 02:47:48 PM
Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in the region blocks, they could also achieve greater certainty; however, they would get lower rewards. Running the alternate zone state allows them to have greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.
7  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 02, 2019, 02:37:13 PM

The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to and incentivized to do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".


Tromp, I appreciate the time that you have taken to look at BlockReduce. One thing that I would debate is the use of the word sharding. Although a miner can depend upon a zone block's work as an attestation to the correctness of the included transactions, they are not required to. Much like an SPV node doesn't have to keep the entire chainstate but rather just looks at a block header. This is not sharding per se, but rather a mode of operation that a node can work within to use fewer resources. I would anticipate that serious miners or pools will run and validate full state because they have an economic incentive to do so, while merchants will likely run partial state, much like SPV.

Another way to think about BlockReduce is as a form of multi-level Erlay where "sketches" are sent when a zone or region block is found, rather than after an arbitrary delay. The obvious difference is that actual sub-blocks are found, which are rewarded to incentivize miners to self-organize in a network-optimal way.
8  Economy / Games and rounds / Re: Find 2 ETH hidden in plain sight on: May 07, 2019, 04:24:14 PM
Round 2

The current hints for the 2 ETH are:

May 6th: The CAs
May 7th: I think you should go home now Devin

The previous hints can be disregarded, as the 1 ETH from Round 1 has already been found.
9  Economy / Games and rounds / Re: Find 1 ETH hidden in plain sight on: May 06, 2019, 06:34:27 PM
Here is the solution to last week's puzzle for 1 ETH. This corresponds to this address, which had 1 ETH.

Check out our new puzzle for a chance to win 2 ETH!

10  Economy / Games and rounds / Re: Find 1 ETH hidden in plain sight on: May 03, 2019, 01:54:12 PM
The current hints for finding the 1 ETH are:

May 2nd: ECC
May 3rd: P=Q

Retweet today's hint for a chance to get tomorrow's hint today!
11  Economy / Games and rounds / Find 2 ETH hidden in plain sight on: May 02, 2019, 06:52:26 PM
GridPlus 2 ETH Treasure Hunt

In preparing for the launch of the GridPlus Lattice1, we have set up a little puzzle for everyone. There is 2 ETH hidden in the pattern of the Lattice1 box. Think you can find it? Try your guess at gridplus . If you need a hint: 1 in 10 people that retweet the last hint will get the next hint early. Learn more at GridPlus.io
12  Bitcoin / Development & Technical Discussion / Re: [Scaling] Minisketch - Unmoderated on: January 05, 2019, 07:47:12 PM
Firstly, thank you gmaxwell for taking the time to answer my question.  I very much appreciate it.

Transaction announcements to other peers are already delayed for bandwidth reduction (because announcing many at once takes asymptotically about one fourth the bandwidth: since ip+tcp+bitcoin have overheads similar to the size of one announcement, and the delays usually prevent the same transaction from being announced both ways across a single link) and for privacy reasons. The delays are currently a Poisson process with an expected delay of 5 seconds between a node transmitting to all inbound connecting peers. Outbound connections each see a 2 second expected delay.

Is the proposal with minisketch to keep the same delay that is currently used? Given the current rate of transactions on the network, what would be the anticipated bandwidth savings with minisketch (assuming the delay remains unchanged)?
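For my own intuition, here is a rough model of where the ~4x figure above comes from; the byte counts are my assumptions, not measured values:

Code:
# Rough model of the ~1/4 bandwidth claim: per-message framing overhead
# (ip+tcp+bitcoin) is assumed comparable to one announcement, and without
# delays each tx is assumed to be announced both ways across a link.
ANN = 36       # bytes per announcement (assumed)
OVERHEAD = 36  # bytes of framing per message (assumed, per the quote)

def naive(k):      # k txs announced one-by-one, in both directions
    return 2 * k * (OVERHEAD + ANN)

def delayed(k):    # k announcements batched into one message, one direction
    return OVERHEAD + k * ANN

print(naive(100) / delayed(100))   # -> approaches 4 as the batch grows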


Quote
selfish and resource optimization behavior especially against small nodes
All of the computational work in reconciliation is performed by the party that initiates the reconciliation.  So a third party cannot cause you to expend more resources than you want to expend on it.  We've been doing protocol design assuming the need to support rpi3 nodes without an undue burden; which was part of the reason minisketch was required since prior bandwidth efficient reconciliation code was far too slow for limited hardware like that.  rpi3 is more or less the bottom end for running a full node-- already that target takes 20+ days to synchronize the chain.  Slower could still be used, but it would presumably reconcile less often.

With minisketch wouldn't a node need to compute and transmit different sketches for each one of its peers?

Quote
The theoretical propagation time of data across the network is actually the upper bound on network throughput.
Not at all, Bitcoin is already a batch system-- blocks show up at random times and commit to a chunk of transactions; whatever is missed and still valid goes into a subsequent block. Both because blocks are relatively infrequent and because the mining process itself has seconds of latency (e.g. from batching new block templates to mining devices to avoid creating extreme loads on block template servers), the existing delays have little effect on the transaction handling delays.

More fundamentally, the connection you believe exists between tx propagation time and network throughput just doesn't exist:  It could take an hour to propagate transactions and the resulting network throughput would be unchanged because the network doesn't stop and wait while the transactions are being propagated. If it did, it would add an hour until you saw a confirmation, but the number of confirmed transactions per hour would not be changed.

Imagine you had an infinite number of dump trucks to use in hauling gravel from one city to another, 24 hours a day, 365 days a year. Each truck carries 1 ton of payload and every 5 minutes a full truck leaves. During a week you will carry 2016 tons of gravel between the cities. It does not matter if the cities are 1 hour apart or 5 hours apart: changing latency does not change throughput for non-serialized operations.

In Bitcoin the latency is relevant-- not because of throughput reasons, but because of things like people caring about their transactions being confirmed sooner rather than later.  So long as the TX propagation delays are very small compared to the block finding interval they don't matter much in terms of the user's experience, so it's fine to trade off some small latency for other considerations like reduced bandwidth or increased privacy-- which is something Bitcoin currently does and has always done since the very first version (though the details of the trade-off have changed over time).

There are other minor considerations that make tx propagation delays matter some, but they're all of a sort where they don't matter much so long as they're significantly less than other delays (like block template updating delays).
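Before pushing back, I'll grant that the arithmetic in the truck example checks out:

Code:
# Departure rate, not trip time, sets throughput in the truck analogy.
minutes_per_week = 7 * 24 * 60
interval = 5        # minutes between departures
tons_per_truck = 1

print(minutes_per_week // interval * tons_per_truck)  # 2016 tons/week
# The answer is the same whether each trip takes 1 hour or 5.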

I don't think that this example fully encompasses the problem.  

The transaction propagation time, although inconvenient, is not the limitation on Bitcoin specifically. The problem limiting the throughput of Bitcoin is the average bandwidth available to Bitcoin nodes and how much bandwidth is needed to handle a given number of TPS. The reason that latency becomes important in a multi-hop system is that it acts to reduce the effective bandwidth of the network.

Adding additional propagation delay needs to be compensated for by a decrease in bandwidth. Minisketch clearly accomplishes this. However, there is an optimal point at which the total throughput of the system is maximized with delay X, given the amount of bandwidth reduction Y that the delay enables.
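As a toy illustration of that optimum (the functional forms and constants here are invented, just to show the shape of the trade-off I mean):

Code:
# Toy model: a relay delay of x seconds lets a node batch roughly
# TX_RATE*x announcements per message (cutting per-tx bytes), while the
# added multi-hop propagation time is modelled as a simple linear penalty.
TX_RATE = 7.0                 # tx/s reaching a node (assumed)
ANN, OVERHEAD = 36.0, 36.0    # bytes (assumed)
HOPS = 10                     # relay path length (assumed)
BLOCK_INTERVAL = 600.0        # seconds

def bytes_per_tx(x):
    k = max(TX_RATE * x, 1.0)         # announcements per batch
    return ANN + OVERHEAD / k

def score(x, budget=2000.0):          # announce bandwidth in bytes/s (assumed)
    raw = budget / bytes_per_tx(x)    # tx/s the bandwidth allows
    penalty = max(0.0, 1.0 - HOPS * x / BLOCK_INTERVAL)
    return raw * penalty

best_score, best_delay = max((score(x), x) for x in range(1, 120))
print(best_delay, best_score)   # an interior delay maximizes the toy score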

This is a well-studied trade-off that affects the throughput of Tor, given the latency of nodes as well as the prescribed number of hops. I think that some of this work may be helpful in understanding how to optimize Minisketch.

In the case of Bitcoin, you may actually find that substantially longer delays are the point at which total network throughput is maximized, or rather where bandwidth is minimized given the current blocksize limit.

This again gets a little bit tricky, though, because delaying relay by, say, a minute could substantively increase TPS or decrease bandwidth, but would cause some form of selfish mining.
13  Bitcoin / Development & Technical Discussion / Re: [Scaling] Minisketch - Unmoderated on: January 05, 2019, 07:14:03 PM
Sorry about that, I have basic rules in my threads

1. No trolling. Even if you're not definitively a troll

2. No trolls. Even if you're not trolling


You fell afoul of rule 2 here: https://bitcointalk.org/index.php?topic=4638321.msg49019088#msg49019088

Is lightning network making bitcoin centralised?

Yes.

It is also reintroducing middlemen which will perpetually charge transaction fees due to network-driven oligopolies. It is a lot like Visa and Mastercard.

If someone wants to talk to you in your thread, that's their business

I find that to be a thoughtful answer to the question. You may disagree, but that is the point of discussion. In all parts of civil society I think censorship is detrimental (even for trolls). Your classifying me as a troll because of that comment, with which you disagree, is a perfect example of the slippery slope of censorship.

I don't want to discuss this further in this thread, but felt compelled to respond.
14  Bitcoin / Development & Technical Discussion / [Scaling] Minisketch - Unmoderated on: January 03, 2019, 08:00:20 PM
This post was deleted from the self moderated Minisketch thread, so I decided to start a new one so people can discuss.

We don't plan on changing how the mempool works-- rather, for each peer a sketch would be kept with all the transactions that your node would like to send / expect to receive with that peer. It takes a bit of memory but it's pretty negligible compared to other data kept per peer (such as the filters used to avoid redundant tx relay today).

Although this is a very intriguing idea, I have a couple of initial questions regarding the incentives and resources of nodes, as well as propagation latency.

In terms of peer sketches: what determines how often a node will update a peer sketch? Are they doing it every time they get a transaction, or right after a peer requests it? If they do it every time they get a transaction, the sketch takes more resources. If they do it every time a peer requests an update, sketch creation causes a delay, followed by the delay in determining the difference and then subsequently requesting the deltas. It was expressed that this could be done without a hard fork because it doesn't change the incentives. I think that this does change the incentives of nodes in a nondeterministic way, which could lead to various forms of selfish and resource-optimizing behavior, especially against small nodes.
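To make the question concrete, here is the per-peer bookkeeping I have in mind, with plain sets standing in for real sketches (toy code, not the actual minisketch API):

Code:
from collections import defaultdict

class Node:
    def __init__(self):
        self.pending = defaultdict(set)   # peer -> txids not yet reconciled

    def on_new_tx(self, txid, peers):
        # Option 1: pay the cost on every tx arrival (cost ~ txs x peers).
        for p in peers:
            self.pending[p].add(txid)

    def reconcile(self, peer, their_set):
        # Option 2 pays here instead: build/compare only when a peer asks,
        # adding delay before the deltas can even be requested.
        send = self.pending[peer] - their_set
        fetch = their_set - self.pending[peer]
        return send, fetch

n = Node()
n.on_new_tx("tx_a", ["peer1", "peer2"])
n.on_new_tx("tx_b", ["peer1"])
print(n.reconcile("peer1", {"tx_b", "tx_c"}))   # ({'tx_a'}, {'tx_c'})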

The second question I have is the net effect of this on latency. The theoretical propagation time of data across the network is actually the upper bound on network throughput. Introducing latency to aggregate, share, compute, request, and share transactions likely adds a lot of latency to the network, which can actually decrease throughput. Any thoughts on where the trade-off between bandwidth use and latency is optimal?
15  Bitcoin / Development & Technical Discussion / Re: Lightning Network Discussion Thread on: January 03, 2019, 12:53:06 AM
Is lightning network making bitcoin centralised?

Yes.

It is also reintroducing middlemen which will perpetually charge transaction fees due to network-driven oligopolies. It is a lot like Visa and Mastercard.
16  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 08:15:38 PM
bitcoins beauty is about how to solve having multiple generals ruling multiple regions but ensuring they all comply to one rule.
and solves how those generals abide by that one rule without having one general.

the answer was everyone watch everything and reject the malicious individuals
we have had "sharding" systems in the real world for decades. sharding is DE-inventing what makes bitcoin, bitcoin

Franky, based on all of your comments and discussion, much of which I agree with, I think you should look at BlockReduce.

There is also a discussion thread on it specifically.

Basically, it uses Proof-of-Work to create a hierarchy of merge-mined blockchains. It allows incremental work (lower-chain blocks) to be used to efficiently group and propagate transactions to the entire network. I think that, of the many people here, you would get it and be able to provide constructive feedback.
17  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: November 23, 2018, 07:16:20 PM
Looking closer to your calculations it is about comparing bitcoin transaction propagation with your schema's block reconciliation. These are two basically different things (and yet you are exaggerating about the situation with bitcoin).

It is not comparing propagation to block reconciliation. I understand how this could be confused. The blocks found at the lowest level are the mechanism for sharing groups of transactions. That is why I am comparing Bitcoin's transaction propagation to BlockReduce's zone block propagation. Propagating transactions in groups in a whisper protocol is much more bandwidth efficient.

BlockReduce will be as inefficient as Bitcoin at the lowest-level zone groups. The efficiency is gained by the aggregation of transactions into groups via "zone blocks" and whispering them further into the network as blocks of transactions.
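A back-of-the-envelope comparison of the two modes (every constant here is assumed, purely for illustration):

Code:
NODES = 10_000
DEGREE = 8          # peers each node hears a message from (assumed)
OVERHEAD = 72       # framing bytes per message (assumed)
TX = 250            # bytes per transaction (assumed)
BLOCK_TXS = 1_000   # transactions per zone/region block

def individual(n_tx):
    # naive flooding: every node hears every tx from each of its peers
    return n_tx * NODES * DEGREE * (OVERHEAD + TX)

def grouped(n_tx):
    # txs flood only inside their zone (assume 4 zones), then travel
    # network-wide as blocks: hash announcements arrive from each peer,
    # but the full block body is downloaded once per node
    zone = n_tx * (NODES // 4) * DEGREE * (OVERHEAD + TX)
    per_block_per_node = DEGREE * (OVERHEAD + 32) + OVERHEAD + BLOCK_TXS * TX
    blocks = (n_tx // BLOCK_TXS) * NODES * per_block_per_node
    return zone + blocks

print(grouped(10_000) / individual(10_000))   # ~0.35 of the naive bandwidth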

As I've argued before, both bitcoin p2p and your schema need the same amount of effort and resources to be consumed for raw transaction propagation because they are full nodes and you just can't dictate a predefined topology like a central authority or something.

The structure for grouping and propagating transactions would not be dictated, but rather incentivized. Miners would be incentivized to find lower-level blocks, and the impact of network latency on effectively using their hash power would incentivize them to find low-latency zones to mine in. This would cause each zone group to have lower latency than the total network and be able to process a higher TPS. For example, if there were 4 zones available to mine in, miners would roughly divide the world into 4 geographic zones. They wouldn't be well defined and would overlap from a geographic standpoint, but having 4 networks at the zone level would be much more performant than having a single network. The single network would still exist at the PRIME level. However, by the time the transactions make it to PRIME they would be whispered in region blocks of 1000 transactions.

Well, I'm ok with collateral work and most of the ideas above, but not with dedicated nodes; it would be hard to implement an incentive mechanism.

The incentive mechanism would be to give some reward for the collateral work via merge mining of zone and region blocks with PRIME.

Right now, Greg Maxwell and others are working on a privacy-focused improvement to the transaction relay protocol in bitcoin, Dandelion. The good thing about this BIP is their strategy for delaying the transaction relay procedure a while (to make it very hard, if not impossible, for surveillance services to track the sender's IP), and it looks to be committed to the release code in the near future! It will be a great opportunity to do more processing (classification, aggregation, ...) while the transaction is queued waiting for relay.

I have reviewed this proposal by Greg Maxwell. It is interesting from an anonymization standpoint. However, I did not see a mechanism by which transactions would be delayed and aggregated, but rather a routing schema that obfuscates the origin of a transaction.
18  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: November 21, 2018, 02:46:37 PM
Sorry, but it is not correct. For transactions to be included in a block they have to be "whispered" in the network and stored in the node's mempool beforehand. Your schema couldn't change anything about this, and once a block is generated, nodes have to validate it; if for some reason they've not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.

Could you please be more specific about what you find wrong with the math in the example I gave above? The example gives a simplified version of how a transaction is "whispered" across the network currently and with BlockReduce. In terms of the mempool, I don't really think this affects bandwidth, because it essentially adds capacitance to the system, which can be ignored at steady state.

Quote
We could initiate a BIP for such a situation to improve the transaction relay protocol in bitcoin's networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids meeting specific criteria, ...

The most critical problem with networking is the latency involved in whispering new blocks; when nodes have no clue about some of the included transactions, it is the basis for the proximity-premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.

I would like to work with you to initiate a BIP like this.  

I would propose that if we aggregate transactions, we will need a way for nodes to determine when they should share groups of transactions. Ideally, the mechanism we come up with for determining when groups of transactions should be shared would be decentralized, so no single node can withhold transactions for an inordinate amount of time. There should also be a way for the nodes that receive a grouping of transactions to verify that the criteria were met.

We could use something like a one-way mathematical function operating on an unpredictable data set that would stochastically meet some arbitrary criteria from time to time. Then, when the criteria are met, the aggregating node could transmit the data to all of the other nodes. The other nodes would then be able to easily verify that the criteria were met.

We could improve upon this further if we created some redundancy for the aggregating node. This could be accomplished by having a small group of highly connected nodes that work together to aggregate a set of transactions amongst themselves before sharing them with the larger network.
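A minimal sketch of the kind of criterion I mean (the threshold and the data set are placeholders):

Code:
import hashlib, os

TARGET = 2 ** 240   # arbitrary difficulty threshold (assumed)

def meets_criteria(data: bytes) -> bool:
    # one-way function over unpredictable data; anyone can re-run this check
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") < TARGET

# The aggregating node keeps trying until the criterion stochastically hits,
# then broadcasts; receiving nodes verify with the same one-line check.
attempt = os.urandom(32)
while not meets_criteria(attempt):
    attempt = os.urandom(32)
print(attempt.hex())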

Do you think something like this could work?


19  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: November 20, 2018, 04:26:43 PM
The reason that the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains. Each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating transactions that they receive via the child blocks and including the transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking the transaction. Therefore, the transactions are just as secure as if there were no child chains.

no.

With your example. Prime chain + 2 side chains. The miners of each side chain mine just their sidechain and the PRIME chain. So yes the Prime chain has 100% hash, but the side chains have 50% each.

This means that any transactions about to be included in the PRIME chain have been _secured_ by 50% of the hash in the side chain. Then you use 100% of the hash to secure that - in the PRIME chain. The weakest link in this scenario/chain is the 50% POW side chain. This is the security of the transactions in the side chain - not 100%.

You have secured the 50% POW trxs in perpetuity with 100% of the POW..

Since the miners are validating all transactions in PRIME and including them in the PRIME block, 100% of the hash power voted for transactions that originated in either A or B.  

Think about the child chains not as consensus; rather, think about them as a mechanism for creating sets of transactions which can be efficiently propagated. This is what I call consistency: a consistent set of transactions which the network validates and performs work on to include in a block. Once everyone, meaning all miners, has the transaction set, they check it and build a PRIME block.

All transactions are included in PRIME in some form of a nested Merkle tree. All transactions were validated by all miners, and voted on by all miners, when included in PRIME. Therefore there is no sharding of work. Let's say, for example, A propagates a block with an invalid transaction. Miners in B will see this invalid transaction and refuse to include the hash in PRIME or to do any work on the transactions included in that block. By the rules of consensus, transactions are not actually valid until they have been included in PRIME.

Again, don't even think of the child chains as blocks. Only think about them as a mechanism for aggregating transactions.

The reason that this is different from merge mining is that the consensus rules are consistent and checked across the hierarchy of all chains. This is not true in the case of something like Bitcoin and Namecoin.
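Here is that inclusion rule in toy form (illustrative only, not real consensus code):

Code:
def valid_tx(tx):
    return not tx.get("invalid", False)       # placeholder validity rule

def mine_prime(zone_blocks, miners):
    # every miner, whichever zone it mines in, re-checks every zone block
    accepted = [z for z, txs in zone_blocks.items()
                if all(valid_tx(t) for t in txs)]
    checking_power = sum(share for _, share in miners)
    return accepted, checking_power

zones = {"A": [{"amt": 1}], "B": [{"amt": 2, "invalid": True}]}
miners = [("miner_in_A", 0.5), ("miner_in_B", 0.5)]
print(mine_prime(zones, miners))   # (['A'], 1.0) -- B's bad block is excluded,
                                   # and A's txs were checked by 100% of hash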
20  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: November 20, 2018, 03:22:02 PM

I like the idea of a central POW chain that everyone mines, and that off this central chain (the PRIME) you can run multiple side chains (a simplification of your hierarchical structure). The real issue for me is that POW side chains are insecure.. unless you can get a lot of miners to mine them. Checkpointing the side chain blocks in the PRIME POW chain simply timestamps the insecure blocks securely ;p


The reason that the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains. Each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating transactions that they receive via the child blocks and including the transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking the transaction. Therefore, the transactions are just as secure as if there were no child chains.

Let's, for example, imagine a fork in A (malicious or otherwise). The miners in B will have to decide which hashes in A to include when working on PRIME. If they include the "wrong/invalid" hash they will be wasting work. Therefore, everyone is still explicitly voting on every transaction with 100% of the hash power.


Is there a way that every_chain can benefit from ALL the shared POW equally across all the different chains whilst only validating shards of the data ? Not yet.. me thinks. But THAT would be cool.


I think that the above is the way that every chain benefits from ALL the PoW while explicitly validating every transaction.

If we want to complicate the discussion: if I am a small miner in PRIME working in A, and I don't want to validate every transaction in B, I could choose to "trust" the block given to me. Why is this trust but not trust? If I take it on faith, I may waste work by including that B block hash in PRIME if it is invalid. However, I am not really taking it on faith, because B is presenting to me half the work that goes into a PRIME block. Therefore, it would be very expensive for B to create a bad block that would ultimately be discovered and re-orged out of the chain. So, if I don't have the computing resources to justify this validation, I can "bet" that the "bet" made by B is good. In case it is not, a larger miner with greater economic incentive and resources will identify it and not include the block hash in PRIME.

This level of trust is significantly lower than the trust placed by anyone using a mining pool, or by a Bitcoin user who doesn't run a full node.