Bitcoin Forum
Author Topic: BlockReduce: Scaling Blockchain to human commerce  (Read 928 times)
mechanikalk (Member; Activity: 91, Merit: 63)
October 31, 2018, 07:31:41 PM (last edit: November 20, 2018, 04:47:28 PM)
Merited by ETFbitcoin (9), Welsh (4), d5000 (1), o_e_l_e_o (1)
 #1

BlockReduce presents a new blockchain topology that offers a 3+ order-of-magnitude improvement in transaction throughput while avoiding the introduction of hierarchical power structures and centralization. It does this by modifying the block reward incentive to reward not only work, but also optimization of network constraints and efficient propagation of transactions, via a Proof-of-Work (PoW) managed hierarchy of tightly-coupled, merge-mined blockchains.
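To make the merge-mined hierarchy concrete, here is a minimal Python sketch of the core idea: one PoW attempt on a shared header can simultaneously satisfy a zone, region, and PRIME difficulty. The targets, names, and double-SHA256 framing are my own illustrative assumptions, not values from the paper.

```python
import hashlib

# Hierarchical difficulty targets (illustrative numbers, not from the paper):
# PRIME is strictest, zone is loosest, so one hash attempt can qualify a
# merge-mined block for several tiers at once.
ZONE_TARGET   = 1 << 244
REGION_TARGET = 1 << 240
PRIME_TARGET  = 1 << 236

def classify_block(header: bytes) -> list:
    """Return every tier of the hierarchy this header's hash satisfies."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    h = int.from_bytes(digest, "big")
    tiers = []
    if h < ZONE_TARGET:
        tiers.append("zone")
        if h < REGION_TARGET:
            tiers.append("region")
            if h < PRIME_TARGET:
                tiers.append("PRIME")
    return tiers
```

Because the targets are nested, any block that qualifies for a higher tier automatically qualifies for every tier below it, which is what lets one pool of hashpower secure all levels.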

Please take a look at the paper: https://arxiv.org/pdf/1811.00125.pdf

Here is a video presentation of BlockReduce presented at the McCombs Bitcoin Conference.

Also, here is a BIP draft to review and contribute to on Github.

Any comments or questions are greatly appreciated!
ETFbitcoin (Legendary; Activity: 1918, Merit: 2207)
November 01, 2018, 02:21:18 AM
 #2

We're glad to see another scaling proposal. Some of my thoughts:
1. Part 4.3 has a good idea for reducing bandwidth usage:
  • Sending a TX hash to check whether a node already has the transaction is great.
  • I doubt self-identifying into sub-groups is a reliable solution, since not all nodes are online 24/7, and it might make transaction propagation slower. Besides, I doubt there is any reliable way to know that everyone in a sub-graph has received a transaction.
2. In general, IMO merge-mining and splitting/categorizing blocks and transactions would increase Bitcoin development complexity and open far more attack vectors.
3. You forgot to consider verification time (for blocks and transactions), RAM usage, and storage growth, which are important when we're talking about scaling Bitcoin.

I'll read your paper thoroughly later and give further thoughts.

mechanikalk (Member; Activity: 91, Merit: 63)
November 01, 2018, 02:04:03 PM
Merited by Welsh (8), DarkStar_ (5), ETFbitcoin (1)
 #3

ETFbitcoin, thank you for taking a first look!  I look forward to hearing your further thoughts.

1.  The sub-groups are essentially child blockchains.  They have the same characteristics and issues in terms of transaction propagation as BTC.  However, because the groups are smaller and economically incentivized to form into low-latency groups, the bandwidth-constrained TPS will likely be much higher in BlockReduce.

2.  I don't think that the attack vector is ultimately any different than Bitcoin because the transactions are all ultimately propagated, validated, and recorded using PoW.  They just do so incrementally.

3. This is a good point.  One of the references in the paper is a study published by BitFury looking at the constraints to scaling.  The first constraint is bandwidth, the next is RAM, and the last is persistent data storage.  However, BlockReduce enables meaningfully different types of nodes, which lets each node decide on the level of resources it wants to commit based on its economic use case.  For example, a "zone node" would only need to keep the state of its zone, along with a trimmed region state and a trimmed PRIME state.  This means that although the aggregate blockchain is running at tens of thousands of TPS, the "zone node" would use a similar amount of resources to a Bitcoin node today.  If a large miner wants to participate, they will dedicate more computational resources (RAM, storage, et cetera) because they have an economic incentive to validate more transactions in more zones.  This allows them to quickly discover whether a fork exists in a zone or region, preventing them from wasting hash power.  It also provides them an arbitrage opportunity, increasing their ROI by directing hash into a zone when a fork takes place.
ETFbitcoin (Legendary; Activity: 1918, Merit: 2207)
November 02, 2018, 06:05:21 PM
 #4

I've finished reading the paper thoroughly, though I barely understand the splitting of nodes/blocks/transactions into PRIME, regions and zones. Anyway, my thoughts are:
1. Since there are many regions and zones, do you think each region/zone would have active nodes, miners and users? IMO, given Bitcoin's current state, many regions/zones would have empty blocks (no transactions).
2. Section 4.4 mentions that regions and zones have lower mining difficulty. If so, what stops an attacker with 51% of the hashrate from preventing transactions from confirming, or from double-spending? I've read 4.11, but IMO it's not good enough, since:
  • An attacker might pretend to be an honest minority-hashrate miner
  • Human intervention is required (moving to a different zone)
3. Section 4.5 mentions "If multiple inputs are used in a transaction, they must reside within the same scope, meaning inputs must be taken from the same zone". Is it possible to treat this only as region/PRIME transaction scope?
4. Nodes with PRIME and region scope require more resources, so IMO centralization or control will happen, since there are fewer groups or nodes that need to be attacked/hijacked.
5. With region and zone scopes, Bitcoin becomes less pseudonymous, since transaction-tracking analysis becomes far easier.

FYI, BitFury's research is outdated, since:
1. SegWit fixed the "quadratic hashing" problem, which made verification time far slower
2. Bitcoin now uses a block weight limit, which makes TPS and blockchain size growth depend on how many transactions use SegWit
3. It excludes block and transaction propagation time, which would be slower when TPS is high

Your idea is interesting, but I seriously doubt the Bitcoin community will accept it, since it would:
1. Increase Bitcoin development complexity a lot
2. Reduce decentralization, since running full/PRIME nodes would be expensive
3. Reduce the potential anonymity that Bitcoin offers (region and zone scopes)

mechanikalk (Member; Activity: 91, Merit: 63)
November 06, 2018, 08:52:38 PM
Merited by Welsh (4), ETFbitcoin (1), xtraelv (1)
 #5

Quote
1. Since there are many regions and zones, do you think each region/zone would have active nodes, miners and users? IMO, given Bitcoin's current state, many regions/zones would have empty blocks (no transactions).

The idea of BlockReduce is that the protocol would adjust the block size and the number of regions and zones so that the system operates near capacity, say 80% fill.  This ensures that transaction fees are non-zero, and also that there is an economic incentive to balance transaction demand evenly amongst the zones.  It will also help create transaction revenue to incentivize miners to continue securing the chain as inflationary block rewards move towards zero.
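A retargeting rule like the one described could be sketched as follows. The function name, thresholds, and doubling/halving policy are my own illustrative assumptions; the paper only specifies the goal of operating near a target fill.

```python
def adjust_zone_count(zones: int, avg_fill: float, target_fill: float = 0.80) -> int:
    """Hypothetical retargeting rule: split zones under sustained congestion,
    merge them when demand falls, steering average block fill toward ~80%.
    Thresholds are illustrative, not from the paper."""
    if avg_fill > target_fill * 1.2:                  # blocks running hot: add capacity
        return zones * 2
    if avg_fill < target_fill * 0.5 and zones > 1:    # underused: consolidate
        return zones // 2
    return zones                                      # near target: leave topology alone
```

Keeping blocks mostly, but not completely, full is what preserves a non-zero fee market while still leaving headroom for demand spikes.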

Quote
2. Section 4.4 mentions that regions and zones have lower mining difficulty. If so, what stops an attacker with 51% of the hashrate from preventing transactions from confirming, or from double-spending? I've read 4.11, but IMO it's not good enough, since:
An attacker might pretend to be an honest minority-hashrate miner
Human intervention is required (moving to a different zone)

The zone is only a mechanism in which incremental work ensures valid transaction processing without necessarily requiring other nodes to validate all transactions.  This does not mean that other nodes never verify the transactions.  If they have enough hash power, they will be economically incentivized to validate a greater number of transactions from further-removed zones to ensure they never lose work.  Additionally, even light nodes could validate transactions of adjacent zones without necessarily having to keep canonical state.  Large miners will keep the entirety of state and verify all transactions simply because they do not want to lose hash working on bad blocks.  Even if a bad block is added in a zone, it will eventually be found and thrown out as it propagates up into regions and PRIME.  Ideally it will be caught quickly in the zone, but if not, eventually all hash power in the network will work on it, which means that getting it included persistently in PRIME would require a 51% attack on all network hash.

Quote
3. In 4.5, it mentions "If multiple inputs are used in a transaction, they must reside within the same scope, meaning inputs must be taken from the same zone". Is it possible to treat it only as region/PRIME transaction scope?

Originally, I thought this could be done.  However, I don't think it is a good idea, because if a UTXO had PRIME scope, a user could initiate conflicting transactions in many zones.  These would take some time to be discovered and would waste significant hash power.  If PRIME transactions were made sufficiently expensive that this type of spam attack became unreasonably costly, it could be done.
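The same-scope rule quoted above is simple to enforce at validation time. A minimal sketch, where the input dict shape is my own illustration rather than the BIP's actual transaction format:

```python
def validate_tx_scope(tx_inputs: list) -> bool:
    """Enforce the rule from section 4.5: every input consumed by a
    transaction must come from the same zone, so a single UTXO cannot be
    spent in conflicting transactions across zones.  `tx_inputs` is a list
    of {'zone': ..., 'txid': ..., 'index': ...} dicts (illustrative shape)."""
    if not tx_inputs:
        return False
    return len({i["zone"] for i in tx_inputs}) == 1
```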

Quote
4. Nodes with PRIME and region scope require more resources, so IMO centralization or control will happen, since there are fewer groups or nodes that need to be attacked/hijacked.

There could be some amount of centralization, but the fact that nodes can have a continuum of different resource requirements is itself decentralizing.  Additionally, because the overlap of verification and processing is not prescriptive, but varies with a node's economic incentives, it will create a diverse, overlapping set of miners.

Quote
5. With region and zone scopes, Bitcoin becomes less pseudonymous, since transaction-tracking analysis becomes far easier.

There will be some decrease in anonymity; however, it is likely small and can be managed by users who want better privacy.  If a user's transactions sit in a set containing 1/4 of Bitcoin transactions rather than all of them, there is less anonymity.  That user could choose to operate across all zones, which would cost slightly more but would provide the same amount of anonymity as Bitcoin.

Quote
Your idea is interesting, but I seriously doubt the Bitcoin community will accept it, since it would:
1. Increase Bitcoin development complexity a lot

This is actually an incredibly simple change to the Bitcoin code base.  It only requires changes to the block header, transaction header, peer management, and the chain database.
mechanikalk (Member; Activity: 91, Merit: 63)
November 16, 2018, 12:28:31 AM
Merited by Welsh (1)
 #6

I have developed this idea into a BIP, which is available for people to view here: https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

Would really appreciate any additional feedback or discussion.

The TL;DR is that it is many Bitcoin blockchains merge-mined at different difficulties, each with a different set of state.  This splits state across chains without sharding PoW.  It would enable a Bitcoin-like blockchain to scale to tens of thousands of TPS without any centralization.

Thanks!
boogersguy (Newbie; Activity: 21, Merit: 1)
November 19, 2018, 11:12:03 AM
 #7

Are there or are there not legal and tax implications with respect to explicitly selecting a geographic location from which your transaction originates?
spartacusrex (Hero Member; Activity: 717, Merit: 533)
November 19, 2018, 12:17:11 PM
 #8

OP! Congratulations on getting to this stage  Smiley There is a lot of work here. Wonderful stuff.

A main chain that acts as an anchor for all the other chains running off it. Merged Mining Meta-Chain or something. (I have a niggly feeling that somewhere deep in the bowels of bitcointalk something like this was mentioned / discussed - no idea though )

Diving straight in - Merge mining means that the hash power is spread over all the chains that miner chooses to mine. All the chosen chains benefit. This is good.

The only chain that everyone mines in your system is the PRIME chain. The lower levels are mined by fewer of the miners the lower you go.

So although the hash power is kept high on PRIME, what gives the lower-level chains the PoW security required?

Adding them in as a root hash in the PRIME block (or whatever merkle path binds it) depends on lower-level regions agreeing via PoW on what goes up to the higher levels. So, would that not be the attack vector?

Please correct my understanding if this is incorrect!

Life is Code.
mechanikalk (Member; Activity: 91, Merit: 63)
November 19, 2018, 03:06:03 PM
 #9

Are there or are there not legal and tax implications with respect to explicitly selecting a geographic location from which your transaction originates?


The regions and zones are not prescribed by BlockReduce, but rather incentivized.  Therefore, economic groups, geographic groups, and network topologies will all influence "where" a region or zone is "located".  However, none of the regions or zones will be perfectly monolithic.  They will be overlapping and intertwined, with little respect for geographic jurisdictional boundaries.  Furthermore, because there will always be at least one zone node outside a given jurisdiction, no one will be able to claim a geographic location for a transaction.
aliashraf (Hero Member; Activity: 994, Merit: 727)
November 19, 2018, 05:44:32 PM
Merited by Welsh (2), ETFbitcoin (1)
 #10

I think there are a few contradictions and shortcomings in the model you have proposed in the github document:

Let's begin with your following argument
Quote from: mechanikalk's BIP draft (https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki)
Now that the transactions are built up into larger groups of data, say hundreds or thousands of transactions, nodes can first check if they need to transmit data to their peers by checking a hash. This enables a significant increase in bandwidth efficiency. Transmitting a hash would only take tens of bytes, but would allow the nodes to determine if they need to transmit several kilobytes of transaction data.
{hence} Most importantly, the amount of bandwidth used to create 1,000 TPS blocks is significantly smaller than that of current blockchains
It is based on a misunderstanding of how the Bitcoin networking layer works. For a while now there has been no unnecessary transmission of raw transaction data in Bitcoin. If a node is already aware of a transaction, it will never ask for it to be re-transmitted, and no peer will 'push' it again. Nodes just check the txids included in blocks by querying the Merkle path, and if they ever find an unknown txid they can submit an inv message and ask for the raw data.

So, in your so-called PRIME chain, we have no communication efficiency gain over Bitcoin, as long as nodes on this chain have to validate the transactions they are committing to.

But nodes do have to validate the transactions, don't they? Otherwise how could they ever commit to their sub-trees (regions/zones)? I think you have a big misunderstanding about blockchains: we don't commit to the hash of anything (a block, a transaction, ...) unless we have examined and validated it, and we can't validate anything without access to its raw data.

It leads us to another serious issue: state segmentation. Officially you describe the proposal as a combination of state sharding with other techniques:
Quote from: mechanikalk's BIP draft (https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki)
BlockReduce combines the ideas of incremental work and sharding of state with merge-mining to form a tightly coupled PoW-managed hierarchy of blockchains which satisfies all of the proposed scaling requirements.
Well, it is not completely right, I'm afraid.

For the PRIME chain to be legitimate in committing to region blocks (and so on), its nodes not only need full access to all transaction data of the whole network, they need to keep the whole state preserved and up to date. There can be no sharding of state.

There is more to discuss but for now I suppose we have enough material to work on.

P.S.
Please note that I'm a fan and I'm not denouncing your proposal as a whole. I'm now thinking of hierarchical sharding as a promising field of investigation, thanks to your work, well done.   Smiley


mechanikalk (Member; Activity: 91, Merit: 63)
November 19, 2018, 07:05:59 PM (last edit: November 20, 2018, 12:17:55 PM)
Merited by Welsh (8)
 #11

OP! Congratulations on getting to this stage  Smiley There is a lot of work here. Wonderful stuff.

A main chain that acts as an anchor for all the other chains running off it. Merged Mining Meta-Chain or something. (I have a niggly feeling that somewhere deep in the bowels of bitcointalk something like this was mentioned / discussed - no idea though )

Diving straight in - Merge mining means that the hash power is spread over all the chains that miner chooses to mine. All the chosen chains benefit. This is good.

The only chain that everyone mines in your system is the PRIME chain. The lower levels are mined by fewer of the miners the lower you go.

So although the hash power is kept high on PRIME, what gives the lower-level chains the PoW security required?

Adding them in as a root hash in the PRIME block (or whatever merkle path binds it) depends on lower-level regions agreeing via PoW on what goes up to the higher levels. So, would that not be the attack vector?

Please correct my understanding if this is incorrect!


Great question, and I appreciate the feedback.  The large miners mining PRIME will have an economic incentive to validate the blocks being passed up.  If a bad block is detected, whether it is nefarious or a casual fork in a lower chain, a miner keeping state over multiple child chains will be incentivized to determine which block is valid and redirect hashpower toward the correct fork, both in the lower chain and by including only the correct head hash in the blocks they mine in the parent chains.  The lower fork could propagate up as a fork in a region and eventually a fork in PRIME, which means that all hashpower is used to determine consensus.  The PRIME miners are effectively notarizing the results of the lower chains.  Ideally, given the slower block times of PRIME relative to regions, and regions relative to zones, most forks should be resolved before propagating one or more levels up.  If they do propagate, the hash trees will provide an efficient mechanism for re-orging large amounts of data in PRIME.

Because all miners eventually need to do work on the region and zone hashes in PRIME, and including the hash of an invalid block from a region or zone chain will invalidate a PRIME block, they are incentivized to validate transactions.  This effectively means that all hashpower is validating what goes into PRIME.  How much state a node maintains for transactions outside the zone in which it is mining will be optional.  If a node with little hashpower is mining in a zone and doesn't want to validate adjacent transactions, it can trust incoming transactions up to some economic point, because incremental work has been performed by mining an adjacent zone block.  If a miner has more hash power, it will be incentivized to check more transactions and keep a greater amount of state.  In the extreme, it could keep all state of all things, which could be thought of as a PRIME node.  It would still only be able to direct hashpower uniquely to each zone; however, by keeping state in more places it creates an insurance policy against wasted hashpower, in that bad blocks are found quickly.  It will also have an economic incentive to redirect hash power to adjudicate a fork.
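The incentive described above, that committing an invalid child head invalidates the parent block, can be sketched as a simple selection rule. The function, tuple shape, and validation callback are illustrative assumptions, not the BIP's actual interfaces:

```python
def choose_child_head(candidates, validate):
    """Pick the highest-work child-chain head that actually validates.
    `candidates` is a list of (head_hash, cumulative_work, block) tuples and
    `validate` is the miner's own block-validation routine (both illustrative).
    A rational miner checks before including, because committing an invalid
    child head would invalidate the parent block they are mining."""
    for head_hash, work, block in sorted(candidates, key=lambda c: -c[1]):
        if validate(block):
            return head_hash        # safe to commit into the parent block
    return None                     # no valid head: mine the parent without one
```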

There are many node types that could exist in this architecture: a zone node, a region node, and a PRIME node.
The zone node would only keep state of a zone, a region, and PRIME.  
A region node would keep state of a region, all zones in that region and PRIME.  
A PRIME node would keep state of all things.  

Even within the different node types, there are different levels of validation that could be performed.  For example, a zone node could choose to validate and keep state for all zone peers, or it could keep some form of limited or truncated state.  If the UTXO pool in each zone is passed into PRIME via the merkle interlink field, a zone would be able to trust zone state at some depth in PRIME.  This would let a node keep only a limited block depth of adjacent-zone verifications, maintaining just several hundred or a thousand blocks of the adjacent zones, without introducing any trust outside of PRIME consensus.  The same logic could be used at the region level.
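The truncated-state idea can be sketched with a bounded window of recent adjacent-zone headers; anything older falls back to its commitment in PRIME. The class and method names are my own illustration, not structures from the BIP:

```python
from collections import deque

class TrimmedZoneView:
    """Truncated state for an adjacent zone: retain only the most recent
    `depth` block headers; anything older is trusted via its commitment in
    PRIME rather than re-verified locally (illustrative sketch of the
    'limited block depth' idea)."""
    def __init__(self, depth: int = 1000):
        self.recent = deque(maxlen=depth)   # oldest headers fall off automatically

    def add_header(self, header_hash: bytes) -> None:
        self.recent.append(header_hash)

    def locally_verifiable(self, header_hash: bytes) -> bool:
        """True if we can check this header ourselves; False means we fall
        back to its inclusion proof in PRIME."""
        return header_hash in self.recent
```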

Even if all nodes are PRIME nodes and keep full state, the amount of permanent storage needed would only be about 8TB per year at 1,000 TPS, which can currently be purchased for a couple hundred dollars.
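A back-of-envelope check of that figure, assuming roughly 250 bytes per transaction (my assumption; the post does not state a transaction size):

```python
TPS = 1_000                        # aggregate transactions per second
BYTES_PER_TX = 250                 # rough average transaction size (assumption)
SECONDS_PER_YEAR = 365 * 24 * 3600

bytes_per_year = TPS * BYTES_PER_TX * SECONDS_PER_YEAR
tb_per_year = bytes_per_year / 1e12
print(f"{tb_per_year:.1f} TB/year")   # prints 7.9 TB/year, consistent with ~8TB
```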
aliashraf (Hero Member; Activity: 994, Merit: 727)
November 20, 2018, 06:48:20 AM
 #12

The zone node would only keep state of a zone, a region, and PRIME.  
A region node would keep state of a region, all zones in that region and PRIME.  
A PRIME node would keep state of all things.  
This is an example of what I criticized in my post above about your misinterpretation of state sharding:
In cryptocurrency and blockchain literature, especially Bitcoin literature, "state" refers to the UTXO set (the set of unspent transaction outputs). And "keeping" state, in this context, implies nothing less than commitment, and as I discussed in my previous post, you need to fully validate what you are committing to.

For your so-called 'zone nodes' to 'keep the state' of their ancestors, they need to validate the events (transactions) and state transitions (blocks), and that won't happen without spending the same amount of resources required to be a full region/PRIME node. This makes your classification of nodes, and the whole 'node types' concept, pointless. There is only one node type in your proposed architecture, as far as we are talking about full/self-contained nodes and not Bitcoin-SPV-like nodes, which are not secure and have little if any role in keeping the network secure.

On the other hand, since upper-level nodes are allowed to include transactions in their blocks, zone nodes have to keep track of those states too, and as @spartacusrex correctly mentioned above, in your proposal all nodes are PRIME.

Actually, you have gone as far as admitting it, well, somehow:
Quote
Even if all nodes are PRIME nodes and keep full state, the amount of permanent storage needed would only be 8Gb per year for 1000 TPS which currently can be purchased for a couple hundred dollars.
It is 8TB, obviously, and for 1,000 TPS you need more than just storage: RAM, processing power and bandwidth among them; for the latter I have already refuted your efficiency argument.
mechanikalk (Member; Activity: 91, Merit: 63)
November 20, 2018, 12:08:10 PM
Merited by Welsh (4)
 #13

I think there are a few contradictions and shortcomings in the model you have proposed in the github document:

Let's begin with your following argument
Quote from: mechanikalk's BIP draft (https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki)
Now that the transactions are built up into larger groups of data, say hundreds or thousands of transactions, nodes can first check if they need to transmit data to their peers by checking a hash. This enables a significant increase in bandwidth efficiency. Transmitting a hash would only take tens of bytes, but would allow the nodes to determine if they need to transmit several kilobytes of transaction data.
{hence} Most importantly, the amount of bandwidth used to create 1,000 TPS blocks is significantly smaller than that of current blockchains
It is based on a misunderstanding of how the Bitcoin networking layer works. For a while now there has been no unnecessary transmission of raw transaction data in Bitcoin. If a node is already aware of a transaction, it will never ask for it to be re-transmitted, and no peer will 'push' it again. Nodes just check the txids included in blocks by querying the Merkle path, and if they ever find an unknown txid they can submit an inv message and ask for the raw data.

So, in your so-called PRIME chain, we have no communication efficiency gain over Bitcoin, as long as nodes on this chain have to validate the transactions they are committing to.

But nodes do have to validate the transactions, don't they? Otherwise how could they ever commit to their sub-trees (regions/zones)? I think you have a big misunderstanding about blockchains: we don't commit to the hash of anything (a block, a transaction, ...) unless we have examined and validated it, and we can't validate anything without access to its raw data.

It leads us to another serious issue: state segmentation. Officially you describe the proposal as a combination of state sharding with other techniques:
Quote from: mechanikalk's BIP draft (https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki)
BlockReduce combines the ideas of incremental work and sharding of state with merge-mining to form a tightly coupled PoW-managed hierarchy of blockchains which satisfies all of the proposed scaling requirements.
Well, it is not completely right, I'm afraid.

For the PRIME chain to be legitimate in committing to region blocks (and so on), its nodes not only need full access to all transaction data of the whole network, they need to keep the whole state preserved and up to date. There can be no sharding of state.

There is more to discuss but for now I suppose we have enough material to work on.

P.S.
Please note that I'm a fan and I'm not denouncing your proposal as a whole. I'm now thinking of hierarchical sharding as a promising field of investigation, thanks to your work, well done.   Smiley




Thank you for your comments and feedback. On your first point, I am aware that only hashes of transactions are transmitted before the full transaction data. There was a lack of precision in how I described transaction propagation in the GitHub BIP, and I have updated it accordingly. However, it remains true that a significant volume of traffic is spent passing transaction hashes around.  This research paper by BitFury from a few years ago, although slightly out of date, gives a good idea of bandwidth efficiency.  One-time transmission of data in a peer-to-peer network is highly efficient; the inefficiencies measured have to do with creating a consistent set of transactions. Therefore, if I can incrementally create this set and represent it by a hash as I share it, I decrease my bandwidth usage significantly, because 1 hash can represent 100 or 1,000 hashes.
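The "1 hash represents 100 or 1,000 hashes" idea can be sketched as a group-hash announcement: a peer that recognizes the single digest skips the whole batch. The function names and protocol shape are illustrative, not the BIP's actual wire format:

```python
import hashlib

def group_hash(tx_hashes) -> bytes:
    """One 32-byte digest standing in for a whole batch of txids, hashed in
    a canonical (sorted) order so that all peers derive the same digest."""
    h = hashlib.sha256()
    for txid in sorted(tx_hashes):
        h.update(txid)
    return h.digest()

def needed_from_peer(known_groups, batch):
    """Return the txids a peer still needs, or None if the single group hash
    told it the whole batch is already known."""
    return None if group_hash(batch) in known_groups else list(batch)
```

In the happy path a node transmits tens of bytes (the group hash) instead of kilobytes of individual txids; only on a miss does it fall back to the full list.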

In terms of sharding state: for simplicity of the initial discussion, let's just assume all nodes keep all state and validate all transactions.  BlockReduce would reduce the amount of bandwidth needed to propagate a higher number of TPS efficiently. It would increase the need for RAM proportional to the UTXO set, permanent data storage proportional to chain size, and CPU power proportional to TPS.  However, none of these is currently the limiting factor for scaling.  The present limiting factor is bandwidth, and addressing it would allow a significant increase in TPS.

The other limiting factors can be addressed with other mechanisms, but again, let's initially discuss the proposal in terms of all nodes keeping and validating all state.

Also, for reference, I presented this at the McCombs Blockchain Conference.  The video gives a quick overview of BlockReduce for anyone who wants a summary that is easier than reading the paper.
aliashraf (Hero Member; Activity: 994, Merit: 727)
November 20, 2018, 02:13:02 PM
Merited by Welsh (4)
 #14

Thank you for your comments and feedback. On your first point, I am aware that only hashes of transactions are transmitted before the full transaction data. There was a lack of precision in how I described transaction propagation in the GitHub BIP, and I have updated it accordingly. However, it remains true that a significant volume of traffic is spent passing transaction hashes around.  This research paper by BitFury from a few years ago, although slightly out of date, gives a good idea of bandwidth efficiency.  One-time transmission of data in a peer-to-peer network is highly efficient; the inefficiencies measured have to do with creating a consistent set of transactions. Therefore, if I can incrementally create this set and represent it by a hash as I share it, I decrease my bandwidth usage significantly, because 1 hash can represent 100 or 1,000 hashes.
The research document from BitFury you've referenced is out of date, as you've mentioned; Bitcoin has improved a lot since 2016, notably with BIP 152 (compact blocks).

As for your concerns about the "significant volume of traffic used passing transaction hashes around" and your argument for improving it via the topology you are proposing: besides the ambiguities and threats involved in how this topology is to be dictated, I doubt it would help much:
1. For non-mining full nodes/wallets actively participating in the block/transaction relay protocol, nothing changes; they will do their usual business of connecting and relaying hashes, and the raw data if a peer requests it.
2. For mining nodes, although showing little interest in transactions belonging to shards they are not currently mining might look somewhat reasonable, it is not! They need to be aware of other transactions and their fees to make the right decisions about the next shard they are willing to mine.
3. For SPV wallets, business is as usual too.
4. For all full nodes, having transactions in the mempool is a privilege, as it helps them catch up earlier with new-block-found events (they don't need to query and validate transactions immediately).

So, I don't see a disruptive improvement here.

Once again, I need to express my deep respect for your work, as I found it very inspiring and useful for my own line of research and protocol design, but I think it needs a lot of serious improvements.

cheers
spartacusrex
Hero Member
*****
Offline Offline

Activity: 717
Merit: 533



View Profile
November 20, 2018, 02:39:47 PM
 #15

@Aliashraf makes good points about the need for miners to validate all the data. Sharding has yet to fix this issue.

But, if we can brainstorm a bit, this idea may bear fruit Smiley

I like the idea of a central POW chain that everyone mines, and that off this central chain (the PRIME) you can run multiple side chains (a simplification of your hierarchical structure). The real issue for me is that POW side chains are insecure... unless you can get a lot of miners to mine them. Checkpointing the side chain blocks in the PRIME POW chain simply timestamps the insecure blocks securely ;p

So can we make the side chains more secure? Well, yes you can: you can mine them POS. POS doesn't work 'publicly', but in a federated environment it works much better: in an environment where you don't mind that certain people run the chain. Better still, by time-stamping the blocks in the PRIME POW chain, you actually remove the nothing-at-stake problem, because now there is something at stake. So 'getting on the correct chain' (the basic POS problem, since you can copy a chain's security for free) is easy.

And Boom. I've just described Bitcoin + Liquid..

But can we do the same with POW? I think POW altcoins are almost exactly this: they take traffic off the PRIME chain, and PRIME miners don't need to validate them; they just don't run as _official_ side chains. The trick here is that the POW algorithms need to be different and the mining community large. Then you could run these as POW side chains.

Is there a way that every chain can benefit from ALL the shared POW equally, across all the different chains, whilst only validating shards of the data? Not yet, methinks. But THAT would be cool.

Life is Code.
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
November 20, 2018, 03:08:02 PM
Last edit: November 20, 2018, 04:50:20 PM by mechanikalk
 #16

As for your concern that a "significant volume of traffic is used passing the transaction hashes around" and your argument that the topology you propose improves on this: besides the ambiguities and threats involved in how this topology is to be dictated, I doubt it would be much help:

Let's work through an example. Let's use the following assumptions: a node has 100 peers, a transaction hash is 32 bytes, and a transaction is 250 bytes.

A transaction is broadcast to one of our node's peers. Our node sees this, requests the transaction data, and begins to broadcast the transaction hash to its peers. In the meantime, 49 of our node's peers also broadcast the same transaction hash to our node. This means that our node sends 50*32 bytes plus perhaps 2*250 bytes, and receives 50*32 bytes plus 250 bytes. (The 2 is just an assumption about the number of times we transmit the full transaction data; ideally, for all nodes, this would be >1, so 2 is pretty conservative.)

Summary: 1 transaction - Tx: 2100 bytes Rx: 1850 bytes

Now suppose transactions are shared with peers as a zone block containing 100 transactions. I will share a 32-byte hash of the block the same number of times, and transmit the full block of data, say, the same number of times. So our node sends 50*32 plus 2*250*100 bytes, and receives 50*32 plus 250*100 bytes.

Summary: 100 transactions - Tx: 51600 bytes Rx: 26600 bytes

If I normalize this, I get 516 Tx bytes/transaction and 266 Rx bytes/transaction.

This represents roughly 75% greater efficiency in transmit bandwidth usage and roughly 86% greater efficiency in receive bandwidth usage (1 - 516/2100 ≈ 0.75; 1 - 266/1850 ≈ 0.86).

This gets even better when aggregating into PRIME. In the smallest 2x2 hierarchy, region blocks would have roughly 1000 transactions per block. This means that at the PRIME level I would achieve a >98% reduction in the per-transaction hash-relay overhead, by aggregating transactions into region blocks before transmitting them to peers.
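The arithmetic above can be checked with a short sketch (the 50-peer hash fan-out, the byte sizes, and the 2x data retransmission factor are the assumptions stated in the example; the function name is mine):

```python
# Back-of-the-envelope gossip model using the numbers above:
# 100 peers of which ~50 exchange hashes with us, a 32-byte tx hash,
# a 250-byte transaction, and full data transmitted ~2 times.
PEERS_RELAYED = 50
HASH_BYTES = 32
TX_BYTES = 250
DATA_SENDS = 2  # assumed retransmissions of the full data (conservative)

def per_tx_bandwidth(txs_per_group: int):
    """Bytes sent and received per transaction when relaying in groups."""
    sent = PEERS_RELAYED * HASH_BYTES + DATA_SENDS * TX_BYTES * txs_per_group
    received = PEERS_RELAYED * HASH_BYTES + TX_BYTES * txs_per_group
    return sent / txs_per_group, received / txs_per_group

single = per_tx_bandwidth(1)    # (2100.0, 1850.0) bytes per transaction
zone = per_tx_bandwidth(100)    # (516.0, 266.0) bytes per transaction
print(f"Tx savings: {1 - zone[0] / single[0]:.0%}")  # ~75%
print(f"Rx savings: {1 - zone[1] / single[1]:.0%}")  # ~86%
```

The model counts only hash gossip plus data transmission, so it matches the summaries above exactly.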
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
November 20, 2018, 03:22:02 PM
Last edit: November 20, 2018, 03:32:45 PM by mechanikalk
 #17


I like the idea of a central POW chain that everyone mines. And that off this central chain (the PRIME) you can run multiple side chains (a simplification of your hierarchical structure). The real issue for me is that POW side chains are insecure.. unless you can get a lot of miners to mine it. Check pointing the side chain blocks in the PRIME POW chain simply timestamps the insecure blocks securely ;p


The reason that the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains. Each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating the transactions that they receive via the child blocks and including those transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking each transaction, so the transactions are just as secure as if there were no child chains.

Let's, for example, imagine a fork in A (malicious or otherwise). The miners in B will have to decide which hashes from A to include when working on PRIME. If they include the "wrong"/invalid hash, they will be wasting work. Therefore, everyone is still explicitly voting on every transaction with 100% of the hash power.


Is there a way that every_chain can benefit from ALL the shared POW equally across all the different chains whilst only validating shards of the data ? Not yet.. me thinks. But THAT would be cool.


I think that the above is the way that every chain benefits from ALL the PoW while explicitly validating every transaction.

If we want to complicate the discussion: if I am a small miner working in A and mining PRIME, and I don't want to validate every transaction in B, I could choose to "trust" the block given to me. Why is this trust, but not blind trust? If I take it on faith, I may waste work by including that B block hash in PRIME if it turns out to be invalid. However, I am not really taking it on faith, because B is presenting to me half the work that goes into a PRIME block. It would therefore be very expensive for B to create a bad block that would ultimately be discovered and re-orged out of the chain. So, if I don't have the computing resources to justify the validation, I can "bet" that the "bet" made by B is good. In the case that it is not, a larger miner with greater economic incentive and resources will identify the bad block and not include its hash in PRIME.

This level of trust is significantly lower than the trust placed by anyone using a mining pool, or by a Bitcoin user who doesn't run a full node.
spartacusrex
Hero Member
*****
Offline Offline

Activity: 717
Merit: 533



View Profile
November 20, 2018, 03:53:59 PM
 #18

The reason that the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains. Each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating the transactions that they receive via the child blocks and including those transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking each transaction, so the transactions are just as secure as if there were no child chains.

no.

With your example of a PRIME chain plus 2 side chains: the miners of each side chain mine just their side chain and the PRIME chain. So yes, the PRIME chain has 100% of the hash, but the side chains have 50% each.

This means that any transactions about to be included in the PRIME chain have been _secured_ by the 50% of the hash in their side chain. Then you use 100% of the hash to secure that, in the PRIME chain. The weakest link in this scenario is the 50%-POW side chain; that is the security of the transactions in the side chain, not 100%.

You have secured the 50%-POW transactions in perpetuity with 100% of the POW.

Life is Code.
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
November 20, 2018, 04:26:43 PM
Last edit: November 20, 2018, 04:38:08 PM by mechanikalk
 #19

The reason that the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains. Each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating the transactions that they receive via the child blocks and including those transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking each transaction, so the transactions are just as secure as if there were no child chains.

no.

With your example of a PRIME chain plus 2 side chains: the miners of each side chain mine just their side chain and the PRIME chain. So yes, the PRIME chain has 100% of the hash, but the side chains have 50% each.

This means that any transactions about to be included in the PRIME chain have been _secured_ by the 50% of the hash in their side chain. Then you use 100% of the hash to secure that, in the PRIME chain. The weakest link in this scenario is the 50%-POW side chain; that is the security of the transactions in the side chain, not 100%.

You have secured the 50%-POW transactions in perpetuity with 100% of the POW.

Since the miners are validating all transactions in PRIME and including them in the PRIME block, 100% of the hash power voted for transactions that originated in either A or B.

Don't think about the child chains as consensus; rather, think about them as a mechanism for creating sets of transactions which can be propagated efficiently. This is what I call consistency: a consistent set of transactions which the network validates and performs work on to include in a block. Once everyone, i.e. all miners, has the transaction set, they check it and build a PRIME block.

All transactions are included in PRIME in some form of nested merkle tree. All transactions were validated by all miners, and voted on by all miners, when included in PRIME. Therefore there is no sharding of work. Let's say, for example, that A propagates a block with an invalid transaction. Miners in B will see this invalid transaction and refuse to include the hash in PRIME or to do any work on the transactions included in that block. By the rules of consensus, transactions are not actually valid until they have been included in PRIME.

Again, don't even think of the child chains as blockchains in their own right. Think of them only as a mechanism for aggregating transactions.

The reason that this is different from merge mining is that the consensus rules are consistent and checked across the hierarchy of all chains. This is not true in a case like Bitcoin and Namecoin.
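The BIP draft doesn't pin down the exact commitment layout, so purely as an illustration of the "nested merkle tree" idea: zone roots could roll up into region roots and those into a single PRIME root. A sketch (double SHA-256 as in Bitcoin; the roll-up layout here is a hypothetical reading, not the BIP's definitive format):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Double SHA-256, as Bitcoin uses for its merkle tree hashing.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list) -> bytes:
    """Plain merkle root; the last node is duplicated on odd levels."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Smallest 2x2 hierarchy: four zones roll up into two regions,
# whose roots roll up into a single PRIME commitment.
zone_roots = [merkle_root([h(b"tx%d" % i), h(b"tx%d" % (i + 1))])
              for i in range(0, 8, 2)]
region_roots = [merkle_root(zone_roots[:2]), merkle_root(zone_roots[2:])]
prime_root = merkle_root(region_roots)  # commits to every transaction above
```

With such a layout, a PRIME block header covers every transaction in every zone through a chain of merkle proofs.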
aliashraf
Hero Member
*****
Offline Offline

Activity: 994
Merit: 727

always remember the cause


View Profile WWW
November 20, 2018, 05:43:31 PM
 #20

As for your concern that a "significant volume of traffic is used passing the transaction hashes around" and your argument that the topology you propose improves on this: besides the ambiguities and threats involved in how this topology is to be dictated, I doubt it would be much help:

Let's work through an example. Let's use the following assumptions: a node has 100 peers, a transaction hash is 32 bytes, and a transaction is 250 bytes.

A transaction is broadcast to one of our node's peers. Our node sees this, requests the transaction data, and begins to broadcast the transaction hash to its peers. In the meantime, 49 of our node's peers also broadcast the same transaction hash to our node. This means that our node sends 50*32 bytes plus perhaps 2*250 bytes, and receives 50*32 bytes plus 250 bytes. (The 2 is just an assumption about the number of times we transmit the full transaction data; ideally, for all nodes, this would be >1, so 2 is pretty conservative.)

Summary: 1 transaction - Tx: 2100 bytes Rx: 1850 bytes

Now suppose transactions are shared with peers as a zone block containing 100 transactions. I will share a 32-byte hash of the block the same number of times, and transmit the full block of data, say, the same number of times. So our node sends 50*32 plus 2*250*100 bytes, and receives 50*32 plus 250*100 bytes.

Summary: 100 transactions - Tx: 51600 bytes Rx: 26600 bytes

If I normalize this, I get 516 Tx bytes/transaction and 266 Rx bytes/transaction.

This represents roughly 75% greater efficiency in transmit bandwidth usage and roughly 86% greater efficiency in receive bandwidth usage (1 - 516/2100 ≈ 0.75; 1 - 266/1850 ≈ 0.86).

This gets even better when aggregating into PRIME. In the smallest 2x2 hierarchy, region blocks would have roughly 1000 transactions per block. This means that at the PRIME level I would achieve a >98% reduction in the per-transaction hash-relay overhead, by aggregating transactions into region blocks before transmitting them to peers.
Sorry, but that is not correct. For transactions to be included in a block, they have to be "whispered" across the network and stored in each node's mempool beforehand. Your schema doesn't change anything about this; once a block is generated, nodes have to validate it, and if for some reason they have not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.

That being said, it is worth mentioning that the main challenge for scaling bitcoin is not the total bandwidth required for whispering transactions. Transactions occur with a random distribution in time and are naturally tolerant of being queued and delayed for a few seconds; in a hypothetical 10,000 tps situation, a moderate internet connection is enough for a node to semi-synchronize its mempool with its peers. We could initiate a BIP for such a situation to improve the transaction relay protocol in bitcoin's networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids meeting specific criteria, ...

The most critical problem with networking is the latency involved in whispering new blocks when nodes have no clue about some of the included transactions; it is the basis for the proximity-premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
November 21, 2018, 02:46:37 PM
 #21

Sorry, but that is not correct. For transactions to be included in a block, they have to be "whispered" across the network and stored in each node's mempool beforehand. Your schema doesn't change anything about this; once a block is generated, nodes have to validate it, and if for some reason they have not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.

Could you please be more specific about what you find wrong with the math in the example I gave above? The example gives a simplified version of how a transaction is "whispered" across the network, both currently and with BlockReduce. As for the mempool, I don't really think it affects bandwidth, because it essentially adds capacitance to the system, which can be ignored at steady state.

Quote
We could initiate a BIP for such a situation to improve the transaction relay protocol in bitcoin's networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids meeting specific criteria, ...

The most critical problem with networking is the latency involved in whispering new blocks when nodes have no clue about some of the included transactions; it is the basis for the proximity-premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.

I would like to work with you to initiate a BIP like this.  

I would propose that, if we aggregate transactions, we will need a way for nodes to determine when they should share groups of transactions. Ideally, the mechanism we come up with should be decentralized, so that no single node can withhold transactions for an inordinate amount of time. There should also be a way for the nodes that receive a group of transactions to verify that the criteria were met.

We could use something like a one-way mathematical function operating on an unpredictable data set that would periodically, stochastically meet some arbitrary criterion. When the criterion is met, the aggregating node could transmit the data to all of the other nodes, and the other nodes would then be able to verify easily that the criterion was met.

We could improve upon this further by creating some redundancy for the aggregating node. This could be accomplished by having a small group of highly connected nodes work together to aggregate a set of transactions amongst themselves before sharing it with the larger network.

Do you think something like this could work?
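As a purely hypothetical instance of the one-way-function criterion described above: hash the running batch of transaction ids and release the batch when the digest falls below a difficulty threshold, which any receiving node can re-verify. A sketch (the function names and the threshold value are illustrative, not part of any existing protocol):

```python
import hashlib

# Hypothetical release criterion: release the batch once its digest,
# read as an integer, falls below a threshold (~1 in 16 batches
# qualifies at this setting).
THRESHOLD = 2 ** 252

def batch_digest(txids) -> int:
    # Sort so every node computes the same digest regardless of the
    # order in which it heard the transactions.
    joined = b"".join(sorted(txids))
    return int.from_bytes(hashlib.sha256(joined).digest(), "big")

def should_release(txids) -> bool:
    """Stochastic and verifiable: the aggregator cannot predict which
    batch qualifies, and any receiver can re-check the test cheaply."""
    return batch_digest(txids) < THRESHOLD

# Aggregator loop sketch: append fresh txids, test after each addition,
# and broadcast the batch as soon as the criterion is met.
```

Because the digest is unpredictable, no single node can deliberately delay a qualifying batch without it being detectable once the batch is seen.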


aliashraf
Hero Member
*****
Offline Offline

Activity: 994
Merit: 727

always remember the cause


View Profile WWW
November 21, 2018, 06:38:57 PM
Merited by Welsh (4), xtraelv (1)
 #22

Sorry, but that is not correct. For transactions to be included in a block, they have to be "whispered" across the network and stored in each node's mempool beforehand. Your schema doesn't change anything about this; once a block is generated, nodes have to validate it, and if for some reason they have not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.
Could you please be more specific about what you find wrong with the math in the example I gave above? The example gives a simplified version of how a transaction is "whispered" across the network, both currently and with BlockReduce. As for the mempool, I don't really think it affects bandwidth, because it essentially adds capacitance to the system, which can be ignored at steady state.
Looking closer at your calculations, they compare bitcoin's transaction propagation with your schema's block reconciliation. These are two basically different things (and you are also exaggerating the situation with bitcoin). As I've argued before, both bitcoin's p2p layer and your schema need the same amount of effort and resources for raw transaction propagation, because the nodes are full nodes and you just can't dictate a predefined topology like a central authority or something.


We could initiate a BIP for such a situation to improve the transaction relay protocol in bitcoin's networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids meeting specific criteria, ...

The most critical problem with networking is the latency involved in whispering new blocks when nodes have no clue about some of the included transactions; it is the basis for the proximity-premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.
I would like to work with you to initiate a BIP like this.

I would propose that, if we aggregate transactions, we will need a way for nodes to determine when they should share groups of transactions. Ideally, the mechanism we come up with should be decentralized, so that no single node can withhold transactions for an inordinate amount of time. There should also be a way for the nodes that receive a group of transactions to verify that the criteria were met.

We could use something like a one-way mathematical function operating on an unpredictable data set that would periodically, stochastically meet some arbitrary criterion. When the criterion is met, the aggregating node could transmit the data to all of the other nodes, and the other nodes would then be able to verify easily that the criterion was met.

We could improve upon this further by creating some redundancy for the aggregating node. This could be accomplished by having a small group of highly connected nodes work together to aggregate a set of transactions amongst themselves before sharing it with the larger network.

Do you think something like this could work?
Well, I'm OK with collateral work and most of the ideas above, but not with dedicated nodes; it would be hard to implement an incentive mechanism for them. Right now, Greg Maxwell and others are working on Dandelion, a privacy-focused improvement to the transaction relay protocol in bitcoin. The good thing about this BIP is its strategy of delaying the transaction relay procedure for a while (to make it very hard, if not impossible, for surveillance services to track the sender's IP), and it looks set to be committed to the release code in the near future! It will be a great opportunity to do more processing (classification, aggregation, ...) while the transaction is queued waiting for relay.
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
November 23, 2018, 07:16:20 PM
Merited by Welsh (2)
 #23

Looking closer at your calculations, they compare bitcoin's transaction propagation with your schema's block reconciliation. These are two basically different things (and you are also exaggerating the situation with bitcoin).

It is not comparing propagation to block reconciliation; I understand how this could be confusing. The blocks found at the lowest level are the mechanism for sharing groups of transactions. That is why I am comparing Bitcoin's transaction propagation to BlockReduce's zone block propagation. Propagating transactions in groups in a whisper protocol is much more bandwidth efficient.

BlockReduce will be as inefficient as Bitcoin at the lowest-level zone groups. The efficiency is gained by aggregating transactions into groups via "zone blocks" and whispering them onward through the network as blocks of transactions.

As I've argued before, both bitcoin's p2p layer and your schema need the same amount of effort and resources for raw transaction propagation, because the nodes are full nodes and you just can't dictate a predefined topology like a central authority or something.

The structure for grouping and propagating transactions would not be dictated, but rather incentivized. Miners would be incentivized to find lower-level blocks, and the impact of network latency on using their hash power effectively would incentivize them to find low-latency zones to mine in. This would cause each zone group to have lower latency than the total network and to be able to process a higher TPS. For example, if there were 4 zones available to mine in, miners would roughly divide the world into 4 geographic zones. They wouldn't be well defined and would overlap geographically, but having 4 networks at the zone level would be much more performant than having a single network. The single network would still exist at the PRIME level; however, by the time the transactions make it to PRIME, they would be whispered in region blocks of 1000 transactions.

Well, I'm OK with collateral work and most of the ideas above, but not with dedicated nodes; it would be hard to implement an incentive mechanism for them.

The incentive mechanism would be giving some reward for the collateral work via merge-mining of zone and region blocks with PRIME.

Right now, Greg Maxwell and others are working on Dandelion, a privacy-focused improvement to the transaction relay protocol in bitcoin. The good thing about this BIP is its strategy of delaying the transaction relay procedure for a while (to make it very hard, if not impossible, for surveillance services to track the sender's IP), and it looks set to be committed to the release code in the near future! It will be a great opportunity to do more processing (classification, aggregation, ...) while the transaction is queued waiting for relay.

I have reviewed this proposal by Greg Maxwell. It is interesting from an anonymization standpoint. However, I did not see a mechanism by which transactions would be delayed and aggregated, but rather a routing schema that obfuscates the origin of a transaction.
coopex
Copper Member
Newbie
*
Offline Offline

Activity: 9
Merit: 3


View Profile
November 13, 2019, 10:51:51 PM
Merited by mechanikalk (3)
 #24

Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.
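Those counts can be tabulated directly (a small sketch of your formulas; the function name and the 4-zone/10-miner figures are illustrative):

```python
def latent_nodes(n_zones: int, m_miners: int) -> dict:
    """Counts from the example above: every full-state miner runs a node
    in every zone and region, but is only 'local' to one of each, so the
    remaining m - 1 per zone/region are latent."""
    n_regions = n_zones // 2
    per_zone = m_miners - 1      # latent nodes in each zone
    per_region = m_miners - 1    # latent nodes in each region
    return {
        "latent_zone_total": n_zones * per_zone,
        "latent_region_total": n_regions * per_region,
    }

# e.g. 4 zones and 10 full-state miners:
print(latent_nodes(4, 10))
# {'latent_zone_total': 36, 'latent_region_total': 18}
```

The totals grow linearly in both the miner count and the zone count, which is what makes the geographic-placement question above worth asking.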

Looking forward to hearing more! Thanks.
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875


Crypto-Games.net: Multiple coins, multiple games


View Profile
November 17, 2019, 07:49:18 AM
 #25

OP, I read the white paper. I'm not a technical person, but are you actually serious about your proposal? For Bitcoin as a hard fork?

ETFbitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 2207

Use SegWit and enjoy lower fees.


View Profile WWW
November 17, 2019, 06:45:22 PM
 #26

OP, I read the white paper. I'm not a technical person, but are you actually serious about your proposal? For Bitcoin as a hard fork?

With lots of breaking changes, I doubt a soft fork is possible.

P.S. With the amount of effort OP has put in, it's very obvious that he's serious.

Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875


Crypto-Games.net: Multiple coins, multiple games


View Profile
November 19, 2019, 06:57:29 AM
 #27

OP, I read the white paper. I'm not a technical person, but are you actually serious about your proposal? For Bitcoin as a hard fork?

With lots of breaking changes, I doubt a soft fork is possible.

P.S. With the amount of effort OP has put in, it's very obvious that he's serious.


You gave the OP merits for the topic. What's your opinion on this?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

Quote

Merge-Mining and Difficulty

A key aspect of BlockReduce is all nodes will mine at the zone, region, and PRIME level at the same time. The simultaneous mining of PRIME, regions, and zones allows BlockReduce to keep the entire network's proof-of-work on PRIME.


I already said I'm not a technical person (I'm actually the stupid one), but would the technical people/coders take this proposal seriously?

ETFbitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 2207

Use SegWit and enjoy lower fees.


View Profile WWW
November 19, 2019, 05:59:06 PM
 #28

You gave the OP merits for the topic. What's your opinion on this?

See #2 and #4

I already said I'm not a technical person (I'm actually the stupid one), but would the technical people/coders take this proposal seriously?

It depends on their ideology and their understanding of this proposal. But it's easy to imagine due to the hard fork and development complexity (mostly communication or data sharing between different kinds of nodes).

Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875


Crypto-Games.net: Multiple coins, multiple games


View Profile
November 20, 2019, 05:01:49 AM
 #29

You gave the OP merits for the topic. What's your opinion on this?

See #2 and #4

I already said I'm not a technical person (I'm actually the stupid one), but would the technical people/coders take this proposal seriously?

It depends on their ideology

The ideology?

Quote

and understanding about this proposal.


What about your own understanding of the proposal? Do you believe it's viable?

I'm trying to learn.

Quote

But it's easy to imagine due to the hard fork and development complexity (mostly communication or data sharing between different kinds of nodes)


It's "easy to imagine"? I'm confused.

ETFbitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 2207

Use SegWit and enjoy lower fees.


View Profile WWW
November 20, 2019, 06:20:24 PM
Last edit: November 21, 2019, 08:25:59 AM by ETFbitcoin
 #30

The ideology?

1. Left wing vs Right wing
2. Small block vs big block
3. On-chain vs off-chain
4. etc.

What about your own understanding of the proposal? Do you believe it's viable?

I'm trying to learn.

As far as I understand, the proposal is viable. But personally I don't like the idea of a full node storing only a specific subset of the information.

It's "easy to imagine"? I'm confused.

Many would reject it due to the complexity; at least that's what I think we'd hear if we were to ask the experts.

R.I.U. iol
Newbie
*
Offline Offline

Activity: 13
Merit: 1


View Profile
November 21, 2019, 06:40:36 AM
 #31

Is there any update on this? Is it still in development?
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
November 22, 2019, 10:31:53 AM
 #32

The ideology?

1. Left wing vs Right wing
2. Small block vs big block
3. On-chain vs off-chain
4. etc.


The debate should be technical, not political.

Quote

What about your understanding of the proposal? Do you believe it's viable?

I'm trying to learn.

As far as I understand, the proposal is viable. But personally I don't like the idea of a full node only storing a specific set of information.


Really? For Bitcoin?

Quote

It's "easy to imagine"? I'm confused.

Many would reject it due to the complexity; at least that's what I think we'd hear if we were to ask the experts.


Wouldn't more complexity widen the attack-vector on the network?

ETFbitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 2207

Use SegWit and enjoy lower fees.


View Profile WWW
November 22, 2019, 06:49:34 PM
Last edit: November 28, 2019, 07:17:48 PM by ETFbitcoin
 #33

The debate should be technical, not political.

History proves it's impossible to completely avoid politics in a debate.

Really? For Bitcoin?

Really? Yes
For Bitcoin? It depends, but most would say no

Wouldn't more complexity widen the attack-vector on the network?

Yes, it would.

Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
November 28, 2019, 11:48:15 AM
 #34

The debate should be technical, not political.

History proves it's impossible to completely avoid politics in a debate.


I'm lost. Include every quote for context please.

But for you, you said that you would like OP's idea depending on the "ideology". What about technically?

ETFbitcoin
Legendary
*
Offline Offline

Activity: 1918
Merit: 2207

Use SegWit and enjoy lower fees.


View Profile WWW
November 28, 2019, 07:23:14 PM
 #35

I'm lost. Include every quote for context please.

My bad, I did that because many users often complain about "pyramid quotes".

I'm lost. Include every quote for context please.

But for you, you said that you would like OP's idea depending on the "ideology". What about technically?

It's a vague question. Technically it's possible, but personally I wouldn't support the idea for Bitcoin.

Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
November 29, 2019, 05:37:45 AM
 #36

I'm lost. Include every quote for context please.

My bad, I did that because many users often complain about "pyramid quotes".

I'm lost. Include every quote for context please.

But for you, you said that you would like OP's idea depending on the "ideology". What about technically?

It's a vague question. Technically it's possible, but personally I wouldn't support the idea for Bitcoin.

I'm too stupid to know which ideas are viable or not. But if you ask me, the risks taken might not be worth the outcome.

gmaxwell, achow, comments?

But just for clarification, what is "3+ orders of magnitude"? 3 times more?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

Quote

BlockReduce presents a new blockchain topology that offers 3+ orders of magnitude improvement in transaction throughput while avoiding the introduction of hierarchical power structures and centralization.


tromp
Hero Member
*****
Offline Offline

Activity: 634
Merit: 553


View Profile
November 29, 2019, 10:30:05 AM
 #37

But just for clarification, what is "3+ orders of magnitude"? 3 times more?

No, 10^3 = 1000 times more. One order of magnitude is 10x.
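To make the arithmetic concrete, here is a quick Python sketch; the 7 tx/s baseline for Bitcoin is an illustrative assumption, not a figure from the paper:

```python
# One order of magnitude is a factor of 10, so "3+ orders" means >= 1000x.
baseline_tps = 7                     # assumed ballpark for Bitcoin today
orders = 3
scaled_tps = baseline_tps * 10 ** orders
print(f"{baseline_tps} tps x 10^{orders} = {scaled_tps} tps")  # 7 tps x 10^3 = 7000 tps
```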
coopex
Copper Member
Newbie
*
Offline Offline

Activity: 9
Merit: 3


View Profile
December 01, 2019, 06:14:54 AM
 #38

I'm lost. Include every quote for context please.

My bad, I did that because many users often complain about "pyramid quotes".

I'm lost. Include every quote for context please.

But for you, you said that you would like OP's idea depending on the "ideology". What about technically?

It's a vague question. Technically it's possible, but personally I wouldn't support the idea for Bitcoin.

I'm too stupid to know which ideas are viable or not. But if you ask me, the risks taken might not be worth the outcome.

gmaxwell, achow, comments?

But just for clarification, what is "3+ orders of magnitude"? 3 times more?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

Quote

BlockReduce presents a new blockchain topology that offers 3+ orders of magnitude improvement in transaction throughput while avoiding the introduction of hierarchical power structures and centralization.


I've read the paper and it certainly seems feasible. You can compare it to using MapReduce in a blockchain context. Of course the attack surface would increase, but that's just how it is with complex cryptographic protocols; it's not something a few security audits and a team of smart engineers can't fix (and a testnet, obviously). I don't think Bitcoin would take it on, though, because the changes would be too dramatic for the developers and the miners to accept; this probably needs to be a separate project, unfortunately.
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
December 01, 2019, 07:43:56 AM
 #39

But just for clarification, what is "3+ orders of magnitude"? 3 times more?

No, 10^3 = 1000 times more. One order of magnitude is 10x.


From your post-history, I see that you're a developer. What are your initial thoughts on OP's proposal?

tromp
Hero Member
*****
Offline Offline

Activity: 634
Merit: 553


View Profile
December 01, 2019, 06:05:29 PM
Merited by Wind_FURY (1)
 #40

From your post-history, I see that you're a developer. What are your initial thoughts on OP's proposal?

The author appears knowledgeable and this may well be a sensible approach to sharding.
But personally I'm not a fan of sharding and its associated complexity increase.
It should appeal to Ethereum more than Bitcoin...
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875


Crypto-Games.net: Multiple coins, multiple games


View Profile
December 02, 2019, 06:35:17 AM
 #41

From your post-history, I see that you're a developer. What are your initial thoughts on OP's proposal?

The author appears knowledgeable and this may well be a sensible approach to sharding.
But personally I'm not a fan of sharding and its associated complexity increase.
It should appeal to Ethereum more than Bitcoin...


This doesn't sound like "blockchain woo-woo" to you?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

Quote

BlockReduce is a new blockchain structure which only segments consistency, allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization.


I'm not trying to troll/criticize, I'm trying to debate/learn. Because from what I have been told, "sharding" doesn't scale the network out, but only gives the impression that it's scaling out.

tromp
Hero Member
*****
Offline Offline

Activity: 634
Merit: 553


View Profile
December 02, 2019, 09:25:01 AM
Merited by ETFbitcoin (1), Wind_FURY (1), mechanikalk (1)
 #42

This doesn't sound like "blockchain woo-woo" to you?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

I'm not trying to troll/criticize, I'm trying to debate/learn. Because from what I have been told, "sharding" doesn't scale the network out, but only gives the impression that it's scaling out.

I had to look up woo woo on wikipedia where it's said to be "a term used by magician and skeptic James Randi to denote paranormal, supernatural and occult claims". I see no such claims in BlockReduce :-)

The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to, and incentivized to, do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".

I don't see these tradeoffs as being acceptable to the Bitcoin community, but they might appeal to the Ethereum community.
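The quoted storage figure is easy to reproduce with back-of-the-envelope numbers; a quick sketch, assuming ~250 bytes per transaction and ~1,000 tx/s sustained (both assumed round numbers, not taken from the paper):

```python
# Chain growth = bytes per tx * tx per second * seconds per year.
BYTES_PER_TX = 250                   # assumed average transaction size
TPS = 1_000                          # assumed sustained throughput
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000

tb_per_year = BYTES_PER_TX * TPS * SECONDS_PER_YEAR / 1e12
print(f"~{tb_per_year:.1f} TB/year")  # ~7.9 TB/year
```

which lands right around the ~8 TB/year the paper cites.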
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 02, 2019, 02:37:13 PM
 #43


The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to, and incentivized to, do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".


Tromp, I appreciate the time that you have taken to look at BlockReduce. One thing that I would debate is the use of the word sharding. Although a miner can depend upon a zone block's work as an attestation to the correctness of the included transactions, they are not required to. Much like an SPV node doesn't have to keep the entire chainstate but rather just looks at a block header. This is not sharding per se, but rather a mode of operation that a node can work within to use fewer resources. I would anticipate that serious miners or pools will run and validate full state because they have an economic incentive to do so, while merchants will likely run partial state much like SPV.

Another way to think about BlockReduce is as a form of multi-level Erlay where "sketches" are sent when a zone or region block is found, rather than after an arbitrary delay. The obvious difference being that actual sub-blocks are found, which are rewarded to incentivize miners to self-organize in a network-optimal way.
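The merge-mined hierarchy can be illustrated with a toy sketch: a single hash is compared against progressively harder nested targets, so a sufficiently lucky zone solution also counts as a region or prime block. The target values and tier names here are hypothetical and purely illustrative, not taken from the BIP:

```python
import hashlib

# Hypothetical nested targets: prime is hardest, zone is easiest.
# A hash below the prime target is automatically below the region
# and zone targets too, which is what makes the merge-mining "free".
TARGETS = {"prime": 2 ** 244, "region": 2 ** 248, "zone": 2 ** 252}

def classify(header: bytes) -> list:
    """Return every tier whose difficulty target this header's hash satisfies."""
    h = int.from_bytes(hashlib.sha256(header).digest(), "big")
    return [tier for tier, target in TARGETS.items() if h < target]

# Scan nonces until one qualifies; at the 2^252 zone target roughly
# 1 in 16 hashes is a zone-level solution.
for nonce in range(100_000):
    tiers = classify(nonce.to_bytes(8, "big"))
    if tiers:
        print(f"nonce {nonce} qualifies for: {tiers}")
        break
```

The nesting means every region solution is also a valid zone block, mirroring how one PoW secures all levels of the hierarchy at once.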
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 02, 2019, 02:47:48 PM
Merited by coopex (1)
 #44

Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in the region blocks, they could also achieve greater certainty; however, they would get lower rewards. Running the alternate zone state allows them to have greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state, and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.
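Coopex's latent-node count from the quoted question can be written down directly; a small sketch of that arithmetic (n zones, n/2 regions, m full-state miners; the function name is mine, not from any spec):

```python
def latent_nodes(n_zones: int, m_miners: int) -> dict:
    """Count 'latent' (geographically out-of-place) nodes when each of
    m full-state miners runs a node in every zone and region, but only
    one node per tier sits in its proper location. Regions are assumed
    to number n/2, per the example in the question."""
    n_regions = n_zones // 2
    return {
        "latent_zone_nodes": n_zones * (m_miners - 1),
        "latent_region_nodes": n_regions * (m_miners - 1),
    }

print(latent_nodes(n_zones=16, m_miners=10))
# {'latent_zone_nodes': 144, 'latent_region_nodes': 72}
```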
coopex
Copper Member
Newbie
*
Offline Offline

Activity: 9
Merit: 3


View Profile
December 02, 2019, 08:27:01 PM
 #45

Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in the region blocks, they could also achieve greater certainty; however, they would get lower rewards. Running the alternate zone state allows them to have greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state, and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.

Thanks for your response! I see now that miners are incentivized to run all of their nodes in the least latent way possible. However, miners might not physically be able to do so without moving their mining operation outside of the zone that they operate in, unless they want to pay a cloud provider to host it for them - which may not work if the cloud provider does not offer server hosting close enough or with proper precision to the geographic location of the zone. Perhaps a business could evolve to host servers in close proximity to every zone and move them around when necessary, kind of like high frequency trading does with the stock market, but even then you'd have the business be a centralizing factor.

In any case, my question is more general. Do you consider it an issue if some of the nodes in a zone are more latent than others? Are there bandwidth concerns with users or miners who run latent nodes? What if I just have a really shitty internet connection - could I be causing bandwidth issues for the network, or am I just causing issues for myself?

Thank you for your responses!
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
December 03, 2019, 04:38:53 AM
 #46

This doesn't sound like "blockchain woo-woo" to you?

https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki

I'm not trying to troll/criticize, I'm trying to debate/learn. Because from what I have been told, "sharding" doesn't scale the network out, but only gives the impression that it's scaling out.

I had to look up woo woo on wikipedia where it's said to be "a term used by magician and skeptic James Randi to denote paranormal, supernatural and occult claims". I see no such claims in BlockReduce :-)

The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to, and incentivized to, do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to, and incentivized to, do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".


Tromp, I appreciate the time that you have taken to look at BlockReduce. One thing that I would debate is the use of the word sharding. Although a miner can depend upon a zone block's work as an attestation to the correctness of the included transactions, they are not required to. Much like an SPV node doesn't have to keep the entire chainstate but rather just looks at a block header. This is not sharding per se, but rather a mode of operation that a node can work within to use fewer resources. I would anticipate that serious miners or pools will run and validate full state because they have an economic incentive to do so, while merchants will likely run partial state much like SPV.


You don't believe that that will centralize Bitcoin toward the miners? Or you don't believe that users/economic majority should have the ability to run their own full nodes?

mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 06, 2019, 11:49:25 PM
 #47

Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in the region blocks, they could also achieve greater certainty; however, they would get lower rewards. Running the alternate zone state allows them to have greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state, and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.

Thanks for your response! I see now that miners are incentivized to run all of their nodes in the least latent way possible. However, miners might not physically be able to do so without moving their mining operation outside of the zone that they operate in, unless they want to pay a cloud provider to host it for them - which may not work if the cloud provider does not offer server hosting close enough or with proper precision to the geographic location of the zone. Perhaps a business could evolve to host servers in close proximity to every zone and move them around when necessary, kind of like high frequency trading does with the stock market, but even then you'd have the business be a centralizing factor.

In any case, my question is more general. Do you consider it an issue if some of the nodes in a zone are more latent than others? Are there bandwidth concerns with users or miners who run latent nodes? What if I just have a really shitty internet connection - could I be causing bandwidth issues for the network, or am I just causing issues for myself?

Thank you for your responses!

That is a pretty insightful question. However, the answer is pretty simple. With all distributed networks, from Napster to Bitcoin, you always have a seed-and-leech problem. In the case of Napster it is driven by storage space more than bandwidth; in the context of Bitcoin it is driven by bandwidth more than storage. Therefore, if you are a Bitcoin node that has low bandwidth you are slowing the overall network down (leech), whereas if you have high bandwidth you are speeding it up (seed). BlockReduce is no different from Bitcoin in this regard. However, BlockReduce rewards mining in the zone chains, creating an economic incentive for participants who are mining to optimize for latency.
mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 07, 2019, 12:07:37 AM
 #48


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements to run a node which does partial state validation would be much lower than if Bitcoin scaled in its current state. That would mean that although there may be fewer people validating full state, there would be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


You don't believe that that will centralize Bitcoin toward the miners? Or you don't believe that users/economic majority should have the ability to run their own full nodes?


I think that people oftentimes fall into tired narratives about the majority of users, fairness, et cetera without fully considering what any of it really means, or why it might be good or bad. I would argue that if Bitcoin is meant to be censorship-resistant and decentralized, it must allow the greatest number of people to use it with the fewest intermediaries possible. Making low-resource validation the primary focus of decentralization misses the point. If even 20% of a population self-custodied Bitcoin which they regularly used for transactions, it would be effectively impossible to censor or outlaw. When we discuss decentralization, the power of a network that scales should also be a consideration, not just how easily it is validated.
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
December 07, 2019, 05:42:35 AM
 #49


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements to run a node which does partial state validation would be much lower than if Bitcoin scaled in its current state. That would mean that although there may be fewer people validating full state, there would be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network means more partially validating nodes, but fewer fully validating nodes?

Quote


You don't believe that that will centralize Bitcoin toward the miners? Or you don't believe that users/economic majority should have the ability to run their own full nodes?


I think that people oftentimes fall into tired narratives about the majority of users, fairness, et cetera without fully considering what any of it really means, or why it might be good or bad. I would argue that if Bitcoin is meant to be censorship-resistant and decentralized, it must allow the greatest number of people to use it with the fewest intermediaries possible. Making low-resource validation the primary focus of decentralization misses the point. If even 20% of a population self-custodied Bitcoin which they regularly used for transactions, it would be effectively impossible to censor or outlaw. When we discuss decentralization, the power of a network that scales should also be a consideration, not just how easily it is validated.


That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986

Quote

Tromp, I appreciate the time that you have taken to look at BlockReduce. One thing that I would debate is the use of the word sharding. Although a miner can depend upon a zone block's work as an attestation to the correctness of the included transactions, they are not required to. Much like an SPV node doesn't have to keep the entire chainstate but rather just looks at a block header. This is not sharding per se, but rather a mode of operation that a node can work within to use fewer resources. I would anticipate that serious miners or pools will run and validate full state because they have an economic incentive to do so, while merchants will likely run partial state much like SPV.


mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 10, 2019, 12:31:42 AM
 #50


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that over 50% of Bitcoin's hashpower currently comes from only 4 pools.  As BlockReduce scales, the resource requirements for a node that does partial state validation would be much lower than if Bitcoin scaled in its current form.  That means that although there may be fewer people validating the full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is preferable to having a de minimis number of pools.  Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished by scaling throughput.  The requirement that all market participants be fully validating nodes is a flaw, not a virtue.  BlockReduce allows a larger number of incrementally more expensive ways of participating in the network while also scaling.  I think this is better than an all-or-nothing approach.  Additionally, when counting market participants you should consider Bitcoin users, in addition to nodes and miners, as a metric of success.



mda
Member
**
Offline Offline

Activity: 134
Merit: 10


View Profile
December 10, 2019, 04:05:34 AM
Last edit: December 10, 2019, 12:04:07 PM by mda
 #51

Here is another idea along these lines for you:

https://bitcointalk.org/index.php?topic=5109561.

It's basically a big package of altcoins with a built-in swapping mechanism, where linear growth of block size leads to exponential growth of throughput.
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
December 10, 2019, 11:28:37 AM
 #52


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that over 50% of Bitcoin's hashpower currently comes from only 4 pools.  As BlockReduce scales, the resource requirements for a node that does partial state validation would be much lower than if Bitcoin scaled in its current form.  That means that although there may be fewer people validating the full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is preferable to having a de minimis number of pools.  Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished by scaling throughput.


More participants partially validating, who won't be part of the whole network, and fewer participants fully validating is centralizing, making the network smaller. It is anti-scaling.

Quote

The requirement that all market participants be fully validating nodes is a flaw, not a virtue.  BlockReduce allows a larger number of incrementally more expensive ways of participating in the network while also scaling.


?

Growing node requirements/costs would only make the node count go down, not up. BlockReduce might increase transaction throughput, but it's centralizing.

mechanikalk
Member
**
Offline Offline

Activity: 91
Merit: 63


View Profile WWW
December 27, 2019, 11:33:03 PM
Merited by ETFbitcoin (4), Welsh (4)
 #53


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that over 50% of Bitcoin's hashpower currently comes from only 4 pools.  As BlockReduce scales, the resource requirements for a node that does partial state validation would be much lower than if Bitcoin scaled in its current form.  That means that although there may be fewer people validating the full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is preferable to having a de minimis number of pools.  Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished by scaling throughput.


More participants partially validating, who won't be part of the whole network, and fewer participants fully validating is centralizing, making the network smaller. It is anti-scaling.

I think you should consider the meaning of centralization more holistically.  If I can't go to 7-11 and buy a Coke with Bitcoin, it is not fully decentralized.  If I need to have third parties involved in a transaction, it is not fully decentralized.  If I need to use centralized exchanges to trade with good liquidity, it is not fully decentralized.  If it costs $200 to make a transaction, it is pricing out network participants and small transactions, which is not fully decentralized.

The number of people that use Bitcoin, not just the number of people running nodes, is critical in answering the question of whether it is decentralized.  Additionally, to have the largest network with the most participants (most decentralized...?), I would argue that Bitcoin needs to scale on-chain.


Growing node requirements/costs would only make the node count go down, not up. BlockReduce might increase transaction throughput, but it's centralizing.


If there are benefits such as a greater number of users and increased utility at a lower cost, the marginal degree of centralization (fewer fully validating nodes) may very well be worth it.  However, I would contend that with a larger user base, even if the cost of running a fully validating node increases, the absolute number of full nodes would likely go up, not down, even if the relative number shrinks.
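To put toy numbers on that claim (the figures below are illustrative assumptions of mine, not projections): if the user base grows faster than the share of users running full nodes shrinks, the absolute full-node count still rises.

```python
# Illustrative arithmetic only: user counts and full-node shares are made up.
def full_nodes(users: int, full_node_share: float) -> int:
    """Absolute full-node count given a user base and the share running one."""
    return int(users * full_node_share)

# Share of users running full nodes falls 10x, user base grows 100x:
before = full_nodes(1_000_000, 0.01)     # 1% of 1M users  -> 10,000 full nodes
after  = full_nodes(100_000_000, 0.001)  # 0.1% of 100M    -> 100,000 full nodes
print(before, after)  # 10000 100000
```

The relative number shrank by an order of magnitude, yet the absolute count grew by one; that is the shape of the argument above.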
Wind_FURY
Hero Member
*****
Offline Offline

Activity: 1372
Merit: 875




View Profile
January 02, 2020, 08:18:23 AM
 #54


Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that over 50% of Bitcoin's hashpower currently comes from only 4 pools.  As BlockReduce scales, the resource requirements for a node that does partial state validation would be much lower than if Bitcoin scaled in its current form.  That means that although there may be fewer people validating the full state, there will be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is preferable to having a de minimis number of pools.  Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network = more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and this is accomplished by scaling throughput.


More participants partially validating, who won't be part of the whole network, and fewer participants fully validating is centralizing, making the network smaller. It is anti-scaling.

I think you should consider the meaning of centralization more holistically.  If I can't go to 7-11 and buy a Coke with Bitcoin, it is not fully decentralized.  If I need to have third parties involved in a transaction, it is not fully decentralized.  If I need to use centralized exchanges to trade with good liquidity, it is not fully decentralized.  If it costs $200 to make a transaction, it is pricing out network participants and small transactions, which is not fully decentralized.

The number of people that use Bitcoin, not just the number of people running nodes, is critical in answering the question of whether it is decentralized.  Additionally, to have the largest network with the most participants (most decentralized...?), I would argue that Bitcoin needs to scale on-chain.


But if you're willing to decrease the number of nodes that are actually part of the network, that would be centralizing the protocol despite the number of users. That's scaling the network in, not out.

Then what are we here for? What's the point?

Quote


Growing node requirements/costs would only make the node count go down, not up. BlockReduce might increase transaction throughput, but it's centralizing.


If there are benefits such as a greater number of users and increased utility at a lower cost, the marginal degree of centralization (fewer fully validating nodes) may very well be worth it.


More decentralized = more secure. It's better to over-shoot security than under-shoot it.

Quote

However, I would contend that with a larger user base, even if the cost of running a fully validating node increases, the absolute number of full nodes would likely go up, not down, even if the relative number shrinks.


That does not make sense.

coopex
Copper Member
Newbie
*
Offline Offline

Activity: 9
Merit: 3


View Profile
January 18, 2020, 01:04:24 AM
 #55



However, I would contend that with a larger user base, even if the cost of running a fully validating node increases, the absolute number of full nodes would likely go up, not down, even if the relative number shrinks.


That does not make sense.

I think what he means is that with more people using the network, the number of full nodes running on the network will go up, but the ratio of full nodes to users will not increase. I don't see this as such a big issue if users are allowed to run whichever parts of the network they wish to interact with economically. For example, if I am geographically located in Region 1, Zone 2, I will run a node in Prime, Region 1, and Zone 2, as I do most of my commerce in those networks. Because data from a Zone is compressed when it moves into a Region (and Region data is compressed when it moves into Prime), the resources required for running the three nodes would not be that much higher than running a Bitcoin node. If you are a merchant, I'm guessing you would want to run nodes in multiple zones (to accept payment in those zones), so the hardware requirement there would be greater, but you have an economic incentive to do so.
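A rough back-of-the-envelope model of that cost argument. The 10% compression factor below is my own assumption for illustration, not a figure from the paper:

```python
# Back-of-the-envelope model (assumptions mine, not from the BlockReduce
# paper): a participant runs one Zone node plus the Region and Prime chains
# above it. Data is compressed at each step up the hierarchy, so the upper
# chains add only a fraction of the raw zone data rate.
def node_cost(zone_rate: float, compression: float = 0.1) -> float:
    """Total data rate for a Zone + Region + Prime node.

    zone_rate:   raw transaction data rate in one zone (e.g. MB/s)
    compression: fraction of data surviving each step up the hierarchy
                 (the 0.1 default is a made-up illustrative value)
    """
    region_rate = zone_rate * compression   # zone data summarized into region blocks
    prime_rate = region_rate * compression  # region data summarized into prime blocks
    return zone_rate + region_rate + prime_rate

print(node_cost(1.0))  # ~1.11: barely more than running the zone alone
```

Under these assumptions a merchant subscribing to three zones pays roughly three zone rates plus the shared region/prime overhead, which is the "incrementally more expensive" participation the OP describes.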

I have another question, OP. When a transaction moves from zone to zone, there is a 'state transition' that takes place, correct? The transaction would go from zone to region to prime and then to zone (assuming the two zones are not in the same region), so the zone that the transaction enters would need to have some state transition that is not necessarily in a block but is also reversible (I suppose this could show up in a block in the other zone). What happens if the transaction is reversed in Prime due to an orphan or re-org? Does the other zone chain need to re-org its UTXO set?
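To make the question concrete, here is a minimal sketch of what I mean by a reversible state transition. The undo log is one hypothetical mechanism; the names and structure are mine, not from the BlockReduce design:

```python
# Hypothetical sketch of the question (not the BlockReduce spec): each zone
# keeps an undo log so that if Prime orphans the block carrying a cross-zone
# transfer, the receiving zone can roll its UTXO set back.
class Zone:
    def __init__(self, name: str):
        self.name = name
        self.utxos = {}     # txid -> amount credited by inbound transfers
        self.undo_log = []  # txids of applied transfers, newest last

    def apply_transfer(self, txid: str, amount: int) -> None:
        """Credit an inbound cross-zone transfer once Prime confirms it."""
        self.utxos[txid] = amount
        self.undo_log.append(txid)

    def rollback(self, txid: str) -> None:
        """Reverse a transfer whose carrying block was re-orged out of Prime."""
        if txid in self.utxos:
            del self.utxos[txid]
            self.undo_log.remove(txid)

zone2 = Zone("Region 1 / Zone 2")
zone2.apply_transfer("tx_ab", 5)   # transfer arrives via region -> prime -> zone
zone2.rollback("tx_ab")            # Prime orphaned the carrying block
print(zone2.utxos)  # {}
```

The part I'm unsure about is who triggers `rollback` and how deep a Prime re-org the receiving zone must be prepared to unwind.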

Looking forward to hearing more, thanks.