Bitcoin Forum
Author Topic: Semi-Full Bitcoin Node. Downloading from ONLY pruned nodes.  (Read 471 times)
spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 08, 2018, 01:25:48 PM
#1

If a Semi-Full Bitcoin node only stored the complete UTXO set, the last month's worth of blocks in full, and the rest ONLY as block headers, how bad could it be? You'd still have a complete record of the POW & user balances, plus the last month of data in full. (I believe the coin CryptoNite does something similar.)
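As a rough back-of-envelope (a Python sketch; the chain height, average block size and UTXO-set size are ballpark 2018 assumptions, not measurements), the storage for the node described above works out to under 10 GB:

Code:
# Rough storage estimate for the "semi-full" node described above.
# All figures are 2018-era ballpark assumptions, not measurements.
HEADER_BYTES   = 80        # fixed size of a Bitcoin block header
BLOCKS_PER_DAY = 144       # ~one block every 10 minutes
CHAIN_HEIGHT   = 545_000   # approximate height in October 2018
AVG_BLOCK_MB   = 1.0       # assumed average full-block size
UTXO_SET_GB    = 3.0       # assumed size of the full UTXO set

headers_gb    = HEADER_BYTES * CHAIN_HEIGHT / 1e9
month_full_gb = BLOCKS_PER_DAY * 30 * AVG_BLOCK_MB / 1e3

total_gb = headers_gb + month_full_gb + UTXO_SET_GB
print(f"headers {headers_gb:.2f} GB + last month {month_full_gb:.1f} GB "
      f"+ UTXO set {UTXO_SET_GB} GB = ~{total_gb:.0f} GB")
# vs. roughly 200 GB for the full chain at the time.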

I know - if you don't have the whole chain, haven't verified every transaction since genesis - you cannot independently be sure that the whole chain is valid. But is this actually a serious threat?

A 51% attack could in theory print money - but that would need to go on for over a month (much more than a month actually at 51% only) and is easily recognisable. I just don't see it.

Are we saying there is even the remotest chance that a block from a month ago has any txn errors? With all the users that are currently running bitcoin having somehow _missed_ it? So why do I need to download and verify it?

I note that ETH is having a hard time at the moment because this is exactly what they have done, since their chain is so large.

As far as a pruned node is concerned - there is no loss in security from pruning data that it has verified itself. Once a node is up to date, it just has to keep up, whilst pruning aggressively, and it doesn't lose any security.

Connecting to a network of these kinds of nodes absolutely does not have the _same_ security as a full blown Bitcoin node, but it's not far off. And if it meant that many more people ran semi-full nodes, I think it could be a bonus.

A user would only need to log on at minimum once a month to catch up - before data was discarded. Seems squarely within the realms of possibility.

(I know you can do this already with Bitcoin - I am wondering if pruned nodes were the default install, and the majority of the nodes on the network used this, could the network still thrive)

Life is Code.
bob123
Legendary
Activity: 1624  Merit: 2481
October 08, 2018, 01:54:50 PM
Merited by mocacinno (1)
#2

Quote from: spartacusrex
A 51% attack could in theory print money - but that would need to go on for over a month (much more than a month actually at 51% only) and is easily recognisable. I just don't see it.

A malicious actor with more than 50% of the total hashrate can NOT 'print money'.

Someone with 51%+ of the hashrate can decide which transactions to include (which also means he can refuse to include any particular one).
He can also double-spend his own transactions (since he decides which TXs to include).

But he can NOT steal other people's money or create money out of nothing.



Quote from: spartacusrex
(I know you can do this already with Bitcoin - I am wondering if pruned nodes were the default install, and the majority of the nodes on the network used this, could the network still thrive)

AFAIK, pruning is NOT enabled by default.

As long as there are 'enough' full nodes which share the full historical data (there probably always will be), that's not a problem at all.

ranochigo
Legendary
Activity: 3038  Merit: 4420
October 08, 2018, 01:57:33 PM
#3

Quote from: spartacusrex
If a Semi-Full Bitcoin node only stored the complete UTXO, the last month's worth of blocks in full, and the rest ONLY as block headers, how bad could it be? You'd still have a complete record of the POW & user balances, and the last month of data complete.

Not very useful. The inability of the client to independently validate every block defeats the purpose of trustlessness in Bitcoin, and it makes the node no different from an SPV client.

Quote from: spartacusrex
I know - if you don't have the whole chain, haven't verified every transaction since genesis - you cannot independently be sure that the whole chain is valid. But is this actually a serious threat?

Yes. If you are not 100% sure of the information that you're fed, the only thing you can do is trust whoever provided it, which is risky.

Quote from: spartacusrex
A 51% attack could in theory print money - but that would need to go on for over a month (much more than a month actually at 51% only) and is easily recognisable. I just don't see it.

51% attacks can't print money out of thin air. The attacker still needs UTXOs to spend and must follow the network rules regarding block rewards. You simply have to overtake the other chain, and that wouldn't be noticeable at all until the attack is over; 51% attacks use long block reorgs to evade detection.

Quote from: spartacusrex
Are we saying there is even the remotest chance that a block from a month ago has any txn errors? With all the users that are currently running bitcoin having somehow _missed_ it? So why do I need to download and verify it?

Because you can't be sure that whoever you are connected to is not malicious.
Quote from: spartacusrex
Connecting to a network of these kinds of nodes absolutely does not have the _same_ security as a full blown Bitcoin node, but it's not far off. And if it meant that many more people ran semi-full nodes, I think it could be a bonus.

A user would only need to log on at minimum once a month to catch up - before data was discarded. Seems squarely within the realms of possibility.

(I know you can do this already with Bitcoin - I am wondering if pruned nodes were the default install, and the majority of the nodes on the network used this, could the network still thrive)

It will definitely not thrive. It is simply not possible for the network to run only on pruned nodes. Without full nodes that keep all the data, it would be impossible for any node to retrieve the exact transaction data for any transaction back in time. If this continued, the problem would only get worse: it would be inherently difficult for anyone to prove that they made a transaction 2 years ago, should a contract last that long.


Unless, as Bob said, there is enough redundancy among the historical nodes. But it is hard for those nodes to reach sufficient redundancy when few people want to run them, and those who do face a high cost.

HeRetiK
Legendary
Activity: 3122  Merit: 2178
October 08, 2018, 02:17:12 PM
Merited by ABCbits (1)
#4

The impression I get is that people either decide to run a full node on purpose or just go straight for an SPV wallet. Running a "semi-full" node (e.g. Bitcoin Core with pruning enabled) seems to be the exception. Accordingly I doubt that providing the ability to run a semi-full node would increase the overall node count much. However I'm just extrapolating from anecdotal observations without having anything substantial to back this claim up, so don't take my word for it.

I think the problem at hand is that the fewer full nodes there are, the more traffic each needs to bear. This in turn makes running a full node even harder, causing more full nodes to drop off, further increasing the traffic on the remaining nodes until only a handful of very costly full nodes are left. And every new pruned node that comes online needs these full nodes to bootstrap, without which it can't even become a semi-full node.

spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 08, 2018, 02:25:13 PM
Last edit: October 08, 2018, 02:35:35 PM by spartacusrex
#5

Thanks for the input.

1) It is possible to print money in a 51% attack if other users don't have the full history: the 51% attacker outruns the whole chain by more than the month that everyone does store, so that NO-ONE has the history. Then you can do what you like. Not very likely, I agree.. (outrunning a month at 51% takes years - see the quick arithmetic after this list).

2) You can still verify the longest chain via POW even with this maximal pruning. It is not a blind trust-the-peer situation.

3) A peer cannot tamper / alter / change the data he is providing you - because.. hashing! Either it is the chain data or it isn't. At that stage I would simply go with the chain with the most POW.

4) The man-in-the-middle attack - where the attacker cuts me off from the valid network, so I only see their chain - is a concern even without the pruning.

5) As long as you keep up with the network, logging in once a month, you have the _same_ security as non-pruned Bitcoin - as you still validate the whole chain.
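The arithmetic behind point 1, as promised - an idealised sketch that assumes constant total hashrate and ignores difficulty adjustment:

Code:
# How long a 51% attacker needs to secretly rewrite more than one
# month of history, under the idealised assumptions above.
p = 0.51                           # attacker's share of total hashrate
blocks_per_day = 144
head_start = 30 * blocks_per_day   # the month of blocks everyone keeps

# The attacker's lead grows by (p - (1-p)) of daily block production.
lead_gain_per_day = (2 * p - 1) * blocks_per_day

days = head_start / lead_gain_per_day
print(f"catching up {head_start} blocks takes ~{days:.0f} days "
      f"(~{days / 365:.1f} years)")   # ~1500 days, ~4.1 years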

I think what this does is change the requirements for the network from "everyone needs a big hard drive" to "everyone needs a small hard drive and to log in once a month". Fair enough.

I agree that if you miss your 1-month window you'll need to place trust in the longest POW chain, but that seems like a given anyway.

------------------------

EDIT : Transactions from years back would still be available, as you could provide the Merkle proofs linking them to the block headers along with the original data. You'd have to store them yourself though.
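For illustration, a minimal Python sketch of checking such a proof against the Merkle root committed in a block header. The (sibling, sibling-is-right) proof format is a hypothetical one chosen for this example, not a standard wire format:

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_proof(txid: bytes, proof, merkle_root: bytes) -> bool:
    """Walk a Merkle branch from a txid up to a header's root.
    `proof` is a list of (sibling_hash, sibling_is_right) pairs,
    ordered from the leaf to the root. Hashes are in internal byte
    order (not the reversed hex that block explorers display)."""
    h = txid
    for sibling, sibling_is_right in proof:
        pair = h + sibling if sibling_is_right else sibling + h
        h = dsha256(pair)
    return h == merkle_root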

Life is Code.
aliashraf
Legendary
Activity: 1456  Merit: 1175
October 08, 2018, 07:48:28 PM
#6

Discussing a HashCash-like improvement for bitcoin, I brought it up as a necessary step:

Quote from: aliashraf
...  I'm thinking of a hybrid approach by giving space to wallets for participating in consensus without eliminating block miners. So many radical changes would be necessary for this to happen, on top of them getting rid of blockchain bloat and spv wallets, interchangeability of fee and work, defining total work of a block in a more general way, ....

SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely, and unlike OP I don't believe in a "semi-full node" replacement either. What he suggests, snapshotting the UTXO, is the key to this agenda.

Using such a snapshot has been proposed by many people before, and mostly ignored because it was considered one of those "dangerous" proposals that need a hard fork to be implemented - and in this weird community, bitcoin, hard forking is cursed, ... long story.

@eurekafag, AFAIK, is the first person who said something about it, in July 2010(!); he used the term snapshotting (which is why I used it above, to show my respect). The topic got no attention, but another user, @Bytecoin, rephrased it two days later and posted a more comprehensive proposal.

Satoshi Nakamoto was still around and never commented on it, Gavin Andresen didn't get it, neither did @Theymos, ... just 2½ pages of non-productive discussion. Obviously in mid-2010 there were few blocks, few UTXOs, and many other problems and priorities.

Almost a year later, in July 2011, Gregory Maxwell made a contribution to this subject. He proposed what was later termed UTXO Commitment: it was the Merkle era, people were excited about the magical power of Merkle trees, and Maxwell proposed that full nodes maintain a Merkle hash tree of the UTXO set, enabling them to locate an unspent output efficiently, while miners include the root of the tree in the coinbase transaction (later others proposed including it directly in the block header). This way, 'lite clients' would be able to ask for proof that any tx input is committed to the UTXO Merkle root included in recent blocks.
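A minimal sketch of that mechanism, assuming an illustrative leaf serialisation and ordering (the proposal never fixed a wire format): full nodes compute a Merkle root over the sorted UTXO set, and that root is what a miner would embed in the coinbase.

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def utxo_commitment_root(utxos: dict) -> bytes:
    """Merkle root over a UTXO set, in the spirit of Maxwell's proposal.
    `utxos` maps (txid, vout) -> serialized output. The sort order and
    leaf format are assumptions made for this sketch."""
    leaves = [dsha256(txid + vout.to_bytes(4, "little") + out)
              for (txid, vout), out in sorted(utxos.items())]
    if not leaves:
        return b"\x00" * 32
    while len(leaves) > 1:
        if len(leaves) % 2:             # Bitcoin-style: duplicate last node
            leaves.append(leaves[-1])
        leaves = [dsha256(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]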

Basically, Maxwell's proposal needs a hard fork because full nodes MUST validate the UTXO Merkle root once it is provided:

Quote from: Gregory Maxwell
What if the coinbase TXN included the merkle root for a tree over all open transactions, and this was required by the network to be accurate if it is provided.
'A hard fork?! Better to forget about it, or at most put it, with all due respect, on the long hard-fork wish list' - it was, and still is, how proposals get handled in the bitcoin community. A few replies, again non-productive, and Maxwell's proposal gained no more steam.

In August 2012, Andrew Miller published a concrete proposal (and reference implementation) for a Merkle tree of unspent outputs (UTXOs) on bitcointalk: again, no serious discussion.
Andrew explicitly described his proposal as one which "belongs to Hardfork Wishlist".

Peter Todd went further and proposed TXO Commitments, by which he meant committing the Merkle hash root of the state to each transaction; he also introduced a new concept, 'delayed commitment', which is a key feature, imo.

I hate this hard-fork phobia in bitcoin. bcash was not bad because it was a hard fork; it was bad because of the wrong technical direction they chose, imo. But I agree that a hard fork is not a decision a community should make very frequently, and if there is a way to avoid one without too many sacrifices, it is better avoided.

So the question is not whether OP's idea is good (of course it is); the question is whether it could be implemented without a hard fork.
DooMAD
Legendary
Activity: 3948  Merit: 3191
October 08, 2018, 08:17:14 PM
#7

Quote from: aliashraf
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely

Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc.? This seems to be a common theme with you. Is there anything you'd leave intact in the horrific scenario where it was left up to you to decide these things?

aliashraf
Legendary
Activity: 1456  Merit: 1175
October 08, 2018, 08:24:17 PM
#8

Quote from: aliashraf
SPV wallets constitute the most stupid part of bitcoin. They should be eliminated completely

Quote from: DooMAD
Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc.? This seems to be a common theme with you. Is there anything you'd leave intact if it were up to you?

Not much. With the exception of free will (why should you mention this?), the other ones are pure garbage - plus SPV wallets. But fortunately for you and other respected "investors", it is not up to me and you can sell your shit to people. Oh wait, you can't anymore? Sorry, but it is not my fault.
DooMAD
Legendary
Activity: 3948  Merit: 3191
October 08, 2018, 08:50:02 PM
Last edit: October 08, 2018, 09:00:24 PM by DooMAD
#9

Quote from: DooMAD
Is that before or after you eliminate ASICs, mining pools, off-chain development, free will, etc.? This seems to be a common theme with you. Is there anything you'd leave intact if it were up to you?

Quote from: aliashraf
Not much, with the exception of free will (why should you mention this?)

Because most of the "improvements" you propose for Bitcoin involve depriving people of their right to do something they already do. You think you can just ban all the things you don't like, as though you were some sort of dictator. That's not progress, that's oppression - generally considered the opposite of progress. It's also a mentality which is largely impotent in a permissionless system, so good luck with that.

spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 08, 2018, 10:00:43 PM
#10


Gentlemen - Please stop. Thank you.

--------------------------------------------------------------

Quote from: aliashraf
[...] So the question is not whether OP's idea is good (of course it is); the question is whether it could be implemented without a hard fork.

Good breakdown.

I would also add Bram Cohen's UTXO Merkle Set proposal: https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

It uses 1 bit per txn output to store spent or unspent. It's super simple and gives a 256x space advantage over a regular list of 32-byte hashes, and you provide the proofs yourself when you want to spend (unlike with an MMR, the proofs don't change, but they are bigger).

( I was using a system where I stored using Bram's scheme first and then MMR for later, but ended up going with just MMR. )
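A toy sketch of the bookkeeping (a Python int stands in for the bitfield; Bram's actual structure is a hashed Merkle set, so this shows only the 1-bit-per-output accounting, not his data structure):

Code:
class SpentBitfield:
    """One bit per output, numbered in creation order: 0 = unspent,
    1 = spent. 1 bit vs a 32-byte (256-bit) hash per output is where
    the 256x space advantage comes from."""

    def __init__(self):
        self.bits = 0     # bit i set  =>  output i is spent
        self.count = 0    # total outputs ever created

    def add_output(self) -> int:
        """Register a newly created output, returning its index."""
        idx = self.count
        self.count += 1
        return idx

    def spend(self, idx: int) -> None:
        self.bits |= 1 << idx

    def is_unspent(self, idx: int) -> bool:
        return not (self.bits >> idx) & 1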
 

Life is Code.
aliashraf
Legendary
Activity: 1456  Merit: 1175
October 08, 2018, 10:31:15 PM
Last edit: October 09, 2018, 04:35:37 AM by aliashraf
#11

Quote from: spartacusrex
Good breakdown.

I would also add Bram Cohen's UTXO Merkle Set proposal: https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

[...] ( I was using a system where I stored using Bram's scheme first and then MMR for later, but ended up going with just MMR. )
I do agree that MMR is the most powerful platform for implementing UTXO commitments. Actually, I have been investigating the subject for a while and have a couple more points to share, but for the time being I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking, because I've got one  Wink
spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 09, 2018, 09:00:05 AM
#12

Quote from: aliashraf
I do agree that MMR is the most powerful platform for implementing UTXO commitments. Actually, I have been investigating the subject for a while and have a couple more points to share, but for the time being I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking, because I've got one  Wink

lol.. I'm ashamed to say I hadn't actually thought about that bit..  I suppose there are the usual suspects for doing it softly-softly.. you could either stuff it in the coinbase - or in an OP_RETURN in the first transaction of the block...  I have a feeling your method will be more cunning.

Would a block definitely be considered invalid if the commitment was wrong or missing? (I should think yes.) But maybe users of the scheme could craft specific transactions that they share with each other only.. via the blocks.. and we don't have to fork at all.

-----------------

What I am more curious about is a solution for storing the old pruned block data.. in a distributed way. With all these file-store coins (I'll be honest, I am not 100% up on how they function), would it not be possible for the network to store JUST this one large file.. ?

Life is Code.
HeRetiK
Legendary
Activity: 3122  Merit: 2178
October 09, 2018, 09:58:02 AM
#13

Quote from: spartacusrex
What I am more curious about is a solution for storing the old pruned block data.. in a distributed way. With all these file-store coins (I'll be honest, I am not 100% up on how they function), would it not be possible for the network to store JUST this one large file.. ?

Nice thinking.

The challenge being that storage coins expect to be paid for their services.

That is, miners (or whatever the terminology is for users providing storage space) expect to receive a fee, usually in the form of the respective native token. Who'd pay for that? We'd be back to relying on people voluntarily hosting a full node, but with extra steps involved. The effective cost of hosting a full node in terms of bandwidth and hard disk space stays the same, and would likewise increase the fewer nodes are involved (in this case, storage coin nodes responsible for hosting the blockchain).

aliashraf
Legendary
Activity: 1456  Merit: 1175
October 09, 2018, 04:51:19 PM
Last edit: October 09, 2018, 07:26:39 PM by aliashraf
Merited by bones261 (2)
#14

Quote from: aliashraf
[...] I'm curious whether you have any idea how we could avoid a hard fork for this? Just asking, because I've got one  Wink

Quote from: spartacusrex
lol.. I'm ashamed to say I hadn't actually thought about that bit..  I suppose there are the usual suspects for doing it softly-softly.. you could either stuff it in the coinbase - or in an OP_RETURN in the first transaction of the block...  I have a feeling your method will be more cunning.
Sure it is.  Cheesy

Quote from: spartacusrex
Would a block definitely be considered invalid if the commitment was wrong or missing? (I should think yes.)

No, you shouldn't. And here is the trick:
The very interesting point about UTXO commitments is their potential to be 'delayed' (and yes, I'm borrowing this term from Peter Todd), i.e. you have time to decide about their validity. To be more precise, you MUST wait for a minimum number of commitments before you start pruning the history, right? Otherwise you run the risk of being caught in a short-to-middle-range double-spend attack with no bridges behind you: once committed to a snapshot, you will reject any (implied) rewrite request that goes beyond it, because behind it there is no history and no genesis.

You should wait for something like 10,000 commitments, imo. Once you reach that threshold you are ready to get rid of the history, because it takes something like 8 billion dollars (nowadays) to rewrite the Bitcoin blockchain over that range - and that is why UTXO commitment works after all.

Another interesting issue here is your free will: you could choose a more realistic and effective strategy, pruning after 1,000 blocks, once you are confidently sure that nobody will commit a 1-billion-dollar attack against the network, yeah?

Now we are ready to walk through the algorithm (it is the alpha version, published for the first time; feel free to suggest improvements), which I purposely call Soft Delayed Utxo Commitment:

 
Soft Delayed Utxo Commitment
1- A SDUC-compatible node takes a snapshot of the UTXO set every 1,000 blocks and generates a Merkle root for the set, using a deterministic method that supports insertions and deletions of items, such that the most recent snapshot and the last 1,000 blocks always generate the same new snapshot as if it had been generated from the previous snapshot (if any) and the last 2,000 blocks.

2- A SDUC node is configurable (and queryable) for the number of commitments it needs before committing permanently and irreversibly to a UTXO snapshot via its Merkle root. It is never allowed to be less than 1,000 commitments.

3- We define the number of commitments for a UTXO snapshot as the number of blocks that have embedded a commit to its Merkle root.

4- SDUC mining nodes commit to a UTXO snapshot by embedding its Merkle hash root in the coinbase transaction of their blocks as a special input. They are free to commit to as many UTXO Merkle roots as they wish (by introducing more special inputs in the coinbase), but these should be stacked properly, with the last UTXO Merkle root being interpreted as a reference to the state of the system after the block numbered floor(#BlockHeight/1000), the next item below it referring to the state at floor(#BlockHeight/2000), and so on.

5- In the networking layer, we add proper message formats for SDUC nodes to identify each other and consolidate their chains and snapshots.

6- SDUC nodes bootstrap in a different way compared to legacy nodes:

  • phase 1: the SDUC node acts like an SPV node and downloads block headers.
  • phase 2: the SDUC node spots at least one SDUC peer that conforms to its expected number of commitments, i.e. the peer has proof of the desired number_of_commitments for each UTXO commitment it presents that the bootstrapping node is interested in. Thereafter it consolidates its chain with the peer, top-down, down to (at least) the first completely confirmed UTXO snapshot - confirmed from the bootstrapping SDUC node's point of view, obviously.

7- SDUC nodes are free to ignore all the history beyond their most recent UTXO snapshot which is held confirmed, by virtue of blocks carrying the configured number_of_commitments for that specific snapshot.
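A minimal sketch of the commitment-counting core of the algorithm (steps 2, 3 and 7), assuming each block hands the node the stack of snapshot roots described in step 4. Names and structures are illustrative, not a wire format:

Code:
from collections import Counter

SNAPSHOT_INTERVAL = 1_000   # a UTXO snapshot every 1,000 blocks (step 1)

class SducNode:
    def __init__(self, number_of_commitments: int = 10_000):
        # Step 2: configurable, but never below 1,000 commitments.
        if number_of_commitments < 1_000:
            raise ValueError("number_of_commitments must be >= 1,000")
        self.required = number_of_commitments
        self.commit_count = Counter()   # snapshot root -> committing blocks

    def on_block(self, committed_roots):
        """Step 3: tally every snapshot root this block commits to
        (the stack of special coinbase inputs from step 4)."""
        for root in committed_roots:
            self.commit_count[root] += 1

    def prunable_snapshots(self):
        """Step 7: snapshots confirmed deeply enough that the history
        behind them can be discarded."""
        return [root for root, n in self.commit_count.items()
                if n >= self.required]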

------------------------

Implementation is straightforward, but for such a BIP to be adopted in Bitcoin Core (or any other bitcoin client) there are two options:
1- Forking the source code and building an alternative client.
Generally I hate this approach, and as much as I'm not satisfied with the conservative atmosphere among Core devs, I strongly believe in keeping development centralized and open. But it would not be an issue if it were not about miners: you need at least a percentage of them (like 5%, I suppose) running the client software, and you just can't bring a piece of software out of nowhere and ask miners to run it, given the amount of stake they have put in their business.

2- The hard but more promising way: convincing Doomad and cellard not to ruin this topic and having a productive discussion, cooling down Gregory Maxwell and convincing him to contribute instead of arguing or showing no interest, formalizing a BIP, working hard on the implementation details, testing, and praying for the BIP to be merged.

At the end of the day we have a soft-soft migration path: SDUC nodes grow smoothly, without any conflict or chain split, because every node is in a sense an SDUC node. Legacy nodes can be interpreted as simply having the number_of_commitments parameter set to a very large number - as preferred by owners with enough resources to stick with the whole blockchain history - while a growing number of nodes that need more robust and efficient management of their resources use more reasonable values. They could coexist in peace for a very long time, even forever.

Quote from: spartacusrex
but maybe users of the scheme could craft specific transactions that they share with each other only.. via the blocks.. and we don't have to fork at all.
kinda ... you are super smart dude, we should hang out a bit more  Wink

Quote from: spartacusrex
What I am more curious about is a solution for storing the old pruned block data.. in a distributed way. With all these file-store coins (I'll be honest, I am not 100% up on how they function), would it not be possible for the network to store JUST this one large file.. ?
Not a big deal, imo. There will always be nodes with a very large number_of_commitments set, so we are super safe. For a hypothetical scenario in which we run short of such nodes, your solution works: nothing will be lost, and anybody would be able to rebuild the blockchain from the ground up, perhaps using special software.
spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 09, 2018, 10:25:00 PM
#15

Nice.

I hadn't thought too much about how to do it with a soft fork, and had been banking on just hard-forking in the best I could come up with..

Sooo.. with that in mind - this is the version I have settled on after much playing. (I need to think more about yours..)

I started with the delayed commitment - (do you mean ..INSERT and UPDATE of items.. ?) - where you pick a step counter that starts a new epoch, here once every 1000 blocks. But you always get into difficulties at the boundaries, and re-orgs can bounce you from one MMR root to another, making providing proofs slightly more complex (you just send both/all), among other vagaries.

You not only embed the root hash of the MMR into the block, you add all the MMR peaks, so that you have all the information required to add data as well as update it.

After a while I realised that the delay was complicating some matters and not helping in others.

What I _actually_ wanted was much simpler: I want the current MMR state embedded in every block. Real-time.

Much better. Every block HAS to commit to the correct MMR, so that (block n-1 MMR) + (block n txns) = (block n MMR), or it's invalid. Everyone DOES agree anyway - an ordered UTXO set is the same for everyone - so now the miners have to commit to it.

I use an overlapping set of the last 50 MMR states, blocktime-ordered, reconstructing up-to-date proofs for inputs to check txn validity, given an original MMR proof from the user that references any of the previous 50.. works well..
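A toy MMR along these lines (a sketch, not the implementation referred to above): append works like binary addition over perfect subtrees, and the peaks are "bagged" into the single root a block would commit to.

Code:
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class MMR:
    """Tiny Merkle Mountain Range: peaks[i] holds the root of a
    perfect subtree of 2**i leaves, or None if that slot is empty."""

    def __init__(self):
        self.peaks = []

    def append(self, leaf: bytes) -> None:
        """Add a leaf; merging equal-sized trees is binary addition."""
        carry, i = h(leaf), 0
        while True:
            if i == len(self.peaks):
                self.peaks.append(carry)
                return
            if self.peaks[i] is None:
                self.peaks[i] = carry
                return
            carry = h(self.peaks[i] + carry)   # merge two 2**i trees
            self.peaks[i] = None
            i += 1

    def root(self) -> bytes:
        """Bag the peaks into the single hash a block would embed."""
        acc = b""
        for p in self.peaks:
            if p is not None:
                acc = h(p + acc) if acc else p
        return acc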

Life is Code.
aliashraf
Legendary
Activity: 1456  Merit: 1175
October 09, 2018, 10:52:46 PM
Last edit: October 10, 2018, 05:47:32 AM by aliashraf
#16

You don't need to refresh the UTXO commitment for every single block - why should you?

Suppose you have 2 recent UTXO snapshots, say as of 2,500 and 1,500 blocks below the current height. You might just use either of them (plus the blocks after it) to decide about the validity of a txn; it is up to you, depending on how well-confirmed you expect a given UTXO Merkle root to be and how many blocks have actually committed to it .... that simple!

Many people have suggested committing to the latest state in each block, which is not helpful at all. Actually it would be a distraction from what we are looking for: fast bootstrapping and the elimination of SPV wallets, by having full nodes with the least possible amount of resources.

Committing to the latest state (whether in txns or in blocks) would be useful for validation purposes, but not for pruning. Here we can delay commitment by as much as a few thousand blocks, because we can afford to maintain that many blocks on any commodity device.

Please note that committing to the same stack of UTXO Merkle roots is crucial for the algorithm, because it is how commitments accumulate and the snapshots become consolidated.

I suggest we finalize this issue before proceeding any further, if you don't mind.
spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 10, 2018, 10:02:27 AM
#17

I already have an implementation written and functioning as part of a larger system. It works well.

Actually there are many benefits to having it real-time. A couple:

1) You end up needing the information all the time anyway. Might as well calculate it once, as a one-time hit when creating a block, and let everyone use it ever more, rather than constantly re-evaluating from x blocks back.

2) You can validate and participate in the network without needing any information other than the longest chain of block headers. The next block can be validated entirely from its own information and the information embedded in the header of the previous block.

3) Re-org MMR calculations are simple.

There are more..

I think, intuitively, you want each block to commit to the current MMR state rather than some delayed commitment. That makes each block far more useful: a straight state machine, where the next block of data relies only on the previous one and no other extraneous information.
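Point 2 in miniature (a sketch reusing the toy MMR class from reply #15 above; the header layout is an assumption, and spent-flag updates are ignored for brevity): with the previous block's peaks embedded in its header, a header-only node can recompute and check the next block's commitment from that block's own data alone.

Code:
def validate_block(prev_peaks, new_outputs, claimed_peaks) -> bool:
    """Check (block n-1 MMR) + (block n txns) = (block n MMR),
    using only state carried in the previous block's header."""
    mmr = MMR()                      # toy MMR class from reply #15
    mmr.peaks = list(prev_peaks)     # state restored from the header
    for out in new_outputs:          # apply this block's new outputs
        mmr.append(out)
    return mmr.peaks == claimed_peaks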

Life is Code.
aliashraf
Legendary
Activity: 1456  Merit: 1175
October 10, 2018, 01:53:40 PM
Last edit: October 12, 2018, 08:29:07 AM by aliashraf
#18

I'm afraid you are being distracted from the cause, as I mentioned above:
I understand that real-time MMR refresh is low-cost and (more importantly) fun from a coding point of view - I like it too - but it is not the protocol we desperately need, and it would be a pain to make it soft, if that is possible at all.

A peaceful transition requires spontaneous UTXO commitments to be re-committed hundreds of times to be viable as a replacement for the history. It is true that nodes could draw conclusions from such aggregated, continuous UTXO commits, but it is not an elegant choice to make.

And it is useless:
In legacy blockchains, by committing to the previous block, miners are committing to the UTXO set as well - what's the point of a spare commitment?

Actually it is bad practice:
It is well known that redundancy puts an information system at risk of inconsistency. Imagine a simple general-ledger system that, for every single transaction, also stores the balances of the accounts involved: it is possible, but not recommended, as any designer/programmer is aware.

I maintain that we don't need a rolling UTXO schema for the purpose of pruning, and should instead focus on consolidating one snapshot every few thousand blocks.
spartacusrex (OP)
Hero Member
Activity: 718  Merit: 545
October 15, 2018, 10:25:47 AM
#19

Quote from: aliashraf
..fun from a coding point of view

This is very true..


Quote from: aliashraf
And it is useless:
In legacy blockchains, by committing to the previous block, miners are committing to the UTXO set as well - what's the point of a spare commitment?

Actually it is bad practice:
It is well known that redundancy puts an information system at risk of inconsistency.

I agree that if you had all the transactions, the MMR commitment per block would be 'spare', since you could always work it out anyway - but in this particular system you do not always have the transactions, and the MMR commitment in the block header cannot be reproduced from a list of block headers alone. By adding it, you can start validating blocks immediately, with just the header list. So it is not redundant, as it adds an ability that wasn't there before. Whether or not you think it is a useful ability is another point.

-----------------------------

Actually - I am thinking that a system like this will HAVE to be used at some point.. Are you expected to validate a 1000-year chain of transactions if you want to sync a full node 1000 years from now? That would take years (and it is already impossible to sync certain chains). Validating the longest header chain via POW would still be easy, though.

.. Clearly 1000 years is a long way off  Tongue .. but 10 to 20 years isn't. And that could already be too much.

Life is Code.
aliashraf
Legendary
Activity: 1456  Merit: 1175
October 15, 2018, 04:52:33 PM
Last edit: January 22, 2019, 07:45:24 PM by aliashraf
#20

Quote from: spartacusrex
[...] So it is not redundant, as it adds an ability that wasn't there before. Whether or not you think it is a useful ability is another point.
Who talked about block headers? Oh ... it was me, sorry - but that was about fresh bootstrapping. When an SDUC node starts fresh, it needs to find the chain with the most work, hence it downloads headers and queries coinbase txns top-down to find the most recent UTXO snapshot it can rely on. Thereafter it should query and download whole blocks.

I've prepared an illustration (not reproduced here): UTXO snapshots are generated every 1,000 blocks. Legacy miners who don't support SDUC remain silent, but SDUC miners commit to as many previous snapshots as they can. Depending on the ratio of SDUC-compatible miners, the snapshots become consolidated enough to be considered a replacement for the history down to the genesis block. By "enough" I mean the security level a node chooses deliberately: every 10,000 commitments buys roughly 1 billion dollars of security as of this writing.