Bitcoin Forum
Topic: stupid question: why not move transactions outside blocks? (Read 167 times)
wpwrak
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 30, 2017, 02:19:48 PM
 #1

When hearing about scalability issues in Bitcoin and others, a common theme is the limited block capacity. What I immediately thought of is "why not replace the transactions with hashes?", i.e., going from about 250 bytes to maybe 32; and if that's not enough, one could use Merkle trees. The actual transaction data would travel separately, and mempool synchronization would have to be made tighter.

Now, I'm sure that I'm not the first to think of such an approach. Given that I've never heard such a thing mentioned, it must have been discussed and rejected early on. I would like to find out what problems were found with this kind of approach.

Would someone have a pointer to that discussion or a summary?

Thanks,
- Werner
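To make the size arithmetic concrete, here is a rough Python sketch of the idea. The 250-byte payloads are placeholders, not real transactions; only the double-SHA256 txid construction matches Bitcoin's actual rule:

```python
import hashlib

def txid(raw_tx: bytes) -> bytes:
    # Bitcoin txids are the double SHA-256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

# Placeholder "transactions" of roughly typical size (~250 bytes each).
txs = [i.to_bytes(2, "big") * 125 for i in range(1000)]

full_size = sum(len(tx) for tx in txs)        # what blocks carry today
hash_size = sum(len(txid(tx)) for tx in txs)  # 32 bytes per transaction

print(full_size, hash_size)  # 250000 vs 32000: roughly an 8x reduction
```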
Xylber
Full Member
***
Offline Offline

Activity: 294
Merit: 103


ES/FR/EN translator


View Profile WWW
December 30, 2017, 02:38:09 PM
 #2

Not sure, but isn't that the way the Lightning Network works?

Crypto_mastermind
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
December 30, 2017, 04:09:37 PM
 #3

Yeah, I'm pretty sure that's how the Raiden network works (correct me if I'm wrong), and that's just one approach. So people are indeed looking at this as a solution to limited block sizes on any blockchain.
HeRetiK
Hero Member
*****
Offline Offline

Activity: 952
Merit: 846


the forkings will continue until morale improves


View Profile
December 30, 2017, 05:14:01 PM
 #4

Quote from: wpwrak on December 30, 2017, 02:19:48 PM
When hearing about scalability issues in Bitcoin and others, a common theme is the limited block capacity. What I immediately thought of is "why not replace the transactions with hashes?" [...] I would like to find out what problems were found with this kind of approach.

Maybe I misunderstand you, but what you describe sounds pretty close to what SegWit is already doing:

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki



Not sure, but isn't that the way the Lightning Network works?

No, LN is using Bitcoin's smart contract capabilities to create what is basically a separate ledger between LN nodes.


wpwrak
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 30, 2017, 05:16:20 PM
 #5

Not sure, but isn't that the way the Lightning Network works?

I'd picture Lightning more like an account shared by two parties, where deposits and withdrawals are costly, but how you split what's in the account is up to the two parties involved, and nobody else needs to know about anything but the final balance.

Lightning then provides tools to manage trust issues and to let you bridge between multiple such shared accounts, forming a network.

So Lightning reduces the number of transactions that are visible on the blockchain. What I've described should be simpler and largely orthogonal: it would allow growing the amount of information covered by a block without increasing the block size. That information does of course still have to live somewhere, so nodes would handle just as many transactions as with a direct block size increase; the difference is that no huge chunk of data has to move around at the moment transactions are accepted into the blockchain, since that data would already be in the mempool.

- Werner

coinmachina
Jr. Member
*
Offline Offline

Activity: 30
Merit: 10


View Profile
December 30, 2017, 06:05:47 PM
 #6

The actual transaction data would travel separately and mempool synchronization would have to be made tighter.

The actual transaction data would still have to reach every node in the network.

The reason we aren't simply using 1 GB blocks is that the network needs to synchronize when a new block is found. If the blocks are too large this cannot be done fast enough, leading to a higher stale block rate.

Now, if a miner wants to verify that a block is valid, he needs the actual transaction data; otherwise he could not check whether the transactions in the block are valid (i.e., no double spends, valid signatures). Therefore, even if the blocks only contain hashes of transactions, the actual transactions being added to the blockchain by that block would still need to be synchronized among nodes. So effectively you do not gain anything in terms of efficiency with your approach.
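That point can be sketched in a few lines of Python (the names and data structures here are purely illustrative, not any real node's API): a block of bare hashes can only be validated once every hash is resolved to full transaction data, so anything missing from the local mempool still has to be downloaded.

```python
def validate_hash_block(block_hashes, mempool):
    """Resolve a hash-only block against locally known transactions.

    mempool maps txid -> full transaction data. Anything unknown must be
    fetched from peers before signatures or double spends can be checked,
    so the transfer of transaction data is deferred, not avoided.
    """
    missing = [h for h in block_hashes if h not in mempool]
    if missing:
        return ("fetch", missing)          # must download these first
    return ("verify", [mempool[h] for h in block_hashes])

mempool = {"aa": "tx-data-a", "bb": "tx-data-b"}
print(validate_hash_block(["aa", "bb"], mempool))  # ('verify', ['tx-data-a', 'tx-data-b'])
print(validate_hash_block(["aa", "cc"], mempool))  # ('fetch', ['cc'])
```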
wpwrak
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 30, 2017, 06:36:25 PM
 #7

The actual transaction data would still have to reach every node in the network.

Yes, but it should already be in the mempool if it's verified close to the time of mining, shouldn't it?

Then it would have to be stored along with the blockchain (i.e., this doesn't help if the amount of persistent storage is an issue), you'd need some mechanism to update nodes that don't have that data, and you'd have to handle cases where, say, a new block overtakes a transaction referenced by it. But I think the "tip" of global activity should generally not need much extra work.

- Werner
wpwrak
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
December 30, 2017, 06:50:54 PM
 #8

Maybe I misunderstand you, but what you describe sounds pretty close to what SegWit is already doing:

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

Hmm, I'm still struggling to understand what exactly SegWit is :-) The BIP touches a lot of issues and suggests many future developments, so I'm not sure how much of what SegWit is intended to do is actually implemented.

One explanation I found basically describes it as an accounting trick: you move part of the data in the block to a different place and declare the bytes there to be smaller than bytes elsewhere, which allows you to grow blocks a little without exceeding the 1 MB limit. But, if I understand this right, the transactions would still be part of the same block.

If SegWit allows most of the transaction data to travel through a different channel, then it would indeed be what I've been looking for.

Thanks !

- Werner
Quickseller
Copper Member
Legendary
*
Offline Offline

Activity: 1624
Merit: 1219

in 2 min-groin injury, dildo on field, & 6-9 score


View Profile WWW
December 31, 2017, 02:10:35 AM
 #9

Maybe I misunderstand you, but what you describe sounds pretty close to what SegWit is already doing:

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki
SegWit involves storing signatures of transactions separate from the transactions, and outside of blocks.

With SegWit, you can still determine the UTXO set by downloading the blockchain. With what the OP is proposing, the transactions would still have to be stored somewhere, even if outside the blocks; it would not be realistic to not store them at all.

lionelho
Full Member
***
Offline Offline

Activity: 136
Merit: 100



View Profile
December 31, 2017, 03:56:41 AM
 #10

Quote from: wpwrak on December 30, 2017, 06:50:54 PM
Hmm, I'm still struggling to understand what exactly SegWit is :-) [...] If SegWit allows most of the transaction data to travel through a different channel, then it would indeed be what I've been looking for.
I think it mostly comes down to efficiency. Your approach still needs to download the actual transaction data during the verification phase. LN does not need to put and verify every small transaction on the main blockchain, hence it's more scalable and efficient.

pebwindkraft
Full Member
***
Offline Offline

Activity: 258
Merit: 240


View Profile
December 31, 2017, 09:38:54 AM
 #11

Quote
SegWit involves storing signatures of transactions separate from the transactions, and outside of blocks.

I'm trying to follow what you say. I'm not talking about data representation (I fully understand that SegWit txs are presented to non-SegWit nodes in a special format).
I am more interested in what happens at the byte level when only SegWit nodes/clients are involved.
When I look at a SegWit tx, I can see the signatures at the end, as in this example:

Code:
01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

decoding to:

Code:
VERSION 01000000
 SEGWIT (BIP141): this is a segwit tx, marker=00
        (BIP141): flag=01
 TX_IN COUNT [var_int]: hex=02, decimal=2
  V_IN ... FFFFFFFF
 TX_OUT COUNT, hex=02, decimal=2
  V_OUT ...
 WITNESS TXIN[1] stack elements: hex=02, decimal=2
  WITNESS data[0]:  47304402203609E17B84F6A7D30C80BFA610B5B4542F32A8A0D5447A12FB1366D7F01CC44A0220573A954C4518331561406F90300E8F3358F51928D43C212A8CAED02DE67EEBEE01
  WITNESS data[1]: 21025476C2E83188368DA1FF3E292E7ACAFCDB3566BB0AD253F62FC70F07AEEE6357
LOCK_TIME 11000000

From my perspective I see that the location of the scriptSig is used differently and that the witness data has moved to the end of the transaction structure. Which leads me to the conclusion that the signature data is still "inside" the transaction?

Also, when you say "outside of blocks", I do not understand what that means. If it's not in the block, how can the tx be verified by miners?
I read about the concept of "extended" blocks, but have not grasped it (yet). Is this maybe an abstraction layer in the code? In the end, I am looking at the raw transactions and their bits and bytes, aka how they appear in the block chain... What am I missing?
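For what it's worth, the serialization question can be answered mechanically: under the BIP 144 wire format, a SegWit transaction is recognizable from the two bytes right after the version field, since a legacy transaction could never have an input count of zero there. A minimal Python check, applied to just the first bytes of prefixes like the hex above:

```python
def is_segwit_tx(raw: bytes) -> bool:
    # After the 4-byte version, BIP 144 puts marker 0x00 and flag 0x01;
    # a legacy tx would have its input count there, which is never 0.
    return len(raw) > 6 and raw[4] == 0x00 and raw[5] == 0x01

segwit_prefix = bytes.fromhex("01000000000102")  # version, marker, flag, #inputs
legacy_prefix = bytes.fromhex("01000000020000")  # version, #inputs = 2, ...

print(is_segwit_tx(segwit_prefix), is_segwit_tx(legacy_prefix))  # True False
```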
coinmachina
Jr. Member
*
Offline Offline

Activity: 30
Merit: 10


View Profile
December 31, 2017, 10:07:02 AM
 #12

The actual transaction data would still have to reach every node in the network.

Yes, but it should already be in the mempool if it's verified close to the time of mining, shouldn't it?

I think so.


Then it would have to be stored along with the blockchain (i.e., this doesn't help if the amount of persistent storage is an issue) and you'd need some mechanism to update nodes that don't have that data, plus you need to handle cases where, say, a new block overtakes a transaction referenced by it, but I think the "tip" of global activity should generally not need much extra work.

- Werner

But what I don't see is how using hashes is any better than simply increasing the block size. You can make use of the fact that most miners already have the transactions of the new block in their mempool even if you don't use hashes. And as far as I know that is already being done at the moment.



cdb1690
Full Member
***
Offline Offline

Activity: 266
Merit: 100


View Profile
December 31, 2017, 10:49:43 AM
 #13

Quote from: wpwrak on December 30, 2017, 06:36:25 PM
[...] Then it would have to be stored along with the blockchain (i.e., this doesn't help if the amount of persistent storage is an issue) [...]
The amount of persistent storage is not as big an issue as network bandwidth is. As a full node, you'll have to download every new block and every transaction, as well as send them to your peers. Bigger blocks = more network bandwidth consumed. Make the blocks big enough (= several MB) or the block interval short enough, and it could mean terabytes of network traffic per month.
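A back-of-the-envelope calculation supports that claim (all figures below are illustrative assumptions, not measurements):

```python
block_size_mb = 32     # a hypothetical "big enough" block
blocks_per_day = 144   # one block roughly every 10 minutes
relay_factor = 8       # a full node also re-sends blocks to several peers
days = 30

mb_per_month = block_size_mb * blocks_per_day * relay_factor * days
print(mb_per_month / 1024 / 1024, "TB/month")  # ~1.05 TB for block relay alone
```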

Anti-Cen
Member
**
Offline Offline

Activity: 210
Merit: 26

High fees = low BTC price


View Profile
December 31, 2017, 09:17:09 PM
 #14

Not the world's worst plan, but having a pointer in the BC pointing to a file holding the header and block would result in more disk seeks, and you have still got to read the block to see whether coins in it relate to the current transactions.

The amount of persistent storage is not as big of an issue as network bandwidth is. As a full node, you'll have to download every new block and every transaction as well as send them to your peers. Bigger blocks = more network bandwidth consumed. Make the blocks big enough (= several MBs) or block interval short enough and it could be terrabytes of network traffic per month.

Off the bat a few megs of data seems small, but all those sync messages soon add up. Let's face it, the structure of the block-chain is at fault here: having 20,000 nodes all replicating gigs of data each day was never going to end well, and it was known years ago that it would not scale.

Tweaking the block size or timing won't fix the system long term, but as a quick fix for now an increase in the size would seem like an obvious move; otherwise the silly transaction fees will resolve the problem for them, because everyone will leave BTC and dump it.

Let's face it, our horse only has three legs.

Mining is CPU-wars and Intel, AMD like it nearly as much as big oil likes miners wasting electricity. Is this what mankind has come too.
Anti-Cen
Member
**
Offline Offline

Activity: 210
Merit: 26

High fees = low BTC price


View Profile
December 31, 2017, 10:15:31 PM
 #15

No, LN is using Bitcoin's smart contract capabilities to create what is basically a separate ledger between LN nodes.

Smart contracts like ETH uses to support other alt-coins: yes, I understand you now, but what's held in these LN ledgers is not the original coin, so it must be fake BTC or IOUs, like I was saying.

These special LN nodes, which I take it are not full nodes, must have the power to move coins out of our wallet held on the main BTC block-chain during settlement, so how can they do that without having our private keys?

Paying money in is no trouble, but unless our new special wallets are programmed to automatically settle up at the end of the month, how do we pay the balance/bill (a month late)? Or do we need to deposit BTC at one of these banks... err... sorry, hubs, as working cash flow, if no one is giving us IOUs?

Of course, in a trustless system no one is going to let you spend over your agreed spending limit; unless, well, unless they can charge you interest and rename it a transaction fee in the process.

Are you sure these hubs are not mini banks with counterparty risk, hidden under the BTC network, with some kind of, dare I say it, trust relationship between client wallets and the other LN nodes or sub-branches?

If I owe $100 in the LN network, then a constant check could be made on my main balance to ensure that I have the money in the main account to cover the debt, and somehow that money could be locked if I try to move it, I guess. Yes, I think I am getting close here! Little to no counterparty risk in that case for the LN nodes, no cash deposit needed, and the LN nodes will always balance down to zero, so no funny money is flying around.

Who's paying the $45 transaction fee to the miners during settlement at the end of the month? Or do we get some type of special discount as compensation for this fix, even if we only spend $1 using LN? Because that gets us back to something that feels like a bank charging us fixed fees on the account.

Interesting solution to a problem that should not exist. So maybe we get a new "settle up" button that uses the channel to our trusted hub, and we get to decide ourselves how often we press it, and get charged in the process.

Feels like the LN nodes are really full nodes (makes sense) but are running an extension that isn't really a smart contract (buzzword) at all and won't really commit anything to the main BC, so I would be safe buying my weed each week using such a service!

It will need a stitch-up between LN nodes and full BTC nodes (might be the same machine or even process), but adding locks to wallets might not be a bad plan, and something I can live with. So how close are we now?



 

achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 1610
Merit: 1799


bc1qshxkrpe4arppq89fpzm6c0tpdvx5cfkve2c8kl


View Profile WWW
January 01, 2018, 06:29:52 AM
 #16

In what way would this help? Full nodes still have to download all of the data; they still have to download and process all of the blocks and transactions. Your scheme would mean that a full node actually downloads more data, as it would also have to download the block full of hashes. Furthermore, such a solution would still require a hard fork; I don't see a way that this could be soft forked in like SegWit. So it really doesn't do anything to help: it has the same effect as increasing the maximum block size, but does so in a much less efficient way.

What your idea would be good for is reducing the transmission times of blocks: a data structure that is like a block but contains only the txids. The actual transactions are then pulled from the mempool and the block reconstructed. We actually already have such a scheme: BIP 152 compact blocks.
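A simplified sketch of that reconstruction idea (real BIP 152 derives 6-byte short ids with SipHash keyed per block; plain truncation is used here only to keep the sketch self-contained):

```python
def short_id(txid: bytes) -> bytes:
    # Stand-in for BIP 152's SipHash-based short transaction id.
    return txid[:6]

def reconstruct_block(short_ids, mempool):
    """Rebuild a block announced as short ids from the local mempool.

    Returns the recovered transactions plus the indices of any that are
    unknown and would have to be requested (getblocktxn in BIP 152).
    """
    by_short = {short_id(t): tx for t, tx in mempool.items()}
    recovered, missing = [], []
    for i, sid in enumerate(short_ids):
        if sid in by_short:
            recovered.append(by_short[sid])
        else:
            missing.append(i)
    return recovered, missing

mempool = {b"a" * 32: "txA", b"b" * 32: "txB"}
ids = [short_id(b"a" * 32), short_id(b"c" * 32)]
print(reconstruct_block(ids, mempool))  # (['txA'], [1])
```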
