Bitcoin Forum

Bitcoin => Bitcoin Discussion => Topic started by: adamstgBit on September 11, 2015, 03:08:47 PM



Title: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 11, 2015, 03:08:47 PM
Put the block limit at 1GB.
Have miners send blocks using encoding (a new 1GB block needs only ~4MB to be propagated).
Requires full nodes and miners to sit behind a >2Mbps internet connection with unlimited bandwidth.
And there you have it: bitcoin may now include 3 million TX per block. Is that worth a little centralization?
How much centralization would this really require?
Not much!

First off, the centralization would happen VERY SLOWLY, because obviously bitcoin isn't suddenly going to experience a 100,000X increase in transactions overnight.

Second, once / if bitcoin does reach close to 5000 TPS, the requirements for miners and full nodes aren't very high; we're still talking about home-grade computers with home-grade internet being able to handle this.

Third, ATM miners and full nodes are already somewhat centralized, so it's likely that these minimal requirements won't affect anyone at all. China might be affected once TX volume goes up 8X from what it is today; Chinese people may be required to run their full nodes outside of China. So what? They will still be able to mine and they won't give a shit; only you give a shit, because you're scared of a gigabyte.
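For reference, the arithmetic behind the title's numbers, assuming an average transaction size of roughly 300 bytes (an assumption for illustration, not a protocol constant):

```python
# Back-of-envelope math for "3 million TX per block" at a 1GB limit.
# The ~300-byte average transaction size is an assumption.
BLOCK_LIMIT_BYTES = 1_000_000_000  # proposed 1GB block limit
AVG_TX_BYTES = 300                 # assumed average transaction size
BLOCK_INTERVAL_S = 600             # bitcoin's 10-minute target interval

txs_per_block = BLOCK_LIMIT_BYTES // AVG_TX_BYTES
tps = txs_per_block / BLOCK_INTERVAL_S

print(f"{txs_per_block:,} transactions per block")  # 3,333,333
print(f"~{tps:,.0f} TPS")                           # ~5,556
```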


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: LiteCoinGuy on September 11, 2015, 06:08:53 PM
but you would kill Blockstream's business model with this plan - you bastard!


/s


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: mallard on September 11, 2015, 06:57:32 PM
Have miners send blocks using encoding (a new 1GB block needs only ~4MB to be propagated)

Maybe you could be a bit more specific with this?
Or maybe you could be a bit more specific with your whole post.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 11, 2015, 07:04:36 PM
Have miners send blocks using encoding (a new 1GB block needs only ~4MB to be propagated)

Maybe you could be a bit more specific with this?
Or maybe you could be a bit more specific with your whole post.

This is what I mean.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


I could be more specific, but I just want the general idea in the OP.
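To illustrate the idea in the quote above, here is a toy sketch (hypothetical wire format, not the actual relay network protocol): instead of the full block, the sender transmits short identifiers that the receiver resolves against its own mempool, fetching only the transactions it is missing.

```python
import hashlib

def short_id(tx: bytes) -> bytes:
    """Truncated hash used as a compact reference to an already-known transaction."""
    return hashlib.sha256(tx).digest()[:6]

def encode_block(txs):
    """Sender side: replace each transaction with a 6-byte short id."""
    return [short_id(tx) for tx in txs]

def decode_block(ids, mempool):
    """Receiver side: rebuild the block from the mempool, noting unknown ids."""
    known = {short_id(tx): tx for tx in mempool}
    rebuilt, missing = [], []
    for sid in ids:
        if sid in known:
            rebuilt.append(known[sid])
        else:
            missing.append(sid)  # would be requested from a peer
    return rebuilt, missing

block_txs = [b"tx-alpha", b"tx-beta", b"tx-gamma"]
mempool = [b"tx-alpha", b"tx-beta", b"tx-unrelated"]  # receiver never saw tx-gamma

ids = encode_block(block_txs)                 # 18 bytes on the wire, not the full block
rebuilt, missing = decode_block(ids, mempool)
```

The coding gain comes from the ratio of full transaction size to id size; the "250x" figure quoted above refers to the real relay network, not this toy.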



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: brg444 on September 11, 2015, 07:18:09 PM
What would we do without your genius?

Scrap this weekend's Scaling Bitcoin conference, you solved it all for us!!



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 11, 2015, 07:56:00 PM
What would we do without your genius?

Scrap this weekend's Scaling Bitcoin conference, you solved it all for us!!


You're welcome.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: runpaint on September 11, 2015, 11:25:39 PM
Has anyone ever talked about compressing the blocks?


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 11, 2015, 11:27:23 PM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 11, 2015, 11:31:02 PM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


This isn't the standard way of propagating new blocks, but it could be...

Using this method, miners need only be able to keep up with transactions as they happen and keep them in their mempool;
communicating the contents of a new block is trivial.

Even with a home internet connection you could gather TXs at a rate of ~5000 TPS.
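A quick sanity check on that rate (again assuming ~300-byte transactions, an approximation):

```python
# Sustained bandwidth needed just to receive transactions at 5000 TPS.
# The ~300-byte average transaction size is an assumption.
TPS = 5000
AVG_TX_BYTES = 300

bytes_per_second = TPS * AVG_TX_BYTES          # 1,500,000 B/s
mbit_per_second = bytes_per_second * 8 / 1e6   # 12.0 Mbit/s

print(f"{mbit_per_second} Mbit/s sustained, before any protocol overhead")
```

So "a home connection" here means comfortably above 12 Mbit/s down, before counting upload for relaying to peers.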



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: brg444 on September 11, 2015, 11:39:08 PM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed. 

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.   


That's not compression.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 11, 2015, 11:41:37 PM
What's the best place to read a summary of the Corallo relay network?


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: brg444 on September 11, 2015, 11:45:54 PM
What's the best place to read a summary of the Corallo relay network?

http://bitcoinrelaynetwork.org/


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 11, 2015, 11:46:50 PM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


That's not compression.

You're right, but still, it's 250 times faster than sending out the whole block.

The network could be optimized further by compressing each TX before sending it out on the network.

Might not get much coding gain trying to "zip" a TX; it's just ~300 fairly random bytes.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: brg444 on September 11, 2015, 11:54:24 PM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


That's not compression.

You're right, but still, it's 250 times faster than sending out the whole block.

The network could be optimized further by compressing each TX before sending it out on the network.

Might not get much coding gain trying to "zip" a TX; it's just ~300 fairly random bytes.

All of this doesn't avoid the necessity for the network to handle the full weight of these blocks. It's a transmission method and nothing more.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 12:00:35 AM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


That's not compression.

You're right, but still, it's 250 times faster than sending out the whole block.

The network could be optimized further by compressing each TX before sending it out on the network.

Might not get much coding gain trying to "zip" a TX; it's just ~300 fairly random bytes.

All of this doesn't avoid the necessity for the network to handle the full weight of these blocks. It's a transmission method and nothing more.

agreed

What we don't agree on is what an acceptable max weight for a block is.

I think it's close to 500MB, maybe 1GB.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: brg444 on September 12, 2015, 12:04:06 AM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


That's not compression.

You're right, but still, it's 250 times faster than sending out the whole block.

The network could be optimized further by compressing each TX before sending it out on the network.

Might not get much coding gain trying to "zip" a TX; it's just ~300 fairly random bytes.

All of this doesn't avoid the necessity for the network to handle the full weight of these blocks. It's a transmission method and nothing more.

agreed

What we don't agree on is what an acceptable max weight for a block is.

I think it's close to 500MB, maybe 1GB.

Do you figure you're smarter than everyone, or that all of us are retarded?

No one agrees on any size, but you're certainly the only one coming up with that type of numbers. Maybe you'd like to consider that you simply don't understand the issue well enough?


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 12, 2015, 12:32:53 AM
If bitcoin had no block limit at all, it might be just fine.
Miners would form consensus on an appropriate size via the longest chain.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 12:38:16 AM
Has anyone ever talked about compressing the blocks?

Dunno... What about using pointers?  The question would be where do you store the stuff pointed to?

There are already some miners that use a method of compressing blocks.

The p2p protocol presently only supports propagation of solved blocks in full; i.e., blocks are not compressed.  

However, the Corallo Relay Network does support a sort of compression.  Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map for how the TXs fit in the block). Greg Maxwell claims that the Corallo Relay Network attains a coding gain of about 250 (1 MB is compressed to about 4 kilobytes); however, I believe it is less in practice.

Techniques like invertible bloom lookup tables (IBLTs) could also be used to compress solved blocks in the future; Rusty Russell is presently researching this possibility.    


That's not compression.

You're right, but still, it's 250 times faster than sending out the whole block.

The network could be optimized further by compressing each TX before sending it out on the network.

Might not get much coding gain trying to "zip" a TX; it's just ~300 fairly random bytes.

All of this doesn't avoid the necessity for the network to handle the full weight of these blocks. It's a transmission method and nothing more.

agreed

What we don't agree on is what an acceptable max weight for a block is.

I think it's close to 500MB, maybe 1GB.

Do you figure you're smarter than everyone or that all of us are retarded?

No one agrees on any size, but you're certainly the only one coming up with that type of numbers. Maybe you'd like to consider that you simply don't understand the issue well enough?

I didn't pull this number out of my ass.

It's what a typical home connection is able to download in 10 minutes.

Maybe what we should do is find the node with the shittiest connection currently on the network (a node from China), see how much it can download in 10 minutes, halve that, and use that as the upper limit.

I bet that would be about 50MB.



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 12, 2015, 12:40:34 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 12:44:36 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

Reorgs?
With new block propagation being trivial thanks to the Corallo Relay Network, we only need nodes to be able to keep up with the TPS.
So if every user on the network can comfortably download 100MB in 10 minutes, there shouldn't be any problem using 100MB as the block limit.
Actually, there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 12, 2015, 12:51:53 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

Reorgs?
With new block propagation being trivial thanks to the Corallo Relay Network, we only need nodes to be able to keep up with the TPS.
So if every user on the network can comfortably download 100MB in 10 minutes, there shouldn't be any problem using 100MB as the block limit.
Actually, there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the longest chain rule when the next block is found.  Since the winning block takes only a few seconds to reach the network, the fork is quickly resolved.  However, if blocks take a long time to broadcast, then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain rule to maintain order, minimize reorgs and prevent network splits.  The longer the broadcast time, the more reorgs and the more problems you will get.  Would a full minute be OK for broadcast times?  Probably, although reorgs will increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes?  No way.
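The trade-off described above can be made roughly quantitative: if block discovery is modeled as a Poisson process with a 600-second mean interval, the chance that a competing block appears while one is still propagating for t seconds is about 1 − e^(−t/600). This is a simplified model that ignores network topology, shown for illustration only:

```python
import math

def stale_rate(propagation_s: float, block_interval_s: float = 600) -> float:
    """P(a competing block is found during propagation), simple Poisson model."""
    return 1 - math.exp(-propagation_s / block_interval_s)

# A few seconds of propagation is well under 1%; minutes of propagation is not.
for t in (5, 60, 300, 600):
    print(f"{t:>4}s broadcast -> ~{stale_rate(t):.1%} chance of a competing block")
```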







Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 12:56:49 AM
The block limit should reflect the requirements we expect users to be able to handle in order to run nodes.
We don't want that limit to be so high as to exclude ordinary users.
We don't want it so low that it starts to limit TPS and slow confirmation times.
Luckily for us, we aren't in 1995 anymore, and typical home users can download at ~1MB per second.
Of course we want some comfort zone, so we should be talking about a 100-300MB block limit,
which gives bitcoin PLENTY of space to grow.
And by the time we hit that limit again, internet speeds will have grown 100X.



Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 12:59:13 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

Reorgs?
With new block propagation being trivial thanks to the Corallo Relay Network, we only need nodes to be able to keep up with the TPS.
So if every user on the network can comfortably download 100MB in 10 minutes, there shouldn't be any problem using 100MB as the block limit.
Actually, there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the longest chain rule when the next block is found.  Since the winning block takes only a few seconds to reach the network, the fork is quickly resolved.  However, if blocks take a long time to broadcast, then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain rule to maintain order, minimize reorgs and prevent network splits.  The longer the broadcast time, the more reorgs and the more problems you will get.  Would a full minute be OK for broadcast times?  Probably, although reorgs will increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes?  No way.



Right, this is a BIG PROBLEM, I understand.
The solution is using the same method the Corallo Relay Network uses;
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

A 1GB block can be broadcast with 4MB using this method, so broadcasting a block is no longer an issue.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: jonald_fyookball on September 12, 2015, 01:03:12 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

Reorgs?
With new block propagation being trivial thanks to the Corallo Relay Network, we only need nodes to be able to keep up with the TPS.
So if every user on the network can comfortably download 100MB in 10 minutes, there shouldn't be any problem using 100MB as the block limit.
Actually, there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the longest chain rule when the next block is found.  Since the winning block takes only a few seconds to reach the network, the fork is quickly resolved.  However, if blocks take a long time to broadcast, then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain rule to maintain order, minimize reorgs and prevent network splits.  The longer the broadcast time, the more reorgs and the more problems you will get.  Would a full minute be OK for broadcast times?  Probably, although reorgs will increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes?  No way.



Right, this is a BIG PROBLEM, I understand.
The solution is using the same method the Corallo Relay Network uses;
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

Broadcasting a block is no longer an issue.


250 times faster? Huh?  Where did you get that from?

My understanding is that this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: worhiper_-_ on September 12, 2015, 01:10:59 AM
Bitcoin started steadily being used in 2010; five years after that, under normal use conditions, it's still not suffering from the 1MB block limit. I've seen people arguing that merely increasing the block size limit will ramp up adoption, but I think it's much more complicated than that. After so many concentrated efforts, bitcoin's use cases don't seem to be growing at rampant rates; not that there's a way to be 100% certain about that, but at least the data in the blockchain makes that evident.

There are many arguments against increasing the block size cap eightfold. For example, with a higher cap, the fee market would likely change. Getting into the blockchain would be worth less, while right now putting data into the blockchain costs a somewhat significant sum. With the destruction of the current fee market for the upcoming years, and blocks 8x bigger, you'd expect that we'd get something back - for example, 'stress tests' being harder and more expensive to carry out. But that's also not the case. Bigger blocks would just be easier (and also cheaper) to fill with trash transactions. In fact, the calculations we've seen about tx/s going up with the size only take into account transactions of a certain size. Limiting the mempool was a suggestion to counter this, but that wouldn't really work out well if the goal of increasing the block size cap was to make bitcoin handle more tx/s.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: johnyj on September 12, 2015, 01:11:16 AM
For any suggestion, first consider a worst-case scenario; if it works there, then you can make your point. 1GB blocks in a worst-case scenario will definitely separate the network into many forks; each fork will just grow on its own USA/EUROPE/CHINA chain and never receive blocks fast enough from the other chains, so bitcoin is totally broken and no transaction can be confirmed globally.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 01:14:00 AM
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

Reorgs?
With new block propagation being trivial thanks to the Corallo Relay Network, we only need nodes to be able to keep up with the TPS.
So if every user on the network can comfortably download 100MB in 10 minutes, there shouldn't be any problem using 100MB as the block limit.
Actually, there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the longest chain rule when the next block is found.  Since the winning block takes only a few seconds to reach the network, the fork is quickly resolved.  However, if blocks take a long time to broadcast, then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain rule to maintain order, minimize reorgs and prevent network splits.  The longer the broadcast time, the more reorgs and the more problems you will get.  Would a full minute be OK for broadcast times?  Probably, although reorgs will increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes?  No way.



Right, this is a BIG PROBLEM, I understand.
The solution is using the same method the Corallo Relay Network uses;
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

Broadcasting a block is no longer an issue.


250 times faster? Huh?  Where did you get that from?

My understanding is that this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.

No joke man, 250X faster.
It's not "just a bunch of dedicated nodes on a good connection".
It only sends out "pointers to TXs" that the new block includes; all miners have pretty much the same mempool, so they can use the pointers to rebuild the block, and they can check the merkle root to make sure they made the exact same block. If they are missing any TX, they can ask a peer.

Miners are already using this, but it's not standard and it isn't P2P;
this method needs to be implemented at the P2P level.
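The merkle-root check mentioned above can be sketched as follows. Bitcoin hashes transaction ids pairwise with double SHA-256, duplicating the last hash on odd-length levels; since the block header commits to the resulting root, a peer that rebuilds the block from mempool pointers can verify it recovered exactly the right transaction set (a simplified sketch, not consensus-exact serialization):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Fold txids pairwise; duplicate the last hash when a level is odd-length."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The miner's header commits to this root; a peer that reconstructs the block
# from its own mempool recomputes the root and compares.
announced = merkle_root([dsha256(tx) for tx in (b"tx-a", b"tx-b", b"tx-c")])
rebuilt = merkle_root([dsha256(tx) for tx in (b"tx-a", b"tx-b", b"tx-c")])
tampered = merkle_root([dsha256(tx) for tx in (b"tx-a", b"tx-b", b"tx-WRONG")])

assert rebuilt == announced   # exact same block reconstructed
assert tampered != announced  # any difference in the TX set is detected
```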


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: johnyj on September 12, 2015, 01:25:24 AM
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 01:53:18 AM
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away.


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 01:57:27 AM
We have the tech more or less laid out and working.
We just need to optimize it a little and make it part of the standard protocol.
We can scale bitcoin.
We can scale bitcoin to 4K TPS running on a silly home computer.

Excited?

You should be.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: adamstgBit on September 12, 2015, 01:58:58 AM
in other news

https://i.imgur.com/Z755srN.jpg


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: johnyj on September 12, 2015, 12:16:30 PM
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away.


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.

Their public nodes are listed as:

public.us-west.relay.mattcorallo.com
public.us-east.relay.mattcorallo.com
public.eu.relay.mattcorallo.com
public.{jpy,hk}.relay.mattcorallo.com
public.bjs.relay.mattcorallo.com
public.{sgp,au}.relay.mattcorallo.com

All registered under mattcorallo.com. If you rely on their service, then when this company goes down, bitcoin is over.


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: Mickeyb on September 12, 2015, 12:25:07 PM
We have the tech more or less laid out and working.
We just need to optimize it a little and make it part of the standard protocol.
We can scale bitcoin.
We can scale bitcoin to 4K TPS running on a silly home computer.

Excited?

You should be.


I am very excited! :)

All we need now is for the devs to start making changes and implementing new things. I'll be even more excited when we get to that point!


Title: Re: Scaling Bitcoin Above 3 Million TX per block
Post by: RoadTrain on September 12, 2015, 12:43:21 PM
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you when you can't grasp some relatively simple technical ideas.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 01:37:33 PM
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.

Point out to me where the 250x increase is, please.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 12, 2015, 01:47:43 PM
Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth connectivity but rather with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain it in detail; this is a bit beyond me technically


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 03:25:01 PM
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point in arguing with you when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle.  If you have an issue, state it, and support your contention with a relevant tech reference.

Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me on HF data propagation that I don't know.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 03:46:55 PM
Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth connectivity but rather with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain it in detail; this is a bit beyond me technically

You mean TCP retransmission rates?  That's a function of network congestion (assuming we can ignore radio interference, etc.), which is kinda related to your 'connectivity'.

And round we go again. TCP doesn't lose packets; it drops them when it cannot forward them as quickly as it receives them. This is everything to do with the quality of your connection, not the protocol.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: CIYAM on September 12, 2015, 03:52:29 PM
Why is it that no-one with any technical credibility backs @adamstgBit's claims (and in response, please show people that have quoted you as their source, rather than yourself misquoting them)?

Apparently he is smarter than everyone in the world, I guess - so why doesn't he just fork Bitcoin (perhaps BitcoinAB) and see how that goes?

Prior to this whole block size thing I thought this guy was reasonable, but now that he creates a new thread every day full of bullshit claims, I can only wonder whether in fact he sold his account and whoever is posting this stuff is actually some newbie (and that wouldn't surprise me one bit).


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 05:17:40 PM
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point in arguing with you when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle.  If you have an issue, state it, and support your contention with a relevant tech reference.
That's what I usually do, when there's hope for reasonable discussion.

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me on HF data propagation that I don't know.
Explain what? That the network can't handle 1GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) that participating miners are cooperative, i.e. only/mostly include txs that other miners have in their mempools;
2) that participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.
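The cooperative vs. worst-case point above can be put in rough numbers (the per-tx sizes here are illustrative assumptions, not protocol constants):

```python
# Back-of-envelope: bytes needed to relay one block to a peer, as a
# function of how much of the block is already in that peer's mempool.
AVG_TX_BYTES = 250   # assumed average full transaction size
REF_BYTES = 10       # assumed short reference for an already-known tx

def relay_bytes(n_txs: int, overlap: float) -> int:
    """overlap = fraction of the block's txs the peer already holds."""
    known = int(n_txs * overlap)
    return known * REF_BYTES + (n_txs - known) * AVG_TX_BYTES
```

At 99% overlap a 4000-tx block shrinks roughly 20x; at zero overlap it is a full-block transfer, which is exactly the uncooperative worst case described above.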


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: Quantus on September 12, 2015, 05:31:21 PM
Anything you compress has to be uncompressed on each node, and confirmed, before it can be propagated out to the next node.
This would slow propagation even at the current block size.

However, if a node were a few weeks/months/years behind, it might benefit from compressed 'blocks-of-blocks'. This would require a lot of programming to set up and test.


Edit: I think adamstgBit should stop creating shitty threads on this topic; it's not helping anyone.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: CIYAM on September 12, 2015, 05:34:46 PM
Anything you compress has to be uncompressed on each node, and confirmed, before it can be propagated out to the next node.
This would slow propagation even at the current block size.

The whole point of Corallo's approach has nothing to do with compression - it is to do with nodes already being aware of txs, so blocks can just use txids rather than the actual tx content.

It is simply saving bandwidth in terms of information that was already communicated.

The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I am pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 05:54:58 PM
Anything you compress has to be uncompressed on each node, and confirmed, before it can be propagated out to the next node.
This would slow propagation even at the current block size.

The whole point of Corallo's approach has nothing to do with compression - it is to do with nodes already being aware of txs, so blocks can just use txids rather than the actual tx content.

It is simply saving bandwidth in terms of information that was already communicated.

The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I am pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).


Agree with both these points.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 07:13:51 PM

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me on HF data propagation that I don't know.

Explain what? That the network can't handle 1GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) that participating miners are cooperative, i.e. only/mostly include txs that other miners have in their mempools;
2) that participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.

I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D

I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent.  All fine and dandy.

But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.

tl;dr  Matt's RN could have benefits both for miners' orphan concerns and for tx throughput (more txs per block)
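The merkle-root check mentioned above can be sketched as follows (a simplified stand-alone version of Bitcoin's block merkle tree: double SHA-256 over little-endian txids, duplicating the last entry on odd-sized levels):

```python
# Rebuild a block's merkle root from txids alone, so a node that already
# holds the txs in its mempool can check them against the block header.
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> str:
    # txids are displayed as big-endian hex; hashing uses little-endian bytes
    level = [bytes.fromhex(t)[::-1] for t in txids]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash on odd levels
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0][::-1].hex()
```

A single-tx block's merkle root is just that txid, which makes the function easy to sanity-check.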


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: UserVVIP on September 12, 2015, 07:16:01 PM
Don't you think that it is a little too much btc for that amount?


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: thejaytiesto on September 12, 2015, 07:46:27 PM
I think compression of blocks was already addressed by gmaxwell in here, but I can't find the actual facts. In any case, if this hasn't been considered the end-all-be-all solution against the block size problem, I'm sure there are drawbacks, so I'm pretty sure we will end up needing bigger blocks and Blockstream-type tech anyway.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 07:55:34 PM

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me on HF data propagation that I don't know.

Explain what? That the network can't handle 1GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) that participating miners are cooperative, i.e. only/mostly include txs that other miners have in their mempools;
2) that participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.

I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D

I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent.  All fine and dandy.

But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.

tl;dr  Matt's RN could have benefits both for miners' orphan concerns and for tx throughput (more txs per block)


I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 08:23:50 PM
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D
That's quite a straw man here, I didn't say that, please don't overgeneralize.  ???

I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.
Once again, this is all based on the weak assumption that miners are cooperative -- in the worst-case scenario we fall back on the regular propagation protocol. While Matt's RN doesn't have any major downsides per se, it effectively downplays the issue at hand -- that in the worst case the information to be transmitted scales linearly with block size. While it appears we can easily increase block sizes thanks to Matt's RN, things get worse in the case of uncooperative behavior.

But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.
I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 09:22:47 PM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 09:26:38 PM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.

OK, assuming the miner only sends a condensed version of the block, with pointers, to the relay network, the relay network still has to broadcast the full block to other nodes, correct?




Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 09:35:07 PM
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D
That's quite a straw man here, I didn't say that, please don't overgeneralize.  ???

Let's not split hairs. You said 1GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

Quote
I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.
Once again, this is all based on the weak assumption that miners are cooperative -- in the worst-case scenario we fall back on the regular propagation protocol. While Matt's RN doesn't have any major downsides per se, it effectively downplays the issue at hand -- that in the worst case the information to be transmitted scales linearly with block size. While it appears we can easily increase block sizes thanks to Matt's RN, things get worse in the case of uncooperative behavior.

But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.
I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.

As it is, it's just a step in the right direction, but I'm also saying that it is an idea that can be developed and deployed across the network in general. And yeah, I don't think it's a magic bullet, but it is certainly an indicator that positive thought exists in Bitcoin, and that solutions to its inherent problems can be found.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 09:40:38 PM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.

OK, assuming the miner only sends a condensed version of the block, with pointers, to the relay network, the relay network still has to broadcast the full block to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes (though not over the relay network - they get propagated over the vanilla p2p network, as far as I know). But I can imagine a case where it could be extended to a wider network.





Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 09:45:47 PM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.

OK, assuming the miner only sends a condensed version of the block, with pointers, to the relay network, the relay network still has to broadcast the full block to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes (though not over the relay network - they get propagated over the vanilla p2p network, as far as I know). But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.
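The 60MB figure can be sanity-checked with simple transfer arithmetic (the link speed here is an assumed example, not a measurement):

```python
# Single-hop transfer time for a block at a given link speed.
def transfer_seconds(block_mb: float, link_mbps: float) -> float:
    return block_mb * 8 / link_mbps  # megabytes -> megabits

# On an assumed 16 Mbps home line: a 1 MB block crosses one hop in half
# a second, while a 60 MB block takes half a minute per hop.
```

Per-hop times compound as a block crosses the p2p network, which is why propagation delay is what drives orphan rates.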


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 09:46:41 PM
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D
That's quite a straw man here, I didn't say that, please don't overgeneralize.  ???

Let's not split hairs. You said 1GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them.

I have said that 1GB blocks require a redefinition of bitcoin. I haven't said that you have to redefine what bitcoin is to increase transaction throughput. But you have weakened/replaced my argument to make it easier to refute -- a straw man.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 10:06:26 PM
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  :D
That's quite a straw man here, I didn't say that, please don't overgeneralize.  ???

Let's not split hairs. You said 1GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.
That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them.

Whaaat?  Bitcoin is engineered to generate a block once every ~10 minutes. That is set in stone. So of course more transactions mean larger blocks - unless you are shrinking the transaction size  ???  What you said makes no logical sense.

Quote
I have said that 1GB blocks require a redefinition of bitcoin. I haven't said that you have to redefine what bitcoin is to increase transaction throughput. But you have weakened/replaced my argument to make it easier to refute -- a straw man.

The tx throughput can vary; the rate of block creation is fixed. We can have as many transactions as users generate, but we still have the same number of blocks.

edit: Maybe we are getting hung up on the 1GB thing. The same holds true for 2MB, 4MB ..... 32MB blocks. Above 32MB, you need to change how bitcoin sends messages, but that's academic to this discussion.
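The fixed-interval arithmetic above is straightforward (the average tx size is an assumption for illustration):

```python
# Throughput is block size divided by average tx size and the ~600 s
# block interval; the interval is fixed, so only block size moves tps.
AVG_TX_BYTES = 250       # assumed average transaction size
BLOCK_INTERVAL_S = 600   # ~10-minute block target

def tps(block_bytes: int) -> float:
    return block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

# Under these assumptions, a 1 MB block gives roughly 7 tps and a 1 GB
# block several thousand tps.
```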



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 10:29:56 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: Quantus on September 12, 2015, 10:51:26 PM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network.  But Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.

This really helped me understand your argument.  It would be great if this were implemented, but it still would not address the issues of blockchain storage or the threat of spam.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 12, 2015, 10:54:33 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

As far as I'm aware, the LN is off the main chain, so it's irrelevant to actually scaling the main chain.



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 11:00:33 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 11:07:22 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?
If your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct.

Oh, we really have to be precise...


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 12, 2015, 11:42:47 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?
If your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct.

Oh, we really have to be precise...

I think you have reflected your point through the nearest axis, and then continued into another domain entirely.



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 12, 2015, 11:47:27 PM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?
If your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct.

Oh, we really have to be precise...

I think you have reflected your point through the nearest axis, and then continued into another domain entirely.
You are being overly cryptic here. I'm not a native English speaker, so if you want me to answer, please rephrase.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 13, 2015, 12:24:58 AM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

Your going off on a tangent now. LN is paperware at the moment. Lets stick to bitcoin for now, eh?
So your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct.

Oh, we really have to be precise...

How is LN trustless?


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 13, 2015, 12:38:49 AM
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?
If your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct.

Oh, we really have to be precise...

How is LN trustless?
Quote from: LN paper
If Bitcoin transactions can be signed with a new sighash type that addresses malleability,
these transfers may occur between untrusted parties along the transfer route by contracts
which, in the event of uncooperative or hostile participants, are enforceable via broadcast
over the bitcoin blockchain through a series of decrementing timelocks.
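To make the quoted mechanism concrete, here is a toy sketch of a hashed-timelock contract: a hop can only claim funds by revealing the preimage R before its timelock expires, and the timelocks decrement along the route so each inner hop has time to enforce the contract on-chain if a peer stalls. All class names, amounts, and block heights below are illustrative assumptions, not the actual LN protocol.

```python
import hashlib

def hashlock(preimage: bytes) -> bytes:
    """H = SHA256(R): the payment can only be claimed by revealing R."""
    return hashlib.sha256(preimage).digest()

class HTLC:
    """Toy hashed-timelock contract for one hop along a payment route."""
    def __init__(self, amount, payment_hash, expiry_height):
        self.amount = amount
        self.payment_hash = payment_hash
        self.expiry_height = expiry_height  # after this, funds revert to sender

    def claim(self, preimage, current_height):
        """Receiver claims by revealing R before the timelock expires."""
        if current_height >= self.expiry_height:
            return False                      # too late: sender takes the refund path
        return hashlock(preimage) == self.payment_hash

R = b"secret-preimage"
H = hashlock(R)
# Timelocks decrement along the route Alice -> Bob -> Carol, so Bob can
# always reuse R upstream before his own incoming contract expires.
alice_to_bob = HTLC(amount=1000, payment_hash=H, expiry_height=120)
bob_to_carol = HTLC(amount=1000, payment_hash=H, expiry_height=110)

assert bob_to_carol.claim(R, current_height=100)      # Carol reveals R in time
assert alice_to_bob.claim(R, current_height=105)      # Bob reuses R upstream
assert not bob_to_carol.claim(R, current_height=115)  # expired: refund path
```

This is the "untrusted parties" part of the quote: no hop has to trust the next, because a withheld R simply lets the timelock expire and the funds revert.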


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 13, 2015, 10:49:09 AM

How is LN trustless?
Quote from: LN paper
If Bitcoin transactions can be signed with a new sighash type that addresses malleability,
these transfers may occur between untrusted parties along the transfer route by contracts
which, in the event of uncooperative or hostile participants, are enforceable via broadcast
over the bitcoin blockchain through a series of decrementing timelocks.

The piece you quoted relates to bitcoin transactions if we introduce new sighash functionality; it is not a unique property of LN.  This will require an enormous development effort, as it's complex to achieve workable contracts without malleability.  Sighash signing types allow for changes to the tx before it is final. But even when final, malleability still persists no matter how it was signed. That's why most of this functionality was removed from bitcoin.

LN, as it would be workable now, assumes a certain level of trust: LN Presentation, slide 31 (https://lightning.network/lightning-network.pdf)

Quote
● If Carol refuses to disclose R, she will hold
up the channel between Alice and Bob
○ If her channel expires after Alice and Bob’s she can
steal funds by redeeming the hashlock!
Bob has to be rich for this to really work
3rd party low-trust multisig and/or extremely
small values sent can mostly work today




Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 13, 2015, 07:39:24 PM

How is LN trustless?
Quote from: LN paper
If Bitcoin transactions can be signed with a new sighash type that addresses malleability,
these transfers may occur between untrusted parties along the transfer route by contracts
which, in the event of uncooperative or hostile participants, are enforceable via broadcast
over the bitcoin blockchain through a series of decrementing timelocks.

The piece you quoted relates to bitcoin transactions if we introduce new sighash functionality; it is not a unique property of LN.  This will require an enormous development effort, as it's complex to achieve workable contracts without malleability.  Sighash signing types allow for changes to the tx before it is final. But even when final, malleability still persists no matter how it was signed. That's why most of this functionality was removed from bitcoin.

LN, as it would be workable now, assumes a certain level of trust: LN Presentation, slide 31 (https://lightning.network/lightning-network.pdf)

Quote
● If Carol refuses to disclose R, she will hold
up the channel between Alice and Bob
○ If her channel expires after Alice and Bob’s she can
steal funds by redeeming the hashlock!
Bob has to be rich for this to really work
3rd party low-trust multisig and/or extremely
small values sent can mostly work today

I don't see how it's relevant. Of course, for LN to actually work at its best, we have to make some adjustments to the Bitcoin protocol. I didn't state that we don't. All of this is stated explicitly in the paper.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:01:53 AM
Anything you compress has to be decompressed on each node and confirmed before it can be propagated out to the next node.
This would slow propagation even with the current block size.

The whole point of Corallo's relay network has nothing to do with compression - it is to do with nodes already being aware of txs, so blocks can just use txid's rather than the actual tx content.

It is simply saving bandwidth in terms of information that was already communicated.

The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I'm pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).


yes i know it's not technically compression, but that doesn't change the fact that this magical magic makes sending a block 250 times faster....
it's all about bandwidth, and this saves 250X of it, how is this not exciting?
using this method, all miners need to do is keep up with the TPS so their mempool stays in sync with all the other miners',
which makes it a lot easier to handle huge blocks.
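For what it's worth, the claimed saving can be sanity-checked with some back-of-the-envelope arithmetic. The transaction, header, and id sizes below are illustrative assumptions, not protocol constants:

```python
# Rough bandwidth comparison: full block vs. id-only "condensed" block.
# All sizes are illustrative assumptions, not protocol constants.
AVG_TX_SIZE = 250   # bytes per transaction (typical 2015-era average)
HEADER_SIZE = 80    # bytes for the block header

def condensed_savings(num_txs, id_size):
    """Ratio of full-block bytes to condensed-block bytes, where the
    condensed block carries only a short id per transaction."""
    full = HEADER_SIZE + num_txs * AVG_TX_SIZE
    condensed = HEADER_SIZE + num_txs * id_size
    return full / condensed

# 10-byte ids vs. 2-byte indices into transactions the relay has
# already sent (closer to what the relay network actually does):
print(round(condensed_savings(3_000_000, 10)))  # ~25x
print(round(condensed_savings(3_000_000, 2)))   # ~125x
```

Note that with 10-byte ids the per-transaction saving is roughly 25x; ratios in the 250X range quoted in the thread require much shorter identifiers, such as indices into recently relayed transactions.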



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:14:37 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

edit: wait, ya, you're right, because this isn't a standard way of sending out blocks, the relay network currently learns of a new block by receiving it in full, and then relays the condensed version. but whatever, this idea could make the smallblockist argument that having larger blocks will result in increased orphans no longer true, which is a big step in the right direction IMO.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 02:23:29 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

I think you don't know what you're talking about (although I don't know the technical details well enough to prove it).


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 02:26:22 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

Here, some homework material:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/

Come back when you've read and understood most of this  :)


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:28:13 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

Here, some homework material:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/

Come back when you've read and understood most of this  :)

http://vignette3.wikia.nocookie.net/random-ness/images/b/b8/Oh_you.jpg/revision/latest?cb=20110419012743


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 02:29:12 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

I think you don't know what you're talking about (although I don't know the technical details well enough to prove it).

You are correct. The full block is broadcasted to full nodes and validated in full by all of them.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:30:32 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

I think you don't know what you're talking about (although I don't know the technical details well enough to prove it).

you're right, because this isn't a standard way of sending out blocks, the relay network currently learns of a new block by receiving it in full, and then relays the condensed version. but it could easily become THE WAY ( at which point there would never be a need to send out the whole block ), and this idea would make the smallblockist argument that having larger blocks will result in increased orphaned blocks no longer true, which is a big step in the right direction IMO.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:33:20 AM
it'd be gr8 if someone more knowledgeable than me backed me up on this...

where's Peter?


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 02:38:22 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

I think you don't know what you're talking about (although I don't know the technical details well enough to prove it).

you're right, because this isn't a standard way of sending out blocks, the relay network currently learns of a new block by receiving it in full, and then relays the condensed version. but it could easily become THE WAY ( at which point there would never be a need to send out the whole block ), and this idea would make the smallblockist argument that having larger blocks will result in increased orphaned blocks no longer true, which is a big step in the right direction IMO.

Makes sense in theory.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 02:41:08 AM

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of tx's, but any missing ones can be requested. Also, the relay keeps track of which tx's have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250b transactions.

Ok assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block then to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes. (but not over the relay network - they get propagated over the vanilla p2p as far as i know)  But I can imagine a case where it could be extended to a wider network.





Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point.   Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.

the full block is never sent. miners accumulate TX's as they come in, and one of the functions of the relay network is to relay those TX's fast, so when it's time to send out a new block, every miner already has all the TX's in their mempool, and this condensed version of the block ( god forbid i call it compressed ) is all anyone really needs.

As long as miners can keep up with the TX's as they come in, and by doing so keep all their mempools in sync ( it doesn't have to be perfectly in sync... ), a full block never needs to be broadcast.

This method will reduce orphan rates due to slow block propagation. miners ( the smart ones ) currently use the relay network for exactly that purpose; they are able to get the new block 250X faster, and this gives them an edge.

I think you don't know what you're talking about (although I don't know the technical details well enough to prove it).

you're right, because this isn't a standard way of sending out blocks, the relay network currently learns of a new block by receiving it in full, and then relays the condensed version. but it could easily become THE WAY ( at which point there would never be a need to send out the whole block ), and this idea would make the smallblockist argument that having larger blocks will result in increased orphaned blocks no longer true, which is a big step in the right direction IMO.

No. You are confused.

The relay network serves as a more efficient messaging route for the miners. The nodes have nothing to do with it. Once the chain moves forward with a new block, every node needs to validate it in full.

Moreover, the relay network is currently used by the great majority of large miners, especially those in China, who have historically experienced orphan concerns. Unfortunately this is seemingly not enough for some of them, who choose to do SPV mining so as to further mitigate their risk of mining orphans.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:44:46 AM
@brg444

here's a similar idea

http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 02:50:15 AM
@brg444

here's a similar idea

http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/

I'm sorry but it is obvious you don't understand how Bitcoin works. I'd like to tell it another way but at this point I won't be wasting my efforts until you correct these fundamental misunderstandings on your own.



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 03:00:53 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 03:01:42 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

would just ask around for the ones it's missing

it would be interesting to know exactly how in sync a node in China and one in the US are


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 03:05:23 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 03:07:51 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

would just ask around for the ones it's missing

it would be interesting to know exactly how in sync a node in China and one in the US are

it would be interesting.  I have a feeling it might be an issue.



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 03:10:08 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

maybe you misunderstood.  in the 'relay only' mode being hypothesized by Adam, the blocks are never fully broadcast, only the transaction headers or something. If a transaction was referenced but wasn't in the node's mempool, it would have to be fetched.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 03:11:42 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

is this a little redundant? maybe we can broadcast only the transaction headers or something.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 03:25:44 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

is this a little redundant? maybe we can broadcast only the transaction headers or something.

That is what the relay network does, but once the headers are broadcast the miners still need to fully validate the block before they start mining a new one on top.

Again, this has absolutely no impact on the full nodes that are not mining and need to receive, validate and store full blocks.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: brg444 on September 14, 2015, 03:26:43 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

maybe you misunderstood.  in the 'relay only' mode being hypothesized by Adam, the blocks are never fully broadcast, only the transaction headers or something. If a transaction was referenced but wasn't in the node's mempool, it would have to be fetched.

.... ::)


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: jonald_fyookball on September 14, 2015, 03:27:12 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

is this a little redundant? maybe we can broadcast only the transaction headers or something.

That is what the relay network does, but once the headers are broadcast the miners still need to fully validate the block before they start mining a new one on top.

Again, this has absolutely no impact on the full nodes that are not mining and need to receive, validate and store full blocks.

Yes, they will validate it, but the point is that the transmission payload is reduced in size.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: Soros Shorts on September 14, 2015, 05:42:16 AM
is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

No, miners don't necessarily have all the recent txs in mempool. In fact, it is possible to run a mining node with a heavily filtered mempool, e.g. a very small mempool which excludes spam transactions or transactions with small or no fees.

In the OP you also did not address SPV clients. Assuming a condensed block size of 4MB, if I turn off my Android client for 1 week and then turn it back on, I would have to download about 4 GB of block data (roughly 1,008 blocks at 4MB each) to sync up a week's worth of transactions. It would take too long to do so. Furthermore, after I'm done I would have used up the monthly cap of my wireless plan.
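The back-of-the-envelope arithmetic behind that figure, assuming the OP's 4MB condensed block and Bitcoin's 10-minute block target:

```python
BLOCK_INTERVAL_MIN = 10   # Bitcoin's target block spacing, in minutes
CONDENSED_BLOCK_MB = 4    # the OP's condensed-block size

# Blocks produced in a week at the target rate.
blocks_per_week = 7 * 24 * 60 // BLOCK_INTERVAL_MIN   # 1008

# Weekly download for a client that must fetch every condensed block.
weekly_download_gb = blocks_per_week * CONDENSED_BLOCK_MB / 1000  # decimal GB

print(blocks_per_week, round(weekly_download_gb, 2))  # prints: 1008 4.03
```

So roughly 4 GB per week just to catch up, before counting any missing transactions that have to be fetched in full.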


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: CryptInvest on September 14, 2015, 05:51:51 AM
Why are bitcoiners opposed to the Lightning Network? The third party can't access the funds, and it addresses the speed of transactions (independently of the block size) and the size of the blockchain.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: RoadTrain on September 14, 2015, 08:14:11 AM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?
A node simply asks its peers about it.

But then another problem might arise that I'd like to discuss (I didn't see any simulations). What if a node has to request a fair number of transactions from its peers? Considering that every request has some latency (latency != bandwidth), and some peers might still not have a particular tx due to uneven propagation, can this process end up slower than simply bundling all txs with the block? If so, at which block sizes and which missing-tx percentages?
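That trade-off can be modelled crudely: shipping the full block costs size divided by bandwidth, while the condensed path costs a small transfer plus at least one request round trip for the missing transactions. A toy model, with every parameter illustrative rather than measured:

```python
def full_block_time(block_mb, bandwidth_mbps):
    """Seconds to ship the full block (1 MB = 8 Mb)."""
    return block_mb * 8 / bandwidth_mbps

def condensed_time(block_mb, bandwidth_mbps, missing_frac, rtt_s,
                   condensed_ratio=0.004):
    """Condensed transfer plus one batched round trip for missing txs."""
    condensed_mb = block_mb * condensed_ratio   # e.g. 1 GB -> ~4 MB of ids
    missing_mb = block_mb * missing_frac        # full txs we must still fetch
    transfer = (condensed_mb + missing_mb) * 8 / bandwidth_mbps
    return transfer + rtt_s                     # one batched request/response

# 1 GB block over a 20 Mbps link, 100 ms round trip, 1% of txs missing:
print(round(full_block_time(1024, 20), 1))              # prints: 409.6
print(round(condensed_time(1024, 20, 0.01, 0.1), 1))    # prints: 5.8
```

Even with 1% of transactions missing, the condensed path wins by a wide margin at these sizes; the crossover RoadTrain asks about would come from many serial round trips to peers that also lack the tx, which this single-round-trip model deliberately ignores.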


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: coinplus on September 14, 2015, 10:03:25 AM
I first got excited on hearing about the Lightning Network pegging into Bitcoin, so as to get faster transactions than the Visa network. But allowing a third party into Bitcoin is not acceptable to our Bitcoin community. Maybe some payment processors will use the Lightning Network for their own purpose of speeding up transactions.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 14, 2015, 02:01:33 PM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?

Nodes validate & store all transactions of every block added to the chain. If that is not the case then it is not Bitcoin.

is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

is this a little redundant? maybe we can broadcast only the transaction headers or something.

That is what the relay network does, but once the headers are broadcast the miners still need to fully validate the block before they start mining a new one on top.

Again, this has absolutely no impact on the full nodes that are not mining and need to receive, validate and store full blocks.

Normal nodes are not time-critical in receiving blocks. They can continue as normal even with much larger blocks. Only miners, who are racing to create new blocks and need to know which one they have to build off, get their knickers in a twist if there is any propagation impedance (which would be exacerbated by larger blocks).

I mentioned earlier that this redundancy needs to be addressed. A development of the ideas of the relay network should be an option for all nodes. If they already have 90% of the txs in a block in their mempool, then they only need an efficient polling scheme to request the missing txs from their peers.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:12:14 PM
is it true miners have all the recent TXs in their mempool, and when they get a new block they receive those same TXs all over again with the new block?

No miners don't necessarily have all the recent tx's in mempool. In fact, it is possible to run a mining node with a heavily filtered mempool, e.g. a very small mempool which excludes spam transactions or transactions with small/no fees.

In the OP you also did not address SPV clients. Assuming a block header size of 4MB, if I turn off my Android client for 1 week and I turn it back on I would have to download about 4.036 GB worth of block header information to sync up a week's worth of transactions. It would take too long to do so. Furthermore, after I'm done would have used up the monthly cap of my wireless plan.

4GB is your monthly cap? WTF, you want your phone to download the blockchain??? That's nuts, you need an SPV client.
Isn't there some block pruning happening currently, so that older blocks become much smaller?
Isn't the whole point of SPV to let some server deal with downloading and validating the blockchain?
Well then, that miner will simply be at a slight disadvantage, because he will need to ask peers for the TXs he is missing. But even then it's likely that getting only the TXs he's missing would be faster than downloading the new block in full.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 14, 2015, 02:13:13 PM
I first got excited on hearing lightning network pegging into bitcoin so than get faster transactions than visa network. But allowing third party into bitcoin is not acceptable by our bitcoin community. May be some payment processors use lightning network for their own use of fastening transactions.

The Lightning Network is a load of bollox at the moment, so I wouldn't worry about it too much. It's just another alt-coin with cheap lipstick applied, in its present vapour form. I watched Poon and the other guy try to sex it up during their presentation and fail miserably. They played down the very complex challenges of tx malleability that need to be addressed before this will ever get off the paper stage. The current test network has absolutely no meaningful* interaction with the blockchain, and unless they can perform sighash miracles in the near future, it never will.

* and where it does, it requires Trust.  Yes, trust. In a trust-less network. Such progress.



Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: adamstgBit on September 14, 2015, 02:25:31 PM
Well, I think one problem with only sending pointers is what happens when a node doesn't have a particular transaction?  How does it get it?
A node simply asks its peers about it.

But then another problem might arise that I'd like to discuss (I didn't see any simulations). What if a node has to request a fair amount of transactions from its peers? Considering that every request has some latency (latency != bandwidth), and some peers might still not have this particular tx due to uneven propagation, can this process end up slower than simply bundling all txs with a block? If so, at which blocksizes and which missing txs percentages?

right, this is the kind of thing we'd need to investigate...

the request for missing TXs could be done all at once, a **here's the list of TXs I don't know about, if anyone knows about these tell me** kind of thing. a distributed relay network whose sole purpose is to collect and relay TXs across the network would help.

also, in theory a miner could include a bunch of TXs the network has never seen, at which point other miners would be busy asking every other miner about TXs that no other miner has seen. i guess miners could consider that block invalid and orphan it. part of the protocol for a miner including TXs that he knows haven't been seen by the network would be to include the full TX... and miners would also try not to include TXs that have likely not fully propagated throughout the network (TXs they just heard about <10 sec ago?).
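The announcement rule adamstgBit sketches (bare ids for well-propagated txs, full bytes for anything the network probably hasn't seen yet) could look roughly like this. The 10-second threshold and all names are illustrative, not an existing protocol:

```python
import time

# Txs younger than this are assumed not to have fully propagated yet.
PROPAGATION_GRACE_S = 10

def announce_block(block_txs, first_seen, now=None):
    """Build a condensed announcement: ids for old txs, full bytes for fresh ones.

    block_txs:  dict of txid -> raw transaction bytes in the new block
    first_seen: dict of txid -> unix time this miner first heard of the tx
    """
    now = time.time() if now is None else now
    ids, prefilled = [], {}
    for tid, raw in block_txs.items():
        # Unknown first-seen time is treated as "just now", i.e. unseen.
        if now - first_seen.get(tid, now) < PROPAGATION_GRACE_S:
            prefilled[tid] = raw   # peers likely lack it: ship it in full
        else:
            ids.append(tid)        # peers likely have it: the id is enough
    return {"ids": ids, "prefilled": prefilled}
```

A receiver would then only have to ask around for ids that are neither in its mempool nor in the prefilled set, which is exactly the case the orphaning rule above is meant to keep rare.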


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 14, 2015, 04:22:54 PM


right, this is the kind of thing we'd need to investigate...

the request for missing TXs could be done all at once, a **here's the list of TXs I don't know about, if anyone knows about these tell me** kind of thing. a distributed relay network whose sole purpose is to collect and relay TXs across the network would help.

also, in theory a miner could include a bunch of TXs the network has never seen, at which point other miners would be busy asking every other miner about TXs that no other miner has seen. i guess miners could consider that block invalid and orphan it. part of the protocol for a miner including TXs that he knows haven't been seen by the network would be to include the full TX... and miners would also try not to include TXs that have likely not fully propagated throughout the network (TXs they just heard about <10 sec ago?).

Plenty there to start off a bit of requirement gathering.   ;)

Checking other nodes (3 steps, ~350 peers) would be pretty quick. If you didn't get a response for a tx from that many, it's time to think about ignoring the block and picking another.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: TransaDox on September 14, 2015, 04:40:07 PM


right, this is the kind of thing we'd need to investigate...

the request for missing TXs could be done all at once, a **here's the list of TXs I don't know about, if anyone knows about these tell me** kind of thing. a distributed relay network whose sole purpose is to collect and relay TXs across the network would help.

also, in theory a miner could include a bunch of TXs the network has never seen, at which point other miners would be busy asking every other miner about TXs that no other miner has seen. i guess miners could consider that block invalid and orphan it. part of the protocol for a miner including TXs that he knows haven't been seen by the network would be to include the full TX... and miners would also try not to include TXs that have likely not fully propagated throughout the network (TXs they just heard about <10 sec ago?).

Plenty there to start off a bit of requirement gathering.   ;)

Checking other nodes (3 steps, ~350 peers) would be pretty quick. If you didn't get a response for a tx from that many, it's time to think about ignoring the block and picking another.

.....and/or if blocks were held on NNTP servers, I2P nodes, Tor servers etc., you could download from there when peers don't have what you want ;)

I think Just-In-Time Block Requesting is an obvious step. The debate would be around whether we want the clients/nodes/miners to keep track of blocks to ensure full coverage, or whether to use existing remote storage systems, but the first step would use the latter.


Title: Re: Scaling Bitcoin Above 3 Million TX pre block
Post by: sAt0sHiFanClub on September 14, 2015, 04:52:01 PM


right, this is the kind of thing we'd need to investigate...

the request for missing TXs could be done all at once, a **here's the list of TXs I don't know about, if anyone knows about these tell me** kind of thing. a distributed relay network whose sole purpose is to collect and relay TXs across the network would help.

also, in theory a miner could include a bunch of TXs the network has never seen, at which point other miners would be busy asking every other miner about TXs that no other miner has seen. i guess miners could consider that block invalid and orphan it. part of the protocol for a miner including TXs that he knows haven't been seen by the network would be to include the full TX... and miners would also try not to include TXs that have likely not fully propagated throughout the network (TXs they just heard about <10 sec ago?).

Plenty there to start off a bit of requirement gathering.   ;)

Checking other nodes (3 steps, ~350 peers) would be pretty quick. If you didn't get a response for a tx from that many, it's time to think about ignoring the block and picking another.

.....and/or if blocks were held on NNTP servers, I2P nodes, Tor servers etc., you could download from there when peers don't have what you want ;)

I think Just-In-Time Block Requesting is an obvious step. The debate would be around whether we want the clients/nodes/miners to keep track of blocks to ensure full coverage, or whether to use existing remote storage systems, but the first step would use the latter.

We are not looking for blocks, just transactions. Miners will push/publish blocks when they find a solution. It's just to confirm that the transactions they claim make up that block exist. Nodes should already have the vast majority in mempool, but due to slight differences there is a requirement to find the missing ones. This can only be done through other peers.
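The polling scheme described in the thread reduces to a set difference followed by a batched request to successive peers, up to the ~350-peer bound mentioned above. A minimal sketch with illustrative names, where a peer is stood in for by a dict mapping txid to raw bytes:

```python
def find_missing(announced_ids, mempool_ids):
    """Ids the block claims that we don't yet have."""
    return set(announced_ids) - set(mempool_ids)

def fetch_missing(missing, peers, max_peers=350):
    """Ask peers in turn for the whole batch of missing txs.

    peers: iterable of dicts mapping txid -> raw tx bytes (a stand-in
    for a real network request). Returns the fetched txs and whatever
    ids remained unknown after exhausting the peer budget.
    """
    fetched = {}
    for peer in list(peers)[:max_peers]:
        still_missing = missing - fetched.keys()
        if not still_missing:
            break
        for tid in still_missing:
            if tid in peer:
                fetched[tid] = peer[tid]
    # Anything still unknown after ~350 peers is grounds to consider
    # ignoring the block, per the heuristic discussed above.
    return fetched, missing - fetched.keys()
```

In a real node the per-peer lookup would be a single batched getdata-style request rather than a dict probe, but the control flow (diff, poll, give up after a bound) is the same.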