Bitcoin Forum
Author Topic: Scaling Bitcoin Above 3 Million TX per block  (Read 3361 times)
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1008


Core dev leaves me neg feedback #abuse #political


View Profile
September 12, 2015, 12:51:53 AM
 #21

Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to the "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10 mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.
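That argument can be put in rough numbers. Assuming block discovery is a Poisson process with a 600-second mean interval (a standard back-of-envelope model, not anything stated in this thread), the chance that a competing block is found while the first is still propagating is about 1 - exp(-t/600):

```python
import math

BLOCK_INTERVAL = 600.0  # mean seconds between blocks (Poisson assumption)

def fork_risk(propagation_seconds: float) -> float:
    """Approximate probability that a competing block is found
    while the first block is still propagating."""
    return 1.0 - math.exp(-propagation_seconds / BLOCK_INTERVAL)

for t in (2, 60, 180, 600):
    print(f"{t:>4}s to broadcast -> ~{fork_risk(t):.1%} chance of a competing block")
```

A few seconds of propagation gives well under 1% fork risk; at a full 10 minutes the risk passes 60%, matching the "No way" above.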






adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 12:56:49 AM
 #22

The block limit should reflect the minimum resources we require users to have in order to run a node.
we don't want that limit to be so high as to exclude ordinary users
we don't want it so low that it starts to limit TPS and slow confirmation times.
luckily for us, we aren't in 1995 anymore and typical home users can download at ~1MB per second.
of course we want some comfort zone, so we should be talking about a 100-300MB block limit
which gives bitcoin PLENTY of space to grow
and by the time we hit that limit again, internet speeds will likely have improved 100X
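As a quick sanity check on those figures (taking the ~1MB/s home download speed claimed above as the assumption):

```python
DOWNLOAD_RATE = 1.0      # assumed home download speed, MB per second
BLOCK_INTERVAL = 600     # seconds between blocks

def download_seconds(block_mb: float) -> float:
    """Time to fetch one block at the assumed rate."""
    return block_mb / DOWNLOAD_RATE

for size_mb in (100, 300):
    t = download_seconds(size_mb)
    print(f"{size_mb}MB block: {t:.0f}s, {t / BLOCK_INTERVAL:.0%} of the block interval")
```

At these assumed rates a 300MB block already eats half the block interval just downloading, which is where the "comfort zone" argument bites.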
 


adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 12:59:13 AM
 #23

Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to the "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10 mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



right, this is a BIG PROBLEM, i understand
the solution is using the same method the "Corallo Relay Network" uses
broadcasting a block with this method is 250 times faster!
and more optimizations can be done

a 1GB block can be broadcast with ~4MB of data using this method; broadcasting a block is no longer an issue.
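The 250x figure is consistent with simple arithmetic on the sizes claimed above (1GB on the wire reduced to ~4MB of pointers; illustrative only, taking both numbers at face value):

```python
FULL_BLOCK_MB = 1000.0    # claimed 1GB block, in MB
RELAY_PAYLOAD_MB = 4.0    # claimed payload after skipping known txs

ratio = FULL_BLOCK_MB / RELAY_PAYLOAD_MB
print(f"bandwidth reduction: ~{ratio:.0f}x")  # 1000 / 4 = 250
```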

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1008


Core dev leaves me neg feedback #abuse #political


View Profile
September 12, 2015, 01:03:12 AM
 #24

Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to the "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10 mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



right, this is a BIG PROBLEM, i understand
the solution is using the same method the "Corallo Relay Network" uses
broadcasting a block with this method is 250 times faster!
and more optimizations can be done

broadcasting a block is no longer an issue


250 times faster? huh?  Where did you get that from?

My understanding is this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.

worhiper_-_
Hero Member
*****
Offline Offline

Activity: 700
Merit: 500


View Profile
September 12, 2015, 01:10:59 AM
 #25

Bitcoin started steadily being used in 2010; five years after that, under normal use conditions, it's still not suffering from the 1MB block limit. I've seen people arguing that merely increasing the block size limit will ramp up adoption, but I think it's much more complicated than that. After so many concentrated efforts, bitcoin's use doesn't seem to be growing at a rampant rate. Not that there's a way to be 100% certain about that, but at least the data in the blockchain makes that evident.

There are many arguments against increasing the block size cap eightfold. For example, with a higher cap, the fee market would likely change. Getting into the blockchain would be worth less, while right now putting data into the blockchain costs a somewhat significant sum. With the destruction of the current fee market for the coming years and blocks 8x bigger, you'd expect that we'd get something back, like 'stress tests' being harder and more expensive to pull off. But that's not the case either: bigger blocks would just be easier (and also cheaper) to fill with junk transactions. In fact, the calculations we've seen about tx/s going up with the size only take into account transactions of a certain size. Limiting the mempool was suggested to counter this, but that wouldn't really work out well if the goal of increasing the block size cap is to make bitcoin handle more tx/s.
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1012


Beyond Imagination


View Profile
September 12, 2015, 01:11:16 AM
 #26

For any suggestion, first consider the worst-case scenario; if that works, then you can make your point. A 1GB block in a worst-case scenario will definitely split the network into many forks. Each fork will just grow on its own USA/Europe/China chain and never receive blocks fast enough from the other chains, so bitcoin would be totally broken and no transaction could be confirmed globally.

adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 01:14:00 AM
 #27

Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorg rate will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to the "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10 mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



right, this is a BIG PROBLEM, i understand
the solution is using the same method the "Corallo Relay Network" uses
broadcasting a block with this method is 250 times faster!
and more optimizations can be done

broadcasting a block is no longer an issue


250 times faster? huh?  Where did you get that from?

My understanding is this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.

no joke man, 250X faster
it's not "just a bunch of dedicated nodes on a good connection"
it only sends out pointers to the TXs the new block includes. all miners have pretty much the same mempool, so they can use the pointers to rebuild the block, and they can verify they built the exact same block by checking the merkle root. if they are missing any TX they can ask a peer.

miners are already using this, but it's not standard and it isn't P2P
this method needs to be implemented at the P2P level.
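The scheme described above can be sketched as follows. All names here are hypothetical, and this is a toy model of the pointer/merkle-check idea, not real node code:

```python
import hashlib

def tx_pointer(tx: bytes) -> str:
    """Short identifier standing in for the full transaction."""
    return hashlib.sha256(tx).hexdigest()[:16]

def announce_block(block_txs):
    """Sender: ship pointers instead of full transactions."""
    return [tx_pointer(tx) for tx in block_txs]

def rebuild_block(pointers, mempool, fetch_from_peer):
    """Receiver: rebuild the block from the local mempool,
    asking a peer only for transactions it is missing."""
    return [mempool.get(p) or fetch_from_peer(p) for p in pointers]

# Toy run: the receiver's mempool already holds 2 of the 3 txs.
txs = [b"tx-a", b"tx-b", b"tx-c"]
mempool = {tx_pointer(t): t for t in txs[:2]}
rebuilt = rebuild_block(announce_block(txs), mempool,
                        fetch_from_peer=lambda p: b"tx-c")

# Both sides can confirm they built the same block by comparing a
# hash over the ordered txs (standing in for the merkle root here).
assert hashlib.sha256(b"".join(rebuilt)).digest() == \
       hashlib.sha256(b"".join(txs)).digest()
```

The bandwidth saving comes entirely from the overlap between the sender's block and the receiver's mempool; the round trip for missing transactions is the cost when mempools diverge.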

johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1012


Beyond Imagination


View Profile
September 12, 2015, 01:25:24 AM
 #28

The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by a government, bitcoin is dead right away

adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 01:53:18 AM
 #29

The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by a government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire blocks, nodes keep a rolling window of recently-seen transactions and skip those when relaying blocks.
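A minimal sketch of that rolling-window idea, under the simplifying assumption that both peers track roughly the same recent transactions (class and method names are invented for illustration):

```python
from collections import OrderedDict

class RecentTxWindow:
    """Rolling window of recently seen transactions. When relaying a
    block, any tx inside the window is sent as an ID only."""

    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.seen = OrderedDict()  # tx_id -> tx bytes, oldest first

    def add(self, tx_id: str, tx: bytes) -> None:
        self.seen[tx_id] = tx
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)  # evict the oldest entry

    def relay_payload(self, block):
        """For each (tx_id, tx) in the block, send full bytes only
        when the tx falls outside the shared window."""
        return [(tx_id, None if tx_id in self.seen else tx)
                for tx_id, tx in block]

window = RecentTxWindow(capacity=2)
window.add("aa", b"tx-a")
block = [("aa", b"tx-a"), ("bb", b"tx-b")]
payload = window.relay_payload(block)
# "aa" is skipped (recently seen); only "bb" ships in full
```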

adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 01:57:27 AM
 #30

we have the tech more or less laid out and working
we just need to optimize it a little and make it part of the standard protocol
we can scale bitcoin
we can scale bitcoin to 4K TPS running on a silly home computer

excited?

you should be.

adamstgBit (OP)
Legendary
*
Offline Offline

Activity: 1904
Merit: 1037


Trusted Bitcoiner


View Profile WWW
September 12, 2015, 01:58:58 AM
 #31

in other news


johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1012


Beyond Imagination


View Profile
September 12, 2015, 12:16:30 PM
 #32

The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by a government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire blocks, nodes keep a rolling window of recently-seen transactions and skip those when relaying blocks.

Their public nodes are listed as:

public.us-west.relay.mattcorallo.com
public.us-east.relay.mattcorallo.com
public.eu.relay.mattcorallo.com
public.{jpy,hk}.relay.mattcorallo.com
public.bjs.relay.mattcorallo.com
public.{sgp,au}.relay.mattcorallo.com

All registered under mattcorallo.com. If you rely on their service, then when this company is down, bitcoin is over

Mickeyb
Hero Member
*****
Offline Offline

Activity: 798
Merit: 1000

Move On !!!!!!


View Profile
September 12, 2015, 12:25:07 PM
 #33

we have the tech more or less laid out and working
we just need to optimize it a little and make it part of the standard protocol
we can scale bitcoin
we can scale bitcoin to 4K TPS running on a silly home computer

excited?

you should be.


I am very excited! :)

All we need now is for the devs to start making changes and implementing new things. I'll be even more excited when we get to that point!
RoadTrain
Legendary
*
Offline Offline

Activity: 1386
Merit: 1009


View Profile
September 12, 2015, 12:43:21 PM
 #34

Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you, when you can't grasp some relatively simple technical ideas.
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1008


Core dev leaves me neg feedback #abuse #political


View Profile
September 12, 2015, 01:37:33 PM
 #35

The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by a government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire blocks, nodes keep a rolling window of recently-seen transactions and skip those when relaying blocks.

point out to me where the 250x increase is, por favor.

brg444
Hero Member
*****
Offline Offline

Activity: 644
Merit: 504

Bitcoin replaces central, not commercial, banks


View Profile
September 12, 2015, 01:47:43 PM
 #36

Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth/connectivity, but rather with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain it in detail; this is a bit beyond me technically

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
sAt0sHiFanClub
Hero Member
*****
Offline Offline

Activity: 546
Merit: 500


Warning: Confrmed Gavinista


View Profile WWW
September 12, 2015, 03:25:01 PM
 #37

Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you, when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle.  If you have an issue, state it, and support your contention with a relevant tech reference.

Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me about HF data propagation that I don't know.

We must make money worse as a commodity if we wish to make it better as a medium of exchange
sAt0sHiFanClub
Hero Member
*****
Offline Offline

Activity: 546
Merit: 500


Warning: Confrmed Gavinista


View Profile WWW
September 12, 2015, 03:46:55 PM
 #38

Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth/connectivity, but rather with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain it in detail; this is a bit beyond me technically

You mean TCP retransmission rates?  That's a function of network congestion (assuming we can ignore radio interference, etc.), which is kinda related to your 'connectivity'.

And round we go again. TCP doesn't lose packets; it drops them when it cannot forward them as quickly as it receives them. This has everything to do with the quality of your connection, not the protocol.

We must make money worse as a commodity if we wish to make it better as a medium of exchange
CIYAM
Legendary
*
Offline Offline

Activity: 1890
Merit: 1086


Ian Knowles - CIYAM Lead Developer


View Profile WWW
September 12, 2015, 03:52:29 PM
 #39

Why is it that no one with any technical credibility backs @adamstgBit's claims (and in response, please show people that have quoted you as their source, rather than yourself misquoting them)?

Apparently he is smarter than everyone in the world, I guess. So why doesn't he just fork Bitcoin (perhaps BitcoinAB) and see how that goes?

Prior to this whole block size thing I thought this guy was reasonable, but now that he creates a new thread every day full of bullshit claims, I can only wonder whether he in fact sold his account and whoever is posting this stuff is actually some newbie (and that wouldn't surprise me one bit).

With CIYAM anyone can create 100% generated C++ web applications in literally minutes.

GPG Public Key | 1ciyam3htJit1feGa26p2wQ4aw6KFTejU
RoadTrain
Legendary
*
Offline Offline

Activity: 1386
Merit: 1009


View Profile
September 12, 2015, 05:17:40 PM
Last edit: September 12, 2015, 06:11:36 PM by RoadTrain
 #40

Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you, when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle.  If you have an issue, state it, and support your contention with a relevant tech reference.
That's what I usually do when there's hope for reasonable discussion.

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me about HF data propagation that I don't know.
Explain what? That the network can't handle 1GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data transmitted with a block. After all, full nodes still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) participating miners are cooperative, i.e. only/mostly include txs that other miners have in their mempools.
2) participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policies are highly customizable. E.g., during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.
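That last point can be illustrated with a toy model: two nodes filtering the same transactions at different fee thresholds end up with different mempools, and every difference is a transaction that pointer-based relay must still fetch in full (all names and fee numbers here are invented):

```python
def mempool_for(txs, min_fee):
    """Each node keeps only the txs meeting its own fee threshold."""
    return {tx_id for tx_id, fee in txs if fee >= min_fee}

txs = [(f"tx{i}", fee) for i, fee in enumerate((1, 2, 5, 10, 20, 50))]

miner_pool = mempool_for(txs, min_fee=1)   # miner accepts everything
node_pool = mempool_for(txs, min_fee=10)   # node raised its threshold under load

missing = miner_pool - node_pool  # must still be downloaded in full
print(f"node is missing {len(missing)} of {len(miner_pool)} block txs")
```

The higher a node's local fee threshold, the larger the gap, and the more the "compact" relay degrades back toward full-block download.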