Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
March 16, 2017, 10:03:51 PM Last edit: March 16, 2017, 10:17:50 PM by Carlton Banks |
|
And you seem to have STILL not read the following sentence: Don't forget that the original intention of the proposals we're discussing here is to achieve approval for Segwit by miners of the "big blockers" faction, to get rid of the stalemate. I, for myself, would for now be perfectly happy with Segwit alone, and then let the Core devs decide further scaling measures.
I have. Have you? I don't think you understand the politics here: the inconsistencies in the position of the miners blocking Segwit/promoting BU are obvious; they're saying "but we want big blocks!" when they're being offered big blocks. It's a bit like the way you're arguing, really. The obstructionist miners are doing so to add to the conflict; they're not interested in being constructive. That's transparently their intention, to those that aren't completely naive, of course.

there are several ways (and likely several more we haven't thought of) that could be employed to get that kind of transaction density with 4MB Segwit blocks, bigger than that is unnecessary when there are other options.

Again: more precision please. We aren't advancing here. I, for myself, would be perfectly happy with a solution like that:
- Segwit
- 50% TX density improvement by better encoding
- Block time reduced to 5 minutes (again, I would be in favour, but I don't think it will come)
- 1 MB base / 4 MB weight.

Please respond to the actual argument I'm making, instead of derailing back into defending x=y linear growth in blocksize. It's indefensible when the same rate of growth could be achieved a less dangerous way, using actual scaling paradigms that multiply the utility of EXISTING CAPACITY, not adding extra burden to the capacity at the same scale.
I'm interested in LN and sidechains like Rootstock, but I have already pointed out that even with a well-functioning LN we need more on-chain capacity. If the solutions you mention (TX encoding, block time) are providing them, then why don't you link me to the relevant discussions of it?

gmaxwell posted on Bitcointalk about the tx encoding efficiency hard fork a few weeks ago. He mentioned a factor of improvement; why aren't you motivated to find out for yourself what it is, instead of taking a demonstrably controversial, fruitless and very naive route? Again, if you're really interested in actual scaling paradigms and not dangerous non-scaling resource use increases, you would be sufficiently motivated to look for yourself.

I've read it and don't need to read it again.

You sound interested, so what's the problem? Are you interested and motivated, or not? And AGAIN: please respond to the actual argument I'm making, instead of derailing back into arguments defending x=y linear growth in blocksize. Using actual scaling paradigms that multiply the utility of EXISTING CAPACITY is far more valuable and sensible than your idea of adding extra burden to the capacity at the same scale.
|
Vires in numeris
|
|
|
d5000 (OP)
Legendary
Offline
Activity: 4102
Merit: 7562
Decentralization Maximalist
|
|
March 16, 2017, 11:32:48 PM |
|
So your position is basically: The miners are totally wrong. So let's ignore the miners. Not very constructive.
I'm not saying that block size is the only way, or even the best way, to go; that is an invention of yours, so your last phrase is totally dishonest and wrong. I've only said that under current conditions (~3 tps with 1 MB blocks) a Segwit 1 MB base / 4 MB weight capacity very probably won't be enough in five years, even with LN and sidechains, if we really want mass adoption (> 50 million users).
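The ~3 tps figure can be sanity-checked with simple back-of-envelope arithmetic. In the sketch below, the 500-byte average transaction size and the ~1.7x effective SegWit capacity multiplier are illustrative assumptions (not measured values from this thread):

```python
# Rough capacity arithmetic behind the "~3 tps at 1 MB blocks" claim.
BLOCK_INTERVAL_S = 600          # ~10-minute block target
BLOCK_SIZE_BYTES = 1_000_000    # 1 MB base block size
AVG_TX_BYTES = 500              # assumed average tx size; real sizes vary widely

tps = BLOCK_SIZE_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(round(tps, 1))            # 3.3

# Segwit's 1 MB base / 4 MB weight limit does not mean 4x throughput;
# a commonly assumed effective multiplier for typical traffic is ~1.7x.
SEGWIT_MULTIPLIER = 1.7         # assumption, for illustration only
print(round(tps * SEGWIT_MULTIPLIER, 1))  # 5.7
```

Even under these generous assumptions, first-layer throughput stays in the single digits of tps, which is the gap d5000 is pointing at.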
I don't care if a transaction capacity increase is achieved with bigger blocks or other measures. I'll investigate Gmaxwell's transaction encoding proposal, but it's not easy to find. So really, if you are interested in continuing a constructive discussion, please provide me with a source.
The "smaller block interval" proposals (with coinbase blocks separated or not) would, anyway, only improve one side of the scaling problem: block propagation. IBD and storage space would not be affected.
|
|
|
|
kiklo
Legendary
Offline
Activity: 1092
Merit: 1000
|
|
March 17, 2017, 12:05:50 AM |
|
-snip-
Regarding your recent mention of reducing the block generation time, I asked on IRC. Apparently even with all the relay improvements, they are not adequate to safely reduce the generation time to e.g. 5 minutes, Maxwell claimed. There were a few more people giving their opinion at the time, but I forgot to save a copy of the chat. This is quite unfortunate though; I'd definitely like to see 2017 data for this kind of experiment. Maybe a testnet with constantly full 1 MB blocks and a 5 min block interval.

TestNet was called Litecoin at 2½ minutes, and multiple alts. G.Maxwell is blowing smoke; if BTC can't do your 5 minute block speed then the whole thing should be scrapped and written from scratch (maybe copy the litecoin source code, which has 4X the transaction capacity of BTC without deadwit). What lie did he make up to explain why BTC sucks more than every other coin in existence in regards to blockspeed?

FYI: Have you noticed that BTC is unable to do anything else to fix transaction capacity issues except segwit, if you talk to G.Maxwell? The answer is simple: talk to someone that won't lie to you.
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
March 17, 2017, 08:44:41 AM |
|
So your position is basically: The miners are totally wrong. So let's ignore the miners. Not very constructive.
You're a dirty little exaggerator when you have no real argument, huh. I said nothing of the sort. SOME of the miners are behaving irresponsibly, as they are not respecting what their actual responsibilities are. Bitcoin 0.13.1 was the Segwit soft fork implementation only, and the users overwhelmingly demonstrated they are ready for it and supportive of it; even now 0.13.1 is actually more numerous on the network than the release that came immediately after it. DON'T YOU DARE SUGGEST I'M DOING SOMETHING WRONG BY POINTING THAT OUT. You're a disgrace, the sort of fool who will not back down in an argument because of your precious little ego. You're wrong, you're suggesting something staggeringly foolish, and I will not be brow-beaten by someone who cannot swallow their misplaced pride on such an important issue.

I'm not saying that block size is the only way and not even the best way to go, that is an invention of yours, so your last phrase is totally dishonest and wrong. I've only said that in the actual conditions (~3 tps at 1MB blocks) a Segwit + 1 base/4 weight MB capacity very probably won't be enough in five years, even with LN and sidechains, if we really want mass adoption (> 50 million users).

Dedicating a huge thread labelled as BIP 102, but which is actually BIP 102 on Schwarzenegger doses of steroids, SURE AS HELL DOES LOOK LIKE SOMEONE PUSHING BIGGER BLOCKS AT ANY COST. If you're not dismissing on-chain scaling improvements in favour of just flat-out increasing the resources the Bitcoin network needs to run, then why are you:
- Dismissing on-chain scaling improvements
- Heavily promoting increases in the resources the Bitcoin network needs to run
I don't care if a transaction capacity increase is achieved with bigger blocks or other measures.

Well, you could've fooled me.

So I'll investigate Gmaxwell's transaction encoding proposal but it's not easy to find. So really, if you are interested in continuing a constructive discussion, please provide me a source.

NO. I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else, and that you understand it's the worst and most dangerous way to increase capacity, and most of all that it DOES NOT CONSTITUTE A SCALING PARADIGM AT ALL. There's not much point in trying to argue otherwise; it's an incontrovertible fact.
|
Vires in numeris
|
|
|
DooMAD
Legendary
Offline
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
|
|
March 17, 2017, 03:16:08 PM |
|
I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else but.
If you don't want to hear it then you're probably in the wrong thread. Feel free to join another discussion more befitting your hardline sensitivities. Or better yet, maybe learn how human interaction works and think before you bark.

Most people are on board with SegWit, but many are still concerned about what comes after that. Even if you can't accept the fact that not everyone wants to be pressured into using Lightning, or whatever off-chain solution ends up being proposed, you should still get used to the fact that they're going to keep talking about the blocksize, because it's not going away. I think it's highly unlikely that the pressure will dissipate in that regard. People see this arbitrary and temporary bottleneck which wasn't part of the original design, so it's hardly unreasonable to question it (despite your obvious opinion to the contrary).

So what you have to consider is: the more you tell people not to discuss it, the more you isolate yourself and make people think there's no point engaging with you, when all you're going to do is tell them how wrong they are because they don't see things as you do. I'll phrase it as politely as I can here, but you don't exactly have a winning personality. Maybe you feel like you've justified your view enough in the past and you're just repeating yourself at this point. But whatever your reasons are, if you can't even be bothered to explain why you think anyone who wants to look at the blocksize is basically the devil, and just scream at people that they're wrong, you're not going to win many people over. Just a suggestion. But if you like being seen as the mindless attack dog, keep up the sterling work!
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 17, 2017, 03:37:11 PM Last edit: March 17, 2017, 03:55:12 PM by franky1 |
|
TestNet was called Litecoin at 2½ minutes and multiple alts. G.Maxwell is blowing smoke, if BTC can't do your 5 minute block speed then the whole thing should be scrapped and written from scratch. (maybe copy litecoin source code that has 4X the transaction capacity of BTC without deadwit.) What lie did he make up to explain why BTC sucks more than every other coin in existence in regards to blockspeed? FYI: Have you Noticed that BTC is unable to do anything else to fix transactions capacity issues except segwit if you talk to G.Maxwell. Answer is simple, Talk to someone that won't lie to you.

i do have to say this... though litecoin and other coins may have lower blocktimes, they also have lower node counts (~700ish active nodes). once you start thinking of the time to download, verify and then send out a block... the more nodes there are, the more 'hops' each relay layer of nodes needs to do, and the more total time needed for the entire network to get the block before it should be happily ready to get the next block.

EG imagine a coin with just 8 nodes: one node can send a block to all 8 nodes in 1 hop. but imagine there were say over 4000 nodes: 1 + 8 + 64 + 512 + 4096 = 4681, thats 4 hops to get to all 4000 nodes (bitcoin needs about 5 relays/hops due to having ~7000ish)

now thats 4 times the time it takes for everyone on the network to get and verify the block before the next blockheight gets sent. reducing the time between block heights means less time for the previous block height to get around and for everyone to agree on it, which can lead to more orphans if some nodes have not got the data to agree on block X+2 because they have not even got X+1 yet

based on a download, verify and relay out to all 8 peers taking less than 30 seconds... EVER:
in a network of say 9 nodes (1 pool node and 8 verifying nodes) you could get away with blocks being built every 30 seconds and have nice wiggle room
in a network of say 73 nodes (1 pool node and 72 verifying nodes) you could get away with blocks being built every 60 seconds and have nice wiggle room
in a network of say 585 nodes (1 pool node and 584 verifying nodes) you could get away with blocks being built every 1min 30 seconds and have nice wiggle room
in a network of say 4681 nodes (1 pool node and 4680 verifying nodes) you could get away with blocks being built every 2min and have nice wiggle room
in a network of say 37449 nodes (1 pool node and 37448 verifying nodes) you could get away with blocks being built every 2min 30 seconds and have nice wiggle room

but thats on the basis that a download, verify and relay out to all 8 peers takes less than 30 seconds... EVER. if the average propagation was say 1 minute, then suddenly bitcoin wouldnt cope with 5min blocks. if the average propagation was say 2 minutes, then suddenly bitcoin wouldnt cope with 10min blocks. so its a cat and mouse game between propagation times and node counts. yep, its not just about blocksize harming node counts; node counts can also cause delays and orphan risks to blocks (but at just 1mb it would take a hell of a lot of nodes to do that)
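The hop arithmetic above can be sketched as a quick script. The 8-peer fanout and 30-second per-hop budget come from the post itself; treat this as an illustrative model, not a measurement of real network behaviour:

```python
# Sketch of the relay-hop arithmetic: each node forwards a block to 8 peers,
# and each hop (download + verify + re-send) is assumed to take a fixed time.

def nodes_reached(hops, fanout=8):
    """Total nodes covered after the given number of relay hops
    (1 origin node plus fanout, fanout^2, ... per layer)."""
    return sum(fanout ** h for h in range(hops + 1))

def hops_needed(node_count, fanout=8):
    """Smallest number of hops whose coverage meets node_count."""
    hops = 0
    while nodes_reached(hops, fanout) < node_count:
        hops += 1
    return hops

# 1 + 8 + 64 + 512 + 4096 = 4681 nodes within 4 hops, as in the post
print(nodes_reached(4))     # 4681
print(hops_needed(4000))    # 4
print(hops_needed(7000))    # 5  (roughly the node count cited for bitcoin)

# Implied minimum propagation time if one hop takes 30 seconds:
PER_HOP_SECONDS = 30
print(hops_needed(7000) * PER_HOP_SECONDS)  # 150
```

The point the numbers illustrate: full-network propagation time grows with the hop count, so shrinking the block interval eats into the margin between "block propagated" and "next block found", raising orphan risk.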
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
March 17, 2017, 04:03:10 PM |
|
I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else but.
If you don't want to hear it then you're probably in the wrong thread. Feel free to join another discussion more befitting to your hardline sensitivities. Or better yet, maybe learn how human interaction works and think before you bark. Most people are on board with SegWit, but many are still concerned what comes after that. Even if you can't accept the fact that not everyone wants to be pressured into using Lightning, or whatever off-chain solution ends up being proposed, you should still get used to the fact that they're going to keep talking about the blocksize, because it's not going away. I think it's highly unlikely that the pressure will dissipate in that regard. People see this arbitrary and temporary bottleneck which wasn't part of the original design, so it's hardly unreasonable to question it (despite your obvious opinion to the contrary). So what you have to consider is, the more you tell people not to discuss it, the more you isolate yourself and make people think there's no point engaging with you, when all you're going to do is tell them how wrong they are because they don't see things as you do. I'll phrase it as politely as I can here, but you don't exactly have a winning personality. Maybe you feel like you've justified your view enough in the past and you're just repeating yourself at this point. But whatever your reasons are, if you can't even be bothered to explain why you think anyone who wants to look at the blocksize is basically the devil, and just scream at people that they're wrong, you're not going to win many people over.

There is one situation in particular where I reserve the right to be belligerent: when people are doing something dangerous.

Play the ball, not the man.

Any personal criticism coming from me relates specifically to your behaviour in this thread, not wholesale attacks on character.
And you STILL have yet to provide actual reasoning for your x=y linear blocksize growth proposal; you still have yet to provide logical refutations of my insistence that blocksize should be the last resort, not the first. You're the bad actor here; you cannot accept that you are wrong, and have only unbacked assertions to prosecute your position. What you're suggesting is bad engineering practice, and actual engineers know this full well. You have presented zero evidence to the contrary, and are only continuing because your pride is hurt; otherwise you would accept sound reasoning. Instead you try to discuss anything but the points I have raised. You're campaigning on blocksize for its own sake only, deaf and blind to any alternatives, which exist. You are the bad actor, the poor debater, the intransigent. To attack me because of your own panoply of shortcomings is a total disgrace; you have no place debating any technical matters whatsoever.
|
Vires in numeris
|
|
|
dinofelis
|
|
March 17, 2017, 04:05:41 PM |
|
TestNet was called Litecoin at 2½ minutes and multiple alts. G.Maxwell is blowing smoke, if BTC can't do your 5 minute block speed then the whole thing should be scrapped and written from scratch. (maybe copy litecoin source code that has 4X the transaction capacity of BTC without deadwit.) What lie did he make up to explain why BTC sucks more than every other coin in existence in regards to blockspeed? FYI: Have you Noticed that BTC is unable to do anything else to fix transactions capacity issues except segwit if you talk to G.Maxwell. Answer is simple, Talk to someone that won't lie to you.

i do have to say this... though litecoin and other coins may have lower blocktimes.. they also have lower node counts (~700ish active nodes). once you start thinking of the time to download, verify and then send out a block.. the more nodes there are, the more 'hops' each relay layer of nodes needs to do, and the more total time needed for the entire network to get the block before it should be happily ready to get the next block. EG imagine a coin with just 8 nodes.. one node can send a block to all 8 nodes in 1 hop. but imagine there were say over 4000 nodes

Consider the pool installing a node that can have 20,000 outgoing connections, like a small data center. All non-mining nodes connect directly to this one, like you connect to Facebook's servers. You scream: decentralisation! But you have in any case only one single data source: your single pool. No other entity is making a block chain. So you can just as well connect to your single source of data.

In your story, replace 1 pool by 10 pools. Well, any of these pools is a reliable source of data, and ONLY these 10 pools are the source of their (common) block chain. There's no other source. If they hold it back, they are hurting themselves. EACH of these pools can serve all the other nodes. Nodes can choose which node(s) they want to connect to. No need to connect to proxy nodes.
|
|
|
|
d5000 (OP)
Legendary
Offline
Activity: 4102
Merit: 7562
Decentralization Maximalist
|
|
March 17, 2017, 04:08:04 PM |
|
@Carlton Banks: You have just demonstrated that you are the wrong person to discuss a "compromise" with. You haven't brought anything constructive into the discussion, only destructive ad hominem, so I will ignore your input from here on. You never proved ANYTHING. So don't blame me and others for trying to find an intermediate position if the fork comes and Bitcoin is eternally split. This thread is an attempt to avoid a fork. I know that some from the hardcore-Core ( ) faction are totally OK with a fork, but it could harm Bitcoin seriously - if you don't see that, it's not my problem. I can also switch to an altcoin then, as I'm currency-agnostic.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 17, 2017, 04:11:57 PM Last edit: March 17, 2017, 05:17:01 PM by franky1 |
|
You scream: decentralisation ! But you have in any case only one single data source: your single pool. No other entity is making a block chain. So you can just as well connect to your single source of data.
no, i'd scream centralisation.. one source, one brand.. is still centralisation. in other topics, and for many months and years, i have said: distributed centralisation is NOT decentralisation. a network of only core, whether it be 3000 nodes, 8000 or 80000, is meaningless if all the code was the same; especially if a bug popped up, they would all be affected. however, diversity (different code bases, even written in different languages (go, ruby, java, c#)) and more than one option is decentralisation. i would rather there be 10 different codebases for node users to freely choose from, not 2 and not 1. core wanting domination, and also wanting fibre ring-fencing the pools as the upper upstream filter, is not about decentralisation.. just distributing.. but still centralising..
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
March 17, 2017, 04:14:32 PM |
|
d5000, you have the same problem as all dishonest debaters trying to throw up smokescreens and diversions to cover their tracks:
People only need to read the posts to see that you were being evasive and squirming, and I was being direct and clearly replying to you. All anyone has to do is read, to see you're the one using dishonest playground tactics. You will be exposed again and again, if you continue in this way. You can't convince people with dishonesty.
It's pretty extremist to want to do something extremely stupid in the light of contrary evidence if you ask me.
|
Vires in numeris
|
|
|
dinofelis
|
|
March 17, 2017, 04:16:22 PM |
|
You scream: decentralisation ! But you have in any case only one single data source: your single pool. No other entity is making a block chain. So you can just as well connect to your single source of data.
no id scream centralisation.. one source one brand.. is still centralisation.

If you only have 1 pool (in your example) then there is only one source of block chain data. So whether you get it DIRECTLY from them, or from someone who copied it from them, there's no "decentralization" in that story.

i would rather there be 10 different codebases of a node. not 2 and not 1
That doesn't change the fact that if you have only one pool, making one single block chain, you either accept it, or you don't have a block chain. Even if you have 10 pools, connected together and agreeing (building on one another's blocks), there is STILL only one block chain, but now, 10 different pools have it and can send it to you. All the others can simply copy from one of them, and send it to you too.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 17, 2017, 05:25:34 PM |
|
You scream: decentralisation ! But you have in any case only one single data source: your single pool. No other entity is making a block chain. So you can just as well connect to your single source of data.
no id scream centralisation.. one source one brand.. is still centralisation.

If you only have 1 pool (in your example) then there is only one source of block chain data. So whether you get it DIRECTLY from them, or from someone who copied it from them, there's no "decentralization" in that story.

agreed. never said the opposite

i would rather there be 10 different codebases of a node. not 2 and not 1
That doesn't change the fact that if you have only one pool, ...

i see you're using my simple explanation of relay timing as an example.. arbitrary numbers.. but yea, a 1*8*8 vs 1*8*8*8*8*8*8 would still be centralised.. but distributed. whereas 20*8*8 vs 20*8*8*8*8*8*8 would be more decentralised.. especially if those 20 pools had different code bases and the different node layers (of 8) had differing code bases. which was mentioned before: diversity (different code bases, even written in different languages (go, ruby, java, c#)) and more than one option is decentralisation..
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
DooMAD
Legendary
Offline
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
|
|
March 17, 2017, 05:45:49 PM |
|
People only need to read the posts to see that you were being evasive and squirming, and I was being direct and clearly replying to you.
Said the weaseliest weasel that ever weaseled in all of weaseldom, without a hint of irony. So, Carlton, do you think there should be a cut-off point where coins not moved to a quantum-proof key are frozen by the network?

All anyone has to do is read, to see you're the one using dishonest playground tactics. You will be exposed again and again, if you continue in this way. You can't convince people with dishonesty.
Looked in a mirror lately? I mean... it's like you don't even see it.
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
March 17, 2017, 05:59:39 PM |
|
Where are your technical arguments?
If I'm so lacking in value as a source of truth, why do you need to attack nothing but my character? Why do you need to attack at all, surely I'm ssssssso transparent that anyone can see it without your "help"?
You can't make any actual arguments, or defend your own nonsense, and so you must attack nothing but me.
Let's see what people really think (and oh look, no-one's interested in reckless blocksize hard-forks, as usual)
|
Vires in numeris
|
|
|
dinofelis
|
|
March 17, 2017, 07:49:18 PM |
|
i see your using my simple explanation of relay timing example.. arbitrary numbers.. but yea in a 1*8*8 vs 1*8*8*8*8*8*8 would still be centralised.. but distributed. where as 20*8*8 vs 20*8*8*8*8*8*8 would be more decentralised.. especially if those 20 pools had different code bases and the different nodes(layers of 8 ) had differing code bases.
My point was that the argument "the more nodes, the more time it takes" is not correct. There is no reason for nodes not to connect directly to one of the mining pool nodes (which can afford to have a datacenter-type node). Then all non-mining nodes are one single hop away from the pool datacenter and are served directly. No need for a slow P2P network.

This has nothing to do with decentralisation. There is only one SOURCE of the block chain: the minerpool network. If there is only one pool, well, this sole pool is the block chain source. If there are 20 of them, they are well-connected between them, and all of them, most of them, or some of them will set up a data center to serve the non-mining nodes. In any case, each of these miners is a good, primary source of the block chain, cannot cheat (if it withholds blocks, it will orphan its own blocks; it cannot make fake blocks - wasted hash rate - and they are eager to get the latest blocks from their co-miners), has no incentive to cheat, and wants good connections to customer nodes to get to their transactions first, so that they can get hold of the most interesting fees first.

Whether you get the block chain directly from the source, or after several P2P hops, doesn't matter if you're not in a hurry; but if you are, then you'd better get it directly from a minerpool server. And then it doesn't really matter how many of the other nodes are also being served by that same data center.

Also, if the network interface is well defined, it doesn't really matter what software you actually run on your client node, if you are happy with it. People can use Firefox or Internet Explorer or Chrome or whatever as a browser; so your node software, which is a "block chain downloader and browser/checker", can also be of various brands. Doesn't matter, as long as it understands the (sole) block chain that is being served.
|
|
|
|
kiklo
Legendary
Offline
Activity: 1092
Merit: 1000
|
|
March 18, 2017, 04:55:30 AM |
|
In other words: You can't DOS the network at 1 MB using native keys post Segwit. Which is my whole point. Stop with these strawman arguments.
you need to really study more. simply saying "cant b'coz cant" or "wrong because ad-hom" is becoming very apparent as your rebuttal. please study these things beyond the 2 paragraph sales pitches of empty promises.

I don't need to study anything. You have a fallacious way of arguing and reasoning. You completely changed my argument in order to refute it with your own. You created an argument that I did not make, also known as a strawman argument. You can't DOS the network with native keys with Segwit. Period. You should buy this with your employer's money: Cute Book.

Segwit would open up many attacks on the Time Locking System included in deadwit; say a hacker finds a way to break the time lock early, they will be able to steal BTC like crazy, and it could be months before anyone knows they were robbed. Hacking an LN hub to steal BTC is a matter of time, nothing else.
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
March 18, 2017, 01:52:08 PM |
|
I would like to see an adaptive block size: make it once and forever. Raise to 2MB now, and then what, another HF in 2 years or less? https://github.com/bitpay/bitcoin/issues/42
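The idea of an adaptive limit can be sketched as a simple rule in the spirit of the linked proposal. Note this is a hypothetical illustration: the `multiplier` and `floor` parameters, and the use of a plain median, are assumptions for the sketch, not the actual rule from the linked issue:

```python
# Hypothetical median-based adaptive block size limit (illustrative only).
from statistics import median

def adaptive_limit(recent_block_sizes, multiplier=2, floor=1_000_000):
    """Next block size limit: a multiple of the median of recent block
    sizes, never allowed to drop below a fixed floor (1 MB here)."""
    return max(floor, int(multiplier * median(recent_block_sizes)))

# If recent blocks are near-full 1 MB, the cap roughly doubles;
# if usage falls away, the cap shrinks back to the floor.
print(adaptive_limit([950_000] * 11))   # 1900000
print(adaptive_limit([200_000] * 11))   # 1000000 (floor applies)
```

The appeal of a rule like this is exactly what the post says: the limit tracks demand automatically, so no further hard fork is needed each time blocks fill up.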
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 2965
Terminated.
|
|
March 18, 2017, 02:27:21 PM |
|
Segwit would open up many attacks on the Time Locking System, included in deadwit, say a hacker finds a way to break the time lock early, they will be able to steal BTC like crazy and it could be months before anyone knows they were robbed.

That is not Segwit, that is LN. You are confusing two completely different things.

That version didn't gather much support, if any. Bitcoin scales very inefficiently on the first layer; that's why it requires a secondary layer.
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
March 18, 2017, 03:15:49 PM |
|
That version didn't gather much support if any. Bitcoin scales very inefficiently on the first layer, that's why it requires a secondary layer.
This is why I propose to use BOTH SegWit and Adaptive Block Size (Bitcoin ABS? XD )
|
|
|
|
|