coin_gambler
Sr. Member
Offline
Activity: 462
Merit: 250
CryptoTalk.Org - Get Paid for every Post!
|
|
February 18, 2016, 08:43:10 PM |
|
I'm glad the price is holding right now; I hope that one day it will reach new heights.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
February 18, 2016, 08:44:21 PM |
|
I'd love to hear you and Todd talk about this.
Some people seem to be under the impression that Todd is married to keeping the maxBlockSize at 1MB. In reality that number has little importance in itself. Peter Todd, Amir Taaki, and Luke Jr are indeed much more conservative than other developers when it comes to anything that can negatively affect security, but they aren't like the Bitcoin Assets Group* who seem to be married to the idea that the protocol should never change. Peter Todd sounds grumpy and pessimistic, but we need people like him to discover bugs and weaknesses and strengthen the protocol. Raising the blocksize isn't a problem for him if it can be done securely and node count doesn't keep dropping. Peter is well aware the LN needs much bigger blocks to be useful. why do people still cling to the idea that bitcoin *needs* to scale or be instant or nearly free to be valuable?
Here is an example of two people I know of who fit the category of being perfectly fine with Bitcoin never scaling. I and most (if not all) Core developers don't agree, because we are competing with other altcoins and cannot allow them to catch up and dramatically surpass us in usability.
|
|
|
|
JayJuanGee
Legendary
Offline
Activity: 3836
Merit: 10832
Self-Custody is a right. Say no to"Non-custodial"
|
|
February 18, 2016, 08:45:55 PM |
|
Uh, I'm not an expert, but doesn't Lightning Network have a routing problem? And by problem, I mean in the sense that pigs have a flying problem.
So to sum up:
Big-blockers think technology will happen to solve the node hard-drive bloat problem, and we are ridiculed by people who think technology will happen to solve LN's routing problem, on the grounds that we can't count on tech that hasn't been invented yet. Hmmmm.
If we are walking 1 mile per day, we do not need to make detailed and rock-solid plans for exactly how we are going to cross bridges that are 500 miles away. But, agreed, it does not hurt to plan ahead. On the other hand, in the meantime there are going to be 500 days of other obstacles that we will need to contend with, and we are not 100% certain we will even arrive at those bridges 500 days into the future, because our direction may change a bit and our resources may also change a bit.
|
|
|
|
8up
|
|
February 18, 2016, 08:50:44 PM |
|
Don't get me wrong. I'd like to see LN being a big success. But having a roll-out within a few months and being adopted are two completely different things.
No one is suggesting it will be widely adopted in 2016, or 2017... I said it rolls out in 2016 (from a sidechain testnet to a live bitcoin testnet). LN won't become useful until capacity is much bigger (4-8MB at least), reinforcing my point that the blocksize needs to grow dramatically for the LN to be viable. Which kinda blows up the whole conspiracy theory that Blockstream is delaying the blocksize repeatedly as a stalling tactic because they never want it to grow. why do people still cling to the idea that bitcoin *needs* to scale or be instant or nearly free to be valuable? it's about >>> valuable or === valuable; for everything else there is economics (adoption competition && security markets)
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
February 18, 2016, 08:51:26 PM |
|
The underlined part does not match real world experience of the last 20 years:
1995: Pentium 100 / 4-16MB RAM / 800MB-2GB disks / dialup and ISDN connections at speeds of XX kbps
2015: multicore i7s / 4-16GB RAM / 1TB-4TB disks / home connections at XX mbps+.
We are seeing 1000x gains in processing, RAM, storage, and internet - and we are really getting those incrementally, not as radical breakthroughs.
If the software can currently do something like 10 tx/s (with 2MB blocks + segwit), then with 1000x in hardware (and no change in software), just on the current trendline of tech progression, we'll be seeing 10,000 tx/s by 2035 and hundreds of thousands of tx/s further ahead.
If you add software improvements, breakthrough advances (?), or tapping new resources (like GPU processing, which is already >10x CPU processing power, for general-purpose computing tasks like validation, real-time compression, etc.), we might get there far earlier.
QC-resistance may be one of the factors that regresses scaling if it employs a bloated scheme; otherwise it's a certainty that massive scaling *will* happen. Just not now - it'll take time.
Where are you getting your data that we have seen a 1000x increase in bandwidth and network propagation times between '95 and 2015? And no, I don't want you to cite theoretical peak numbers, but real average bandwidth numbers and real propagation times. If anything, network bandwidth is becoming more limited in certain areas due to ISP soft caps from overselling their networks.
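For what it's worth, the trendline arithmetic the two posters are arguing over is simple compounding. A minimal sketch, using the quoted post's assumptions (a 10 tx/s baseline, ~1000x hardware gain over 20 years) rather than measured data:

```python
# Back-of-the-envelope throughput projection under compounding hardware growth.
# The baseline and the 1000x/20yr factor are the post's assumptions, not measurements.

def projected_tps(baseline_tps: float, growth_per_year: float, years: int) -> float:
    """Compound a throughput baseline by a yearly hardware-growth factor."""
    return baseline_tps * (growth_per_year ** years)

# A 1000x gain over 20 years corresponds to ~41%/year compounded.
per_year = 1000 ** (1 / 20)
print(round(per_year, 4))                     # -> 1.4125
print(round(projected_tps(10, per_year, 20)))  # -> 10000, the "10,000 tx/s by 2035" figure
```

Note this is exactly the point BitUsher disputes: the arithmetic holds only if bandwidth and propagation actually compound at that rate, which he argues they don't.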
|
|
|
|
sAt0sHiFanClub
|
|
February 18, 2016, 08:51:48 PM |
|
I don't understand. Isn't the size of blocks the limit for number of tx?
Even if you change the network, an XMB block can still contain only an amount of txs proportional to X, no?
No, the LN is an extremely efficient caching layer that doesn't involve trusting third parties and can settle far more txs. you have absolutely no way of knowing if that is going to be true or not in the future You are being either deliberately disingenuous, or you have no idea of the current state of flux within LN design (because despite having a mock-up running, it is still very much at the design stage). ** apologies if my posts are a bit behind the curve. Busy day on Classic.
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
February 18, 2016, 08:52:12 PM |
|
The difficult thing about this conversation is that we are repeatedly being misrepresented or misunderstood. Core has already agreed to compromise and kick the can by increasing capacity... which means that you really aren't interested in 2MB, but something much bigger... which at this point in time (remember, we want bigger blocks too) would be disastrous for bitcoin. Even by Gavin's own calculations, Classic could cause a 40% node drop-off (worst-case scenario); this is absolutely unacceptable and we need to start reversing this trend immediately.
segwit is huge, it makes any blocksize effectively double the size but core is still not budging from 1MB will they ever? Todd needs to come clean and literally say "we will not increase the block size no matter what happens, because we believe in Lightning"
1. You go into the airport and you are only allowed 1 bag that weighs a max of 100 pounds. (1MB limit - current scheme)
2. You go into the airport and you are only allowed 1 bag that weighs a max of 200 pounds. (2MB hard fork - proposed scheme of Classic)
3. You go into the airport and you are allowed 2 bags that weigh a combined 170-300 pounds depending on the items (multisig txs) you carry. (segwit scheme by Core)
Why would you insist on whether it's one bag or two? Does it matter to you? Similarly, segwit allows 1.7MB to 3MB+ of data. Just because it is in two separate data structures doesn't mean it's less data or that it's ...still 1MB. It isn't. If one file on your PC saves 1MB and another saves 700kb, it's still 1.7MB of data in txs (and not 1MB). And you also solve the malleability problem that has been there for years.
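The 1.7MB figure in the two-bags analogy is just additive arithmetic: base block data plus the separated witness data, counted together. A minimal sketch; the 0.7MB witness share is the post's illustrative figure, not a protocol constant, since the actual share depends on the transaction mix:

```python
# Effective data per block under segwit: base block plus separated witness data.
# 0.7 MB is the post's illustrative witness share; multisig-heavy blocks carry
# more witness data, which is how the quoted 1.7MB-3MB+ range arises.
BASE_MB = 1.0  # the existing base block size limit

def effective_block_mb(witness_mb: float) -> float:
    """Total transaction data actually stored and relayed per block."""
    return BASE_MB + witness_mb

print(round(effective_block_mb(0.7), 2))  # -> 1.7
print(round(effective_block_mb(2.0), 2))  # -> 3.0, the "3MB+" end of the range
```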
|
|
|
|
adamstgBit
Legendary
Offline
Activity: 1904
Merit: 1037
Trusted Bitcoiner
|
|
February 18, 2016, 08:52:16 PM |
|
I'm glad the price is holding right now; I hope that one day it will reach new heights.
don't worry, although we all strongly disagree, everyone is fighting for what they think is the best possible outcome. we'll pull through, learn something, and bitcoin will be better off for this painful process.
|
|
|
|
billyjoeallen
Legendary
Offline
Activity: 1106
Merit: 1007
Hide your women
|
|
February 18, 2016, 08:52:18 PM |
|
billyjoeallen's short doesn't seem so ridiculous today.
Why do you say that? I had nearly the worst possible timing. Shorting over $420 makes sense, but the best thing I can do now is just break even. My guess is this thing will peter out at around $450. That was the short I shouldn't have covered. Look, all the weak hands have been shaken out in the two-year bear market. The problem is that nobody in their right mind will buy now. That won't stop a pump, because too many people are not in their right mind, but it can't be sustainable until TWO very serious problems are fixed: mining concentration and scaling. Of the two, scaling is actually the easy one.
|
|
|
|
adamstgBit
Legendary
Offline
Activity: 1904
Merit: 1037
Trusted Bitcoiner
|
|
February 18, 2016, 08:59:28 PM |
|
The difficult thing about this conversation is that we are repeatedly being misrepresented or misunderstood. Core has already agreed to compromise and kick the can by increasing capacity... which means that you really aren't interested in 2MB, but something much bigger... which at this point in time (remember, we want bigger blocks too) would be disastrous for bitcoin. Even by Gavin's own calculations, Classic could cause a 40% node drop-off (worst-case scenario); this is absolutely unacceptable and we need to start reversing this trend immediately.
segwit is huge, it makes any blocksize effectively double the size but core is still not budging from 1MB will they ever? Todd needs to come clean and literally say "we will not increase the block size no matter what happens, because we believe in Lightning"
1. You go into the airport and you are only allowed 1 bag that weighs a max of 100 pounds. (1MB limit - current scheme)
2. You go into the airport and you are only allowed 1 bag that weighs a max of 200 pounds. (2MB hard fork - proposed scheme of Classic)
3. You go into the airport and you are allowed 2 bags that weigh a combined 170-300 pounds depending on the items (multisig txs) you carry. (segwit scheme by Core)
Why would you insist on whether it's one bag or two? Does it matter to you? Similarly, segwit allows 1.7MB to 3MB+ of data. Just because it is in two separate data structures doesn't mean it's less data or that it's ...still 1MB. It isn't. If one file on your PC saves 1MB and another saves 700kb, it's still 1.7MB of data in txs (and not 1MB). And you also solve the malleability problem that has been there for years.
segwit effectively doubles capacity for any block limit; doubled capacity at 2MB is better than doubled capacity at 1MB. I feel we aren't fully taking advantage of what segwit does if we don't also increase the block size.
|
|
|
|
ChartBuddy
Legendary
Online
Activity: 2296
Merit: 1801
1CBuddyxy4FerT3hzMmi1Jz48ESzRw1ZzZ
|
|
February 18, 2016, 09:00:56 PM |
|
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
February 18, 2016, 09:00:58 PM |
|
Where are you getting your data that we have seen a 1000x increase in bandwidth and network propagation times between '95 and 2015?
I was actually using the internet back in '95, so my personal first-hand experience is the data. To give you an idea, back in '95 my entire country was connected to the Internet through 2 major ISPs. Their total connectivity added up to ...512 kbps, which had to be shared among >5000 users on dialup (typically 14.4 to 28.8 kbps - although a bad phone line could drop that rate dramatically) or leased lines (64kbps). Today, total connectivity is in line with the 1000x figure (at least 500 gbps). My home connection exceeds 1000x gains if I connect over VDSL, and is at 500x-1000x with ADSL.
|
|
|
|
becoin
Legendary
Offline
Activity: 3431
Merit: 1233
|
|
February 18, 2016, 09:01:06 PM |
|
but it can't be sustainable until TWO very serious problems are fixed: Mining concentration and scaling. Of the two, scaling is actually the easy one.
Who cares? We shall see a new high above $1200 by the end of the year.
|
|
|
|
sAt0sHiFanClub
|
|
February 18, 2016, 09:02:33 PM |
|
^Blitzkrieg = Lightning Network? My German is shit.
you already nailed it with BlitzNetwerk. Too funny.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
February 18, 2016, 09:03:41 PM |
|
segwit effectively doubles capacity for any block limit; doubled capacity at 2MB is better than doubled capacity at 1MB. I feel we aren't fully taking advantage of what segwit does if we don't also increase the block size.
It doesn't work that way... the two are only related insofar as increasing the maxBlockSize to 2MB with segwit is even more dangerous in certain aspects (and less in others) than simply changing the maxBlockSize to 4MB. You just want a capacity of 28 TPS right away, is all... and there is nothing wrong with wanting more cake... I want more too... but can we just hit the gym first before we load up on carbs?
|
|
|
|
adamstgBit
Legendary
Offline
Activity: 1904
Merit: 1037
Trusted Bitcoiner
|
|
February 18, 2016, 09:05:34 PM |
|
segwit effectively doubles capacity for any block limit; doubled capacity at 2MB is better than doubled capacity at 1MB. I feel we aren't fully taking advantage of what segwit does if we don't also increase the block size.
It doesn't work that way... the two are only related insofar as increasing the maxBlockSize to 2MB with segwit is even more dangerous in certain aspects (and less in others) than simply changing the maxBlockSize to 4MB. You just want a capacity of 28 TPS right away, is all... and there is nothing wrong with wanting more cake... I want more too... but can we just hit the gym first before we load up on carbs? It's best to load up on carbs before hitting the gym, but whatever. Todd's claim that 2MB would adversely affect the mining landscape is insulting. Does anyone actually believe that? Every time I've asked people to show me one small miner not already mining at a pool, they turn to poop throwing. I'd like to see Todd throw some poop, or admit he misspoke.
|
|
|
|
AlexGR
Legendary
Offline
Activity: 1708
Merit: 1049
|
|
February 18, 2016, 09:08:18 PM |
|
The difficult thing about this conversation is that we are repeatedly being misrepresented or misunderstood. Core has already agreed to compromise and kick the can by increasing capacity... which means that you really aren't interested in 2MB, but something much bigger... which at this point in time (remember, we want bigger blocks too) would be disastrous for bitcoin. Even by Gavin's own calculations, Classic could cause a 40% node drop-off (worst-case scenario); this is absolutely unacceptable and we need to start reversing this trend immediately.
segwit is huge, it makes any blocksize effectively double the size but core is still not budging from 1MB will they ever? Todd needs to come clean and literally say "we will not increase the block size no matter what happens, because we believe in Lightning"
1. You go into the airport and you are only allowed 1 bag that weighs a max of 100 pounds. (1MB limit - current scheme)
2. You go into the airport and you are only allowed 1 bag that weighs a max of 200 pounds. (2MB hard fork - proposed scheme of Classic)
3. You go into the airport and you are allowed 2 bags that weigh a combined 170-300 pounds depending on the items (multisig txs) you carry. (segwit scheme by Core)
Why would you insist on whether it's one bag or two? Does it matter to you? Similarly, segwit allows 1.7MB to 3MB+ of data. Just because it is in two separate data structures doesn't mean it's less data or that it's ...still 1MB. It isn't. If one file on your PC saves 1MB and another saves 700kb, it's still 1.7MB of data in txs (and not 1MB). And you also solve the malleability problem that has been there for years.
segwit effectively doubles capacity for any block limit; doubled capacity at 2MB is better than doubled capacity at 1MB. I feel we aren't fully taking advantage of what segwit does if we don't also increase the block size.
Think of segwit as follows (simplistically): it's a 1.7MB block where the data are split in two files. You don't need to make it 3.4MB - that's like 5x our current 0.6MB avg block size. It's just free room for spamming and bloating. Satoshi gave the spammers 1MB (minus legit transactions) of room to play with. I don't see why any of the next devs should give them 3. That would be gross incompetence / criminal intent. If actual demand for txs rises, you give them more room, etc.
|
|
|
|
adamstgBit
Legendary
Offline
Activity: 1904
Merit: 1037
Trusted Bitcoiner
|
|
February 18, 2016, 09:16:37 PM |
|
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
February 18, 2016, 09:17:40 PM |
|
Where are you getting your data that we have seen a 1000x increase in bandwidth and network propagation times between '95 and 2015?
I was actually using the internet back in '95, so my personal first-hand experience is the data. To give you an idea, back in '95 my entire country was connected to the Internet through 2 major ISPs. Their total connectivity added up to ...512 kbps, which had to be shared among >5000 users on dialup (typically 14.4 to 28.8 kbps - although a bad phone line could drop that rate dramatically) or leased lines (64kbps). Today, total connectivity is in line with the 1000x figure (at least 500 gbps). My home connection exceeds 1000x gains if I connect over VDSL, and is at 500x-1000x with ADSL. The exception that proves the rule. OK, you were in the very specific circumstance of a country that didn't adopt the internet as quickly. Keep in mind that in other parts of the world things were much different. Broadband rolled out in '96, and going that far back isn't fair because the rate of progression has since slowed and in some cases reversed (soft caps limiting total bandwidth per month) while demand keeps increasing (HD video streaming). There is a reason BIP 103 used 17.7% per year... because that is what matches today's data and is the most accurate forecast. https://github.com/bitcoin/bips/blob/master/bip-0103.mediawiki This includes mobile bandwidth as well (in some areas, only mobile bandwidth is available)... so if we were to use just fixed bandwidth, the number would be closer to 50% per year... so would you be OK with a 50% per year blocksize increase? Of course, IMHO, we should be much more conservative than that, because propagation time and network latency aren't improving at the same rates, and really important networks like Tor are also not growing at the same rates.
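The gap between the two growth rates he cites compounds quickly, which is why the choice matters. A quick sketch of the arithmetic; the 1MB starting size and the horizons are illustrative, not anything proposed in the thread:

```python
# Compare BIP 103's ~17.7%/year growth against the ~50%/year fixed-bandwidth
# figure mentioned above, compounding from an illustrative 1 MB limit.

def grown_size_mb(start_mb: float, rate: float, years: int) -> float:
    """Block size limit after compounding `rate` annual growth for `years` years."""
    return start_mb * ((1 + rate) ** years)

for years in (5, 10, 20):
    print(years,
          round(grown_size_mb(1.0, 0.177, years), 2),  # BIP 103 trajectory
          round(grown_size_mb(1.0, 0.50, years), 2))   # fixed-bandwidth trajectory
```

At 20 years the two trajectories differ by well over an order of magnitude, which is the substance of his "more conservative than that" point.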
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
February 18, 2016, 09:21:13 PM |
|
Todd's claim that 2MB would adversely affect the mining landscape is insulting. Does anyone actually believe that?
Every time I've asked people to show me one small miner not already mining at a pool, they turn to poop throwing.
I'd like to see Todd throw some poop, or admit he misspoke.
Are you reading into his comments? Where did he say that solo miners would need to flee to pools at 2MB? Are you going to also accuse Gavin of lying when he suggests there could be up to a 40% node drop-off (solo miners run full nodes too) as the worst possible outcome with 2MB blocksizes?
|
|
|
|
|