Bitcoin Forum
Author Topic: Dynamic Scaling?  (Read 342 times)
Elliander
Member | Activity: 65 | Merit: 10
December 10, 2017, 05:48:02 PM | #1

Is there a reason why Bitcoin doesn't have an implementation of dynamic scaling the way it has dynamic difficulty settings? For example, instead of having to argue over how big a block should be every few years, why don't we let the network itself decide? From there, maybe offset the increased bandwidth of node operators with some kind of reward for running the node itself?

It just seems to me that this approach would let the size of a block grow or shrink according to the actual needs of the network, so it would be larger during times of heavy spending and smaller during relative inactivity, leading to lower average bandwidth usage than simply increasing it. Even if there is a reason to keep it from going above a certain point, wouldn't it make more sense to use an arbitrary ceiling, so the block can shrink below that size but not rise above it, at least initially? Then a community-agreed protocol could set concrete rules for if and under what circumstances the ceiling is raised, and do so as a secondary dynamic system after a period of initial testing.

So, for example, one dynamic rule would increase or decrease the size of a block according to the actual needs of the network, while a second dynamic rule would set the floor and ceiling values the first rule has to work within, even if doing so causes a bottleneck. This second rule would then adjust the ceiling according to the average capabilities of the people running nodes: if enough people want the ceiling to go up, they just have to upgrade their equipment to handle it. And by rewarding people who run nodes (maybe based on network activity), it gives them an incentive to help scale up the network.
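The two-rule idea can be sketched in a few lines. Everything here is hypothetical: the floor, ceiling, and the 2x headroom multiplier are made-up illustration values, not a proposal for actual consensus parameters.

```python
import statistics

# Hypothetical bounds maintained by the second (capability) rule, in bytes.
FLOOR = 300_000
CEILING = 2_000_000

def next_block_limit(recent_block_sizes):
    """First (demand) rule: take the median of recent block sizes,
    double it for headroom, then clamp to the floor/ceiling that the
    second rule maintains."""
    demand = 2 * statistics.median(recent_block_sizes)
    return max(FLOOR, min(CEILING, int(demand)))
```

With quiet blocks the limit shrinks to the floor; with sustained demand it grows until the capability ceiling stops it, which is exactly the bottleneck the proposal accepts until node operators upgrade.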

I've had this thought for a long while, actually. At first glance it would seem to benefit miners to have higher fees, but I'd argue miners are actually hurt: with higher fees, people are less willing to make transactions, and Bitcoin itself becomes a store of value rather than the transfer of value I feel it should be. With an improved network, transaction fees go down, more people make more transactions, and miners are able to survive on transaction fees alone when the last coin is mined.

While secondary coins could also be used to make purchases, there would be exchange fees involved, which would also influence the prices of each coin, and other coins have the same problem. By my calculations, for Bitcoin (or any cryptocurrency with the same number of coins) to gain the status of a world reserve currency, it would need to be worth around a million dollars a coin, which makes a single satoshi worth around a penny. Beyond that, the maximum number of decimal places would need to be increased for the value to rise much further and still be functional in the same manner. I'd argue there should also be an agreed-upon ruleset that says when a digit is added, set well in advance of the need for it, based on some predicted value, to occur within the network automatically.
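The arithmetic behind the million-dollar figure is easy to check: a bitcoin divides into 10^8 satoshi, so a $1,000,000 coin puts one satoshi at one cent, and 21 million coins imply a market capitalization in the tens of trillions.

```python
SATOSHI_PER_BTC = 10 ** 8        # current protocol divisibility
MAX_SUPPLY = 21_000_000          # maximum number of coins ever

price_usd = 1_000_000            # the hypothetical price per coin from the post
satoshi_usd = price_usd / SATOSHI_PER_BTC   # value of the smallest unit
market_cap = price_usd * MAX_SUPPLY         # implied total value
```

`satoshi_usd` comes out to 0.01 (a penny) and `market_cap` to 21 trillion USD, which is why the post argues more decimal places would be needed for the price to rise much further and still be spendable in small amounts.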

Is there any real reason why something like this can't work, or is somehow less preferred than a scenario of people arguing over small details every so often in perpetuity?

Immortal until proven otherwise.
Elliander
Member | Activity: 65 | Merit: 10
December 13, 2017, 06:24:28 AM | #2

I'd like to add that I don't believe the Lightning Network is the best solution to scalability. While it can certainly address the currently high fees, I have long-term concerns about relying on it. The thing is, the Bitcoin network was designed so that fewer coins are minted over time, letting miners gradually transition to being paid in fees. Eventually the only financial incentive miners will have is transaction fees, which means more transactions would lead to better mining incentives.

Using the Lightning Network to open channels through which several transactions can flow without fees circumvents that, and can actually lead to higher average fees for everyone else. At present it would work great, because it would remove from the Bitcoin network a large number of transactions that would otherwise congest it, improving usability for smaller transactions, but that doesn't sidestep the need to address network scalability. If anything, the Lightning Network seems better used as a tool for decentralized exchange between cryptocurrencies; it shouldn't replace the ability of any particular coin to act as an exchange of value.

If node operators were rewarded by the network according to their bandwidth, the way miners currently are, they would have a financial incentive to scale up their capabilities, and the network as a whole would scale up if dynamic scaling were implemented. That would ensure transactions everywhere confirm cheaply, and the Lightning Network could still be useful for quicker in-person transactions at stores, where you don't want to wait more than a few moments for a transaction to confirm.

If we go the route of relying on secondary networks to exchange value, how exactly are miners going to be paid when there are no coins left to mint and no on-chain transactions to confirm?

Colorblind
Member | Activity: 308 | Merit: 31
December 13, 2017, 06:37:37 AM | #3

I kinda liked your proposal at first glance, but then I realized it is already implemented.

You see, block size IS already "dynamic": the code only specifies a maximum block size, so as a miner you can generate blocks of any size up to that limit. Theoretically you could allow limitless blocks and let a single miner empty the mempool into the next block, but that would end in blockchain spam and insane chain bloat (up to the point Bitcoin becomes absolutely unusable). Therefore you need to specify a reasonable limit for each block, and that's what has been done. Some people argue this limit should be higher, and by all means it can be, but that isn't the solution, since bigger blocks mean a bigger blockchain, more bandwidth usage, and more bloat. Lightning (perhaps not in its current state) is completely different: it lets you send barrages of valid transactions without actually posting most of them to the blockchain, so the blockchain retains its function as a "bank" keeping your money safe, while Lightning lets you use your money effectively in small chunks.

Elliander
Member | Activity: 65 | Merit: 10
December 13, 2017, 11:16:37 AM | #4

I realize that a block can be empty, with no transactions, but don't node operators still have to download the full file size? I know a block always has these values:

Magic no (value always 0xD9B4BEF9)   - 4 bytes
Blocksize - 4 bytes
Blockheader - 80 bytes
Transaction counter - 9 bytes

Meaning that an empty block would require, at a minimum, 97 bytes. However, if the block size is set to 1 MB to allow it to be filled with transaction data, the question is whether nodes end up downloading a 1 MB block file even when it's empty, or just 97 bytes. Like the difference between an image file padded with zeroes and one that's been trimmed. If they don't download more than they have to, that means blocks already scale down as needed. If not, it really should work that way, ESPECIALLY if people are talking about increasing the block size.
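Summing the listed fields reproduces the 97-byte figure. One detail worth flagging: the transaction counter is actually a variable-length integer of 1 to 9 bytes, so 9 is its maximum; the total below follows the numbers as given in the post.

```python
# Field sizes as listed above (bytes).
FIELDS = {
    "magic_no": 4,      # always 0xD9B4BEF9 on mainnet
    "blocksize": 4,
    "blockheader": 80,
    "tx_counter": 9,    # CompactSize varint: 1-9 bytes; 9 is the maximum
}

empty_block_bytes = sum(FIELDS.values())
print(empty_block_bytes)  # 97
```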

The second part of the proposal, though: having the maximum block size increase dynamically according to the capabilities of the network, and providing incentive for node operators to make the upgrades required for the ceiling value to be raised, what about that?

I fully agree that a limitless size would cause problems and that a reasonable ceiling should be set, but I still don't see why that limit can't be set by rules rather than opinion. Instead of long delays in making needed network changes, an increase could happen the moment a majority of node operators are ready to handle it. By locking out node operators who haven't scaled up with the majority, and providing financial incentive for higher bandwidth usage, it would create an arms race similar to the one among ASIC miners, ensuring the network can expand rapidly without arguments over every little increase in block size.

Xynerise
Sr. Member | Activity: 280 | Merit: 282
December 14, 2017, 08:40:35 AM | #5

Do you mean like in Monero?
Or Flexcap in Bitcoin Unlimited?
https://bitcoinmagazine.com/articles/can-flexcaps-settle-bitcoin-s-block-size-dispute-1446747479/
Shu1llerr
Newbie | Activity: 27 | Merit: 0
December 14, 2017, 03:30:08 PM | #6

Can anyone tell me if there are good materials, perhaps text or video, for learning about the topic of scaling? Sorry if this is off topic, wildly sorry. I just can't find good materials.
Elliander
Member | Activity: 65 | Merit: 10
December 14, 2017, 07:11:00 PM | #7


Similar. In that case, though, the size is set purely by the miner who mines the block, so it could still lead to bloat that could cripple the bandwidth of a node operator. Basically, it would create an incentive for mining pools to accept more and more transactions, but at some point there would be a bottleneck that could cause network problems. To me that seems more like not having a ceiling at all, whereas I'm of the opinion that we need a ceiling, just one that scales with network capability.

That being said, the system of requiring pools to give up a piece of the reward to do so is an interesting solution. I wonder how well it would work in practice, though. For it to be worth giving up X% of a block reward, the transaction fees would have to more than offset that. With heavy congestion it would certainly help motivate a miner to alleviate it, but wouldn't it also create the risk of failing to complete the block altogether? As I understand it, there is never really any progress toward completing a block; it's more or less a lottery where more hash rate just means more tries per second, so accepting a higher difficulty to process more transactions effectively lowers the odds of "guessing" correctly. Taking that risk across an entire pool might very well decrease the number of transactions they can confirm, because it increases the chance someone else will beat them to the block.

Another problem, as your reference points out, is that it would introduce an investment attack. Giving a large pool of miners the ability to artificially inflate the difficulty long term would let them push other miners out completely, namely those using less efficient hardware or living in areas with higher electricity costs. It would fundamentally alter the minimum cost of production as well.

For example, I calculated that if the price of Bitcoin fell to $3k USD at the current difficulty, people paying 10 cents per kWh would earn only around $100 a month per Antminer S9, 20 cents per kWh would just break even, and 30 cents per kWh would mean a total loss. For those running Avalon miners, which cost more in electricity for the hash rate, such a price would mean no profit at all. That's why I place the minimum price of Bitcoin around that level: it's where the network starts to break down as miners go offline.
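A rough model of that break-even math, using assumed figures: an Antminer S9 drawing about 1,350 W, and an assumed $6.50/day of revenue at a $3k coin price. Both numbers are illustrative, not measured, but they land close to the figures quoted above.

```python
POWER_KW = 1.35             # assumed S9 draw, ~1350 W
DAILY_REVENUE_USD = 6.50    # assumed earnings per day at a $3k BTC price

def monthly_profit(usd_per_kwh):
    """30-day profit for one miner: revenue minus electricity cost."""
    daily_electricity = POWER_KW * 24 * usd_per_kwh
    return 30 * (DAILY_REVENUE_USD - daily_electricity)
```

Under these assumptions the result is roughly +$98/month at $0.10/kWh, near zero at $0.20/kWh, and about -$97/month at $0.30/kWh, matching the three cases in the post.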

As it stands, there is not enough competition in the ASIC miner scene, but there is some. Wouldn't it therefore benefit a company like Bitmain to use its miners to inflate the network difficulty long term, making its competition unprofitable and driving them out of business? Similarly, wouldn't it benefit a country that wants to destroy Bitcoin to buy up (or seize) a large number of miners, create its own pool, and inflate the difficulty in such a way that it neither makes nor loses money on the operation, so that no one else can make money either, forcing all other miners out, and then simply flip a switch to shut down the entire remaining network?

Giving miners that much power seems like too much power.

cryptodontus
Newbie | Activity: 37 | Merit: 0
December 15, 2017, 08:43:01 AM | #8

Quote from: Colorblind on December 13, 2017, 06:37:37 AM
I kinda liked your proposal at first glance, but then I realized it is already implemented.

You see, block size IS already "dynamic": the code only specifies a maximum block size, so as a miner you can generate blocks of any size up to that limit. Theoretically you could allow limitless blocks and let a single miner empty the mempool into the next block, but that would end in blockchain spam and insane chain bloat (up to the point Bitcoin becomes absolutely unusable). Therefore you need to specify a reasonable limit for each block, and that's what has been done. Some people argue this limit should be higher, and by all means it can be, but that isn't the solution, since bigger blocks mean a bigger blockchain, more bandwidth usage, and more bloat. Lightning (perhaps not in its current state) is completely different: it lets you send barrages of valid transactions without actually posting most of them to the blockchain, so the blockchain retains its function as a "bank" keeping your money safe, while Lightning lets you use your money effectively in small chunks.
I hate to be all doom-y, but what if the Lightning Network isn't effective, or is too cumbersome or complex? I don't really see a problem with raising the block size cap as needed. The Lightning Network seems like it will become just as centralized (or more so) as "big block" Bitcoin.

I feel like we rushed into too many changes this year.
DooMAD
Legendary | Activity: 1848 | Merit: 1217
Leave no FUD unchallenged
December 15, 2017, 06:32:08 PM | #9

Ever since BIP106 was first proposed, I've been a fan of the idea of dynamic scaling. Shortly after, though, I decided the original concept was far too unrestrictive and could result in dangerously large size increases if abused. So over time I've been looking at different tweaks and adjustments, partly to curtail excessive increases, but also to incorporate SegWit, limit the potential for gaming the system, and even prevent dramatic swings in fee pressure. So far, that's where I've got to. I'm still hoping some coders will take an interest and get it to the next level, where it might actually be practical to implement.

btcton
Legendary | Activity: 1176 | Merit: 1007
Professional SysAdmin / Hobbyist Developer
December 16, 2017, 12:49:42 AM | #10

Quote from: Elliander on December 13, 2017, 06:24:28 AM
I'd like to add that I don't believe the Lightning Network is the best solution to scalability. While it can certainly address the currently high fees, I have long-term concerns about relying on it. The thing is, the Bitcoin network was designed so that fewer coins are minted over time, letting miners gradually transition to being paid in fees. Eventually the only financial incentive miners will have is transaction fees, which means more transactions would lead to better mining incentives.

Using the Lightning Network to open channels through which several transactions can flow without fees circumvents that, and can actually lead to higher average fees for everyone else. At present it would work great, because it would remove from the Bitcoin network a large number of transactions that would otherwise congest it, improving usability for smaller transactions, but that doesn't sidestep the need to address network scalability. If anything, the Lightning Network seems better used as a tool for decentralized exchange between cryptocurrencies; it shouldn't replace the ability of any particular coin to act as an exchange of value.

If node operators were rewarded by the network according to their bandwidth, the way miners currently are, they would have a financial incentive to scale up their capabilities, and the network as a whole would scale up if dynamic scaling were implemented.

If we go the route of relying on secondary networks to exchange value, how exactly are miners going to be paid when there are no coins left to mint and no on-chain transactions to confirm?
To answer the first bolded text: while you are not wrong about decreasing the block reward in order to shift the focus to transaction fees as the reward for miners, the scaling issue of transaction fees does not really relate to that. What matters is where the reward lies, not the amount. The amount will always be related to the supply and demand of the available hardware in the network. If transaction fees decrease, miners get paid less and therefore mine less, which lowers the overall difficulty of mining and makes it more profitable for the remaining miners who stay in it. If fees increase, the number of miners will increase with them, as will the difficulty, balancing everything out.

To answer the second bolded text: there will always be blocks to mine. The Lightning Network still depends on miners. All it does is reconcile many more separate transactions off-chain into a single final entry that, once "closed", gets put into the blockchain. To get into the blockchain, that transaction still has to be chosen by miners, who will still get paid a fee. The difference is that the fee will be much less than it would be for all the separate transactions, which is fine, considering the supply of miners will adjust accordingly.
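The off-chain reconciliation described here can be illustrated with a toy ledger. The names and amounts are made up, and real channels use signed commitment transactions rather than a dictionary; the sketch only shows the netting idea.

```python
def settle_channel(opening_balances, payments):
    """Apply many off-chain payments; only the final balances ever
    need to reach the chain when the channel closes."""
    balances = dict(opening_balances)
    for payer, payee, amount in payments:
        if balances[payer] < amount:
            raise ValueError("payment exceeds channel balance")
        balances[payer] -= amount
        balances[payee] += amount
    return balances
```

Three payments, one settlement: `settle_channel({"alice": 100, "bob": 0}, [("alice", "bob", 30), ("bob", "alice", 10), ("alice", "bob", 5)])` leaves alice with 75 and bob with 25, and the miner sees (and charges a fee for) only the single closing transaction.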

Elliander
Member | Activity: 65 | Merit: 10
December 16, 2017, 05:57:51 PM | #11

Quote from: btcton on December 16, 2017, 12:49:42 AM
To answer the first bolded text: while you are not wrong about decreasing the block reward in order to shift the focus to transaction fees as the reward for miners, the scaling issue of transaction fees does not really relate to that. What matters is where the reward lies, not the amount. The amount will always be related to the supply and demand of the available hardware in the network. If transaction fees decrease, miners get paid less and therefore mine less, which lowers the overall difficulty of mining and makes it more profitable for the remaining miners who stay in it. If fees increase, the number of miners will increase with them, as will the difficulty, balancing everything out.

To answer the second bolded text: there will always be blocks to mine. The Lightning Network still depends on miners. All it does is reconcile many more separate transactions off-chain into a single final entry that, once "closed", gets put into the blockchain. To get into the blockchain, that transaction still has to be chosen by miners, who will still get paid a fee. The difference is that the fee will be much less than it would be for all the separate transactions, which is fine, considering the supply of miners will adjust accordingly.

That scenario would still seem to decrease the usability of direct Bitcoin transactions. It means everyone pays more in fees, entirely for the benefit of a small group of centralized companies paying less in fees overall.

To give an example, suppose we're in a future where miners are paid only in transaction fees and the Lightning Network is heavily relied upon to compensate for never having implemented dynamic scaling. Online merchants who want to process millions of transactions a day around the world use a system where one on-chain transaction opens a channel for the day and another closes it. All transactions that go through their payment system are taken care of in one go, keeping fees low for everyone sending transactions between the two companies. However, the people who initially send coins to those centralized wallets will pay extremely high fees, which would discourage independent wallet usage (and increase the risk of what happens when centralized companies go out of business). Meanwhile, people who just want to send money to someone across the world wouldn't benefit from the Lightning Network at all, since they're dealing with a single transaction, so they have to pay more as well.

The reason everyone is stuck paying more is that anyone with multiple transactions processes just one, which means the cost has to be spread across fewer payers, and those with the fewest transactions ultimately pay the most.
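The amortization argument can be put in numbers. The $20 on-chain fee below is purely hypothetical; only the ratio matters.

```python
ON_CHAIN_FEE_USD = 20.0   # hypothetical fee per on-chain transaction

def fee_per_payment(on_chain_txs, payments_carried):
    """Cost per payment when many payments share a few on-chain txs."""
    return on_chain_txs * ON_CHAIN_FEE_USD / payments_carried

single_sender = fee_per_payment(1, 1)               # one tx, one payment
batching_merchant = fee_per_payment(2, 1_000_000)   # open + close channel
```

The single sender bears the whole $20, while the merchant's million payments share $40, i.e. $0.00004 each; that asymmetry is exactly what the paragraph above is objecting to.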


Quote from: cryptodontus on December 15, 2017, 08:43:01 AM
I hate to be all doom-y, but what if the Lightning Network isn't effective, or is too cumbersome or complex? I don't really see a problem with raising the block size cap as needed. The Lightning Network seems like it will become just as centralized (or more so) as "big block" Bitcoin.

I feel like we rushed into too many changes this year.

I agree with that sentiment. Even though the Lightning Network itself is decentralized, its only value (that I can see) is in allowing two centralized systems to communicate with a total of two on-chain transactions: one to open the stream of communication and one to close it. That means, for example, two exchanges could use the Lightning Network to let inexpensive transactions flow between them practically instantly, but it won't let two individuals on a decentralized platform send anything faster or cheaper. While I actually do like the Lightning Network as a tool for reducing congestion (why should I have to wait because a big company has a million transactions it wants to process?), I don't like the idea of the entire network depending on it to avoid all congestion. We still need a solution that keeps the network working smoothly for everyone else.

As it stands, I've stopped sending Bitcoin altogether. It's too expensive and takes too long. When I want to send someone anything, I use either Litecoin or Dashcoin. My opinion is that as it becomes easier and easier to transfer between coins, especially if fully decentralized platforms come about, Bitcoin will become less usable to the average person without dynamic scaling.

Quote from: DooMAD on December 15, 2017, 06:32:08 PM
Ever since BIP106 was first proposed, I've been a fan of the idea of dynamic scaling. Shortly after, though, I decided the original concept was far too unrestrictive and could result in dangerously large size increases if abused. So over time I've been looking at different tweaks and adjustments, partly to curtail excessive increases, but also to incorporate SegWit, limit the potential for gaming the system, and even prevent dramatic swings in fee pressure. So far, that's where I've got to. I'm still hoping some coders will take an interest and get it to the next level, where it might actually be practical to implement.

My opinion is that an open cap is too unrestrictive, but a fixed cap is too restrictive. That's why I think we need a way for the network to raise the cap on its own, within a set of limitations, so that it can't bloat.

EDIT: I took a look at your thread, which looks similar to the first part of what I suggested here, but what's to stop the block size from increasing too fast for the network to handle? I think dynamic scaling needs to address both the needs of the network and the capability of the network, with two distinct scaling rules used together: the first adjusts the block size according to the needs of the network, and the second according to the capabilities of the network. Together they allow flexibility, but there would have to be some incentive for node operators to be willing to expand to handle the increased traffic.

DannyHamilton
Legendary | Activity: 2198 | Merit: 1390
December 16, 2017, 06:32:18 PM | #12


Quote from: Elliander on December 13, 2017, 11:16:37 AM
I realize that a block can be empty, with no transactions, but don't node operators still have to download the full file size? I know a block always has these values:

Magic no (value always 0xD9B4BEF9) - 4 bytes
Blocksize - 4 bytes
Blockheader - 80 bytes
Transaction counter - 9 bytes

Meaning that an empty block would require, at a minimum, 97 bytes. However, if the block size is set to 1 MB to allow it to be filled with transaction data, the question is whether nodes end up downloading a 1 MB block file even when it's empty, or just 97 bytes.


97 bytes.

There is no required block size.

There is a block size LIMIT.  Meaning that blocks are not allowed to be LARGER THAN the limit.  Blocks smaller than the limit are perfectly valid and happen all the time.

I still don't see why that limit can't be set by rules rather than opinion.

The limit IS set by rules.

Changing those rules currently requires a hard fork.  Hard forks require an overwhelming majority of users to agree.  Getting agreement requires swaying opinions.

Consensus is hard.

Instead of long delays in making needed network changes, it could happen the moment a majority of the node operators are ready to handle it.

And how would you count "a majority of the node operators"? There is no reliable way to know how many node operators there are, and it is cheap and easy to set up millions of programs all pretending to be nodes in order to "stuff the ballot box".

By locking out node operators who are not scaled up with the majority, and providing financial incentive to node operators for higher bandwidth usage, it would create a similar arms race to the ASIC miners to ensure that the network can expand rapidly without the need for arguments over every little increase in block size.

The problem with autoscaling is that there isn't a reliable metric that can be used to determine when the size should scale up.

The difficulty can be scaled according to the time between blocks, which is a reliable metric. An attacker can't change the time between blocks without actually completing the proof-of-work (and if they are able to do that, then the difficulty NEEDS to increase).
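For comparison, the reliable metric described here is the rule Bitcoin actually uses: every 2016 blocks, difficulty is rescaled by how far the observed block times drifted from the 10-minute target, with the adjustment clamped to a factor of 4 in either direction. This is a sketch of that rule, not consensus code.

```python
BLOCKS_PER_PERIOD = 2016
TARGET_SECONDS = BLOCKS_PER_PERIOD * 600  # 10-minute target per block

def retarget(old_difficulty, actual_seconds):
    """Scale difficulty by target/actual elapsed time, clamped to 4x."""
    clamped = max(TARGET_SECONDS // 4, min(TARGET_SECONDS * 4, actual_seconds))
    return old_difficulty * TARGET_SECONDS / clamped
```

If the period took exactly two weeks, difficulty is unchanged; if blocks arrived twice as fast, it doubles; an extreme slowdown is capped at a 4x drop. No comparably manipulation-resistant signal exists for "how big should blocks be".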

btcton
Legendary | Activity: 1176 | Merit: 1007
Professional SysAdmin / Hobbyist Developer
December 16, 2017, 06:36:50 PM | #13

Quote from: Elliander on December 16, 2017, 05:57:51 PM
That scenario would still seem to decrease the usability of direct Bitcoin transactions. It means everyone pays more in fees, entirely for the benefit of a small group of centralized companies paying less in fees overall.

To give an example, suppose we're in a future where miners are paid only in transaction fees and the Lightning Network is heavily relied upon to compensate for never having implemented dynamic scaling. Online merchants who want to process millions of transactions a day around the world use a system where one on-chain transaction opens a channel for the day and another closes it. All transactions that go through their payment system are taken care of in one go, keeping fees low for everyone sending transactions between the two companies. However, the people who initially send coins to those centralized wallets will pay extremely high fees, which would discourage independent wallet usage (and increase the risk of what happens when centralized companies go out of business). Meanwhile, people who just want to send money to someone across the world wouldn't benefit from the Lightning Network at all, since they're dealing with a single transaction, so they have to pay more as well.

The reason everyone is stuck paying more is that anyone with multiple transactions processes just one, which means the cost has to be spread across fewer payers, and those with the fewest transactions ultimately pay the most.


In a vacuum, you are correct. However, it is important to note how much of the network traffic is indeed taken up by this small group of centralized companies. For instance, right now, as Bitcoin's trading volume increases, a disproportionate share of the transactions on the network are related to some exchange. Managing all of those transactions off-chain and then finalizing them on-chain would help free up space for the direct transactions you are talking about, which would end up paying lower fees because of reduced congestion. The centralized companies benefit directly from this change, as you say, but it also indirectly helps users of independent wallets, which I assume includes most of us here.

As for what happens if the congestion in the network is actually caused largely by direct transactions, I concede that I am not sure how the Lightning Network could help in that regard. That said, I certainly do not know all the specifics of the Lightning Network, and you do bring up a good point.

--

As a side note: if only most discussion threads could be like this one, where people can actually discuss without spammers repeating what others have already said, that would be great.

Carlton Banks
Legendary | Activity: 2240 | Merit: 1474
December 16, 2017, 08:41:24 PM | #14

suppose that we are in a future where miners only get paid in transaction fees and the lightning network is heavily relied upon to compensate for having never implemented dynamic scaling. Online merchants who want to process millions of transactions a day around the world utilize a system where one on block transaction opens the channel for the day, and another on block transaction closes the channel for the day. All transactions that go through their payment system is taken care of in one go keeping the fees low for everyone sending transactions between the two companies. However, the people who initially send the coin to those centralized wallets are going to pay extremely high amounts which would discourage independent wallet usage (and increase the risk of what happens when centralized companies go out of business). Meanwhile, the people who just want to send money to someone across the world wouldn't benefit from the lightning network at all since they are dealing with just a single transaction, so they have to pay more as well.

The reason why everyone is stuck paying more is that anyone with multiple transactions only settles one on-chain, which means that the cost has to be spread across fewer payers, and those with fewer transactions ultimately pay the most.

It doesn't work like that. People can pay each other directly, and there is no need to close the channels ("at the end of the day" or otherwise, lol)

Please stop wasting your time (and everyone else's): learn how the Lightning concept works first, then start talking again.

Vires in numeris
Elliander
Member
**
Offline Offline

Activity: 65
Merit: 10


View Profile
December 16, 2017, 10:12:49 PM
 #15

I realize that a block can be empty - with no transactions - but don't the node operators still have to download the full file sizes? I know a block always has these values:

Magic no (value always 0xD9B4BEF9)   - 4 bytes
Blocksize - 4 bytes
Blockheader - 80 bytes
Transaction counter - 9 bytes

Meaning that an empty block would require, at a minimum, 97 bytes. However, if the block size is set to 1 MB, allowing that space to be filled with transaction data, the question is whether nodes end up having to download a 1 MB file for an empty block, or only 97 bytes.
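A back-of-the-envelope check of the figures quoted above (illustrative only, not consensus code; note the transaction counter is really a variable-length integer of 1-9 bytes, and 9 is its maximum):

```python
# Fixed per-block overhead, using the field sizes listed above.
MAGIC_BYTES = 4        # always 0xD9B4BEF9
BLOCKSIZE_FIELD = 4
HEADER_BYTES = 80      # version, prev hash, merkle root, time, bits, nonce
TX_COUNTER_MAX = 9     # varint; usually just 1 byte in practice

min_overhead = MAGIC_BYTES + BLOCKSIZE_FIELD + HEADER_BYTES + TX_COUNTER_MAX
print(min_overhead)    # 97
```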

97 bytes.

There is no required block size.

There is a block size LIMIT.  Meaning that blocks are not allowed to be LARGER THAN the limit.  Blocks smaller than the limit are perfectly valid and happen all the time.


That's good to hear.


Quote


Instead of long delays in making needed network changes, it could happen the moment a majority of the node operators are ready to handle it.

And how would you count "a majority of the node operators"? There is no reliable way to know how many node operators there are, and it is cheap and easy to set up millions of programs all pretending to be nodes in order to "stuff the ballot box".

By locking out node operators who have not scaled up with the majority, and providing financial incentive to node operators for higher bandwidth usage, it would create an arms race similar to the one among ASIC miners, ensuring that the network can expand rapidly without the need for arguments over every little increase in block size.

The problem with autoscaling is that there isn't a reliable metric that can be used to determine when the size should scale up.

The difficulty can be scaled according to the time between blocks. This is a reliable metric.  An attacker can't change the time between blocks without actually completing the proof-of-work (in which case if they are able to do that, then the difficulty NEEDS to increase).


Good question and good points. To give an example, the last time I wrote a sorting algorithm for a CS class, I measured performance by incrementing a counter each time a comparison was made. In this way I was able to get reliable information independent of the processing power of a given machine. All I focused on was the core essence of what the program was doing.

So what does a node do? What is the core essence of its functionality that acts to limit the capabilities of the network?

Quote
"in order to validate and relay transactions, bitcoin requires more than a network of miners processing transactions, it must broadcast messages across a network using 'nodes'. This is the first step in the transaction process that results in a block confirmation." - https://www.coindesk.com/bitcoin-nodes-need/

So then, a reliable metric for node operation is to keep track of the information being broadcast to the network. Since it's the capabilities of the network that we care about, maybe there could be a simple integer appended to the information each node broadcasts, indicating whether the node is below or near its peak capability. If every node attached this information to the messages it transmits, and it was then read as an aggregate from the completed blocks, it would add only 2 bytes to the minimum size of a block, and the block as a whole could be read to determine the average. The value each node transmits would itself be an aggregate of various factors on that machine, so the average ends up being more of a vote.

So, as an example: Suppose we have 10 nodes participating in a specific block. 6 of them report a 0, indicating that they are well below capacity. 3 report a 1, indicating that they are near capacity. 1 reports a 2, indicating it is at its limit. That means we have 6 votes to increase the cap and 4 votes against, but only IF the current block size equals the current block size limit. If blocks are full, the next block gets a slightly higher ceiling; the amount could be based on an additional digit within that integer, as a signal to the network of how much more each node can handle. The consequence is that a node at capacity won't be able to participate in as many transactions, so it will get fewer votes.

That is a simplified example, since a given node might have thousands of transactions that it has participated in, but that's OK. If it handles more transactions it gets more votes, meaning that smaller, less capable nodes might participate in fewer transactions given their self-identified capability rating. Giving node operators who have a wallet attached to the node a piece of the transaction fee for participating in this process would provide an incentive to upscale and handle more transactions.

Now, if the reverse happens and a majority of node operators vote that they are at capacity, the network might cap there; or, if the votes lean towards it being too much, the network might scale back - even if the block isn't full. A formula could decide how much to scale back as well.

I don't see any reason why we can't do this.
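A toy model of the voting scheme described above might look like this (purely illustrative; the vote values, majority rule, and 5% step are all invented for the sketch):

```python
# Toy capacity-vote rule: 0 = well below capacity, 1 = near capacity,
# 2 = at limit. Raise the ceiling only when blocks are actually full
# AND a majority report spare headroom; lower it when strained nodes
# outnumber spare ones.
def next_ceiling(votes, current_size, current_limit, step_pct=5):
    spare = sum(1 for v in votes if v == 0)   # votes to increase
    strained = len(votes) - spare             # votes against
    if current_size >= current_limit and spare > strained:
        return current_limit + current_limit * step_pct // 100
    if strained > spare:
        return current_limit - current_limit * step_pct // 100
    return current_limit

# 6 idle nodes, 3 near capacity, 1 at its limit, and blocks are full:
print(next_ceiling([0]*6 + [1]*3 + [2], 1_000_000, 1_000_000))  # 1050000
```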

Immortal until proven otherwise.
Elliander
Member
**
Offline Offline

Activity: 65
Merit: 10


View Profile
December 16, 2017, 10:15:01 PM
Last edit: December 16, 2017, 10:39:23 PM by Elliander
 #16

Suppose that we are in a future where miners only get paid in transaction fees and the Lightning Network is heavily relied upon to compensate for having never implemented dynamic scaling. Online merchants who want to process millions of transactions a day around the world utilize a system where one on-chain transaction opens the channel for the day, and another on-chain transaction closes the channel for the day. All transactions that go through their payment system are taken care of in one go, keeping the fees low for everyone sending transactions between the two companies. However, the people who initially send the coin to those centralized wallets are going to pay extremely high fees, which would discourage independent wallet usage (and increase the risk of what happens when centralized companies go out of business). Meanwhile, the people who just want to send money to someone across the world wouldn't benefit from the Lightning Network at all since they are dealing with just a single transaction, so they have to pay more as well.

The reason why everyone is stuck paying more is that anyone with multiple transactions only settles one on-chain, which means that the cost has to be spread across fewer payers, and those with fewer transactions ultimately pay the most.

It doesn't work like that. People can pay each other directly, and there is no need to close the channels ("at the end of the day" or otherwise, lol)

According to the Lightning network FAQ:

Quote

The system utilizes bidirectional payment channels that consist of multi-signature addresses.
One on-chain transaction is needed to open a channel, and another on-chain transaction can close the channel.

Once a channel is open, value can be transferred instantly between counterparties, who are exchanging real bitcoin transactions, but without broadcasting them to the bitcoin network.

- https://medium.com/@AudunGulbrands1/lightning-faq-67bd2b957d70

That means, yes, one transaction is needed to open a channel and another is needed to close it, both on-chain. That doesn't mean there's a limit to how long a channel can remain open, though, so people who leave channels open indefinitely end up being even worse for everyone else using on-chain transactions than those who occasionally close them.
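To make the FAQ's description concrete, here is a toy channel model (purely illustrative; real Lightning channels use 2-of-2 multisig funding and revocable commitment transactions, none of which is modelled here). Only the open and the close touch the chain, no matter how many payments happen in between:

```python
# Toy channel: one on-chain transaction to open, one to close, any
# number of off-chain balance updates in between.
class ToyChannel:
    def __init__(self, alice_funds, bob_funds):
        self.balances = {"alice": alice_funds, "bob": bob_funds}
        self.onchain_txs = 1        # the funding transaction
        self.offchain_updates = 0

    def pay(self, frm, to, amount):
        assert self.balances[frm] >= amount, "insufficient channel balance"
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.offchain_updates += 1  # a new commitment, never broadcast

    def close(self):
        self.onchain_txs += 1       # the settlement transaction
        return self.balances

ch = ToyChannel(50_000, 50_000)
for _ in range(1000):               # a thousand payments...
    ch.pay("alice", "bob", 10)
ch.close()
print(ch.onchain_txs)               # ...still only 2 on-chain transactions
```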

Additionally, I never said that people couldn't pay directly. I said that people who aren't sending a large number of transactions wouldn't benefit from using the Lightning Network themselves (although I did make it clear that in the short term it has a benefit by easing congestion).

Please stop wasting your time (and everyone else's): learn how the Lightning concept works first, then start talking again.

Please don't respond to threads with a condescending attitude, especially with inaccurate information, lest you be seen as trolling. That last line was unnecessary and detracts from the conversation. Even if I was wrong (and I wasn't), there's nothing wrong with being wrong and being corrected.

Immortal until proven otherwise.
Elliander
Member
**
Offline Offline

Activity: 65
Merit: 10


View Profile
December 16, 2017, 10:29:46 PM
 #17


In a vacuum, you are correct. However, it is important to notice how much of the network traffic is, indeed, taken up by this small group of centralized companies. For instance, right now as the trading volume of Bitcoin increases, a disproportionate share of the transactions being sent and received through the network are related to some exchange. Managing all of these transactions off-chain and then finalizing them on-chain would help free up space for the direct transactions you are talking about, which would end up paying lower fees because of reduced network congestion. The centralized companies do benefit directly from this change, as you say, but it also indirectly helps users of independent wallets, which I assume would include most of us here.

As for what happens if the congestion in the network is actually caused in large part by direct transactions, I concede that I am not sure how the Lightning Network could help in that regard. That said, I certainly do not know all of the specifics of the Lightning Network, but you do bring up a good point.


I completely agree. In the short term at least, the Lightning Network is clearly needed as a solution to congestion: offloading the transactions of large centralized companies with high transaction volume means less congestion, and therefore lower fees and faster confirmations for everyone else. I am totally on board with that use case.

However, my concerns are more for the long term: what happens when miners need to be paid in transaction fees to remain in operation at all. In order for the network to be profitable for miners when the last coin has been mined (and, really, long before then), we'd have to shift towards paying miners in transaction fees, which means the network must both expand to handle more transactions and carry an increased number of transactions overall. If we kept the block ceiling where it is, or even just a little above it, there wouldn't be enough transactions for miners to benefit enough to remain in operation.

My point is basically that we shouldn't rely on the Lightning Network as a silver bullet, and we should plan for a single change that accounts for all the block size adjustments that will be needed, rather than revisiting these problems every few years.

As a side note: If only most discussion threads could be like this one where people can actually discuss without spammers repeating the same thing others have already said, that would be great.

I hope it can stay that way Smiley

Immortal until proven otherwise.
achow101
Moderator
Legendary
*
expert
Offline Offline

Activity: 1638
Merit: 1938


bc1qshxkrpe4arppq89fpzm6c0tpdvx5cfkve2c8kl


View Profile WWW
December 17, 2017, 12:52:39 AM
 #18

So then, a reliable metric for node operation is to keep track of the information being broadcast to the network. Since it's the capabilities of the network that we care about, maybe there could be a simple integer appended to the information each node broadcasts, indicating whether the node is below or near its peak capability. If every node attached this information to the messages it transmits, and it was then read as an aggregate from the completed blocks, it would add only 2 bytes to the minimum size of a block, and the block as a whole could be read to determine the average. The value each node transmits would itself be an aggregate of various factors on that machine, so the average ends up being more of a vote.

So, as an example: Suppose we have 10 nodes participating in a specific block. 6 of them report a 0, indicating that they are well below capacity. 3 report a 1, indicating that they are near capacity. 1 reports a 2, indicating it is at its limit. That means we have 6 votes to increase the cap and 4 votes against, but only IF the current block size equals the current block size limit. If blocks are full, the next block gets a slightly higher ceiling; the amount could be based on an additional digit within that integer, as a signal to the network of how much more each node can handle. The consequence is that a node at capacity won't be able to participate in as many transactions, so it will get fewer votes.

That is a simplified example, since a given node might have thousands of transactions that it has participated in, but that's OK. If it handles more transactions it gets more votes, meaning that smaller, less capable nodes might participate in fewer transactions given their self-identified capability rating. Giving node operators who have a wallet attached to the node a piece of the transaction fee for participating in this process would provide an incentive to upscale and handle more transactions.

Now, if the reverse happens and a majority of node operators vote that they are at capacity, the network might cap there; or, if the votes lean towards it being too much, the network might scale back - even if the block isn't full. A formula could decide how much to scale back as well.

I don't see any reason why we can't do this.
Any metric that requires self-reporting and cannot be independently verified is easily gameable. Suppose I really want larger blocks. What prevents me from spinning up a few thousand fake nodes (you can't tell they are fake; they all look real, as that is just the nature of nodes) and then having each node vote saying that it is super overloaded and we need gigabyte-sized blocks now? What prevents me from lying? What if I am actually experiencing no load at all but I claim to everyone else that I am absolutely overloaded? There is no way for you to verify that what I am saying is true or not without having direct access to my machine. Are you really going to trust what I say and take it at face value?
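This Sybil objection can be made concrete in a few lines (illustrative only; the vote encoding follows the hypothetical scheme from the earlier post, where 0 means spare capacity and 2 means at the limit):

```python
# With unverifiable self-reports, fake nodes are free to create
# and can flip any majority vote at essentially zero real cost.
def majority_wants_increase(votes):
    spare = sum(1 for v in votes if v == 0)
    return spare > len(votes) - spare

honest = [2] * 60 + [0] * 40   # most real nodes are genuinely at capacity
sybil = [0] * 1000             # fake nodes claiming plenty of headroom

print(majority_wants_increase(honest))          # False
print(majority_wants_increase(honest + sybil))  # True
```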



That doesn't mean there's a limit to how long the channel can remain open though, so in cases of people who leave it open indefinitely that ends up being even worse for everyone else who uses on chain transactions compared to those who occasionally close the channel.
Why would it be worse for everyone else? They aren't making transactions on chain, so no one else is affected.



In general, changing what the block size limit is requires considering more than just transaction fees and capacity. You also have to consider things like the potential for increased orphan rates, the potential for fee sniping, whether nodes can support the larger block size, any potential attack surfaces or ways that different block sizes can exacerbate current issues (e.g. quadratic sighashing), etc.



Lastly, a meta note. Please don't make so many consecutive posts to respond to each post individually. Do them all in one post. You can separate each response using a horizontal rule as I have done here.

DannyHamilton
Legendary
*
Offline Offline

Activity: 2198
Merit: 1390



View Profile
December 17, 2017, 02:29:48 AM
Last edit: December 17, 2017, 02:21:15 PM by DannyHamilton
 #19

So then, a reliable metric for node operation is to keep track of the information being broadcast to the network. Since it's the capabilities of the network that we care about, maybe . . .
Any metric that requires self reporting and cannot be independently verified to be true is easily gameable . . .

As you've hopefully noticed from achow101's post...

When trying to come up with a solution, you need to assume that a significant percentage of the network is actively trying to take advantage of your system to make things worse for everyone else.  You either need to make it too expensive for them to bother (which is what proof-of-work accomplishes) or you need to make the metric something that is independently verifiable (which is how the difficulty adjustment works).

Keep in mind, that you can't assume that users will be running the same software as you.  Unlike your CS class sorting system where you knew exactly what it was going to do, and could count on it since you wrote the program, in a distributed system you need to assume that adversaries will write their own software that will try to participate on your network without you realizing it.

DooMAD
Legendary
*
Offline Offline

Activity: 1848
Merit: 1217


Leave no FUD unchallenged


View Profile WWW
December 17, 2017, 11:38:16 AM
Last edit: December 17, 2017, 11:51:34 AM by DooMAD
 #20

Ever since BIP106 was first proposed, I've been a fan of the idea of dynamic scaling.  Although shortly after that, I decided that the original concept was far too unrestrictive and could result in dangerously large size increases if it was abused.  So over time, I've been looking at different tweaks and adjustments, partly to curtail any excessive increases, but also to incorporate SegWit, limit the potential for gaming the system and even prevent dramatic swings in fee pressure.  So far, that's where I've got to.  Still hoping some coders will take an interest and get it to the next level where it might actually be practical to implement.

My opinion is that an open cap is too unrestrictive, but a solid cap is too restrictive. That's why I think we need a way for the network to raise the cap on its own, within a set of limitations, so that it can't bloat.

EDIT: I took a look at your thread, which looks similar to the first part of what I suggested here, but what's to stop the block size from increasing too fast for the network to handle? I think dynamic scaling needs to address both the needs of the network and the capability of the network, with two distinct scaling rules used together. The first adjusts the block size according to the needs of the network, and the second adjusts it according to the capabilities of the network. Together, that allows flexibility, but there would have to be some incentive for node operators to be willing to expand to handle the increased traffic.

And therein lies the rub, there's currently no way to forecast or determine the limits when it comes to the capability of the network, other than asking nodes directly to set their own individual preferred caps.  Somehow I doubt many people on these boards would consider implementing ideas borrowed from Bitcoin Unlimited.   Cheesy

Joking aside, combining algorithmic blockweight adjustments with allowing each node to set an upper limit on the size it is willing to accept would work, if it weren't for the fact that it could easily force nodes off the network in a hardfork whenever they set their personal limit lower than that of the majority of other participants.  So even if people were willing to take ideas from BU, it still has some pretty serious shortcomings.  If anyone can come up with a solution to that conundrum, I'm eager to hear it.  Until then, it's a sticking point.  Which is why all I could really do is make any increases as small as possible and allow the network to undo the increase if and when the demand isn't there anymore.

In essence, any attempt to place any hard upper limit inevitably results in hardforks at some point in future, unless you're an absolute genius and manage to find a workaround or hack to implement it as an opt-in soft fork.
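For comparison, the kind of demand-driven rule BIP106 proposed can be sketched roughly like this (the thresholds paraphrase proposal 1 from memory; see the BIP text for the exact conditions):

```python
# Rough sketch of a BIP106-style rule: look at the recent blocks and
# adjust the limit only on sustained demand or sustained slack.
def adjust_limit(block_sizes, limit):
    n = len(block_sizes)
    full = sum(1 for s in block_sizes if s > 0.9 * limit)
    light = sum(1 for s in block_sizes if s < 0.5 * limit)
    if full > n // 2:        # majority of blocks over 90% full: double
        return limit * 2
    if light > n * 9 // 10:  # over 90% of blocks under half full: halve
        return limit // 2
    return limit

# Three quarters of recent blocks were over 90% full -> limit doubles.
print(adjust_limit([950_000] * 1500 + [100_000] * 500, 1_000_000))  # 2000000
```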


Please stop wasting your time (and everyone else's): learn how the Lightning concept works first, then start talking again.

Please don't respond to threads with a condescending attitude, especially with inaccurate information, lest you be seen as trolling. That last line was unnecessary and detracts from the conversation. Even if I was wrong (and I wasn't), there's nothing wrong with being wrong and being corrected.

If you dare to discuss on-chain scaling, Carlton will jump down your throat for even the slightest perceived transgression.  Not that I'm excusing the behaviour, it's just that I can't imagine it changing any time soon.  It just seems to be the way of things.  Don't let it put you off.
