Bitcoin Forum
Author Topic: Won't Bitcoin block size be resolved through simple market economics?  (Read 702 times)
No_2 (OP), Hero Member | Activity: 901 | Merit: 1031
December 18, 2020, 09:55:28 AM
Merited by Welsh (8), suchmoon (4), ranochigo (2), o_e_l_e_o (2), ABCbits (1), Heisenberg_Hunter (1)
#1

I know this question gets asked a lot, but in this post I want to discuss my answer, not the question. I give this answer so often to so many people that I just wanted to run it past the community to check it seems sane.

Q: "Bitcoin can only handle 7 transactions a second. Surely this makes it unusable for day to day transactions?"

A: "The block size is one of the main components limiting transaction throughput. Miners receive a reward for each block: this reward is composed of a subsidy, originally 50 BTC per block and currently 6.25 BTC per block, plus the fees paid by individual users for each transaction. Over time the subsidy will become so insignificant that miners will collectively be incentivised to focus on increasing the amount of fees they receive for each block, encouraging them to reach consensus on increasing the number of transactions in each block. This has already happened with segregated witness; other techniques will no doubt be implemented to increase it further, possibly even just raising the permitted data size of a block. So I believe this problem will solve itself over time and isn't something to worry about long term."
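A rough sketch of the subsidy decay that answer relies on (the 50 BTC starting subsidy and the 210,000-block halving interval are protocol constants; the blocks-per-year figure and the resulting calendar years are approximations):

Code:
# Block subsidy per halving era (protocol constants; calendar years are approximate)
INITIAL_SUBSIDY = 50.0          # BTC per block in the first era
HALVING_INTERVAL = 210_000      # blocks between halvings
BLOCKS_PER_YEAR = 52_560        # ~144 blocks/day * 365 days

def subsidy_at_height(height: int) -> float:
    """Block subsidy in BTC at a given height (ignores satoshi rounding)."""
    return INITIAL_SUBSIDY / (2 ** (height // HALVING_INTERVAL))

for era in range(8):
    height = era * HALVING_INTERVAL
    year = 2009 + height / BLOCKS_PER_YEAR
    print(f"~{year:.0f}: {subsidy_at_height(height):.4f} BTC per block")
# Within roughly 20 years the subsidy drops below 1.6 BTC per block, so fees
# have to carry an ever larger share of the miner reward.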

So is the answer I'm giving feasible, or has the code base ossified so much that this type of change is simply off the table?

I always tell people that Bitcoin is still bootstrapping; it won't have matured until it's around 130 years old. So any conversation about its usability at this point is getting ahead of itself. Like a cake that's still baking: if you try to eat it now, you'll find the middle is still liquid and raw. I think the idea that something is architected more for the use of future generations than for us sitting here discussing it now seems very alien to people.
NotATether, Legendary | Activity: 1582 | Merit: 6677
December 18, 2020, 10:55:34 AM
Merited by Welsh (6), suchmoon (4), o_e_l_e_o (2), ABCbits (1), Pmalek (1), Heisenberg_Hunter (1)
#2

The thing is we have no idea how large the blockchain will be 5 years from now, or 10 or 20 years. That means we cannot know in advance the storage requirements for running a full node. We can only estimate the future size in the short term by extrapolating historical data of the blockchain size like this chart: https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/

From January 2019 to February 2020 the size increase was about 60GB. From February 2020 up to now it has been about 50GB, so it's safe to assume the increase will reach 60GB by March 2021 (the same 13-month span), giving a total blockchain size of roughly 320GB compared to the present size of 308GB.
Now extrapolating over the next 30 years, we have:

2022-04: 380GB
2023-05: 440GB
2024-06: 500GB
2025-07: 560GB
2026-08: 620GB
2027-09: 680GB
2028-10: 740GB
2029-11: 800GB
2030-12: 860GB
2032-01: 920GB
2033-02: 980GB
2034-03: 1040GB
2035-04: 1100GB
2036-05: 1160GB
2037-06: 1220GB
2038-07: 1280GB
2039-08: 1340GB
2040-09: 1400GB
2041-10: 1460GB
2042-11: 1520GB
2043-12: 1580GB
2045-01: 1640GB
2046-02: 1700GB
2047-03: 1760GB
2048-04: 1820GB
2049-05: 1880GB
2050-06: 1940GB
2051-07: 2000GB

So in 30 years' time, if the blockchain continues to grow by a constant amount, we can no longer store it on a 2TB hard drive. Sure, you can easily find 4TB consumer models, but that already excludes a lot of bitcoiners who are on a budget; certainly more than the percentage of bitcoiners who would be forced to stop running a node in 10 years' time because they can't afford a 2TB drive. And at these rates, by around 2140 it will probably surpass 4TB as well.
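A minimal sketch of the linear extrapolation behind that table (the 60GB-per-13-months rate and the 320GB starting point are taken from this post, so they are assumptions rather than measurements):

Code:
# Linear extrapolation using the figures above: ~320 GB projected for March 2021,
# growing by ~60 GB every 13 months.
GROWTH_GB = 60
STEP_MONTHS = 13

size_gb, year, month = 320, 2021, 3
for _ in range(28):
    size_gb += GROWTH_GB
    month += STEP_MONTHS
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    print(f"{year}-{month:02d}: {size_gb} GB")
# The projection crosses ~2000 GB around 2051, i.e. a single 2TB drive no longer holds it.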

And these disk size breakthroughs matter because a lot of nodes run on cloud dedicated servers that come with a 500GB/1TB/2TB/4TB drive. Such plans rarely offer 8TB or 16TB drives, so at that point bitcoiners would have to know how to set up RAID or LVM to combine multiple disks in order to run a full node. This can't easily be done on Windows or macOS, and even some Linux users will struggle to set it up.

All these dates and sizes assume the blockchain grows by a fixed constant, but I've noticed the growth is actually accelerating (more like exponential), so we'll probably hit these limits sooner. Accelerating growth also means more people and businesses are using bitcoin compared to last year.



In summary, we already have long-term storage problems even with the present block size; there's no need to increase it and bring forward the dates when these storage limits are hit. And to finally answer your question, increased fees will probably be the way forward after the block subsidy is exhausted, not an increased block size. Hopefully the majority of users will move to a second-layer protocol on top of bitcoin, such as LN, to avoid the high fees.

I don't see them increasing both because higher fees hurt users, and higher block sizes hurt full node runners.

o_e_l_e_o (In memoriam), Legendary | Activity: 2268 | Merit: 18507
December 18, 2020, 11:46:08 AM
Merited by Welsh (2), d5000 (1), heyuniverse (1)
#3

So in 30 years' time, if the blockchain continues to grow by a constant amount, we can no longer store it on a 2TB hard drive. Sure, you can easily find 4TB consumer models, but that already excludes a lot of bitcoiners who are on a budget; certainly more than the percentage of bitcoiners who would be forced to stop running a node in 10 years' time because they can't afford a 2TB drive. And at these rates, by around 2140 it will probably surpass 4TB as well.
But hard drive sizes are increasing exponentially while their prices are steadily decreasing. 30 years ago, $500 would have bought you somewhere around 30MB. In 2008 when the whitepaper was published, $100 would have got you 500GB. Today, I can get 4TB for about $60-80. In another 30 years' time, 2TB could well be the smallest amount of storage you can easily purchase, in the same way that you can't easily buy a 2GB hard drive today. By the time we get to 2140, I imagine most consumer drives will be measured in at least petabytes.

I don't think hard drive size or cost will be the limiting factor in bitcoin's scalability. My assumption is that by 2140 the majority of on chain transactions will be opening and closing of Lightning channels (or some other second layer solution) and the majority of day to day bitcoin transactions will take place off chain.
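Purely for illustration, a toy comparison of those two growth curves; the chain growth rate and the drive capacity doubling period below are assumptions, not data:

Code:
# Toy comparison (assumed numbers): linear chain growth vs. exponentially growing
# affordable consumer drive capacity.
CHAIN_GROWTH_GB_PER_YEAR = 60      # assumed: roughly the linear rate discussed above
DRIVE_2020_TB = 4                  # ~4TB for $60-80 in 2020, as mentioned above
DRIVE_DOUBLING_YEARS = 4           # assumption: affordable capacity doubles every ~4 years

for year in range(2025, 2061, 5):
    t = year - 2020
    chain_tb = (320 + CHAIN_GROWTH_GB_PER_YEAR * t) / 1000
    drive_tb = DRIVE_2020_TB * 2 ** (t / DRIVE_DOUBLING_YEARS)
    print(f"{year}: chain ~{chain_tb:.1f} TB vs. cheap consumer drive ~{drive_tb:.0f} TB")
# Under these assumptions the affordable drive pulls further and further ahead of the chain.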
ranochigo, Legendary | Activity: 2954 | Merit: 4163
December 18, 2020, 11:51:45 AM
#4

The thing is we have no idea how large the blockchain will be 5 years from now, or 10 or 20 years. That means we cannot know in advance the storage requirements for running a full node. We can only estimate the future size in the short term by extrapolating historical data of the blockchain size like this chart: https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/
I would estimate using the worst-case scenario, which is blocks being filled completely, i.e. roughly ~2.3MB per block.
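As a quick back-of-the-envelope (the ~2.3MB figure is the estimate above; 144 blocks per day follows from the 10-minute target):

Code:
# Worst-case chain growth if every block were ~2.3 MB (estimate from this post)
BLOCK_MB = 2.3
BLOCKS_PER_DAY = 24 * 6            # ~144 blocks at the 10-minute target

gb_per_year = BLOCK_MB * BLOCKS_PER_DAY * 365 / 1000
print(f"~{gb_per_year:.0f} GB per year")   # ~121 GB/year, roughly double the recent rate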

And these disk size breakthroughs are important because a lot of nodes run on cloud dedicated servers which have a 500GB/1TB/2TB/4TB drive. Nobody makes a consumer 8TB or 16TB drive, so at that point bitcoiners would have to know how to set up RAID or LVM to combine multiple disks to run a full node. This can't easily be done on Windows or macOS and maybe even some Linux users will struggle setting it up.
I think storage density has the potential to keep increasing, and flash-based storage will become exponentially cheaper as the technology improves. I think it'll come with the risk of centralisation, but it'll happen sooner or later. It'll depend on how fast we can improve the technology.

I don't think relying on a prediction is pragmatic or realistic. Unfortunately, certain miners could be driven by an agenda that motivates them to keep capacity stagnant, thus preventing any improvement in scalability. That is what happened with SegWit; there was a bunch of drama, but I don't like drama and won't elaborate on it here, also because it has been discussed again and again.

I wouldn't say that miners won't do what you described, but it's hard to tell. Instead of trying to predict what's coming, I would rather point out that there are solutions to scalability and that huge progress has been made in that area. LN enables larger capacity and smaller fees per transaction, and it's pretty useful right now already. I have no doubt that optimizations will keep coming to Bitcoin: MAST, Taproot, etc. But the part about miners reaching consensus on a scalability approach can be quite a stretch. There is no telling what that optimization would be or whether it would benefit the miners at all, and it will definitely come with some resistance.

Charles-Tim, Legendary | Activity: 1526 | Merit: 4807
December 18, 2020, 12:21:11 PM
#5

I don't see them increasing both because higher fees hurt users,
Before the Bitcoin Cash hard fork, the issue was that the Bitcoin Cash community was demanding a block size increase, which eventually led to the hard fork that created Bitcoin Cash. The part of the community that sees bitcoin as an asset defended the existing limit, and all the hard fork coins since then are nothing next to bitcoin today. My opinion is that bitcoin is an appreciating asset, so there is no need to increase the transaction fee in BTC terms: bitcoin itself will appreciate against fiat in such a way that miners will still find mining profitable. If bitcoin keeps increasing in value, that increase alone will be able to sustain profitable mining. Bitcoin being an appreciating asset is enough to defend itself after all bitcoins are mined.

What I am trying to say is that bitcoin being an appreciating asset will make the mining fee increase in fiat terms without increasing in bitcoin terms. People who find on-chain fees too expensive will likely resort to Lightning payments.
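A toy illustration of that point (the fee and the prices are made-up numbers; only the relationship matters):

Code:
# A fee that stays constant in BTC terms still grows in fiat terms
# as the BTC price appreciates.
FEE_BTC = 0.0001                   # assumed constant on-chain fee in BTC

for price_usd in (20_000, 50_000, 200_000, 1_000_000):
    print(f"BTC at ${price_usd:,}: fee = {FEE_BTC} BTC = ${FEE_BTC * price_usd:,.2f}")
# Miner income per fee rises in fiat even though users never pay more bitcoin.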

ABCbits, Legendary | Activity: 2856 | Merit: 7403
December 18, 2020, 12:44:01 PM
Merited by Welsh (4), vapourminer (1), Heisenberg_Hunter (1), NotATether (1), aliashraf (1)
#6

You're missing the point. The bitcoin community mostly agrees that running a full node should be cheap, which means the block size is limited by the growth rate of hardware and internet connectivity.

IMO blockchain size and internet connection aren't the worst part, but rather CPU, RAM and storage speed:
1. Can the CPU verify transactions and blocks in real time?
2. How much RAM is needed to store all the cached data?
3. Can the storage handle intensive reads/writes? Ethereum is already suffering from this problem.

Such plans rarely offer 8TB or 16TB drives, so at that point bitcoiners would have to know how to set up RAID or LVM to combine multiple disks in order to run a full node. This can't easily be done on Windows or macOS, and even some Linux users will struggle to set it up.

At the very least, you can set up RAID and LVM with a GUI application. I don't know about macOS, but Windows has a GUI for this, even though it uses the terms "Storage Spaces" and "storage pools".
It's a different case if we're talking about hardware-level or BIOS-level RAID setup.

aliashraf, Legendary | Activity: 1456 | Merit: 1174
December 18, 2020, 09:12:06 PM
Last edit: December 18, 2020, 09:30:37 PM by aliashraf
Merited by vapourminer (1), d5000 (1), Heisenberg_Hunter (1)
#7

The scaling problem is not that simple:

First of all, as @ETFbitcoin has mentioned above, it has nothing to do with blockchain size and hard disk capacities: we already have pruning working fine, and bootstrapping new nodes is not a worry either, since it can be addressed by simple UTXO commitment techniques.

Secondly, it cannot be postponed for decades, as the OP proposes. Bitcoin is not a sci-fi technology for the 22nd century; people need scaling NOW.

Thirdly, it is not achievable without sharding, because you can't get hundreds of transactions per second processed and timestamped by ALL participating nodes simultaneously while going through the consensus tunnel of a p2p decentralized network. It is just impossible, now and for the next few decades at least.

So, let's recognize scaling as a crucial problem that takes a lot of original/out-of-the-box thinking, speculative investment, courage, and good faith, all four being rare resources nowadays in the community.

NotATether, Legendary | Activity: 1582 | Merit: 6677
December 18, 2020, 10:38:24 PM
#8

1. Can the CPU verify transactions and blocks in real time?

^ This. I can see CPU time for verification becoming a huge bottleneck as the blockchain gets bigger, since we've made virtually no advancements in CPU clock speeds over the past 5 years. Block verification might even be single threaded (!) which makes multi-core advancements worthless, CMIIW.

pooya87, Legendary | Activity: 3430 | Merit: 10495
December 19, 2020, 04:09:34 AM
#9

Block verification might even be single threaded (!) which makes multi-core advancements worthless, CMIIW.
In a perfect world transactions in a block could be verified in parallel, but in reality each tx can reference another output that was created earlier in the same block, which makes the code a lot more complicated and can create bottlenecks that slow the verification process down.

We've already got some optimization though. For instance, with SegWit the hashes don't have to be recomputed from scratch for every input in a tx; most of the data can be cached and only a much smaller hash has to be computed per input. Or with Schnorr (upcoming) we can perform batch verification instead of verifying each signature alone.
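A rough sketch of that caching idea (the structure loosely follows BIP143's hashPrevouts/hashSequence/hashOutputs fields; the serialization here is simplified placeholder data, not consensus-exact):

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    """Double SHA-256, as bitcoin uses for sighashes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def segwit_style_sighashes(prevouts, sequences, outputs):
    """Simplified BIP143-style flow: three midstate hashes are computed once per
    transaction and reused for every input, so per-input hashing stays small instead
    of re-hashing (almost) the whole transaction for each input."""
    hash_prevouts = dsha256(b"".join(prevouts))    # cached once per transaction
    hash_sequence = dsha256(b"".join(sequences))   # cached once per transaction
    hash_outputs  = dsha256(b"".join(outputs))     # cached once per transaction

    sighashes = []
    for prevout, sequence in zip(prevouts, sequences):
        # Per-input preimage only adds this input's own fields (heavily simplified).
        preimage = hash_prevouts + hash_sequence + prevout + sequence + hash_outputs
        sighashes.append(dsha256(preimage))
    return sighashes

# Placeholder usage: 3 inputs, 2 outputs
ins  = [b"prevout0", b"prevout1", b"prevout2"]
seqs = [b"\xff\xff\xff\xff"] * 3
outs = [b"out0", b"out1"]
print([h.hex()[:16] for h in segwit_style_sighashes(ins, seqs, outs)])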

NotATether, Legendary | Activity: 1582 | Merit: 6677
December 19, 2020, 01:44:48 PM
#10

We also can split it to GPU (verify condition which could be checked independently) and CPU (verify condition which can't be checked independently), but AFAIK it's more difficult to implement.

I imagine that ensuring the OpenCL runtime is present on a system that'll run Bitcoin Core will be a big problem, because of the number of people running nodes within VMs, where OpenCL-on-CPU performance is mediocre.

Maybe we should focus more on finding algorithms that speculatively group addresses by their probability of being referenced together and unroll the verification check accordingly.

Even parallelism across just 2-4 threads is very beneficial since nodes usually have that many to spare.

aliashraf, Legendary | Activity: 1456 | Merit: 1174
December 19, 2020, 03:07:31 PM
Last edit: December 19, 2020, 03:48:07 PM by aliashraf
#11

Block verification might even be single threaded (!) which makes multi-core advancements worthless, CMIIW.
In a perfect world transactions in a block could be verified in parallel but in reality each tx can reference another output that was created in the same block earlier which makes the code a lot more complicated and can create bottlenecks that slows the verification process down.

This is easy:
Code:
set temp_UTXO as empty list;
mark all txns in block as UNPROCESSED;
set potentialInputFlag = TRUE;
set waitingFlag = TRUE;

while (potentialInputFlag && waitingFlag) {
  set potentialInputFlag = FALSE;
  set waitingFlag = FALSE;
  for each UNPROCESSED txn in block {      // parallel
    try {
      isValid(txn);   // raises error if txn invalid; checks temp_UTXO for inputs as well
      mark txn as VALID;
      temp_UTXO.add(txn outputs);
      set potentialInputFlag = TRUE;       // progress was made; new outputs may unblock others
    }
    catch (err) {
      if (err.cause == INPUT_NOT_FOUND)
        set waitingFlag = TRUE;            // postpone: retry this txn in the next round
      else
        raiserror(err);                    // any other failure invalidates the block
    }
  }
}
if (any txn still UNPROCESSED)
  reject block;                            // unresolved inputs remain after convergence
The idea is to postpone rejection of txns with unknown inputs as long as new txns keep becoming valid in each round. Note that the main loop is not parallel, but within each round all unprocessed txns are processed in parallel. It converges in a number of rounds proportional to the length of the longest chain of dependent txns in the block.
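A minimal runnable sketch of the same idea in Python (transactions are represented as plain dicts; the validity check here only looks at input availability and ignores scripts, amounts and in-block double-spends, so it is a stand-in, not real validation):

Code:
def validate_block_rounds(txs, utxo_set):
    """Round-based validation: each round, every still-unprocessed tx whose inputs are
    all known (confirmed UTXOs or outputs created earlier in this block) is accepted;
    the txs inside one round could be verified in parallel. Txs waiting on in-block
    parents are retried in the next round."""
    temp_utxo = set()              # outputs created by txs already accepted in this block
    unprocessed = list(txs)

    while unprocessed:
        ready = [tx for tx in unprocessed
                 if all(i in utxo_set or i in temp_utxo for i in tx["inputs"])]
        if not ready:              # no progress but still waiting -> reject the block
            raise ValueError(f"unresolved inputs: {[tx['id'] for tx in unprocessed]}")
        for tx in ready:           # this inner loop is the parallelizable part
            temp_utxo.update(tx["outputs"])
        unprocessed = [tx for tx in unprocessed if tx not in ready]
    return True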

pooya87, Legendary | Activity: 3430 | Merit: 10495
December 20, 2020, 04:23:22 AM
#12

Block verification might even be single threaded (!) which makes multi-core advancements worthless, CMIIW.
In a perfect world transactions in a block could be verified in parallel but in reality each tx can reference another output that was created in the same block earlier which makes the code a lot more complicated and can create bottlenecks that slows the verification process down.

There are many things to verify when verifying a transaction, so I wonder if it's possible to verify some conditions which could be checked independently (total bitcoin on outputs <= total bitcoin on inputs, signed with a valid private key, etc.).
That way we could use a core/thread only to verify the few conditions which can't be checked independently (e.g. whether the input is in the UTXO set).

We also can split it to GPU (verify condition which could be checked independently) and CPU (verify condition which can't be checked independently), but AFAIK it's more difficult to implement.
We want to compute the most expensive operations (i.e. ECDSA and possibly hashing) concurrently, not the simplest ones that don't take much time to begin with. For example, checking values in each transaction is a simple Int64 comparison, which I believe is about 1 CPU cycle. Additionally we have to look up each UTXO in the database, which I don't believe can be parallelized.

This is easy:
It sounds easy, but it has to be actual code and be benchmarked to see whether it actually increases the speed, especially for a block like this (DB means the input comes from the UTXO database we already have from the previous blocks):
Code:
      input(s)
tx0:  000
tx1:  DB
tx2:  DB
tx3:  tx1 DB
tx4:  DB tx1 tx3 DB
tx5:  DB
tx6:  tx1 tx3 tx2 DB tx5
tx7:  DB tx6 tx1 tx3 tx2 DB tx4
...
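Tracing the round-based approach from the previous post over this exact example, as a small sketch (tx0 is treated as a coinbase with no in-block dependencies; DB inputs are omitted since they never block anything):

Code:
# Dependency graph copied from the example above: each tx lists the in-block txs it spends from.
deps = {
    "tx0": [], "tx1": [], "tx2": [], "tx5": [],
    "tx3": ["tx1"],
    "tx4": ["tx1", "tx3"],
    "tx6": ["tx1", "tx3", "tx2", "tx5"],
    "tx7": ["tx6", "tx1", "tx3", "tx2", "tx4"],
}

validated, round_no = set(), 0
while len(validated) < len(deps):
    round_no += 1
    ready = [tx for tx, parents in deps.items()
             if tx not in validated and all(p in validated for p in parents)]
    if not ready:
        raise ValueError("unresolvable dependency -> block invalid")
    validated.update(ready)
    print(f"round {round_no}: {sorted(ready)}")
# Prints 4 rounds for this block: the round count equals the longest dependency chain
# (tx1 -> tx3 -> tx4/tx6 -> tx7), with parallel work possible inside each round.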

aliashraf, Legendary | Activity: 1456 | Merit: 1174
December 20, 2020, 04:57:41 AM
Last edit: December 20, 2020, 05:15:14 AM by aliashraf
#13

This is easy:
It sounds easy but it has to be actual code and be benchmarked to see if it actually increases the speed or not
Haha, it reminds me of a familiar tone, seriously   Cheesy
I don't wait for the final code before starting to discuss an idea; do you? If yes, let me just say it: you are becoming too Gregorized  Cheesy
Quote
specially for a block like this (DB is from UTXO database that we already have from all the previous blocks):
Code:
      input(s)
tx0:  000
tx1:  DB
tx2:  DB
tx3:  tx1 DB
tx4:  DB tx1 tx3 DB
tx5:  DB
tx6:  tx1 tx3 tx2 DB tx5
tx7:  DB tx6 tx1 tx3 tx2 DB tx4
...

Your hypothetical example is not common; treat it as an exceptional worst-case scenario, not a "special" one.
pooya87, Legendary | Activity: 3430 | Merit: 10495
December 20, 2020, 06:51:47 AM
#14

Haha, it reminds me of a familiar tone, seriously   Cheesy
I don't do this, waiting for the final code to start discussing it, do you? If yes, let me just say it, you are becoming too Gregorized  Cheesy
Sure we can discuss it, but as someone who has written a lot of code and optimized a lot of it (even though I'm not an expert), I've learned not to say anything with certainty when it comes to optimization. It has to be thoroughly and correctly benchmarked first.

Quote
Your hypothetical example is not common, leave it as an exceptional worst case scenario, not a "special" one.
It is not as uncommon as you think. Nowadays, with increasing fees, we will see more cases of CPFP, which is basically what my example represents (the parent tx is in the same block).
There are also spam attacks that could fill the mempool with chains of high-paying transactions like this.

NotATether, Legendary | Activity: 1582 | Merit: 6677
December 20, 2020, 07:07:36 AM
#15

The idea is postponing rejection of txns with unknown input as long as there are new txns that become valid in each loop. Note that the main loop is not parallel but in each round all unprocessed txns are processed in parallel. Obviously it converges proportional with the length of the longest chain of txns in the block.

Postponing and then processing the unknown-input transactions in later rounds will use more CPU cycles overall than processing them first in, first out, which is what we are trying to avoid.

Also, when this gathers all the transactions for a particular address, in order to process them all correctly in parallel, all of those transactions truly have to be present first. The balance might be calculated as negative, or as more or less than it truly is, if not all the transactions are present. And you can only avoid this 100% by postponing all transactions, i.e. you're creating a table with some of the information that transaction processing would have obtained anyway, so you might as well get the rest of the info during that stage to save time. So the postponing stage becomes equivalent to the processing stage when you do that.

PrimeNumber7, Copper Member, Legendary | Activity: 1610 | Merit: 1899
December 20, 2020, 07:51:42 AM
#16

IMO blockchain size and internet connection aren't the worst part, but rather CPU, RAM and storage speed:
1. Can the CPU verify transactions and blocks in real time?
2. How much RAM is needed to store all the cached data?
3. Can the storage handle intensive reads/writes? Ethereum is already suffering from this problem.
Internet connection is an important factor in being able to effectively use a full node. If you cannot quickly receive a block, none of your bullet points matter because your node will not even start verifying if a block is valid in a timely manner.

Regarding your 1st point, a user will need a CPU that can quickly validate a block.

Regarding your 2nd point, the UTXO set is held in RAM (I believe). The UTXO set is generally increasing over time, and if the maximum block size were to be increased, the UTXO set will increase at a faster rate. In theory, a portion of the UTXO set could be stored on disk based on certain criteria, and this would result in blocks that spend a UTXO stored on disk to take longer to validate. It would also potentially expose nodes to a DDoS attack if an attacker were to send many invalid blocks to nodes.
pooya87, Legendary | Activity: 3430 | Merit: 10495
December 20, 2020, 08:41:38 AM
Merited by Welsh (2)
#17

Internet connection is an important factor in being able to effectively use a full node. If you cannot quickly receive a block, none of your bullet points matter because your node will not even start verifying if a block is valid in a timely manner.
High bandwidth and low latency matter for a mining node, not for a regular full node. You are downloading 4 MB tops, and even with a slow connection you can get that in a very short time (shouldn't be more than ~30 seconds in the worst case).
Initial sync is a different matter, but it only happens once.
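For a quick sense of the numbers (the connection speeds below are just illustrative assumptions):

Code:
# Time to download a worst-case ~4 MB block at various connection speeds
BLOCK_MB = 4
for mbps in (1, 5, 20, 100):               # assumed connection speeds, megabits per second
    seconds = BLOCK_MB * 8 / mbps
    print(f"{mbps:>3} Mbit/s: ~{seconds:.0f} s per block")
# Even a 1 Mbit/s line fetches a block in ~32 s, well within the ~10-minute block interval.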

Quote
and if the maximum block size were to be increased, the UTXO set will increase at a faster rate.
Not necessarily. UTXO count mainly grows with the number of users, not with the block size.

aliashraf, Legendary | Activity: 1456 | Merit: 1174
December 20, 2020, 08:51:19 AM
Last edit: December 20, 2020, 11:23:28 AM by aliashraf
#18

The idea is postponing rejection of txns with unknown input as long as there are new txns that become valid in each loop. Note that the main loop is not parallel but in each round all unprocessed txns are processed in parallel. Obviously it converges proportional with the length of the longest chain of txns in the block.

Postponing followed by processing unknown input transactions at some later date will use more cpu cycles overall than processing them first in, first out which is what we are trying to avoid.
Misleading assertion: even if it uses more CPU cycles in total, spreading the work across multiple threads is orders of magnitude faster than single-threaded processing.

Quote

Also, when this gets all the transactions for a particular address, in order to process them all correctly in parallel, truly all the transactions have to be present first. The balance might be calculated as negative, or more/less than it truly is if not all the transactions are present. And you can only avoid this 100% by postponing all transactions i.e you're creating a table with some of the information that transaction processing would've obtained so you might as well get the rest of the info during that stage to save time. So the postponing stage becomes equivalent to the processing stage when you do that.
Check the algorithm again.
A transaction either has ALL of its inputs present in DB+temp or it doesn't; in the second case it is postponed to the next round, while in each round we process the ready GROUP all at once, in PARALLEL. The loop converges after n rounds, with n being the depth of the longest chain of transactions in the block.
 


Haha, it reminds me of a familiar tone, seriously   Cheesy
I don't do this, waiting for the final code to start discussing it, do you? If yes, let me just say it, you are becoming too Gregorized  Cheesy
Sure we can discuss it, but as someone who has written a lot of code and optimized a lot of it (even though I'm not an expert) I've learned to not say anything with certainty whenever it comes to optimization. It has to be thoroughly and correctly benchmarked first.
I've been there too, but when it comes to parallelism your hesitation is not helpful: make it multi-threaded when you've got a parallel algorithm, period.

Quote
Quote
Your hypothetical example is not common, leave it as an exceptional worst case scenario, not a "special" one.
It is not as uncommon as you think. Nowadays with increasing fees we will see more cases of CPFP which is basically what my example represents (parent tx is in the same block).
There is also spam attacks that could fill the mempool with high paying chain transactions like this.
Spam doesn't pay high fees, otherwise it is no longer spam, and CPFP is not a big concern either. I've not examined the real blockchain statistically for this, but I'm afraid you're going too far.
tromp, Legendary | Activity: 976 | Merit: 1076
December 20, 2020, 09:05:51 PM
#19

it won't have matured until it's around 130 years old.

It will have mostly matured in 4 years:

https://john-tromp.medium.com/a-case-for-using-soft-total-supply-1169a188d153
PrimeNumber7, Copper Member, Legendary | Activity: 1610 | Merit: 1899
December 21, 2020, 02:17:35 AM
#20

Internet connection is an important factor in being able to effectively use a full node. If you cannot quickly receive a block, none of your bullet points matter because your node will not even start verifying if a block is valid in a timely manner.
High internet speed (low latency) is important for a mining node not a regular full node. For example you are downloading 4 MB tops and even with a slow internet speed you can get it in a very short time (shouldn't be more than 30 sec worse case).
If it takes too long to download a block, you could potentially be accepting a transaction that relies on inputs that have already been spent. In other words, it opens you up to double spend attacks.

Quote
and if the maximum block size were to be increased, the UTXO set will increase at a faster rate.
Not necessarily. UTXO count mainly grows with number of users not with block size.
The UTXO set and the number of on-chain users are constrained by the maximum block size. You could also say that the UTXO set is a function of actual block sizes.


It would also potentially expose nodes to a DDoS attack if an attacker were to send many invalid blocks to nodes.

The attacker would be banned 24 hours.
It is trivial to gain access to IP addresses. An attacker could use a single server that controls many IP addresses.