Bitcoin Forum
November 21, 2017, 05:01:17 AM
News: Latest stable version of Bitcoin Core: 0.15.1  [Torrent].
 
Author Topic: "How Software Gets Bloated: From Telephony to Bitcoin"  (Read 1261 times)
johnyj (Legendary, Activity: 1834, "Beyond Imagination")
April 07, 2016, 12:17:16 AM  #1

http://hackingdistributed.com/2016/04/05/how-software-gets-bloated/

In my experience, software bloat almost always comes from smart, often the smartest, devs who are technically the most competent. Couple their abilities with a few narrowly interpreted constraints, a well-intentioned effort to save the day (specifically, to save today at the expense of tomorrow)
...
As systems get bloated, the effort required to understand and change them grows exponentially. It's hard to get new developers interested in a software project if we force them to not just learn how it works, but also how it got there, because its process of evolution is so critical to the final shape it ended up in. They became engineers precisely because they were no good at history.

DumbFruit (Sr. Member, Activity: 428)
April 07, 2016, 06:40:54 PM  #2

He glosses over the important part in order to make an extremely tenuous inference:
"Most of my brain feels that this is a brilliant trick, except my deja-vu neurons are screaming with 'this is the exact same repurposing trick as in the phone switch.' It's *just* across software versions in a distributed system, as opposed to different layers in a single OS."
Emphasis mine.

That is not "just" a minor difference that can be hand-waved; it is the raison d'être of the soft fork. It is not that Bitcoin developers are worried they might mess something up, or that a hard fork would be a lot more work, which seems to be the implication of the article.

By their (dumb) fruits shall ye know them indeed...
AlexGR (Legendary, Activity: 1414)
April 10, 2016, 12:00:48 PM  #3

Where I feel the bloat is at I/O...

Quote
Jeff Dean's "Numbers that everyone should know" emphasizes the painful latencies involved with all forms of I/O. For years, the consistent message to developers has been that good performance is guaranteed by keeping the working set of an application small enough to fit into RAM, and ideally into processor caches. If it isn't that small, we are in trouble.

...This could change pretty soon - but give it a few years before it hits mainstream adoption (maybe they want to milk the corporate profits first):

http://www.extremetech.com/extreme/211087-intel-micron-reveal-xpoint-a-new-memory-architecture-that-claims-to-outclass-both-ddr4-and-nand
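To put some numbers behind the working-set argument above, here is a small sketch of the commonly circulated latency figures attributed to Jeff Dean's list. The exact values are approximate, order-of-magnitude figures for circa-2010 hardware, not measurements from any particular machine:

```python
# Approximate latency figures widely attributed to Jeff Dean's
# "Numbers that everyone should know" (order-of-magnitude, circa 2010).
LATENCY_NS = {
    "L1 cache reference":                      0.5,
    "main memory reference":                 100.0,
    "read 1 MB sequentially from RAM":   250_000.0,
    "disk seek":                      10_000_000.0,
    "read 1 MB sequentially from disk": 30_000_000.0,
}

L1 = LATENCY_NS["L1 cache reference"]
for op, ns in LATENCY_NS.items():
    # cost of each operation relative to an L1 cache hit
    print(f"{op:34s} {ns:>14,.1f} ns  ({ns / L1:>12,.0f}x L1)")
```

The seven-orders-of-magnitude spread between an L1 hit and a disk seek is the whole reason keeping the working set cache-resident dominates performance advice.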
gmaxwell (Moderator, Legendary, Activity: 2338)
April 10, 2016, 07:09:50 PM  #4

Quote
He glosses over the important part in order to make an extremely tenuous inference:
"Most of my brain feels that this is a brilliant trick, except my deja-vu neurons are screaming with 'this is the exact same repurposing trick as in the phone switch.' It's *just* across software versions in a distributed system, as opposed to different layers in a single OS."
Emphasis mine.

That is not "just" a minor difference that can be hand-waved; it is the raison d'être of the soft fork. It is not that Bitcoin developers are worried they might mess something up, or that a hard fork would be a lot more work, which seems to be the implication of the article.

It also ignores, or isn't aware of, the fact that we implemented segwit in the "green-field" form (with no constraint from history) in Elements Alpha -- and we consider the form proposed for Bitcoin superior, and are changing future Elements test chains to use it.

Applying this to Bitcoin conflates misleading source code with protocol design. None of these soft-fork mechanisms results in things like mislabeled fields in an implementation or other similar technical debt -- if that happens, it's purely the fault of the implementation.


Bitcoin will not be compromised
Peter R (Legendary, Activity: 1064)
April 14, 2016, 03:38:02 AM  #5

Quote
None of these soft fork things result in things like mislabeled fields in an implementation or other similar technical debt...

Is placing the Merkle root for the block's witness data in a transaction, rather than in the block's header, not a quintessential example of a "kludge" or a "hack"?

What about the accounting trick needed to permit more bytes of transactional data per block without a hard-forking change? This hack doesn't just add technical debt; it adversely affects the economics of the system itself.
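For readers unfamiliar with the mechanism being debated: under BIP 141 the witness Merkle root is committed to in an OP_RETURN output of the coinbase transaction rather than in the 80-byte block header. A minimal sketch of that commitment construction (the example root value is made up for illustration):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used throughout the Bitcoin protocol."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def witness_commitment_script(witness_merkle_root: bytes,
                              witness_reserved_value: bytes = b"\x00" * 32) -> bytes:
    """Build the coinbase scriptPubKey carrying the BIP 141 witness commitment."""
    commitment = sha256d(witness_merkle_root + witness_reserved_value)
    # OP_RETURN (0x6a), push of 36 bytes (0x24), 4-byte tag 0xaa21a9ed, then commitment
    return b"\x6a\x24\xaa\x21\xa9\xed" + commitment

# hypothetical 32-byte witness merkle root, for illustration only
script = witness_commitment_script(b"\x11" * 32)
print(script.hex())
```

Whether this placement is a "kludge" or a deliberate compatibility choice is exactly what the two posters disagree about.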

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
gmaxwell (Moderator, Legendary, Activity: 2338)
April 14, 2016, 06:12:33 AM  #6

Quote
Is placing the Merkle root for the block's witness data in a transaction, rather than in the block's header, not a quintessential example of a "kludge" or a "hack"?
The block header would emphatically _not_ be the right place to put it. Increasing the header size would drive up costs for applications of the system, including ones which do not need the witness data. No one proposing a hardfork is proposing header changes either, so even if it were, that's a false comparison. (But I wish you luck in a campaign to consign much of the existing hardware to the scrap heap, if you'd like to try!)

Quote
What about the accounting trick needed to permit more bytes of transactional data per block without a hard-forking change?  This hack doesn't just add technical debt, it adversely affects the economics of the system itself.
Fixing the _incorrect costing_ of limiting based on a particular serialization's size -- one which ignores the fact that signatures are immediately prunable while UTXOs are perpetual state that is far more costly to the system -- was an intentional and mandatory part of the design. Prior to segwit, the discussion coming out of Montreal was that some larger sizes could potentially be safe if there were a way to achieve them without worsening the UTXO bloat problem.

In a totally new system that costing would likely have been done slightly differently, e.g. also reducing the cost of the vin txids and indexes relative to the txouts; but it's unclear that this would have been materially better, and whatever it was would likely have been considerably more complex (look at the complex table of costs in the Ethereum greenfield system), likely with little benefit.

Again we also find that your technical understanding is lacking. No "discount" is required to get a capacity increase. It could have been constructed as two linear constraints: a 1MB limit on the non-witness data and a 2MB limit on the composite. Multiple constraints are less desirable because the multiple dimensions potentially make cost calculation depend on the mixture of demands, but they would be perfectly functional. The discounting itself is not driven by anything related to capacity; it is simply a better approximation of the system's long-term operating costs. By accounting for the operating costs more accurately, the amount of headroom needed for safety is reduced.
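The two formulations described above can be sketched side by side. This is an illustrative toy, not consensus code; the weight formula follows BIP 141, while the two-constraint rule is the hypothetical alternative named in the post:

```python
# Hypothetical alternative: two linear constraints on block composition.
BASE_LIMIT = 1_000_000       # bytes of non-witness data
COMPOSITE_LIMIT = 2_000_000  # bytes of non-witness + witness data
# Actual BIP 141 rule: a single "weight" metric with one limit.
WEIGHT_LIMIT = 4_000_000     # weight units

def weight(base_size: int, witness_size: int) -> int:
    # BIP 141: non-witness bytes count 4x, witness bytes count 1x
    return 4 * base_size + witness_size

def ok_single_metric(base_size: int, witness_size: int) -> bool:
    return weight(base_size, witness_size) <= WEIGHT_LIMIT

def ok_two_constraints(base_size: int, witness_size: int) -> bool:
    return (base_size <= BASE_LIMIT
            and base_size + witness_size <= COMPOSITE_LIMIT)

# A block with 800 kB of non-witness data and 900 kB of witness data
# passes the two-constraint rule but exceeds the single weight limit
# (weight = 4*800,000 + 900,000 = 4,100,000), showing that the two
# formulations bound slightly different block shapes.
print(ok_two_constraints(800_000, 900_000))  # True
print(ok_single_metric(800_000, 900_000))    # False
```

The single metric prices every byte by its long-term cost to the system; the two-constraint version caps each resource independently, which is why cost calculation under it would depend on the mixture of demand.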

Bitcoin will not be compromised