Bitcoin Forum
November 11, 2024, 05:06:49 AM *
News: Latest Bitcoin Core release: 28.0 [Torrent]
 
Pages: « 1 2 [3]  All
Author Topic: The MOST Important Change to Bitcoin  (Read 17253 times)
lachesis
Full Member
***
Offline

Activity: 210
Merit: 105


View Profile
July 28, 2010, 06:15:56 AM
 #41

Generally a compact custom format works better in terms of bandwidth and disk usage, but I do see some advantages to something like this.  I am curious how it handles forward compatibility when a new field or some aspect is added that wasn't accounted for in an earlier specification.  That does become a big deal, and I'll admit that XML and similar data protocols tend not to break as easily as rigid custom protocols do.
By the same logic, C is faster than Python. However, I've run across a few Python programs that were orders of magnitude faster than similar C programs. How? The C programs were sloppy and Python's standard library functions are increasingly well-optimized.

As an example, the CAddress serialization in Bitcoin, passed around in at least the addr and version messages, is 26 bytes. That's 8 bytes for the nServices field (a uint64, always 1 on my system), 12 bytes marked "reserved", and the standard 6 bytes: 4 for the IP address and 2 for the port number. While I agree that building in support for IPv6 and other services (or whatever nServices is for) is a great idea, I also think it's a bit wasteful to use 26 bytes for what can currently be encoded in 6. With protocol buffers, this would be smaller now yet retain the ability to extend in the future.
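The byte counts being compared can be sketched in Python. This is a hypothetical illustration of the sizes discussed above, not Bitcoin's actual serialization code; the exact field order and endianness here are assumptions made for the example.

```python
import struct

# Fixed-width layout as described: 8-byte nServices + 12 reserved bytes
# + 4-byte IPv4 address + 2-byte port = 26 bytes total.
def serialize_caddress(services: int, ip_bytes: bytes, port: int) -> bytes:
    return struct.pack("<Q", services) + b"\x00" * 12 + ip_bytes + struct.pack(">H", port)

# Minimal layout carrying only the fields currently in use: IPv4 + port = 6 bytes.
def serialize_minimal(ip_bytes: bytes, port: int) -> bytes:
    return ip_bytes + struct.pack(">H", port)

addr = bytes([192, 168, 0, 1])
full = serialize_caddress(1, addr, 8333)
small = serialize_minimal(addr, 8333)
print(len(full), len(small))  # 26 6
```

The point is simply that more than three quarters of the fixed-width encoding is padding for fields that currently carry no information.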

I agree that encryption and compression are a bit harder to take into account, but at the very least they could be layered on top of the protocol buffers. Build the byte string with something extensible, then encrypt/compress it and wrap it in a tiny header that says that you did that. You lose 2 or 4 bytes for the trouble, but you gain the ability to change the format down the road. Either that, or encrypt only certain protocol buffer fields and put the serialization on top of the encryption.
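The layering idea can be sketched as follows. This is a hypothetical Python illustration using zlib for the compression case; the 1-byte header format is invented for this example, exactly the "tiny header that says you did that" described above.

```python
import zlib

# Invented 1-byte header values for this sketch.
RAW, COMPRESSED = 0, 1

def wrap(payload: bytes) -> bytes:
    # Serialize first (payload is any byte string), then compress,
    # keeping the compressed form only if it actually saves space.
    packed = zlib.compress(payload)
    if len(packed) < len(payload):
        return bytes([COMPRESSED]) + packed
    return bytes([RAW]) + payload

def unwrap(message: bytes) -> bytes:
    kind, body = message[0], message[1:]
    return zlib.decompress(body) if kind == COMPRESSED else body

msg = b"addr" * 100          # highly repetitive, so it compresses well
assert unwrap(wrap(msg)) == msg
```

The one header byte buys the ability to add new encodings (or encryption) later without breaking old readers, which is the tradeoff being argued for.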

Bitcoin Calculator | Scallion | GPG Key | WoT Rating | 1QGacAtYA7E8V3BAiM7sgvLg7PZHk5WnYc
RHorning
Full Member
***
Offline

Activity: 224
Merit: 141


View Profile
July 28, 2010, 04:32:15 PM
 #42

Generally a compact custom format works better in terms of bandwidth and disk usage, but I do see some advantages to something like this.  I am curious how it handles forward compatibility when a new field or some aspect is added that wasn't accounted for in an earlier specification.  That does become a big deal, and I'll admit that XML and similar data protocols tend not to break as easily as rigid custom protocols do.
By the same logic, C is faster than Python. However, I've run across a few Python programs that were orders of magnitude faster than similar C programs. How? The C programs were sloppy and Python's standard library functions are increasingly well-optimized.

It is a myth that C is the most optimized programming language and the one that produces the most efficient software.  If you want to get into language-bashing wars, I'm all for it, and C would be one of my first targets.  For myself, I prefer Object Pascal, though some of that is habit and intimate knowledge of the fine points of the compilers for that language.  I have issued a challenge to any developer to compare binaries for similar implementations, and also to note compilation speeds on larger (>100k lines of code) projects; most C compilers lose on nearly every metric.  But enough of that diversion.

A custom specification that doesn't rely upon a protocol framework is almost always going to be faster, but it is also much more fragile in terms of future changes and debugging.  It doesn't have to be fragile in terms of forward compatibility, but you have to be very careful about how the protocol is implemented to achieve that.  A formal protocol framework helps you avoid those kinds of problems with an existing structure, but it does add overhead to the implementation.

I don't understand what you're saying about speed; protocol buffers were designed by Google to satisfy three requirements:

My complaint here is using XML as the yardstick of efficiency.  It is hardly the most efficient data format, but it does offer advantages, and the typical application that uses XML doesn't necessarily need hyper-efficient data formatting either.  I'm not disputing what Google has done here: it provides a sort of XML-like data protocol framework with increased efficiency, including CPU speed.  It just doesn't solve every problem for everybody, and it shouldn't be viewed as the ultimate solution for programming either.

I guess I'm sort of comparing this to programming in a high-level language like C or Python versus assembly language.  Assembly programming is far and away more efficient, but it takes some skill to program that way.  It is also something very poorly taught (if taught at all) in most computer programming courses of study.
martin
Full Member
***
Offline

Activity: 150
Merit: 100



View Profile WWW
July 29, 2010, 01:34:31 PM
 #43


I don't understand what you're saying about speed; protocol buffers were designed by Google to satisfy three requirements:

My complaint here is using XML as the yardstick of efficiency.  It is hardly the most efficient data format, but it does offer advantages, and the typical application that uses XML doesn't necessarily need hyper-efficient data formatting either.  I'm not disputing what Google has done here: it provides a sort of XML-like data protocol framework with increased efficiency, including CPU speed.  It just doesn't solve every problem for everybody, and it shouldn't be viewed as the ultimate solution for programming either.

I guess I'm sort of comparing this to programming in a high-level language like C or Python versus assembly language.  Assembly programming is far and away more efficient, but it takes some skill to program that way.  It is also something very poorly taught (if taught at all) in most computer programming courses of study.

I wouldn't have used XML as an indicator of performance myself; however, those are the only numbers presented on the protobuf website. I might throw together a test using the C# implementation of protocol buffers to get some numbers for protocol buffers vs. hardcoded binary packets. I suspect speed isn't *really* a problem here, though: the serialisation time is on the scale of hundreds of nanoseconds.

As for size, I suspect protocol buffers are going to be smaller than a handwritten packet layout by satoshi, for a couple of reasons:
1) Bitcoin includes reserved fields for forwards compatibility; protocol buffers don't need them.
2) Protocol buffers include things like variable-length encoding, which would be a silly micro-optimisation for Satoshi to implement by hand but comes for free with protocol buffers (and can significantly decrease the size of a packet).
3) Even if packets did get a couple of bytes bigger (I suspect they won't), gaining cross-language compatibility, a standardised packet layout, significant ease of use in code AND forwards compatibility would be a *very* good tradeoff.
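To illustrate the variable-length encoding mentioned in point 2, here is a minimal sketch of a protobuf-style base-128 varint, written for this example rather than taken from any protobuf library. A small value like nServices = 1 encodes in a single byte instead of a fixed eight.

```python
# Base-128 varint: each byte carries 7 payload bits; the high bit
# signals whether more bytes follow.
def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    result = shift = 0
    for b in data:
        result |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            break
    return result

assert len(encode_varint(1)) == 1     # vs 8 fixed bytes for a uint64
assert decode_varint(encode_varint(300)) == 300
```

This is exactly why small field values cost almost nothing on the wire while large values remain representable.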
kryptqnick
Legendary
*
Offline

Activity: 3276
Merit: 1403




View Profile
December 08, 2016, 08:04:30 PM
 #44

That was an interesting topic, so I decided to bring it back to the first page. What could be changed? I think the total quantity of BTC. It is true that 21 million is too small if we talk about bitcoin becoming very popular and global. The price will increase because of the limited supply, and then 20 000 satoshi per transaction may become too much. Eventually there just won't be enough satoshi to go around, and there is nothing smaller than a satoshi right now.
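For scale, the numbers behind this concern are easy to check. The 21 million coin cap and the 10^8 satoshi-per-BTC ratio are the actual protocol constants; the rest is simple arithmetic.

```python
# Total supply expressed in the smallest indivisible unit.
BTC_CAP = 21_000_000           # maximum number of BTC that will ever exist
SATOSHI_PER_BTC = 100_000_000  # 1 BTC = 10^8 satoshi

total_satoshi = BTC_CAP * SATOSHI_PER_BTC
print(total_satoshi)  # 2100000000000000 -- about 2.1 quadrillion units
```

So the supply in satoshi is roughly 2.1 quadrillion units; whether that granularity suffices is what the post above is questioning.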

WhiteSkinnedFREAK
Member
**
Offline

Activity: 67
Merit: 10


View Profile
December 09, 2016, 01:24:59 AM
 #45

I'm really not sure, to be honest.

We need to really keep it decentralised, though.
burner2014
Hero Member
*****
Offline

Activity: 952
Merit: 515


View Profile
December 09, 2016, 05:07:54 AM
 #46

I don't think there is any need to change it; I trust the one who made it, and I know that he studied it deeply. For me it is great. But I believe there is still room to improve, like faster transaction confirmation and more advertisement. Overall, though, it is good.
stark101
Newbie
*
Offline

Activity: 42
Merit: 0


View Profile
December 11, 2016, 02:47:45 PM
 #47

The most important change to bitcoin? Hmm, I think there is nothing to change at all. In my opinion its development is already heading in the right direction. Rather, bitcoin changes the world for the better: the internet brought us free distribution of information, and bitcoin brings us free distribution of commerce.
calkob
Hero Member
*****
Offline

Activity: 1106
Merit: 521


View Profile
December 11, 2016, 03:42:24 PM
 #48

I would say that rewarding non-mining nodes a small share of the block reward would have been a great idea, but it probably wasn't a need that could have been foreseen. I think I am right in saying that Satoshi never foresaw ASIC chips and the massive mining farms in China; I think he assumed that every node would be mining and would thus eventually be rewarded.