Bitcoin Forum
Author Topic: What would you change about the Bitcoin protocol?  (Read 12854 times)
oskar (OP)
June 07, 2011, 11:27:43 PM
 #81

Wow, I didn't expect to see my old thread when I browsed this forum today. To be honest, I took a pretty long break from bitcoin, and only recently began thinking about it and reading the code again. In fact, Sunday was the first day I even ran the client! That being said, allow me to do my best to summarize most ideas shared in this thread.

Keep in mind that this thread was not just about ideas that could potentially make it to a future bitcoin release, but ideas that could be used in an entirely new protocol. Everyone will benefit from competition in the cryptocurrency market.

1. Data Serialization: Vladimir suggests using BERT/YAML for binary and text serialization, and comboy suggests JSON for the latter.

(I'm putting this first because even though it's on page 2, I believe it's the biggest flaw in the current Bitcoin protocol, and a reason for many other problems listed here. Bitcoin is largely a binary-encoded protocol, so the composition of packets is set in stone. A more flexible serialization format like the aforementioned ones (or better yet, protocol buffers!) would allow the addition of new fields that older clients could safely ignore. All this talk about client versions, encryption algorithms, timestamp lengths, and more, would be much less of a worry, so I'll include them as 1a through 1d.)

1a. Client Handshakes: Cdecker and realnowhereman suggest making it possible to specify the client version separately from the protocol version. grondilu suggests allowing a client to specify what algorithm they use for their key. realnowhereman also suggests flags to say whether a node is a generator, whether it accepts transactions, etc.

1b. Integer Lengths: realnowhereman suggests cleaning up the arbitrary use of 64-bit and 32-bit integer lengths; for example, having timestamps be 64-bit consistently.

1c. Hostnames: just_someguy suggests supporting host names alongside IP addresses.

1d. Transaction Scripts: realnowhereman and alkor suggest getting rid of the complexity of the custom scripting language, and implementing a simpler system.

2. Byte Order: error, Cdecker, and realnowhereman suggest standardizing on big endian, to be consistent with the native byte order for network addresses and hashes, and to match some mobile platforms. Others, including me, jgarzik, and xf2_org, suggest little endian to match the most common computing architectures. I'm not entirely sure which side I come down on now.

3. Coin Divisibility: genjix suggests using INT128 to allow Bitcoins to be more divisible. realnowhereman suggests a fixed-point base 2 integer. Luke-Jr suggests a varint fraction that specifies the numerator and denominator separately. ribuck suggests he give it a rest please =)

4. Block Size Limit: caveden suggests having the protocol automatically adjust the block size limit according to the transaction rate.

5. Block Discovery Interval: dirtyfilthy and comboy suggest lowering the expected block discovery interval, to take into account the fact that networks speed up over time.

6. Hashing Algorithm: Vladimir and comboy suggest using multiple hashing algorithms for block generation. realnowhereman suggests the opposite: use a trivial CRC to make it quick for mobile devices to verify blocks. Difficulty could still be increased by increasing the number of 0-bits required in the beginning of the hash, or even increasing the number of iterations required as I suggested.

7. Block Downloads: realnowhereman and ByteCoin suggest making it possible to download recent blocks first.

8. Misc: To save space… =) realnowhereman suggests: (1) not specifying one's own IP, since such information could be inaccurate if behind a NAT; (2) getting rid of the unnecessary verack and use of RIPEMD; (3) a client that works in a multi-user environment; and (4) the ability to query a node for transactions it has queued.

Thanks for all the great ideas, everyone. I didn't have time to get a close look at Stevie1024's paper, so I'll do that later. I haven't decided whether I personally want to work on a Bitcoin client or an entirely new protocol, but I really hope either way we end up with a better, more robust cryptocurrency system in the future.
John Tobey
June 08, 2011, 06:58:59 PM
 #82

oskar, thanks for posting your summary just as I stumbled upon the thread!  Good thing I started reading it at the end.  More suggestions:

Alternative chains

I would like to make it easy for distinct proof-of-work chains to share hashing power along the lines Mike proposed here: https://en.bitcoin.it/wiki/Alternative_Chains

I haven't completely thought through the details, but I think the "headers" message (or a successor) should support an "indirect" header in addition to the standard kind.  The indirect header would come with a standard header from another ("main") chain.  This "main" header would hash to a value below the target in the alternative chain, and the indirect header message would include:

  • the main header's coinbase transaction, whose script would include a Merkle root of headers from different chains
  • the Merkle branch anchoring the coinbase transaction to the main header
  • another Merkle branch anchoring the alternative chain's header to the root contained in the coinbase script
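To make this concrete, here is a rough sketch of the fields such an indirect header message might carry. All names are my own invention for illustration; nothing here is an existing message format:

Code:
#include <array>
#include <cstdint>
#include <vector>

using uint256 = std::array<uint8_t, 32>;

// Simplified standard block header.
struct BlockHeader {
    int32_t  nVersion;
    uint256  hashPrevBlock;
    uint256  hashMerkleRoot;
    uint32_t nTime;
    uint32_t nBits;
    uint32_t nNonce;
};

// Hypothetical "indirect" header: proof that a main-chain block commits, via its
// coinbase script, to a header of this alternative chain.
struct IndirectHeader {
    BlockHeader          mainHeader;      // main-chain header hashing below the alt chain's target
    std::vector<uint8_t> coinbaseTx;      // the main header's coinbase transaction (serialized)
    std::vector<uint256> coinbaseBranch;  // Merkle branch: coinbase tx -> mainHeader.hashMerkleRoot
    std::vector<uint256> chainBranch;     // Merkle branch: alt header -> root embedded in the coinbase script
    BlockHeader          altHeader;       // the alternative chain's own header
};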

This would avoid the need to fragment hashing power as is happening between Bitcoin and Namecoin, and it would encourage a proliferation of experimental currencies and other applications, which would be a Good Thing.

The original BTC block chain would not necessarily have to accept indirect headers, though it would be nice to have it share logic with chains that do.

Pluggable policies

Subtle differences in the block acceptance rules in use among miners threaten chain unity.  When (not if) a block appears that some, but not all, miners accept, confusion will ensue.  I would suggest a protocol field encoding the block acceptance rules.  It could be, for example, a hash of a Forth or C++ program.  Miners could obtain and compile (or interpret) the program when they notice they are hashing on the losing side of a chain split.  Obviously, they'd want to restrict this feature to avoid malware, so C++ is probably not the best choice.

I call it "pluggable policies" because I imagine changing the client to support loading and using multiple block acceptance policies for different chains.  This is not strictly a protocol change.

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
martin
June 08, 2011, 07:39:46 PM
 #83

Quote from: oskar on June 07, 2011, 11:27:43 PM
1. Data Serialization: Vladimir suggests using BERT/YAML for binary and text serialization, and comboy suggests JSON for the latter.

(I'm putting this first because even though it's on page 2, I believe it's the biggest flaw in the current Bitcoin protocol, and a reason for many other problems listed here. Bitcoin is largely a binary-encoded protocol, so the composition of packets is set in stone. A more flexible serialization format like the aforementioned ones (or better yet, protocol buffers!) would allow the addition of new fields that older clients could safely ignore. All this talk about client versions, encryption algorithms, timestamp lengths, and more, would be much less of a worry, so I'll include them as 1a through 1d.)

I've made the suggestion of protocol buffers before; I still consider a non-hand-crafted binary serialisation protocol almost vital for Bitcoin to really take off.

Satoshi wasn't a fan of the idea; I wonder what the new development team thinks?
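The property I'm after is the one protocol buffers get from tag/length-prefixed fields: a decoder simply skips tags it doesn't recognise. A toy illustration of that skipping behaviour (this is neither the Bitcoin wire format nor real protobuf):

Code:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// One decoded field of a toy tag-length-value encoding.
struct Field { uint8_t tag; std::vector<uint8_t> value; };

// Decode only the tags we understand; unknown tags are skipped instead of
// breaking the parser, so newer peers can add fields without a version bump.
std::vector<Field> DecodeKnown(const std::vector<uint8_t>& buf,
                               const std::vector<uint8_t>& knownTags) {
    std::vector<Field> out;
    std::size_t pos = 0;
    while (pos + 2 <= buf.size()) {
        uint8_t tag = buf[pos];
        uint8_t len = buf[pos + 1];
        if (pos + 2 + len > buf.size()) break;  // truncated; stop parsing
        if (std::find(knownTags.begin(), knownTags.end(), tag) != knownTags.end()) {
            Field f;
            f.tag = tag;
            f.value.assign(buf.begin() + pos + 2, buf.begin() + pos + 2 + len);
            out.push_back(std::move(f));
        }
        pos += 2 + len;  // unknown tags fall through here and are silently skipped
    }
    return out;
}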
xf2_org
June 08, 2011, 08:38:43 PM
 #84

Quote from: martin on June 08, 2011, 07:39:46 PM
I've made the suggestion of protocol buffers before; I still consider a non-hand-crafted binary serialisation protocol almost vital for Bitcoin to really take off.

Satoshi wasn't a fan of the idea; I wonder what the new development team thinks?

Well, if we are talking about a do-over, I'm pretty sure satoshi said something like "if I had to start from scratch, I'd use Google Protocol Buffers" and I agree with that.

On a slightly different subject, I would use RSASSA-PSS instead of ECDSA for keypairs.

lizthegrey
June 08, 2011, 10:20:39 PM
 #85

Quote from: xf2_org on June 08, 2011, 08:38:43 PM
Well, if we are talking about a do-over, I'm pretty sure satoshi said something like "if I had to start from scratch, I'd use Google Protocol Buffers" and I agree with that.
Is there a reason a protocol buffers based protocol cannot be implemented as an additional RPC mechanism and the old mechanism phased out?
xf2_org
June 08, 2011, 10:33:16 PM
 #86

Quote from: lizthegrey on June 08, 2011, 10:20:39 PM
Quote from: xf2_org on June 08, 2011, 08:38:43 PM
Well, if we are talking about a do-over, I'm pretty sure satoshi said something like "if I had to start from scratch, I'd use Google Protocol Buffers" and I agree with that.
Is there a reason a protocol buffers based protocol cannot be implemented as an additional RPC mechanism and the old mechanism phased out?

Yes, plain ole backwards compatibility.  We have never broken compatibility with older clients, and I'm not rushing to start now.

And if you aren't breaking backwards compat, that implies forever supporting two P2P protocols instead of just one.

Cdecker
June 08, 2011, 11:00:33 PM
 #87

Changes to message formatting and byte ordering have a lasting effect on the hashes that are used throughout the network. So everything that ends up being hashed is set in stone, unless we want to run dual-stack for a really long transition period (until the last input from the old protocol has been spent, which will probably never happen due to lost wallets...). Having to adhere to the existing protocol for large parts makes other changes pretty useless, as sad as it sounds...

What I would like to do is add some structure to the network in order to reduce message complexity (think a DHT for transaction inputs instead of everybody tracking everything), detect network partitions, and build a more hierarchical network topology (miners in the center, lightweight clients at the edge).

realnowhereman
June 08, 2011, 11:22:17 PM
 #88

Quote from: oskar on June 07, 2011, 11:27:43 PM
Wow, I didn't expect to see my old thread when I browsed this forum today. To be honest, I took a pretty long break from bitcoin, and only recently began thinking about it and reading the code again. In fact, Sunday was the first day I even ran the client! That being said, allow me to do my best to summarize most ideas shared in this thread.

Excellent summary.  With one minor nit:

Quote from: oskar on June 07, 2011, 11:27:43 PM
6. Hashing Algorithm: Vladimir and comboy suggest using multiple hashing algorithms for block generation. realnowhereman suggests the opposite: use a trivial CRC to make it quick for mobile devices to verify blocks. Difficulty could still be increased by increasing the number of 0-bits required in the beginning of the hash, or even increasing the number of iterations required as I suggested.

I certainly don't want to weaken the algorithm for the block hash (although I have my doubts that it needed to be double SHA256, the fact that blocks need to be mined already means that this is irrelevant, other than making it slightly more computationally intensive to verify a block).

What I was suggesting was a considerably simpler algorithm for the payload checksum in network messages.  At present it's the first four bytes of the double SHA256 of the message payload.  This is complete overkill.  I'm not entirely convinced that any checksum is needed at all -- checksumming should be (and is) handled at a lower level in the network stack (Ethernet, TCP, PPP, etc. all include checksums in their packets already, so why checksum again?).

Think about this: when was the last time you saw a single byte received wrong in a web page you were looking at?  Well, HTTP includes no checksums and relies on the TCP checksum entirely.  Seems to work okay.
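For reference, the check I'm calling overkill is small enough to state in full; a sketch using OpenSSL's SHA256 (the real client uses its own hashing helpers, so treat this as illustrative):

Code:
#include <openssl/sha.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Bitcoin's network message checksum: the first 4 bytes of SHA256(SHA256(payload)).
uint32_t MessageChecksum(const std::vector<uint8_t>& payload) {
    uint8_t h1[SHA256_DIGEST_LENGTH];
    uint8_t h2[SHA256_DIGEST_LENGTH];
    SHA256(payload.data(), payload.size(), h1);    // first round
    SHA256(h1, sizeof(h1), h2);                    // second round
    uint32_t checksum;
    std::memcpy(&checksum, h2, sizeof(checksum));  // take the leading 4 bytes
    return checksum;
}

A CRC32 over the same bytes (or nothing at all, leaning on TCP) would do the framing job far more cheaply.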

Cdecker
June 08, 2011, 11:33:35 PM
 #89

Checksumming and the message magic bytes seem redundant; that's what Mike and I were thinking. Probably Satoshi was hunting down a bug in his code and introduced them. I love the magic bytes, however, since they enabled me to track down a bug in my protocol implementation (non-blocking IO and not yet completely filled read buffers Cheesy). A simple CRC should be enough for checksumming, if we want a checksum at all. On the other hand, it being a non-trusted network where any client may bomb you with anything, the checksum is not needed at all Cheesy

realnowhereman
June 09, 2011, 06:44:56 AM
 #90

Quote from: Cdecker on June 08, 2011, 11:33:35 PM
Checksumming and the message magic bytes seem redundant; that's what Mike and I were thinking. Probably Satoshi was hunting down a bug in his code and introduced them. I love the magic bytes, however, since they enabled me to track down a bug in my protocol implementation (non-blocking IO and not yet completely filled read buffers Cheesy). A simple CRC should be enough for checksumming, if we want a checksum at all. On the other hand, it being a non-trusted network where any client may bomb you with anything, the checksum is not needed at all Cheesy

The magic bytes and payload length are important. They allow you to stay in sync when you are sent unsupported messages, and to recover if you are sent junk (I'm sure I read that the official client does send junk sometimes).
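A sketch of the resynchronisation I mean: throw bytes away until the receive buffer starts with the network magic again (0xF9 0xBE 0xB4 0xD9 on the main network), then resume parsing headers:

Code:
#include <cstdint>
#include <deque>

// Main network message start bytes.
static const uint8_t MAGIC[4] = {0xF9, 0xBE, 0xB4, 0xD9};

// Drop bytes from the front of the receive buffer until it begins with the magic,
// so the parser regains message framing after junk or an unsupported message.
void Resync(std::deque<uint8_t>& rx) {
    while (rx.size() >= 4) {
        if (rx[0] == MAGIC[0] && rx[1] == MAGIC[1] &&
            rx[2] == MAGIC[2] && rx[3] == MAGIC[3])
            return;          // aligned on a message boundary
        rx.pop_front();      // discard one byte and try again
    }
}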

Stevie1024
June 09, 2011, 08:24:46 AM
 #91

Higher transaction fees lead to slower transactions:
That's right, but it is only a problem if a large majority of nodes do that; and some will:

There's an incentive to do so, and a good protocol should take it into account, I think.

Since transactions are spread all over the network, only the transaction hashes are transferred in blocks. Nodes are supposed to already have the transactions in their inventory; if they don't, they won't validate the block until they have received them all. Slow block propagation is bad for every node, but the block of someone who withheld some transactions will spread across the network more slowly than the others and could lose the race.

(There is also a mechanism to increase the fee of an already-emitted transaction, which could be used to "reward" nodes holding it, if the client sees that the transaction has been spread correctly.)

Your way of solving that problem is good, but each transaction would need to access more accounts, which has a cost in disk I/O. Also, I use a one-time signature scheme (using hash chains) which can't do that: an emitted transaction can't be modified.

Enforced limits are not optimal and not truly decentralized:
I haven't put in limits; limits signal doubts about the protocol's capacity.

Unmanageable storage requirements:
Huge storage needs also mean huge bandwidth needs. By reducing them, we increase the number of nodes and the strength of the network, and reduce transaction fees.

The final result is similar (transactions are only stored for three months; everything is kept in a list containing all accounts). I differentiate accounts and addresses (address = public key, used to create an account). Very old accounts without activity also pay a small fee each year (these will mostly be accounts of users who have lost their private key). How to deal with 'old accounts' (or the growing number of accounts) is definitely one thing I don't have a proper solution for.

The way I manage accounts and transactions is different, but I also think storage is a problem, since it is the easiest way for someone to attack the network. For example, without being paranoid, someone could spend $1 million on transactions crafted to take up as much permanent storage in the chain as possible, which could cause serious problems for nodes. And it could be far more; mass storage capacity is part of the network's security.

(This assumes the network is a real p2p network and that nodes have an average user's disk storage. If there were only a few hundred nodes, maybe they could handle it, but the currency would not be "p2p" anymore.)

No unnecessary difficulty:
I have kept the way Bitcoin manages the block race, but with some optimizations and more competition between the nodes that spread the block.

Proof-of-work as 'currency':
The way b$ is designed, I could not handle a non-fixed number of units. Instead, I have made the reward proportional to the target of the block.

I am very curious about what you're developing and the solutions you have. Since most of the above seems to involve your protocol, I will respond on your b$ site or in a pm. One general remark: I don't think trying to develop a new protocol on your own is a good idea. I hope there will be an initiative quickly, in which more developers will assemble and start writing together.

Stevie1024
June 09, 2011, 08:31:27 AM
 #92

Quote from: oskar on June 07, 2011, 11:27:43 PM
Thanks for all the great ideas, everyone. I didn't have time to get a close look at Stevie1024's paper, so I'll do that later. I haven't decided whether I personally want to work on a Bitcoin client or an entirely new protocol, but I really hope either way we end up with a better, more robust cryptocurrency system in the future.

I hope you do, and that you will agree. I also hope you will decide it's time for a new, sound protocol; for me the decision is obvious. And I hope there are more developers thinking the same way, because I'm quite sure I can't get anywhere alone.



realnowhereman
June 09, 2011, 08:39:29 AM
 #93

Quote from: Stevie1024 on June 09, 2011, 08:31:27 AM
Quote from: oskar on June 07, 2011, 11:27:43 PM
Thanks for all the great ideas, everyone. I didn't have time to get a close look at Stevie1024's paper, so I'll do that later. I haven't decided whether I personally want to work on a Bitcoin client or an entirely new protocol, but I really hope either way we end up with a better, more robust cryptocurrency system in the future.
I hope you do, and that you will agree. I also hope you will decide it's time for a new, sound protocol; for me the decision is obvious. And I hope there are more developers thinking the same way, because I'm quite sure I can't get anywhere alone.

Personally, I'm more in favour of evolution than revolution.

I'd first like to get a more programmer-friendly client out there (which is what I'm working on, slowly).  Hopefully that will make it easier to run experimental chains such as Stevie1024 would like; easier to integrate with websites; easier to write alternative GUIs; and easier to store keys in whatever format is suitable for your business (I'm envisaging keys stored in SQL databases for large web stores).

However, while I see plenty of faults in the bitcoin implementation (as I listed above), the technology itself is sound -- let's not kid ourselves that the endianness of the bytes, or the use of binary instead of YAML, affects the operation of bitcoin in any way at all; they are mechanism rather than policy.

The current bitcoin protocol is what it is.  Network effects mean it's not worth making a huge effort just to add one extra byte of precision to a timestamp while making old clients incompatible.

Stevie1024
June 09, 2011, 07:08:41 PM
 #94

Quote from: Mike Hearn
What some people, especially Satoshi, have said is that there's an unusual amount of risk involved with reimplementing the full system and using that reimplementation to mine. Bitcoin is very complex and if you aren't skilled and very thorough you are likely to diverge from its behavior in small, hard to detect ways. This can fork the chain and split the economy. It's one of the few things that could instantly kill Bitcoin beyond legal harassment of its users.

Wow, Mike, are you still sure about this then?

Quote from: Mike Hearn
Removing script and simplifying the transaction format is superficially attractive, but would eliminate a lot of useful features that are helpful for a non-trust based financial system. The script language isn't really that hard to implement or review. Bitcoin is still millions of times simpler than, say, a web browser. My gut feeling is that Satoshi made the right flexibility:complexity tradeoff here.

Mike Hearn
June 09, 2011, 07:15:08 PM
 #95

The scripting language is fairly trivial; it's not what makes Bitcoin complicated. The complexity comes from things like ensuring you correctly implement all the verification steps, time handling, dealing with re-orgs, detecting dead transactions, being able to catch up given only the chain and some keys, etc.

OP_DROP and friends are a small amount of code that's easy to unit test. It's also not required to run scripts at all for lightweight/SPV clients.
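To show just how little code is involved, here is roughly what the interpreter cases for a few of these stack operations look like (a simplified sketch; the real EvalScript also enforces script-size and opcode-count limits):

Code:
#include <cstdint>
#include <utility>
#include <vector>

using valtype = std::vector<uint8_t>;

// Simplified interpreter cases for a few simple stack opcodes.
bool ExecStackOp(uint8_t opcode, std::vector<valtype>& stack) {
    enum { OP_DROP = 0x75, OP_DUP = 0x76, OP_SWAP = 0x7c };
    switch (opcode) {
    case OP_DROP:                      // (x -- )
        if (stack.empty()) return false;
        stack.pop_back();
        return true;
    case OP_DUP:                       // (x -- x x)
        if (stack.empty()) return false;
        stack.push_back(stack.back());
        return true;
    case OP_SWAP:                      // (x1 x2 -- x2 x1)
        if (stack.size() < 2) return false;
        std::swap(stack[stack.size() - 1], stack[stack.size() - 2]);
        return true;
    default:
        return false;                  // other opcodes handled elsewhere
    }
}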
cunicula
June 21, 2011, 12:34:33 AM
 #96

Response to OP:

The protocol needs a financial system. I would issue a large fraction of coins as contingent claims on future difficulty states rather than purely in an instantly maturing form. This system would make it easier for entities to realize profits from actions that influence future generation difficulty (e.g. adoption by large businesses). It would also reduce the costs of hedging possibly significant (but not catastrophic) changes in the price level. In the current system, rewards go mainly to early adopters (essentially rent-seekers). I don't care about the distributional issues, but as an incentive system the current coin generation protocol is wasteful.
steelhouse
June 21, 2011, 09:21:17 PM
 #97

1. A transaction fee with a difficulty such that block growth is capped at 1 MB per day.
2. An auto-clean function on January 1st of each year to compress blocks so that the only information kept is the owner of each BTC; block0002011.dat is then retired in favour of block0002012.dat.  Stop transactions on December 31st to completely verify the new block.
3. Might as well add Namecoin to it, with perhaps 1 MB of webspace.  Allow you to keep the webspace forever if you pay a deflation fee to the network.
4. Put BTC growth on steroids so that most BTC will be out in 4 years: halve every year instead of every 4.
5. Change to a CPU-based rather than a GPU-based problem.
smartcardguy
June 22, 2011, 03:03:24 AM
 #98

This is a great thread. I have just started looking into the details of Bitcoin, but it appears that the protocol is not designed with crypto agility in mind.

Given the nature of cryptography (all crypto systems eventually become obsolete), I would say that a cryptocurrency needs to be crypto-agile.

In other words, one should be able to move from one algorithm to another without revising the whole protocol. There are many ways to approach this problem; in the case of ECC, one might at a minimum plan for migrating from secp256k1 to other curves like secp256r1, secp384r1 and secp521r1.

Also, since there have been significant advances in cryptographic attacks against hashing algorithms (though the SHA-2s are likely safe for a very long time), having a conceptual model of how one would support SHA-3 at some point would be good as well.
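One common way to get that agility is to prefix every key and signature with an algorithm identifier, so new suites can be added without touching the surrounding message format. A hypothetical sketch (this is not how Bitcoin encodes keys today, and all names are invented):

Code:
#include <cstdint>
#include <vector>

// Hypothetical algorithm-agile key encoding: an identifier in front of the raw
// key material, so secp256k1 could later coexist with other curves or SHA-3
// based schemes without changing the envelope.
enum class SigAlg : uint8_t {
    ECDSA_SECP256K1 = 0x01,
    ECDSA_SECP256R1 = 0x02,
    ECDSA_SECP384R1 = 0x03,
    ECDSA_SECP521R1 = 0x04,
    // future schemes (e.g. SHA-3 based) get new identifiers
};

struct AgilePubKey {
    SigAlg               alg;
    std::vector<uint8_t> keyBytes;   // encoding depends on alg
};

// Verification dispatches on the identifier; unrecognised algorithms fail closed.
bool Verify(const AgilePubKey& key,
            const std::vector<uint8_t>& msgHash,
            const std::vector<uint8_t>& sig) {
    switch (key.alg) {
    case SigAlg::ECDSA_SECP256K1:
        // return VerifySecp256k1(key.keyBytes, msgHash, sig);  // backend omitted in this sketch
        return false;
    default:
        return false;  // unknown algorithm: reject rather than guess
    }
}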

Ryan
shads
June 22, 2011, 04:31:52 AM
 #99

The hard limit on the supply of bitcoins is the biggest problem IMHO.  The deflationary spiral threat is well documented so I won't repeat it, except to point out that it gives bitcoin far more the characteristics of an asset than of a currency and provides a disincentive for circulation.

Monetary policy should be internally regulated and dynamic.  I'd propose something along the lines of a feedback loop where the rate of coin generation is proportional to the incentive to hoard coins.  Thus if you have excessive deflation, more coins are produced; if you have inflation, fewer coins are produced.  The hard limit would disappear and the real-world value of the coins would stay relatively stable.  The difficulty is that you can't tie it to any externality like exchange rates, because then you need gateways to those externalities, which creates points of potential manipulation.

My first thought was to tie it to the inverse of the rate of circulation, which is easy for any node to calculate.  The more people are hoarding, the more new currency is added to the market, thus devaluing the hoarded currency, or at least finding an equilibrium and removing the hoarding incentive.  But that would need to be balanced with a disincentive to spam transactions, or the rate of circulation could be manipulated (and probably choke the network if the effort was big enough).  My next thought is to tie it to the number of unique coins that have been transacted in the block (or the average of the last n blocks) compared to the total generated.  Each coin can only be counted once no matter how many times it is transacted, thus preventing spamming with circular transactions to manipulate the rate.  Very large holders could still have an effect by ensuring their coins were moved at least once every block, so perhaps it could actually be a one-way test:

if circulation > threshold --> create normal number of new bitcoins
if circulation < threshold --> create normal number of new bitcoins + (threshold - circulation) * factor

Obviously not a fully fleshed out idea but hopefully greater minds can add to it...
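A minimal sketch of that one-way rule, with all constants and the circulation measure left as placeholders to be agreed on:

Code:
#include <cstdint>

// Hypothetical one-way feedback on the block subsidy: normal issuance when
// circulation is at or above the threshold, extra issuance when hoarding rises.
// Units and constants are illustrative only.
int64_t BlockSubsidy(double circulation,      // unique-coin fraction moved over the last n blocks
                     double threshold,        // agreed "healthy" circulation level
                     int64_t normalSubsidy,   // baseline new coins per block
                     double factor) {         // how strongly to respond to hoarding
    if (circulation >= threshold)
        return normalSubsidy;                 // circulation above threshold: normal issuance
    double shortfall = threshold - circulation;
    return normalSubsidy + static_cast<int64_t>(shortfall * factor);
}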

shads
June 22, 2011, 04:44:16 AM
 #100

A further thought: miners must have an incentive to process transactions (transaction fees) to counterbalance the incentive to hold them back in order to reduce apparent circulation and increase the rate of coin production.
