Bitcoin Forum
Topic: [RFD] All classes which use IMPLEMENT_SERIALIZE should check nVersion on reading
Jim Hyslop (OP)
Member
Activity: 98
Merit: 20
March 21, 2011, 12:33:18 AM
 #1

Sipa's recent patch to the wallet (for tracking partially spent coins) got me thinking about serialization and versioning issues.

Right now, few (if any) of the classes which use IMPLEMENT_SERIALIZE check the value of nVersion when reading. I propose that we should add that check to all classes, now.
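Roughly, what I have in mind is something like this. It's only a sketch: the class, field, and constant names are made up, and this is not an actual patch.

Code:
// Sketch only: hypothetical class, made-up field and constant names.
// The object stores its own version number, and refuses to load data
// written in a newer (possibly incompatible) format.
static const int EXAMPLE_CURRENT_VERSION = 1;

class CExample
{
public:
    int nVersion;
    int nValue;

    CExample() : nVersion(EXAMPLE_CURRENT_VERSION), nValue(0) {}

    IMPLEMENT_SERIALIZE
    (
        READWRITE(this->nVersion);
        if (fRead && this->nVersion > EXAMPLE_CURRENT_VERSION)
            throw std::runtime_error("CExample: data is from a newer client, please upgrade");
        READWRITE(nValue);
    )
};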

My reasoning is this: if we wait to introduce the version check until it's needed, then we have to either make the change backwards-compatible (and anyone who has maintained legacy software knows that backwards compatibility is an unrealistic goal in the long term), hope that older clients don't break, or hope that people upgrade quickly enough to take advantage of the new feature or fix.

We aren't a big corporation that can push out a new version whether people want it or not. Some people may wait months or even years before updating their clients. So if we make the clients check versions now, then when we do introduce a backwards-incompatible change, users can be immediately notified that they have to upgrade the client.

Which brings up a side issue of how to notify users. In the GUI, we can pop up a message box. But what about the daemon version? Is it sufficient to log a message and refuse to load the object?

Like my answer? Did I help? Tips gratefully accepted here: 1H6wM8Xj8GNrhqWBrnDugd8Vf3nAfZgMnq
jgarzik
Legendary
Activity: 1596
Merit: 1091
March 21, 2011, 12:40:05 AM
 #2

Well, the usage so far has been backwards compatible AFAIK.  You start out unversioned, and if a field is added, an nVersion check is also added.  If no nVersion check exists, that data structure is unchanged.
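For example, the usual shape is something like this (the field names and the version cutoff here are just illustrative):

Code:
IMPLEMENT_SERIALIZE
(
    READWRITE(nFoo);
    // nBar was added later; it is only serialized for data/peers
    // that are new enough to know about it
    if (nVersion >= 31402)
        READWRITE(nBar);
)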

The current system seems straightforward and is backwards compatible, so why add superfluous checks?


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
Jim Hyslop (OP)
Member
Activity: 98
Merit: 20
March 21, 2011, 04:44:44 AM
 #3

Quote
Well, the usage so far has been backwards compatible AFAIK. You start out unversioned, and if a field is added, an nVersion check is also added. If no nVersion check exists, that data structure is unchanged.
(Minor nitpick: nVersion already exists and is set to 1, but that's a trivial point.)

You're assuming the software will be updated at the same time as the data structure. That is a really bad assumption to make.
Think it through. Version 1 clients don't check version numbers. Now, what happens when a version 1 client tries to read version 2 data? It will assume that the data is version 1, so it will just blindly go ahead and try to read the data. That is a sure recipe for disaster down the road. Defensive coding 101 - validate everything.
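To make the failure mode concrete, here's a contrived, self-contained illustration (made-up layout, not actual client code):

Code:
// A "version 2" writer adds a 4-byte flags field after the amount.
// A reader that assumes the version 1 layout silently misreads the stream.
#include <stdint.h>
#include <iostream>
#include <sstream>

int main()
{
    std::stringstream s(std::ios::in | std::ios::out | std::ios::binary);

    // Version 2 writer: amount, then the newly added flags field.
    int64_t nAmount = 5000000000LL;  // 50 BTC in base units
    int32_t nFlags  = 7;             // field added in "version 2"
    s.write((const char*)&nAmount, sizeof(nAmount));
    s.write((const char*)&nFlags,  sizeof(nFlags));

    // Another record follows in the same stream.
    int64_t nNext = 123;
    s.write((const char*)&nNext, sizeof(nNext));

    // Version 1 reader: knows nothing about nFlags, so the extra 4 bytes
    // bleed into the next thing it reads.
    int64_t nReadAmount, nReadNext;
    s.read((char*)&nReadAmount, sizeof(nReadAmount));  // correct
    s.read((char*)&nReadNext,   sizeof(nReadNext));    // garbage, not 123

    std::cout << "amount: " << nReadAmount << "  next: " << nReadNext << std::endl;
    return 0;
}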

I have learned through bitter experience that you check version numbers starting at version 1, not at version 2. Unless, that is, you have complete control over the software distribution and the data you're trying to work with - which we do not.

Quote
The current system seems straightforward and is backwards compatible, so why add superfluous checks?

And, as I said, trying to maintain backwards compatibility is a fool's errand. You might be able to carry it on for a short period, but sooner or later you're going to have to put in some really complicated, hard-to-maintain, error-prone code in the name of "backwards compatibility."

Like my answer? Did I help? Tips gratefully accepted here: 1H6wM8Xj8GNrhqWBrnDugd8Vf3nAfZgMnq
jgarzik
Legendary
Activity: 1596
Merit: 1091
March 21, 2011, 05:37:45 AM
 #4

Quote
And, as I said, trying to maintain backwards compatibility is a fool's errand. You might be able to carry it on for a short period, but sooner or later you're going to have to put in some really complicated, hard-to-maintain, error-prone code in the name of "backwards compatibility."

Perhaps, but I think we should continue on this fool's errand of backwards compatibility a while longer...


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
Jim Hyslop (OP)
Member
Activity: 98
Merit: 20
March 21, 2011, 11:20:38 PM
 #5

Quote
And, as I said, trying to maintain backwards compatibility is a fool's errand. You might be able to carry it on for a short period, but sooner or later you're going to have to put in some really complicated, hard-to-maintain, error-prone code in the name of "backwards compatibility."

Perhaps, but I think we should continue on this fool's errand of backwards compatibility a while longer...
Sipa's recent patch for tracking partially spent transactions is not entirely backwards compatible: existing clients will report an incorrect balance. That's not a major problem, but it's a definite warning sign that we need to be checking all version fields now.

What's it going to take to convince you that we should be proactive, not reactive?

Like my answer? Did I help? Tips gratefully accepted here: 1H6wM8Xj8GNrhqWBrnDugd8Vf3nAfZgMnq
mndrix
Michael Hendricks
VIP
Sr. Member
Activity: 447
Merit: 258
March 22, 2011, 06:53:54 PM
 #6

Quote
What's it going to take to convince you that we should be proactive, not reactive?

I have no opinion on this particular topic, but patches usually speak clearly and persuasively.  They also help facilitate the discussion by grounding it in a specific implementation.  Perhaps you could post a patch or a GitHub pull request implementing your suggestion.
genjix
Legendary
Activity: 1232
Merit: 1072
March 23, 2011, 08:04:15 AM
 #7

What's the big deal about implementing version numbers in IMPLEMENT_SERIALIZE?
Hal
VIP
Sr. Member
Activity: 314
Merit: 3853
March 23, 2011, 08:01:59 PM
 #8

Specifically, what are you proposing? What nVersion value should they check, what should they check it against, and what should they do if the check fails/succeeds?

Messages do not have version numbers. Nodes exchange their client versions on connection, but that value increments with every release. There's no way to know if packets from a more-recent client peer have changed to be incompatible with this version.

Here is some of the code that handles the "version" message:
Code:
        // Change version
        if (pfrom->nVersion >= 209)
            pfrom->PushMessage("verack");
        pfrom->vSend.SetVersion(min(pfrom->nVersion, VERSION));
        if (pfrom->nVersion < 209)
            pfrom->vRecv.SetVersion(min(pfrom->nVersion, VERSION));

pfrom->nVersion is the peer's version. This sends the "verack" message only if the peer is at version 209 or newer, for backwards compatibility. It sets the outgoing serialization version (vSend) to the older of the peer's version and this node's, but it only sets the incoming one (vRecv) that way for peers older than 209; otherwise the incoming deserialization version is left at this node's version. I don't know why the difference.

This design puts the responsibility on the newer node to send/receive backwards compatible messages.

Hal Finney
Jim Hyslop (OP)
Member
Activity: 98
Merit: 20
March 25, 2011, 12:10:15 AM
 #9

Quote
Specifically, what are you proposing? What nVersion value should they check, what should they check it against, and what should they do if the check fails/succeeds?

I see why everyone's so confused about what I'm trying to discuss. I just realized the nVersion parameter to Serialize, GetSerializeSize, and Unserialize is tied to the VERSION macro, and is basically a constant.

Let me re-think this, and I'll start a new thread once I've got a clearer picture of how to present this.

Like my answer? Did I help? Tips gratefully accepted here: 1H6wM8Xj8GNrhqWBrnDugd8Vf3nAfZgMnq
Hal
VIP
Sr. Member
Activity: 314
Merit: 3853
March 25, 2011, 02:43:56 AM
 #10

Quote
I see why everyone's so confused about what I'm trying to discuss. I just realized the nVersion parameter to Serialize, GetSerializeSize, and Unserialize is tied to the VERSION macro, and is basically a constant.
This is not quite right, but the code is pretty spread out. Generally un/serialization is done by the operator<< and operator>> methods of CDataStream in serialize.h. We do CTransaction tx; vRecv >> tx; to read a transaction object; or vSend << a1; to send something of template class a1 (in PushMessage in net.h).

The << and >> methods in CDataStream call Serialize() or Unserialize(), which do default the nVersion parameter to VERSION. But in these calls the default is not used; they pass the nVersion instance variable of CDataStream, and that instance variable is set to the min of the node and peer versions, e.g. in the code I showed above.
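In other words, stripped down to just the relevant parts (paraphrasing from memory, not the exact serialize.h code):

Code:
// Paraphrased sketch of the relevant parts of CDataStream; the real class
// does the actual buffering.  The point is that << and >> forward the
// stream's own nVersion, not the VERSION default.
template<typename Stream, typename T> void Serialize(Stream& s, const T& obj, int nType, int nVersion);
template<typename Stream, typename T> void Unserialize(Stream& s, T& obj, int nType, int nVersion);

class CDataStream
{
public:
    int nType;
    int nVersion;   // set via SetVersion(min(peer version, VERSION))

    void SetVersion(int n) { nVersion = n; }

    template<typename T>
    CDataStream& operator<<(const T& obj)
    {
        ::Serialize(*this, obj, nType, nVersion);   // stream's version, not VERSION
        return (*this);
    }

    template<typename T>
    CDataStream& operator>>(T& obj)
    {
        ::Unserialize(*this, obj, nType, nVersion); // stream's version, not VERSION
        return (*this);
    }
};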

Hal Finney