ByteCoin (OP)
July 17, 2010, 03:20:24 AM
The primary purpose of generating BitCoins is to provide an incentive for people to participate in the maintenance of the block chain. Generating BitCoins out of "thin air" has recently captured the imagination of a set of new users (me included), and the sudden increase in available computing power has meant a dramatic increase in the rate of block generation.
The increased rate doesn't have any substantial disadvantages or risks that I can see, but the variability of the rate is inelegant, and it seems to attract a lot of discussion on IRC that distracts from more important issues. I can make a stronger case for the undesirability of an increased rate if required.
The difficulty of block generation will increase to counteract the influx of processing power, and the generation rate will normalize after some delay. I predict that new users will become disillusioned with the apparently unproductive use of their computer time (especially compared with their experience of generating coins easily before the difficulty increase) and leave en masse. The difficulty will not ramp down fast enough to offset this, and we will be left with a period of very slow block generation. This will result in trades taking an irritatingly long time to confirm and arguably leaves the system more susceptible to certain types of fraud.
I predict that successful fraud schemes will be preceded by manipulation of the rate, by untraceably and deniably introducing and withdrawing substantial hash computation resources.
It would be much more elegant to be able to rely on blocks being generated regularly at 10 minute intervals (or whatever rate is agreed upon). I believe this can be achieved with only a modest increase in bandwidth.
Simply, as the 10 minutes (or whatever) is about to elapse, hash generating computers broadcast the block they have found with the lowest hash. The other computers briefly stop to check the hash and they only broadcast their block if it has an even lower hash. At the 10 minute mark the lowest hashed block is adopted to continue the chain.
There are some details to iron out, such as how low the hash has to be, relative to the time elapsed, before you bother breaking the silence and broadcasting it, but I believe this would be a more elegant solution to the rate problem. People could rely on a fixed number of blocks being generated each day at fixed times, or on whatever timetable was mutually agreed.
ByteCoin
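A minimal sketch of the selection rule described above, assuming double-SHA-256 block headers; the function names and the interval constant are illustrative only and not part of any existing client:

Code:
import hashlib

INTERVAL_SECONDS = 600  # the agreed block interval; 10 minutes in this example

def block_hash(header_bytes):
    # Double SHA-256 of the serialized block header, as Bitcoin already uses.
    return hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()

def pick_lowest(candidate_headers):
    # At the interval mark, every node applies the same rule to the candidate
    # headers it has seen and adopts the one with the numerically lowest hash.
    def value(header):
        return int.from_bytes(block_hash(header), "big")
    return min(candidate_headers, key=value)

Since every node runs the same deterministic comparison, nodes that saw the same candidates adopt the same block to extend the chain.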
llama
Member
Offline
Activity: 103
Merit: 61
July 17, 2010, 03:39:48 AM
This is a very very very interesting idea. It does seem to "automatically" solve the difficulty problem.
To extend it just a bit, a node should broadcast its block as soon as it finds the new lowest hash, even if it's not close to the ten-minute mark. Then, nodes would only broadcast if their new hash was lower than that one, and so on. This would help minimize the effects of latency and of the nodes' clocks being slightly off.
I'd have to think about this a lot more, but you might be on to something...
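A rough sketch of that relay rule, with a placeholder broadcast callable standing in for however the client would actually send a candidate to its peers:

Code:
class LowestHashRelay:
    # A node forwards a candidate only if it is the lowest hash it has
    # seen so far in the current interval.
    def __init__(self, broadcast):
        self.broadcast = broadcast  # callable that sends a candidate to our peers
        self.best = None            # lowest hash value seen so far this interval

    def on_candidate(self, hash_value):
        if self.best is None or hash_value < self.best:
            self.best = hash_value
            # Peers apply the same test, so only ever-lower hashes keep spreading.
            self.broadcast(hash_value)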
Bitcoiner
Member
Offline
Activity: 70
Merit: 11
July 17, 2010, 04:21:23 AM
This is indeed an interesting idea. I'm curious what the devs would think about it. It could always be implemented on the test network first.
Want to thank me for this post? Donate here! Flip your coins over to: 13Cq8AmdrqewatRxEyU2xNuMvegbaLCvEe
knightmb
July 17, 2010, 05:02:03 AM
I"m not part of the development team, but my take on it is that you'll just be replacing the randomness with another randomness. Right now, even though the difficulty is very high, blocks are still being generated in under 3 to 5 minutes. So if this new system was in place, you would still be waiting for a block just as long as you would now. I don't usually disclose how many PCs I have in the BTC network for sanity reasons, but let me say that I have systems that can barely manage 90 khash/s and a few that are chruning out 19,2000 khash/s and one beast doing 38,400 khash/s. They don't win any more blocks than the much slower PCs does. One of my 900MHz PCs solved a block under 100 seconds by pure chance alone after the difficulty was increased. The other super clusters are still 0 after the difficulty went up earlier today.
I'm afraid your solution would give my super clusters a big advantage, because they would always have the lowest-hashed block if it comes down to CPU vs. CPU.
Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
d1337r
July 17, 2010, 06:31:46 AM
This is a very very very interesting idea. It does seem to "automatically" solve the difficulty problem.
To extend it just a bit, a node should broadcast its block as soon as it finds the new lowest hash, even if it's not close to the ten-minute mark. Then, nodes would only broadcast if their new hash was lower than that one, and so on. This would help minimize the effects of latency and of the nodes' clocks being slightly off.
I'd have to think about this a lot more, but you might be on to something...
It's not ten minutes, it is 2016 blocks. And with your variant, imagine that by sheer luck some machine generates a block with a VERY VERY low hash. If other machines then pick this low hash as the target, most of the blocks that would otherwise suit the target will be dropped, and only after the 2016-block cycle ends will an easier target be set. The target is not something that only decreases; it may also increase (for example, if some nodes leave the network or stop generating, the nodes still generating should get a better chance, to keep emission at the required level).
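For reference, a sketch of the 2016-block retargeting rule being referred to; the constants follow the published design, but treat the exact clamping arithmetic as a paraphrase rather than a copy of the client source:

Code:
RETARGET_INTERVAL = 2016            # blocks between difficulty adjustments
TARGET_TIMESPAN = 2016 * 10 * 60    # ideal seconds per 2016 blocks (two weeks)

def retarget(old_target, actual_timespan):
    # A larger target means easier blocks. The target can move in either
    # direction depending on how fast the last 2016 blocks actually arrived,
    # with the swing limited to a factor of four per adjustment.
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_target * actual_timespan // TARGET_TIMESPAN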
NewLibertyStandard
July 17, 2010, 06:36:25 AM Last edit: July 17, 2010, 06:47:00 AM by NewLibertyStandard
I had this idea myself and it's pretty much the same solution in a different form. Yeah, the timing of blocks would be more consistent, but in the current implementation, the timing is consistent if you take the average time it takes to generate blocks over a long period of time. In the current implementation, it's easy to measure sudden increases and decreases in the swarm. In the suggested implementation, you could also calculate sudden increases and decreases in the swarm by the lowness of the hash, but it would be much less noticeable.
If the rate of confirmations suddenly increases or decreases dramatically, it warns users that there is a rush of new users or the abandonment of a botnet, which may cause the exchange rate to fluctuate.
In the current implementation, it's a race toward the lowest time with a set low hash, while in the suggested implementation, it would be a race toward the lowest hash with a set low time. The slow CPU would be just as likely to generate a block. It's competing in the same way, just with goals and limits reversed.
Edited a few times.
Treazant: A Fullever Rewarding Bitcoin - Backup Your Wallet TODAY to Double Your Money! - Dual Currency Donation Address: 1Dnvwj3hAGSwFPMnkJZvi3KnaqksRPa74p
RHorning
July 17, 2010, 06:48:35 AM
I"m not part of the development team, but my take on it is that you'll just be replacing the randomness with another randomness. Right now, even though the difficulty is very high, blocks are still being generated in under 3 to 5 minutes.
Block generation is at roughly every 10-15 minutes right now. See http://nullvoid.org/bitcoin/statistix.php for a current report on some statistical averages over the last several blocks that have been generated. Still, the general point is valid. Some blocks are being generated in under ten seconds from the previous one, but statistical averages still exist.

I do see the variable time between blocks, and in particular the predictive quality of when the difficulty is going to increase, as something which could be used as a manipulation target after a fashion, although I should point out that any such manipulation would by definition also require CPU processing ability that approaches at least a substantial minority of the overall CPU strength of the network as a whole which is engaged in creating bitcoins. I give that last little exception because I expect that in time there will start to be people dropping out of the bitcoin creation process, thinking that the whole effort is futile, even if maintaining a connection on the network for the purposes of transaction processing could still be useful. I'm curious about where that will go over time.

The strength of the network is in the overwhelming number of participants, where even somebody with a (temporarily) unused server room at their disposal doing nothing but making bitcoin blocks is still a minority of the overall network. Furthermore, having a couple of "trusted" participants with server farms who are cooperatively making blocks only enhances this protection for everybody and keeps would-be miscreants at bay.

The only manipulation I can imagine where this proposal would help is the case of an attacker who times the connection and release of significant computing resources on the network: for some period of time the CPU server farm bangs out bitcoin blocks and then leaves the network when the difficulty increases substantially, waiting for that difficulty to drop back to what it was before it started making blocks (doing other stuff in the meantime or even simply shutting down). Such efforts over a prolonged period of time, if successful, could also be detected and even plotted statistically to show an attack was under way. Randomizing the attacks to make them look like "noise" would only reduce the value of the attack, and trying to sneak in under the radar to appear as a "normal" user would simply add strength to the network against other would-be attackers, making the attack ineffective in the long run. Attackers would be fighting each other, and normal users could be oblivious that any attack was happening at all.
wizeman
Newbie
Offline
Activity: 7
Merit: 0
July 19, 2010, 05:48:33 PM
It would be much more elegant to be able to rely on blocks being generated regularly at 10 minute intervals (or whatever rate is agreed upon). I believe this can be achieved with only a modest increase in bandwidth.
Simply, as the 10 minutes (or whatever) is about to elapse, hash generating computers broadcast the block they have found with the lowest hash. The other computers briefly stop to check the hash and they only broadcast their block if it has an even lower hash. At the 10 minute mark the lowest hashed block is adopted to continue the chain.
How do you get thousands of computers to agree on when the 10-minute mark is? Ideally you want the algorithm to rely on synchronized clocks as little as possible. Another problem is that with your strategy, at every 10-minute mark the network would be swamped with a flood of candidate blocks.
RHorning
July 19, 2010, 06:36:10 PM
How do you get thousands of computers to agree on when the 10-minute mark is?
Ideally you want the algorithm to rely on synchronized clocks as little as possible.
Another problem is that with your strategy, at every 10-minute mark the network would be swamped with a flood of candidate blocks.
Just making a presumption on this particular issue, and I don't have any sort of commentary on what would be a "triggering" event to create these kinds of blocks, but here is at least a strategy to keep the network from getting completely bogged down with candidate blocks.

Each potential candidate would obviously have some sort of "fitness" metric to suggest which one is more "fit" than another. If you are generating a block and the "event" trigger occurs, your node would broadcast its candidate to all of its immediate neighbors and keep track of which neighbors (direct connections to other nodes) have already had that candidate block transmitted. It only gets transmitted once to each neighbor (with acknowledgment). When a node starts to receive candidate blocks, it would perform this "fitness" test and either dump its current block (because it failed the fitness test) or keep it, continuing to contact adjacent nodes that have not yet acknowledged receiving the candidate block. If a new candidate is found that is more fit, the node wouldn't re-transmit it back to the original source of that block (whatever direct connection sent it), but it would try to share it with its other neighbors. If a node receives the same block from a neighbor, it would consider that neighbor to have also acknowledged the block, until all neighbors are essentially working with the same block. While it would be chaotic at first, the network would calm down very quickly and settle upon a new block that would ultimately be accepted into the chain.

I do agree that the main triggering event would be the big problem with this kind of scheme, and it would imply some sort of centralized timekeeper to create the events. I also think that such an event-driven block creation system would ultimately give out about the same number of "new" coin blocks as the current system, and it would create much more network bandwidth trying to negotiate a "winner". It would also introduce scaling problems that don't exist in the current network.
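A rough sketch of the per-node behaviour just described, assuming hypothetical peer objects with a send method and a caller-supplied fitness callable; this illustrates the idea rather than any actual client code:

Code:
class CandidateGossip:
    # Keep only the fittest candidate seen so far and transmit it once
    # to each neighbour, as outlined above.
    def __init__(self, neighbours, fitness):
        self.neighbours = neighbours   # direct peer connections (assumed objects)
        self.fitness = fitness         # callable: higher value means a fitter block
        self.best = None
        self.have_best = set()         # neighbours known to hold our current best

    def on_candidate(self, block, sender=None):
        if self.best is not None and block == self.best:
            # The same block coming back from a neighbour counts as an acknowledgment.
            if sender is not None:
                self.have_best.add(sender)
        elif self.best is None or self.fitness(block) > self.fitness(self.best):
            # A fitter candidate: drop ours and don't retransmit to its source.
            self.best = block
            self.have_best = {sender} if sender is not None else set()
        # A less fit candidate is simply dropped; its sender still receives
        # our better candidate in the flood below.
        self.flood()

    def flood(self):
        for peer in self.neighbours:
            if peer not in self.have_best:
                peer.send(self.best)
                self.have_best.add(peer)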
wizeman
Newbie
Offline
Activity: 7
Merit: 0
July 19, 2010, 07:01:45 PM
I do agree that the main triggering event would be the big problem with this kind of scheme, and it would imply some sort of centralized timekeeper to create the events. I also think that such an event-driven block creation system would ultimately give out about the same number of "new" coin blocks as the current system, and it would create much more network bandwidth trying to negotiate a "winner". It would also introduce scaling problems that don't exist in the current network.
Not to mention we'd also be adding two new points of failure: the centralized timekeeper (presumably a network of NTP servers), and the automatic rejection of otherwise-valid blocks from clients whose clocks aren't set correctly, either because they don't have an NTP service configured or because a firewall is blocking the NTP packets.
NewLibertyStandard
July 19, 2010, 08:01:52 PM
You wouldn't need to wait until right before the ten-minute mark to compare hashes. Ideally, hashes would be compared continuously for the whole ten minutes, so when the ten-minute mark approached, all nodes would already have a pretty good idea of who was going to get the block.

I think I2P has a swarm-based clock, and from what I understand it's a huge complicated mess trying to achieve and maintain an accurate time, but if somebody did want to go that route, the code is available. I imagine that if an attacker had enough nodes, he could manipulate the swarm time to his advantage: slowing down time when he doesn't have the lowest hash and speeding up time when he does. Of course, if you waited until right before the ten minutes are over, I suppose that probably wouldn't be possible.

I don't think a giant rush of hashes being compared would really bog down the swarm, since each node only sends out its hash if it hasn't received a lower one, and only spreads the lowest hash it has received. I think the total time it would take would be no more than the time for the lowest hash to be checked by each node, because it would always win when compared against other hashes, so it would always be propagated and never held back. Of course, then the issue arises that competing nodes have an incentive to lie, but I imagine that's the case under the current system too. If user X is only connected to attacking, lying nodes, then if he gets a lower hash than the attacker, the attacker just refrains from forwarding his hash.
Treazant: A Fullever Rewarding Bitcoin - Backup Your Wallet TODAY to Double Your Money! - Dual Currency Donation Address: 1Dnvwj3hAGSwFPMnkJZvi3KnaqksRPa74p
wizeman
Newbie
Offline
Activity: 7
Merit: 0
July 19, 2010, 08:32:06 PM
Personally, I think this is not worth it, because:

1) We'd be complicating the algorithm, making it much harder to verify that the code is correct and potentially introducing new ways of attacking the network.

2) We'd be introducing new points of failure, because clients with wrong clocks wouldn't generate new coins. Also, NTP packets can be easily forged, and you shouldn't trust the clocks of other clients because they can be forged too.

3) We'd be introducing potential new scalability problems. With the current algorithm, it's easy to predict the total bandwidth needed by the network per unit of time: on average, sizeof(block)*number_of_clients*connections_per_client per 10 minutes. With the proposed algorithm, it's harder to calculate, but it will definitely need more bandwidth (I think much more, but I have no proof).

4) You will never make all the clients agree on a common 10-minute window of time. There will be clients that are a few seconds off, some a few minutes off, some a few hours off. How do you decide when a time window starts and when it ends?

Personally, I find the current algorithm much more elegant than the proposed one. A slight improvement we could make is a more dynamic difficulty adjustment like the one proposed here - http://bitcointalk.org/index.php?topic=463.0 - which would more gradually mitigate the problem of blocks taking a much shorter or longer time when someone adds or removes a large amount of CPU power.

Still, I think this is only a problem while the network is small. When bitcoin becomes more popular, it will be much harder for any single entity to influence how long block generation takes on average. In fact, I don't even consider this a problem, because the network should work just as robustly regardless of the rate of block generation. The actual rate of block generation should be an implementation detail, not something a user has to worry about. All he should know is that it may take a variable amount of time to confirm a transaction, and in the future this variation should become more and more predictable.
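To make point 3 concrete, here is a back-of-envelope version of that bandwidth estimate; the block size and client count are entirely made-up round numbers, and 8 connections per client is the figure mentioned later in this thread:

Code:
block_size_bytes = 1000         # assumed average size of a relayed block
clients = 10000                 # assumed number of connected clients
connections_per_client = 8      # default connection count mentioned in the thread

# sizeof(block) * number_of_clients * connections_per_client per 10 minutes
relay_bytes_per_10_min = block_size_bytes * clients * connections_per_client
print(relay_bytes_per_10_min / 1e6, "MB relayed per 10 minutes, as a rough upper bound")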
bdonlan
July 21, 2010, 12:10:16 AM
The biggest problem with this approach is that you can't audit it after the fact. Consider the case of restarting your client after it's been off for a week. It immediately connects to Mallory's client and asks it for the last week's block chain. Mallory responds with a chain of super-easy blocks, each exactly 10 minutes apart. Now Mallory can control your view of the network and transaction history. Oops. And even if you connect to a 'good' node later, you have no way of sorting out which block chain is real, unless you take the one with higher difficulty - but this raises the problem where an attacker could spend a long time generating a single block, more difficult than some particular historical block, followed by a bunch of easy blocks. It would take a while to generate, but then Mallory could rewrite history for the entire network.
ByteCoin (OP)
July 21, 2010, 01:01:56 AM
The problem you outline exists in the current system. You restart your client after it's been off for a week. It immediately connects to Mallory's client and asks it for the last week's block chain. Mallory responds with one or two blocks, each with an appropriate hash. Now Mallory can control your view of the network and transaction history. Oops.
Same problem.
With my scheme, when you connect to a 'good' node later, you take the chain with the higher total difficulty instead of the longest block chain. A reasonable measure of the total difficulty under the current proof of work is the total number of leading zero bits in all the block hashes. In the case you mention, the attacker generates a better single block than some particular historical block, but because it's followed by a bunch of easy blocks, the total number of leading zero bits is much lower than in the real block chain and hence the attack fails.
ByteCoin
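A minimal sketch of the comparison described above, taking total difficulty to be the sum of leading zero bits as proposed; this illustrates the proposed measure rather than the behaviour of any existing client:

Code:
def leading_zero_bits(block_hash):
    # Number of leading zero bits in a 32-byte hash.
    value = int.from_bytes(block_hash, "big")
    return 256 - value.bit_length()

def total_difficulty(chain_hashes):
    # The proposed measure: sum the leading zero bits of every block hash.
    return sum(leading_zero_bits(h) for h in chain_hashes)

def pick_chain(chain_a, chain_b):
    # Prefer the chain with more accumulated difficulty rather than the longer
    # one, so a single very hard block followed by many easy blocks cannot
    # outrank the honest history.
    return chain_a if total_difficulty(chain_a) >= total_difficulty(chain_b) else chain_b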
Unorthodox
Newbie
Offline
Activity: 2
Merit: 0
July 21, 2010, 03:10:54 AM
The biggest issue with this idea, apart from the bandwidth, is that you won't have a good idea of how many computers are generating in the network, or how hard generating will be.
This kind of information is useful when buying/selling bitcoins, as it has an effect on the price. Also, I wouldn't know myself how easy it would be to generate, so I could end up wasting CPU power and electricity on my computers.
I'd rather stick with the system in use today.
NewLibertyStandard
July 21, 2010, 06:07:51 AM
The biggest issue with this idea, apart from the bandwidth, is that you won't have a good idea of how many computers are generating in the network, or how hard generating will be.
This kind of information is useful when buying/selling bitcoins, as it has an effect on the price. Also, I wouldn't know myself how easy it would be to generate, so I could end up wasting CPU power and electricity on my computers.
I'd rather stick with the system in use today.
The lowness of accepted blocks would be a measurement of difficulty and network computational power.
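One way to read that: the lowness of the winning hash in each interval gives a rough estimate of how many hashes the whole network tried during that interval. A hypothetical sketch of that estimate, order-of-magnitude only:

Code:
def implied_network_rate(winning_hash_value, interval_seconds=600):
    # If the winning hash behaves like the minimum of N uniform 256-bit values,
    # then N is on the order of 2**256 divided by the winning hash value.
    total_hashes = (1 << 256) // max(winning_hash_value, 1)
    return total_hashes / interval_seconds   # implied hashes per second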
Treazant: A Fullever Rewarding Bitcoin - Backup Your Wallet TODAY to Double Your Money! - Dual Currency Donation Address: 1Dnvwj3hAGSwFPMnkJZvi3KnaqksRPa74p
knightmb
July 21, 2010, 06:39:38 AM
The problem you outline exists in the current system. You restart your client after it's been off for a week. It immediately connects to Mallory's client and asks it for the last week's block chain. Mallory responds with one or two blocks, each with an appropriate hash. Now Mallory can control your view of the network and transaction history. Oops.
Same problem.
With my scheme, when you connect to a 'good' node later, you take the chain with the higher total difficulty instead of the longest block chain. A reasonable measure of the total difficulty under the current proof of work is the total number of leading zero bits in all the block hashes. In the case you mention, the attacker generates a better single block than some particular historical block, but because it's followed by a bunch of easy blocks, the total number of leading zero bits is much lower than in the real block chain and hence the attack fails.
ByteCoin
The client makes at least 8 connections, chosen somewhat at random, so one would need to control all of those entry points. Not impossible of course, but one rogue client is one thing; with a bunch of good clients in the mix, how do you formulate an attack on all of them?
Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
Traktion
July 21, 2010, 12:22:17 PM
There could be a good argument for increasing the rate, along with the number of participants, forever.
If every generating node maintains the same constant rate of minting per CPU cycle (i.e. more powerful CPU => more minting), then the coin base will grow along with the node base - which is the user base, which gives us a handle on coin demand. This has been touched on a few times on this forum, but I sense resistance to this.
The reason for doing the above is not to create an inflationary environment, but to keep the number of coins relative to the user base. Failing to do this will put deflationary pressure on the currency; remember, increasing the demand for the currency (the number of Bitcoin users) is the same as decreasing the quantity (of coins in a fixed Bitcoin user base). If you want to retain a steady, neutral value of Bitcoins, then this needs to be considered.
Therefore a constant rate relative to the user base would be ideal. The faster the rate of adoption, the more coins should be created. The reverse could be handled by natural wastage (lost coins), although this could be 'sticky' in the extremes (destroying a proportion of the transaction fee would speed this up).
I'm sure an algorithm could be formulated to achieve the above, with the constant rate not being so high as to be inflationary - the target would be to keep the number of coins proportionate to the user base, thus creating 0% inflation*.
* I know this seems counter intuitive, but in a currency with a non-fixed (hopefully growing) user base, it becomes very important.
[NOTE: There may be an argument for the minting rate to track 'GDP' or some such - perhaps based on the number and value of transactions taking place. If people are economically active enough to have their nodes minting coins, the user base alone may be sufficient, or even better, for creating stability. This is probably another debate in itself, but the above point needs to be agreed on first.]
joechip
Newbie
Offline
Activity: 50
Merit: 0
July 21, 2010, 12:40:57 PM
There could be a good argument for increasing the rate, along with the number of participants, forever.
If every generating node maintains the same constant rate of minting per CPU cycle (i.e. more powerful CPU => more minting), then the coin base will grow along with the node base - which is the user base, which gives us a handle on coin demand. This has been touched on a few times on this forum, but I sense resistance to this.
The reason for doing the above is not to create an inflationary environment, but to keep the number of coins relative to the user base. Failing to do this will put deflationary pressure on the currency; remember, increasing the demand for the currency (the number of Bitcoin users) is the same as decreasing the quantity (of coins in a fixed Bitcoin user base). If you want to retain a steady, neutral value of Bitcoins, then this needs to be considered.
This is simply the Monetarist desire for the rate of monetary increase to equal the output increase of the economy. It is a false argument the Austrians have debunked for years and is one of the factors which has led us to the situation we currently have. That's been the FED's policy... to match money growth to economic growth....it's one of their primary mandates, price stability. NO NO NO. Price deflation is a GOOD thing. Your money buys more per unit. You become more wealthy not only by investing in interest-bearing projects but through the increase in the value of what your savings (which pay no interest) will buy you. It leads to thrift and investment in projects likely to have a higher return than the rate of price deflation.
Traktion
July 21, 2010, 01:39:18 PM Last edit: July 21, 2010, 01:54:21 PM by Traktion
There could be a good argument for increasing the rate, along with the number of participants, forever.
If every generating node maintains the same constant rate of minting per CPU cycle (i.e. more powerful CPU => more minting), then the coin base will grow along with the node base - which is the user base, which gives us a handle on coin demand. This has been touched on a few times on this forum, but I sense resistance to this.
The reason for doing the above is not to create an inflationary environment, but to keep the number of coins relative to the user base. Failing to do this will put deflationary pressure on the currency; remember, increasing the demand for the currency (the number of Bitcoin users) is the same as decreasing the quantity (of coins in a fixed Bitcoin user base). If you want to retain a steady, neutral value of Bitcoins, then this needs to be considered.
This is simply the Monetarist desire for the rate of monetary increase to equal the output increase of the economy. It is a false argument the Austrians have debunked for years and is one of the factors which has led us to the situation we currently have. That's been the FED's policy... to match money growth to economic growth....it's one of their primary mandates, price stability. NO NO NO. Price deflation is a GOOD thing. Your money buys more per unit. You become more wealthy not only by investing in interest-bearing projects but through the increase in the value of what your savings (which pay no interest) will buy you. It leads to thrift and investment in projects likely to have a higher return than the rate of price deflation.

You get more wealthy for doing nothing, just for being one of the first users of Bitcoin. The more people join, the less they gain from this, until there are few new entrants. It's like a pyramid scheme in that sense - you're being rewarded for doing nothing. Sure, the currency can be easily divided. Sure, people can use alternatives (Hayek, Denationalisation of Money - a good read), but I don't think that helps Bitcoin become the best money, and it would prompt its replacement. BTW, as the Bitcoin supply will grow for decades according to the current plan, this isn't at odds with my POV. I just don't think that it should stop growing if the user base is still growing; that would be counterproductive. I also think the rate of this growth could be optimised better, rather than being arbitrary.

EDIT: P.S. It's nothing to do with keeping a price index like CPI steady (like the central banks try to do). That's something quite different, and I would agree that it's flawed and probably an impossible task too (as most Austrians would agree).