Bitcoin Forum
Show Posts
1  Bitcoin / Bitcoin Discussion / Re: Bitcoin exchange rate plasmoid for KDE on: February 11, 2011, 06:41:27 PM
I made a simple plasmoid to display the current exchange rate at Mt. Gox on KDE (a desktop environment for Linux). You can download it from KDE-Look: http://kde-look.org/content/show.php?content=138572

Sounds great :)

But... is it just me, or is the ZIP file empty?
2  Bitcoin / Development & Technical Discussion / Re: Dynamic Difficulty Adjustments on: July 19, 2010, 10:12:01 PM

I assume you meant "difficulty += .001 * (600 - block.genTimeInSeconds)" instead?

(...)

randomFractionFactor=.001.  This isn't .001 necessarily, but just a factor to scale the change to a certain percentage of current difficulty.

The difficulty factor should be a function of the hash target, not the other way around. We shouldn't try to reach consensus (i.e., on the hash target) based on floating-point numbers, because the rounding depends on the compiler flags and/or architecture used, and you don't want different nodes to have different views of what the hash target is.
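
To make the direction of that dependency concrete, here's a minimal Python sketch (an illustration, not the actual client code, though the difficulty-1 constant follows Bitcoin's convention): the consensus value is the 256-bit integer target, and "difficulty" is just a display ratio derived from it.

Code:
MAX_TARGET = 0xFFFF * 2**208   # the difficulty-1 target

def difficulty(target: int) -> float:
    # Floating point is fine here because this number is only shown
    # to humans; it is never fed back into consensus.
    return MAX_TARGET / target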

Also, you can't trust block.genTimeInSeconds: the timestamp inside the block can be forged by the node that generated it, and nodes can't use their own clocks to measure how long the block took to generate, because then every node would have a different view of what the hash target should be.

IMO, we should do the calculation based on the timestamps of the latest set of 2016 blocks, similarly to how it is done today, while taking into account that not all block timestamps may be accurate.
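
For instance, one possible mitigation (just a sketch of an idea, not something the client does) would be to take the median of several recent block times, so that a minority of forged timestamps can't drag the measured interval very far:

Code:
def median_time(timestamps: list[int], n: int = 11) -> int:
    # Median of the last n block timestamps; a few forged outliers
    # (too early or too late) won't move the median much.
    recent = sorted(timestamps[-n:])
    return recent[len(recent) // 2]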

So the bottom line is that I disagree with the actual implementation you are proposing, but I agree with the general idea.
3  Economy / Economics / Re: Get rid of "difficulty" and maintain a constant rate. on: July 19, 2010, 08:32:06 PM
Personally, I think this is not worth it, because:

1) We'd be complicating the algorithm, making it much harder to verify that the code is correct and potentially introducing new ways of attacking the network.

2) We'd be introducing new points of failure: clients with wrong clocks wouldn't generate new coins, NTP packets can be easily forged, and you shouldn't trust the clocks of other clients, because those can be forged too.

3) We'd be introducing potential new scalability problems. With the current algorithm, it's easy to predict the total bandwidth needed by the network per unit of time: on average, sizeof(block) * number_of_clients * connections_per_client per 10 minutes (see the worked example after this list). With the proposed algorithm it's harder to calculate, but it would definitely need more bandwidth (I think much more, but I have no proof).

4) You will never make all the clients agree on a common 10-minute window of time. There will be clients a few seconds off, some a few minutes off, some a few hours off. How do you decide when a time window starts and when it ends?
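
A back-of-envelope illustration of the formula in point 3 (every number below is made up for the example):

Code:
block_size = 20_000            # bytes per block (assumed average)
clients = 10_000               # number of nodes (assumed)
connections_per_client = 8     # peers each node relays a block to (assumed)

traffic = block_size * clients * connections_per_client
print(traffic / 1e6, "MB of relay traffic per 10 minutes")   # -> 1600.0 MB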

Personally, I find the current algorithm much more elegant than the proposed one. A slight improvement we could make is a more dynamic difficulty adjustment, like the one proposed here - http://bitcointalk.org/index.php?topic=463.0 - which would more gradually mitigate the problem of blocks taking a much shorter or longer time when someone adds a large amount of CPU power to the network or removes it.

Still, I think this is only a problem while the network is small. When Bitcoin becomes more popular, it will be much harder for any single entity to influence how long block generation takes on average. In fact, I don't even consider this a problem, because the network should work just as robustly regardless of the rate of block generation. The actual rate of block generation should be an implementation detail, not something a user has to worry about. All they should know is that confirming a transaction may take a variable amount of time, and that this variation will become more and more predictable in the future.

4  Economy / Economics / Re: Get rid of "difficulty" and maintain a constant rate. on: July 19, 2010, 07:01:45 PM
I do agree that the main triggering event would be the big problem with this kind of scheme, and it would imply some sort of centralized timekeeper to create the events. I also think that such an event-driven block creation system would ultimately give out about the same number of "new" coin blocks as the current system, while creating much more network traffic to negotiate the "winner". It would also introduce scaling problems that don't exist in the current network.

Not to mention we'd also add two new points of failure: the centralized timekeeper (presumably a network of NTP servers) and the automatic rejection of all valid blocks from clients whose clocks aren't set correctly, either because they don't have an NTP service configured or because a firewall is blocking the NTP packets.
5  Bitcoin / Development & Technical Discussion / Re: Dynamic Difficulty Adjustments on: July 19, 2010, 06:20:54 PM
Regardless of how it's done though, we need to be wary of wild swings...

Indeed, this is also very important, especially to protect against DDoS attacks.
The current algorithm already tries to prevent this by not allowing more than a 4x adjustment every 2016 blocks.

I think we'd need a way to achieve the same end result when doing the continuous adjustments - obviously not a 4x limit per block... rather, it would need to prevent more than a 4x difficulty difference between the latest 2016 blocks and the difficulty calculated over the 2016 blocks before those.
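
Something along these lines could do it (a rough sketch; the helper name and the window bookkeeping are made up):

Code:
def clamp_target(new_target: int, reference_target: int) -> int:
    # reference_target: the target computed over the previous
    # 2016-block window. Refuse to move more than 4x either way;
    # clamping the target by 4x is the same as clamping difficulty.
    return max(reference_target // 4, min(new_target, reference_target * 4))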
6  Bitcoin / Development & Technical Discussion / Re: Dynamic Difficulty Adjustments on: July 19, 2010, 06:02:31 PM
after each block:

difficulty+=.001*600-block.genTimeInSeconds

I assume you meant "difficulty += .001 * (600 - block.genTimeInSeconds)" instead?

Thus, the difficulty would adjust dynamically up or down every block, with the magnitude of the adjustments being in proportion to the influx or exodus of computing power during that last block.

Actually, in the current algorithm the magnitude of the adjustments is also proportional to the influx or exodus of computing power; it just happens once every 2016 blocks instead of whenever a block is received.

I think that what you're proposing is for the algorithm to adjust the difficulty continuously rather than periodically.

If so, I think there are better ways.
For one, we shouldn't use floating point to compute the target hash, because rounding differences between architectures could cause some computers to accept a hash that others reject.

Second, I don't understand where the ".001" constant comes from. I'm not convinced that your algorithm would accurately track the difficulty of hash generation... it looks like it would converge to it eventually, but I'm not sure how quickly.

I think that to achieve your goal, it would be much better to simply change the current algorithm to adjust the difficulty at the end of each block rather than at the end of each sequence of 2016 blocks.

In other words, every time we accept a new block, we'd look at the elapsed time of the latest 2016 blocks and calculate what the new target hash should be.
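
A minimal sketch of that per-block, sliding-window recalculation, in integer arithmetic only (the names are illustrative; timestamps would hold the times of the latest 2017 block headers):

Code:
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds
WINDOW = 2016

def next_target(old_target: int, timestamps: list[int]) -> int:
    # Elapsed time across the latest WINDOW block intervals
    actual = timestamps[-1] - timestamps[-(WINDOW + 1)]
    # Keep the 4x-per-window safety clamp discussed earlier
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    # A longer-than-target span raises the target (lowers difficulty),
    # a shorter one lowers it; integer math so every node agrees.
    return old_target * actual // TARGET_TIMESPAN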

This seems feasible and easy enough; the only problem is that it's not a backwards-compatible change, so it can only be deployed once everyone has upgraded to a new version that knows how to do the new calculation.
7  Economy / Economics / Re: Get rid of "difficulty" and maintain a constant rate. on: July 19, 2010, 05:48:33 PM
It would be much more elegant to be able to rely on blocks being generated regularly at 10 minute intervals (or whatever rate is agreed upon). I believe this can be achieved with only a modest increase in bandwidth.

Simply, as the 10 minutes (or whatever) is about to elapse, hash-generating computers broadcast the block they have found with the lowest hash. The other computers briefly stop to check the hash, and they only broadcast their own block if it has an even lower hash. At the 10-minute mark, the lowest-hashed block is adopted to continue the chain.

How do you get thousands of computers to agree on when the 10-minute mark is?

Ideally you want the algorithm to rely on synchronized clocks as little as possible.

Another problem: with this strategy, at every 10-minute mark the network would be swamped with a flood of candidate blocks.