Bitcoin Forum
Author Topic: Hash/sec Throttling for Democracy  (Read 13472 times)
InterArmaEnimSil (OP)
Member
July 13, 2010, 06:23:55 PM
#1

I've seen a number of posts complaining that coin generation on old machines is impractical (actually, the posts say impossible, but that's not correct).  A number of others have espoused the general idea that flops*luck=coins, which seems to me to be about right.  One even advocated for OpenCL/CUDA support, which seems to me like it would give those with OpenCL capable cards an incredible advantage in the "flops" category of flops*luck.  

Now, some have said "If you have no luck, you don't get coins...." but come on here...we're dealing with computers - RNGs have nothing, really, to do with luck.  They operate upon statistical averages.  (If BTC is using a true RNG based upon machine atmospheric noise, I could be wrong here, but I don't know that such a generator would be practical in that it would be too slow).  

Therefore, why not cap the number of hashes per second?  If the operations were capped at, say, 250 khash/sec based upon the system clock and not the available number of cycles, then anyone with the "minimum requirements" could participate in generation at no disadvantage to the guy with the TESLA cluster running CUDA (okay...so people aren't going to use TESLA clusters for this...but you see my point, I hope).  Of course, difficulty would need to be adjusted accordingly to keep block generation on pace, and checks for blocks generated by clients violating the cap (and thus outpacing other clients by cheating) would be required, but these are matters solved with relative ease in the code.
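For illustration only, here is a minimal sketch of what such a wall-clock-based cap might look like. The 250 khash/sec figure and the double-SHA256 header hashing come from the discussion in this thread; the header layout and function names are purely hypothetical. As laszlo points out below, nothing stops a modified client from skipping the sleep, which is why a cap like this cannot be enforced by code alone.

Code:
import hashlib
import time

CAP_HASHES_PER_SEC = 250_000  # the hypothetical 250 khash/sec cap proposed above

def throttled_search(header: bytes, target: int, cap: int = CAP_HASHES_PER_SEC) -> int:
    """Brute-force nonces, but pace attempts against the wall clock so the
    client never exceeds `cap` hashes per second, regardless of CPU speed."""
    nonce = 0
    start = time.monotonic()
    while True:
        candidate = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce                        # found a hash at or below the target
        nonce += 1
        allowed_so_far = (time.monotonic() - start) * cap
        if nonce > allowed_so_far:              # running ahead of the cap: sleep it off
            time.sleep((nonce - allowed_so_far) / cap)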

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
laszlo
Full Member
July 13, 2010, 06:35:38 PM
#2

Couldn't I just run 500 instances of the proposed capped client?  This is more about trading than generating - the generation is just to make it so the supply is limited.

BC: 157fRrqAKrDyGHr1Bx3yDxeMv8Rh45aUet
Xunie
Full Member
July 13, 2010, 08:18:13 PM
#3

Quote from: InterArmaEnimSil
flops*luck=coins, which seems to me to be about right.  One even advocated for OpenCL/CUDA support, which seems to me like it would give those with OpenCL capable cards an incredible advantage in the "flops" category of flops*luck.

Now, some have said "If you have no luck, you don't get coins...." but come on here...we're dealing with computers - RNGs have nothing, really, to do with luck.  They operate upon statistical averages.  (If BTC is using a true RNG based upon machine atmospheric noise, I could be wrong here, but I don't know that such a generator would be practical in that it would be too slow).  

Say I have an RNG that can output 1024 different numbers, and we run it 1024*1024 times.
We keep statistics on how many times it gives us each number between 0 and 1023.
The more we run it, the larger the difference grows between the counts of the number output the least and the number output the most.

And so, I can say with a fair amount of confidence, without looking like a fool, that how RNGs are used in Bitcoin affects the "chance" of generating a block.


Quote from: InterArmaEnimSil
Therefore, why not cap the number of hashes per second?  If the operations were capped at, say, 250 khash/sec based upon the system clock and not the available number of cycles, then anyone with the "minimum requirements" could participate in generation at no disadvantage to the guy with the TESLA cluster running CUDA (okay...so people aren't going to use TESLA clusters for this...but you see my point, I hope).  Of course, difficulty would need to be adjusted accordingly to keep block generation on pace, and checks for blocks generated by clients violating the cap (and thus outpacing other clients by cheating) would be required, but these are matters solved with relative ease in the code.

We can't.
Bitcoin is open source, so anyone can run a modified client and thus remove the cap!


Quote from: InterArmaEnimSil
Now, some have said "If you have no luck, you don't get coins...." but come on here...we're dealing with computers - RNGs have nothing, really, to do with luck.  They operate upon statistical averages.  (If BTC is using a true RNG based upon machine atmospheric noise, I could be wrong here, but I don't know that such a generator would be practical in that it would be too slow).

The Linux kernel RNG is actually seeded by noise gathered from the system.

Ignore this: 734d417914faa443d74e8205f639dfb0f79fdc44988ecae44db31e5636525afe

Caffeinism -- a toxic condition caused by excessive ingestion of coffee and other caffeine-containing beverages.
RHorning
Full Member
July 13, 2010, 10:55:47 PM
#4

Quote from: laszlo
Couldn't I just run 500 instances of the proposed capped client?  This is more about trading than generating - the generation is just to make it so the supply is limited.

The original question still applies here, however, and the fact that people new to the network don't have coins with which to even experiment is an issue; in this case it is a hoarding issue, favoring those who have come earlier.

Is it possible to have a multi-tiered coin generation system, where you would have a cryptographically difficult series that would generate a whole bunch of coins at once when the series is completed (taking a few days, weeks, or even months to complete on average), while those who are more interested in generating coins on a gradual basis could generate, say, one bitcoin or even a fractional amount of a bitcoin every few minutes or hours?

The complaint right now is that, given the current number of users on the network (which has increased substantially since the Slashdot story.... look at the download statistics on SourceForge, with a huge spike in downloads over the last few days), coins simply aren't being generated at all.

BTW, it would be nice to see if some other alternative random number generators could be used, as the current one seems to be a time-randomized linear congruential generator (see http://en.wikipedia.org/wiki/Linear_congruential_generator for details).  Again, feel free to correct me if I'm wrong on this point too.

It might be kind of interesting to experiment with alternate number generators, and it certainly would be healthy for the network if more than one kind of generator were being used.  The nice thing about the LCG as a random number generator is that it can be implemented using only integer arithmetic (though implementations often use floating point as well), and it is generally the most efficient way to get a number in the fewest CPU cycles for common uses (such as a typical video game).  There are some classes of numerical analysis applications where a generator of this nature may not be the best choice.
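For reference, a linear congruential generator of the kind speculated about above is only a couple of lines. The constants below are the common Numerical Recipes choice and are shown purely as an example, not as anything Bitcoin actually uses.

Code:
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2 ** 32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    Integer-only and very cheap per draw, but statistically weak."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
print([next(gen) for _ in range(5)])  # five pseudo-random 32-bit values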
laszlo
Full Member
July 13, 2010, 11:35:05 PM
#5

I'm not sure which random number thing you're referring to, but here is a quick outline of how the generation works:

Prepare a block of memory organized as a struct - certain fields like the hash of the previous block, the current time, and some other housekeeping information are filled in.  The struct has an integer field and another 'expansion' field where the data is free form.  This free-form data is altered (a simple increment of a number) and then the struct is hashed.  The resulting hash is interpreted as a large integer.  If it is equal to or less than the current difficulty target then the proof of work has been found, and this block is added locally and broadcast to other nodes.

This is similar to having a 100-sided die - if you roll below 70 you win.  To make the game harder you have to roll below 50, then 40, etc.  Basically there is no known shortcut to go from a desired hash value back to the original input; we only know how to compute quickly in the other direction, so it is simply a brute-force iterative process.  Statistically, the lower the target value is, the fewer blocks of data will hash below it.  Finding a block that hashes below the difficulty target proves that you worked on the problem, or got lucky and hit it on the first try, but over a long period of time you won't keep getting lucky - you'll have to try lots and lots of different values.
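A minimal Python sketch of the loop described above. The field layout here follows the modern 80-byte block header rather than the exact 2010 struct, and the double-SHA256 plus reversed-byte comparison mirrors Bitcoin's convention, so treat the details as illustrative. The die analogy maps onto the final comparison: the lower the target, the smaller the fraction of hashes that pass.

Code:
import hashlib
import struct
import time

def mine(prev_hash: bytes, merkle_root: bytes, target: int, bits: int = 0x1D00FFFF):
    """Keep bumping the nonce, re-hash the header, and stop when the hash,
    read as a big integer, is at or below the target."""
    nonce = 0
    while True:
        header = struct.pack("<I32s32sIII",                  # version, prev hash, merkle root,
                             1, prev_hash, merkle_root,      # time, difficulty bits, nonce
                             int(time.time()), bits, nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest[::-1], "big") <= target:
            return nonce, digest.hex()
        nonce += 1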

BC: 157fRrqAKrDyGHr1Bx3yDxeMv8Rh45aUet
InterArmaEnimSil (OP)
Member
July 14, 2010, 03:31:23 AM
#6

Quote from: laszlo
Couldn't I just run 500 instances of the proposed capped client?  This is more about trading than generating - the generation is just to make it so the supply is limited.

Maybe, or maybe not.  It would depend upon whether the code did any checking for running clients. The software could easily run some check and prevent a machine from running more than one client.  Of course, there are ways around this, a la VMs, etc, but that's another issue altogether.

Quote from: Xunie
Say I have an RNG that can output 1024 different numbers, and we run it 1024*1024 times.
We keep statistics on how many times it gives us each number between 0 and 1023.
The more we run it, the larger the difference grows between the counts of the number output the least and the number output the most.

And so, I can say with a fair amount of confidence, without looking like a fool, that how RNGs are used in Bitcoin affects the "chance" of generating a block.

Actually, as you increase the number of runs toward infinity, the count for the least-output number and the count for the most-output number tend toward the same proportion of the total.  Specifically, each tends toward RUNS*1/RNG_RANGE.  This was my initial point - there is no luck involved - there's a mechanistic algorithmic process that simulates luck for a small set, but for any large number of runs, or a large number of machines performing runs, you get the result of the above formula.
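A quick simulation of the point being argued, using Python's built-in generator as a stand-in for whatever RNG is under discussion (the 1024 and 1024*1024 figures are the ones from the quoted post):

Code:
import random
from collections import Counter

RNG_RANGE = 1024
RUNS = 1024 * 1024

counts = Counter(random.randrange(RNG_RANGE) for _ in range(RUNS))
least, most = min(counts.values()), max(counts.values())

# Expected count per value is RUNS / RNG_RANGE = 1024.  The raw gap between
# the rarest and the most common value grows slowly (roughly with sqrt(RUNS)),
# but as a *fraction* of the total runs both extremes converge to 1/RNG_RANGE.
print(f"expected per value: {RUNS // RNG_RANGE}")
print(f"least frequent count: {least}, most frequent count: {most}")
print(f"as fractions of RUNS: {least / RUNS:.6f} vs {most / RUNS:.6f}")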

Quote from: Xunie
We can't.
Bitcoin is open source, so anyone can run a modified client and thus remove the cap!
Just because the program is open source doesn't mean that it can't check work for validity.  This comment is akin to saying "Anyone could modify the program to create fifty billion bitcoins, since it's open source."  No.  The other nodes on the network run checks to determine that blocks (I think blocks is the right terminology, I just found bitcoin recently) are legitimate.  Similarly, there are a myriad of ways to sign work to verify that it is originating from an un-hacked client.  This would reject work from clients which had been modified to remove the cap, or to remove the limit of one instance of the program per machine.  Sure, someone is free to modify the source and remove all caps.  However, if they do that, their work would be rejected by the network of legitimate clients, and the "hacked" client would be useless unless someone wanted to start an entire new chain, and thus a competitive currency.

Quote from: Xunie
The Linux kernel RNG is actually seeded by noise gathered from the system.

Yes, /dev/random works this way, but it is blocking and, as I said, very slow.  Run "cat /dev/random" and see how quickly the bytes pour onto your screen - they trickle.  This may or may not be fast enough for Bitcoin - I haven't delved into the source as of yet.  Conversely, /dev/urandom does not work exclusively off of atmospheric noise, and thus is not truly random.  Run "cat /dev/urandom" and you'll be deluged with pseudorandom noise.  /dev/urandom, obviously, is more practical because it doesn't hang the machine waiting for more truly random data - it generates more as needed if there is none left in the kernel noise pool.  Even with fully random clients, however, the outputs tend, statistically, over time, to the RUNS*1/RNG_RANGE formula for each possibility.  Luck applies to each individual chance; statistically, as the number of blocks processed grows, it ceases to be a factor.
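A tiny sketch of the blocking difference described here. Note that on many modern kernels /dev/random rarely blocks anymore, so the dramatic trickle is mostly a 2010-era observation; the 16-byte read size is arbitrary.

Code:
import time

def time_read(path: str, n: int = 16) -> float:
    """Return how long it takes to read n bytes from a kernel RNG device."""
    start = time.monotonic()
    with open(path, "rb") as f:
        f.read(n)
    return time.monotonic() - start

# /dev/random may block while the kernel gathers entropy; /dev/urandom
# never blocks and just stretches the entropy pool with a PRNG.
for device in ("/dev/random", "/dev/urandom"):
    print(device, f"{time_read(device):.4f} s for 16 bytes")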

My key point, or key question....Do those with faster computers have an advantage, or do they not?  In the general forums people claim that the "luck" factor makes up for this...but statistics beg to differ.  Granted, and I do agree, the point is trading, not generation, but essentially with generation we're handing out free (though inflating) money.  I would like to think that the process is equitable and not based upon one's ability to afford, in USD, a beast of a machine.  This would loosely tie the initial distribution of BTC to the present distribution of USD of those willing to contribute...and seems to undermine the currency philosophically, apart from being unfair.

Thanks for all the hearty discussion on the matter, everyone.

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
knightmb
Sr. Member
July 14, 2010, 04:00:17 AM
#7

Quote from: InterArmaEnimSil
My key point, or key question....Do those with faster computers have an advantage, or do they not?  In the general forums people claim that the "luck" factor makes up for this...but statistics beg to differ.  Granted, and I do agree, the point is trading, not generation, but essentially with generation we're handing out free (though inflating) money.  I would like to think that the process is equitable and not based upon one's ability to afford, in USD, a beast of a machine.  This would loosely tie the initial distribution of BTC to the present distribution of USD of those willing to contribute...and seems to undermine the currency philosophically, apart from being unfair.

Thanks for all the hearty discussion on the matter, everyone.

Yes, they do. If the old PC can only generate 1 block in 48 hours and the super fast modern PC can generate 1 block in 12 hours, then statistically, the old PC is 4 times as slow as the modern PC during coin generation. The difference is the luck factor. It's possible that the old PC will find a crypto solution by pure luck in less time. So it's possible that the old PC finds one in 2 hours while the super fast PC has to burn through the entire brute force before finding a block in 12 hours. So the advantage is the odds. Simplified of course, but yes, the faster PC gives you a 1 in 12 chance to get the solution and the old PC gives you a 1 in 48 chance (again, way over simplified example)

So the faster PC is like having a few extra lottery tickets  Wink
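To put rough numbers on the lottery analogy: the 48-hour and 12-hour averages are the hypothetical figures from this post, and block finding is treated as a memoryless process, so the waiting time is approximately exponential.

Code:
import math

def p_block_within(hours: float, mean_hours: float) -> float:
    """Probability of finding at least one block within `hours`, given an
    average of one block per `mean_hours` (exponential waiting time)."""
    return 1 - math.exp(-hours / mean_hours)

for name, mean in (("old PC (1 block / 48 h)", 48), ("fast PC (1 block / 12 h)", 12)):
    print(f"{name}: {p_block_within(2, mean):.1%} chance of a block in the next 2 hours")

So the old PC can indeed get lucky within 2 hours; it just happens roughly a quarter as often as it does for the fast one.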

Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
InterArmaEnimSil (OP)
Member
July 14, 2010, 07:59:45 AM
#8

Quote from: knightmb
Yes, they do. If the old PC can only generate 1 block in 48 hours and the super fast modern PC can generate 1 block in 12 hours, then statistically, the old PC is 4 times as slow as the modern PC during coin generation. The difference is the luck factor. It's possible that the old PC will find a crypto solution by pure luck in less time. So it's possible that the old PC finds one in 2 hours while the super fast PC has to burn through the entire brute force before finding a block in 12 hours. So the advantage is the odds. Simplified of course, but yes, the faster PC gives you a 1 in 12 chance to get the solution and the old PC gives you a 1 in 48 chance (again, way over simplified example)

So the faster PC is like having a few extra lottery tickets  Wink

Given our sample size of 67082 runs, I think that luck is ironed out by statistics at this point.  Finding a crypto solution by "luck" would imply that the machines began the calculations at a different starting point in the instruction sequence, wouldn't it?  However, if my machine does, say, 100 MFLOPS, and begins a 10-trillion-operation procedure to solve a block, and another machine with 1 GFLOPS begins the same procedure at ANY time when I'm not 90% done with the process, then the faster machine will always finish first, will it not?  These are deterministic machines working on deterministic problems, with finite starting and ending points and X number of steps in between....I fail to see where luck comes into play.

On the other hand, as I have a rather slow (but not ancient) laptop, I'd LIKE to see how luck comes into the equation...so if anyone can fill me in, please do so.

One question...Do machines cease work on a block upon discovery that it has been finished first by someone else, or does everyone keep working on a block until he/she is done?  In this case, it is conceivable that if I happen to start the next block at the right time, I could "by luck" finish first...on the other hand, if machines all discard their current block when it is finished by someone else, then everyone is beginning the blocks at roughly the same time, and the fastest machines will win almost universally.  Anyone care to shed any clarity on the situation?

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
Insti
Sr. Member
July 14, 2010, 10:02:54 AM
#9

Quote from: InterArmaEnimSil
Given our sample size of 67082 runs, I think that luck is ironed out by statistics at this point.  Finding a crypto solution by "luck" would imply that the machines began the calculations at a different starting point in the instruction sequence, wouldn't it?  However, if my machine does, say, 100 MFLOPS, and begins a 10-trillion-operation procedure to solve a block, and another machine with 1 GFLOPS begins the same procedure at ANY time when I'm not 90% done with the process, then the faster machine will always finish first, will it not?  These are deterministic machines working on deterministic problems, with finite starting and ending points and X number of steps in between....I fail to see where luck comes into play.

Everybody is working on a different problem.

I am trying to find a hash for a block which includes a transaction paying ME 50 BTC, you are trying to find the hash for a block which includes a transaction paying YOU 50BTC. These blocks will be different, so the nonce value required to hash them below the target value will be different.

Quote from: InterArmaEnimSil
On the other hand, as I have a rather slow (but not ancient) laptop, I'd LIKE to see how luck comes into the equation...so if anyone can fill me in, please do so.

I have an old, slow machine that is only getting about 80 khash/s, and I managed to generate a block when the difficulty was 23.  That seems pretty lucky to me.

Quote from: InterArmaEnimSil
One question...Do machines cease work on a block upon discovery that it has been finished first by someone else, or does everyone keep working on a block until he/she is done?  In this case, it is conceivable that if I happen to start the next block at the right time, I could "by luck" finish first...on the other hand, if machines all discard their current block when it is finished by someone else, then everyone is beginning the blocks at roughly the same time, and the fastest machines will win almost universally.  Anyone care to shed any clarity on the situation?

Once a valid block comes through, everyone stops working on the one they were working on and starts working on the next block.
It doesn't really matter if you start a few seconds before anyone else, because you're working on a different problem as I explained above.


knightmb
Sr. Member
July 14, 2010, 03:02:09 PM
#10

Quote from: InterArmaEnimSil
On the other hand, as I have a rather slow (but not ancient) laptop, I'd LIKE to see how luck comes into the equation...so if anyone can fill me in, please do so.

One question...Do machines cease work on a block upon discovery that it has been finished first by someone else, or does everyone keep working on a block until he/she is done?  In this case, it is conceivable that if I happen to start the next block at the right time, I could "by luck" finish first...on the other hand, if machines all discard their current block when it is finished by someone else, then everyone is beginning the blocks at roughly the same time, and the fastest machines will win almost universally.  Anyone care to shed any clarity on the situation?
From what I've read, when a new block is discovered, the other machines take a few milliseconds to verify whether it's real, because, say, two people's computers might discover a solution within a second of each other.  How do we know who the winner is?  Well, again, more luck.  Other people's computers have to verify that your PC wasn't cheating or in error for the block.  After a hundred or more confirmations, it's accepted by the "group" that your PC found the winning block, and everyone assigns the credit to your PC.  So when two computers find the block at the same time, it comes down to luck again: how fast your find is accepted by the group depends on how many other people's computers verify it.

The chain is updated, and if your neighbor's computer was working on block 66777, it hears the news that you found the winning block and says "well, time to start on the next block" and begins the calculations over.  Now your computer and your neighbor's computer are working on block 66778, but each is doing its own unique brute force.  They aren't both working on the same brute force for the same block; otherwise, the fastest computer would always win over and over.  The faster computer does have an advantage in that it can brute force the next find faster than the slow guy, but the slow guy still has luck on his side.

Now imagine this with thousands of PCs out there playing the lotto every second. Just because you have a quad-core system churning 2,400 khash/s doesn't mean you are going to win every time. It makes your odds better of course, but by no means is it a sure thing just because of the raw CPU power you have. Ask the people at this forum who are running this on a server farm (like myself) and you'll see. I'm churning 10,000 khash/s on one of my systems (6 cpu) and it hasn't won a single block yet  Wink  I'm not bitter though, it proves to me that the system is being fair in the block generation, so I'm happy actually.

Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
Strofcon
Newbie
July 14, 2010, 04:02:05 PM
#11

Quote from: knightmb
Now imagine this with thousands of PCs out there playing the lotto every second. Just because you have a quad-core system churning 2,400 khash/s doesn't mean you are going to win every time. It makes your odds better of course, but by no means is it a sure thing just because of the raw CPU power you have. Ask the people at this forum who are running this on a server farm (like myself) and you'll see. I'm churning 10,000 khash/s on one of my systems (6 cpu) and it hasn't won a single block yet  Wink  I'm not bitter though, it proves to me that the system is being fair in the block generation, so I'm happy actually.

This is a good point. My weaker PC is barely squeaking out 1,000 khash/s on two cores, and has netted me 100 coins in less than 48 hours. My server, on the other hand, is hitting closer to 1,200 khash/s on two cores (not a huge bump, but still more), has been running for about 12 hours longer than the other, and has netted me 0 coins. So, like knightmb said... there's hope still. Smiley
InterArmaEnimSil (OP)
Member
July 14, 2010, 07:07:03 PM
#12

Okay - if everyone's machines are working on a hash to a different problem, then I can see how luck would be a factor.  However, what is the source of the variation between the problem my machine is working on, the problem yours is, etc?  One reply earlier seemed to imply that it had to do with transactions in which the individual recently took part....but what about those of us not taking part in any recent transactions? (My most recent one is at least two days old now)

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
knightmb
Sr. Member
July 14, 2010, 07:17:43 PM
#13

Quote from: InterArmaEnimSil
Okay - if everyone's machines are working on a hash to a different problem, then I can see how luck would be a factor.  However, what is the source of the variation between the problem my machine is working on, the problem yours is, etc?  One reply earlier seemed to imply that it had to do with transactions in which the individual recently took part....but what about those of us not taking part in any recent transactions? (My most recent one is at least two days old now)
In terms of coin generation, you need only be connected to the network.  That's all that is needed for your computer to broadcast an "I found it!" message and for other computers to check whether it's valid.  Since blocks are being generated on a constant basis, doing offline coin generation won't be practical.  If your computer found a block 2 days ago, but so did mine and my PC was online the whole time, mine will be proclaimed the winner/owner of that coin by the network as a whole.  Your PC comes in 2 days later and broadcasts that it has the same solution, and the other computers will just sneer "too late, XYZ already solved it, better luck next time".

In terms of variation between problems, when a block is found, everyone starts on the next block. So if your computer was only 1% towards solving block 68000 and got the message "XYZ solved the block 68000 just a few minutes ago", your PC thinks "well, on to the next one". It doesn't waste CPU trying to solve a block that was already solved by someone else. That's where the verification part comes in.  Otherwise, someone could just hack together a client that broadcasts "I solved blocks 68000, 68001, 68002, etc." to claim ownership of the entire range.  When a client says it solved a block, all the other computers say "ok, well, prove it then, send me your results".  When enough of them talk to each other about it, they will agree that "yes, your PC solved block 68000, you are the new owner, congratulations".

The key part is that it takes hours/days for our PCs to solve a block, but only milliseconds for everyone else to check if it's true. That prevents a "fake block found" attack from happening on the network.
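A sketch of why checking is so cheap: a claimed solution is verified with one double-SHA256 and a comparison. Real nodes of course check far more (transactions, timestamps, the chain it builds on), so treat this as the bare proof-of-work check only.

Code:
import hashlib

def proof_of_work_ok(header: bytes, target: int) -> bool:
    """One double-SHA256 plus an integer comparison: milliseconds of work,
    versus the hours or days of brute force needed to *find* such a header."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest[::-1], "big") <= target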

Another example: you have a room with hundreds of people in it.  Everyone is given a randomly mixed-up Rubik's cube to solve.  The first one to solve his/her cube gets 50 coins.

Now if someone shouts "I solved it", it won't take but a mere glance from the surrounding people to tell whether it's true (solid colors on all sides).  If someone shouts "I solved it" and it's still a jumbled mess, well, everyone just ignores that person and continues on.  The first person who solves it for "real" wins the prize, and then everyone throws away their current Rubik's cube and a bunch more randomly mixed Rubik's cubes drop from the ceiling to start the process all over again.

Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
InterArmaEnimSil (OP)
Member
July 14, 2010, 07:30:00 PM
#14

Quote from: knightmb
In terms of variation between problems, when a block is found, everyone starts on the next block. So if your computer was only 1% towards solving block 68000 and got the message "XYZ solved the block 68000 just a few minutes ago", your PC thinks "well, on to the next one". It doesn't waste CPU trying to solve a block that was already solved by someone else.
But the problem for solving block X is the same across all the computers, right?  So they have a starting point, an algorithm for finding the hash, and a processor.  Each starts churning away at the instruction sequence...fastest one wins.  I still don't see the luck.


Quote from: knightmb
...randomly mixed Rubik's cubes drop from the ceiling to start the process all over again.

So the starting point for the crypto hash problems is different for the various nodes?  This would make sense - if a slow machine happened to get a block that needed only 100,000 steps to solve, but a fast machine happened to get a block that needed 10 trillion steps, the slow machine might "win."  My question remains, though.  What is the source of the variation between starting points?  How are the differences created?  Obviously, if every machine starts in the same place each time, the fastest machine always wins...so from where do the different starting points come?

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK
laszlo
Full Member
July 14, 2010, 07:33:51 PM
#15

Aside from the housekeeping fields needed to make sense of the data, the rest of the data being hashed is just random.  Everyone's is different, and you never get any closer to solving it - every time it is twiddled and re-hashed you have the same chance of finding a solution.  This is just like buying raffle tickets: everyone's numbers are different and they could all be winners, but the guy who buys more raffle tickets will win more often if the process is repeated over and over.  Computers that can try hashes faster have more raffle tickets, but every hash calculation has the same chance of being a winner.

BC: 157fRrqAKrDyGHr1Bx3yDxeMv8Rh45aUet
Strofcon
Newbie
July 14, 2010, 07:50:02 PM
#16

Take this lightly until confirmed, but here's my understanding...

There is no variation in the problem itself - every node is intended to work on the same block at the same time (accounting for latencies and such). The luck factor is really the random number generated at the beginning of each node's attempt to solve a new block. When a new block needs to be solved, each node generates a random value (nonce), which is used to hash the block. If that hash isn't the right one, the nonce is incremented, and the new incremented value is used to hash the block again.

Say my clunker manages 1,000 khash/s (which it really does...  Sad), and you have a cluster that cranks out 100,000 khash/s.  There's still a reasonable chance that my clunker will randomly (and very luckily) land on the value that solves the block within a very small number of hashes... say my nonce is a winner after only 10 hashes.  I'm working through 1,000,000 hashes per second, so it only took me 1/100,000 of a second to solve the block.  Your cluster would have to (again, luckily) generate the right nonce in less than 0.00001 seconds to beat my lucky guess... which means your cluster would have to guess correctly within 100,000,000 hash/s * 0.00001 s = 1,000 hashes.  Given the huge number of possible hashes, the likelihood of you hitting it in under 1,000 is remarkably low...

Granted, my chances of hitting it in under 10 hashes were even more insanely low, but you get the idea, I think. So yes, the cluster will, overall, solve more blocks than my clunker, but it won't win out every single time.

Now that I've gone through all that... I'm sure someone will point out a flaw in my reasoning! Smiley I'm fine with that though, I want to make sure I understand it all correctly!

Edit - Laszlo said it much more concisely, but I think we made the same point...? Hopefully!
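A small simulation of the race described above. The 1,000 khash/s and 100,000 khash/s rates are the ones from the post; the per-hash success probability is an arbitrary illustrative value. Since each hash is an independent trial, the waiting time to a winning hash is approximately exponential, and the slow machine should win a share of blocks roughly equal to its share of the total hash rate, about 1% here.

Code:
import random

def race(slow_rate=1_000_000, fast_rate=100_000_000, p_hit=1e-9, trials=100_000):
    """Monte Carlo: draw each machine's time-to-solution from an exponential
    distribution with rate (hashes per second * per-hash success probability)
    and count how often the slow machine finishes first."""
    slow_wins = 0
    for _ in range(trials):
        t_slow = random.expovariate(slow_rate * p_hit)
        t_fast = random.expovariate(fast_rate * p_hit)
        if t_slow < t_fast:
            slow_wins += 1
    return slow_wins / trials

print(f"slow machine wins about {race():.1%} of the time")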
theymos
Administrator
July 14, 2010, 07:58:25 PM
#17

In addition to the random nonce, each block also contains a BitCoin address (newly-generated, used only for this purpose) that the 50 BC reward is credited to if you solve a block.  Even if two nodes choose the same random nonce to start at (which is unlikely), they're pretty much guaranteed to have different BitCoin addresses.

1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
satoshi
Founder
July 14, 2010, 08:25:06 PM
#18

Quote from: knightmb
So if your computer was only 1% towards solving block 68000
This is a common point of confusion.  There's no such thing as being 1% towards solving a block.  You don't make progress towards solving it.  After working on it for 24 hours, your chances of solving it are equal to what your chances were at the start or at any moment.

It's like trying to flip 37 coins at once and have them all come up heads.  Each time you try, your chances of success are the same.

The RNG is the OpenSSL secure random number generator.  On Windows it's seeded with the complete set of all hardware performance counters since your computer started; on Linux it's /dev/random.
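A short illustration of the memoryless property described above, using the 37-coin analogy as the per-attempt probability and the 250 khash/sec figure floated earlier in the thread purely as an example rate (neither number is the actual 2010 difficulty or a recommended setting):

Code:
# Each attempt succeeds with probability p = 2**-37 (all 37 coins coming up
# heads), independently of every earlier attempt, so hours of prior work do
# not change the chance that the *next* hash wins.
p = 2.0 ** -37
attempts_per_day = 250_000 * 24 * 3600        # hypothetical 250 khash/sec for a day

p_next_attempt = p                            # unchanged no matter how long you've worked
p_some_block_today = 1 - (1 - p) ** attempts_per_day

print(f"chance the next single hash wins: {p_next_attempt:.3e}")
print(f"chance of at least one winning hash in a day: {p_some_block_today:.1%}")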
Insti
Sr. Member
July 14, 2010, 08:39:35 PM
#19

Quote from: Strofcon
Take this lightly until confirmed, but here's my understanding...

There is no variation in the problem itself - every node is intended to work on the same block at the same time (accounting for latencies and such). The luck factor is really the random number generated at the beginning of each node's attempt to solve a new block. When a new block needs to be solved, each node generates a random value (nonce), which is used to hash the block. If that hash isn't the right one, the nonce is incremented, and the new incremented value is used to hash the block again.

You are right about the nonce and the hashing, but...

Everybody is working on a different block.

To pay the block creator 50 BTC, you need to know the Bitcoin address it goes to.
For the block creator to be able to spend the 50 BTC they create, they need to have the private key associated with the Bitcoin address the 50 was paid to.

Since everybody has a different (randomly generated) private key, everyone has a different Bitcoin address (the hash of the associated public key).

Part of what makes up the block is the hash of the transaction that pays the 50 BTC to the block creator's Bitcoin address.

This means that everybody has a different block.
 
Whoever solves their block first 'wins' and that block is acclaimed by all as the 'next' block.
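A toy sketch of this point: because each miner's candidate block pays the reward to its own address, the data being hashed differs from the very first transaction onward. The "addresses" below are obviously placeholders, and a real merkle root is built over all transactions, not just the coinbase.

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Each miner pays the 50 BTC reward to its own address, so the coinbase
# transaction - and hence the merkle root and the header being hashed -
# is different for every miner before a single nonce is tried.
for address in (b"miner_A_reward_address", b"miner_B_reward_address"):
    coinbase = b"coinbase paying 50 BTC to " + address   # toy stand-in for a real transaction
    merkle_root = double_sha256(coinbase)                # merkle root of a single-tx block
    print(address.decode(), "->", merkle_root.hex()[:16] + "...")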


InterArmaEnimSil (OP)
Member
July 14, 2010, 08:45:10 PM
#20

nonce + your_address + garbage data = randomness that varies from client to client.

Got it.  Thank you, everyone.  My confidence in the system is restored.

12aro27eH2SbM1N1XT4kgfsx89VkDf2rYK