Bitcoin Forum
Author Topic: Isn't this a massive vulnerability built into the system?  (Read 1850 times)
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 15, 2011, 08:04:46 AM
 #1

I couldn't get past the whole "there is no progress" thing, because everything to do with hashing I've ever heard about obviously does involve progress.  Like brute-forcing by generating hashes to compare against the one you want to crack: you're making progress by ruling out the ones that didn't match.  So why doesn't bitcoin mining make progress?

I thought about it and came to this hopefully incorrect conclusion, because otherwise everyone's gonna be pretty pissed off at me :P

The only thing that would make that statement about "there is no progress" true is if people's miners didn't remember what they already tried, so they could theoretically hash the same value twice or more.  Here's my poorly arranged proof-ish thing :P  Note: all the numeric values are completely made up.
-----------------------------------------------------------------------------------
The rate at which hashes are calculated by my miner is extremely steady, so the size of the values being hashed must be static, right?  I think that's how it works.  Like if it was some 16-bit values being hashed and then some 256-bit ones after that, the hash rate would be erratic cuz it takes longer to hash larger data, right?  So I assume it's feeding in values to be hashed that are always the same bit-size.

Variable time!
The size of each value to be hashed by my GPU is A bits.  So like A = 256 bits for example.

There are X total possible hashes based on the size of A (like if A is 64 bits of data, there are like 100 billion possible hash results with a character set of 52 or whatever, assuming it's string data, which it apparently is since the first block was based on text from a news story, and strings and characters are really binary data anyway).

So back on the network, it decides that difficulty level 0.23934 means all hashes of 00000xxxxxxxxxxx and below would complete the block.  That results in a "low enough" range of hashes containing Y hash values that will complete the block.  Let's say, given the difficulty rating of whatever, 12,345 out of a possible 100 billion hashes are "low enough", so Y = 12,345.

Your miner takes data from the last block, adds a random value resulting in a static total value of size A bits, then at any given point in time has tried Z hashes so far.  So at 40.0 MH/s for 5 seconds, you've tried 200 million hashes, so Z = 200 million.  Let's say all the hashes so far were found to be outside the "low enough" range.

So while mining, at any given point, there are only X minus Z hash values left to check (total hashes possible minus the hashes you already tried), because your client knows the ones it already tried aren't inside the range.

So in that case you would be "making progress."

But...everyone keeps saying there is no progress and you're not making any progress.

That would be true under one single condition: your client tries the same values multiple times because it's picking them completely at random, with no memory of past tries.  As far as I understand, that's how it works, right?

Which brings me to my point about the vulnerability.  I think you're thinking "He's gonna say, 'Can't someone rig a client to remember the values it already tried?'" but no, that'd be gigabytes of data and nobody has the RAM for that.  You don't have to "remember" the values, you just have to not try them twice... like trying them sequentially, for example.

What's to stop someone from writing a rigged mining client with the random number generation function replaced with code to try sequential values so it never tries the same value twice?  Like "value += 1" for example.  Instead of hashing values based on random numbers like everyone else, they're strategically trying a sequence so none get repeated which means they hit a hash value inside the "low enough" range a hell of a lot quicker than everyone else.

That would give them such an enormous advantage over everyone in the long term.  They'd get an abnormally high bitcoin share and cause blocks to be created more quickly than the system predicted random chance would allow, and who knows what the effect of that would be.  So hopefully this works a little differently than I'm imagining, or we've got a problem here.
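To make that "low enough" range concrete, here's a toy Python sketch of the rule I'm imagining; the double SHA-256 part is how Bitcoin actually hashes, but the target value and header layout here are as made-up as my other numbers :P

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes block headers with double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy target: any hash that, read as a number, falls below this is
# "low enough" to complete the block. Real targets are derived from
# the difficulty; this one is invented purely for illustration.
TOY_TARGET = 1 << 230

def is_low_enough(header: bytes) -> bool:
    return int.from_bytes(sha256d(header), "big") < TOY_TARGET

print(is_low_enough(b"toy block data" + (42).to_bytes(4, "little")))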
tysat
Legendary
*
Offline Offline

Activity: 966
Merit: 1004


Keep it real


View Profile
July 15, 2011, 08:13:15 AM
 #2

I'll admit I don't know exactly how mining works... but I would assume you're wrong, otherwise everyone who wrote miners would have done it differently.  I haven't looked at any source for the miners, but I'm pretty sure they wouldn't rehash the same stuff over and over, because that would be a huge waste of time.
wumpus
Hero Member
*****
Offline Offline

Activity: 812
Merit: 1022

No Maps for These Territories


View Profile
July 15, 2011, 08:13:59 AM
 #3

Quote from: Desolator
What's to stop someone from writing a rigged mining client with the random number generation function replaced with code to try sequential values so it never tries the same value twice?  Like "value += 1" for example.
This is already how miners work. They try all 2^32 values for the last bits of the nonce in sequence or in parallel (GPU).

But as everyone is working with different block contents, there is no 'race'. Every miner is exploring a different space, and there are many 'right' answers, as there are many inputs that produce a hash value lower than the threshold.

So it doesn't matter at all what your iteration algorithm does. You cannot cheat this way.
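A rough sketch of that sequential scan in Python (illustrative only; the real header is a fixed 80-byte layout and real miners are optimized C/OpenCL, so don't expect this loop to be fast):

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def scan_nonces(header_prefix: bytes, target: int):
    # Walk the 32-bit nonce space in order: every value is tried exactly
    # once, with no memory needed beyond the counter itself.
    for nonce in range(2**32):
        h = sha256d(header_prefix + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce      # found a valid block for this header
    return None               # exhausted: rebuild the block contents and retry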



Bitcoin Core developer [PGP] Warning: For most, coin loss is a larger risk than coin theft. A disk can die any time. Regularly back up your wallet through File -> Backup Wallet to an external storage or the (encrypted!) cloud. Use a separate offline wallet for storing larger amounts.
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 15, 2011, 08:51:30 AM
 #4

I thought a "nonce" was by definition a number generated at random and used once.  So it's used and forgotten about, so you might try it twice and not know it.  Everyone seemed to imply random numbers were being used to generate them, but if you say they're sequential or parallellaly logically processed in an overall non-repeating way (oh yeah, I made a word :P), I'll take your word for it :P

Also GO PACKERS.

But also, how is everyone working with different block contents?  You mean previous block contents or current block contents?  It has to be based on the block before it, so does everyone just grab a random partial piece of the last block and not all of it, and then when it claims to have a legit block, it tells the verification system what specific chunk it used?  Or is the difference the transactions they're processing, where the transactions are included in the about-to-be-hashed chunk of data and each transaction only gets grabbed by one pool, so each pool's attempted block is different?
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 15, 2011, 09:02:56 AM
 #5

Wait wait wait a minute.  If they're using non-repeating values of any sort, then... how are clients not "making progress" towards a low enough hash value?  If they're leaving behind non-matching hashes that they're not going to try again, obviously they are making progress, because there are fewer total possible hashes left to try.
tysat
Legendary
*
Offline Offline

Activity: 966
Merit: 1004


Keep it real


View Profile
July 15, 2011, 09:03:15 AM
 #6

It's the current block contents that differ (the transactions being included); not everyone will have the exact same thing, but most will be pretty similar.

Your best bet is to look at source code for some of the major miners, that way you can figure out how they're generating hashes/nonces/etc.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 15, 2011, 09:34:56 AM
 #7

Your mistake is vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But those are implementation details.
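(You can sanity-check that figure with the standard birthday approximation, p ≈ n²/2N; the script below is just that arithmetic:)

Code:
from math import log10

n = 10**18   # a quintillion randomly chosen headers
N = 2**288   # freely choosable header space (Merkle root + nonce)

p = n**2 / (2 * N)   # birthday bound, valid when n << sqrt(N)
print(f"collision probability ~ 10^{log10(p):.0f}")   # about 10^-51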

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
tysat
Legendary
*
Offline Offline

Activity: 966
Merit: 1004


Keep it real


View Profile
July 15, 2011, 09:36:48 AM
 #8

Quote from: Meni Rosenfeld
Your mistake is vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But those are implementation details.

Thanks dude with more technical bitcoin knowledge than me!

Helped me understand the inner workings of bitcoin more.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 15, 2011, 09:38:32 AM
Last edit: July 15, 2011, 09:57:16 AM by Meni Rosenfeld
 #9

Quote from: tysat
Quote from: Meni Rosenfeld
Your mistake is vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But those are implementation details.

Thanks dude with more technical bitcoin knowledge than me!

Helped me understand the inner workings of bitcoin more.
Oh, I don't know anything myself, I just quote what I read here :)

Quote from: Desolator
Wait wait wait a minute.  If they're using non-repeating values of any sort, then... how are clients not "making progress" towards a low enough hash value?  If they're leaving behind non-matching hashes that they're not going to try again, obviously they are making progress, because there are fewer total possible hashes left to try.
Implementation details. If the nonce were longer (say 128 bits), you could just pick nonces randomly. If the Merkle root were easy to change, you could just use a different random Merkle root and nonce for every hash. But a compromise was reached with a short nonce which is checked sequentially, followed by a random change in the Merkle root. There's "no progress" in the sense that if you hashed for X minutes and didn't find a block, you aren't any closer to finding one than you were in the beginning. (That you don't try the same header twice only means that your progress isn't negative.)

Quote from: Desolator
But also, how is everyone working with different block contents?  You mean previous block contents or current block contents?  It has to be based on the block before it, so does everyone just grab a random partial piece of the last block and not all of it, and then when it claims to have a legit block, it tells the verification system what specific chunk it used?  Or is the difference the transactions they're processing, where the transactions are included in the about-to-be-hashed chunk of data and each transaction only gets grabbed by one pool, so each pool's attempted block is different?
You say you read the documentation, yet you keep asking these very basic questions.
This page details the contents of a block header. One of the fields is the hash of the previous block; this much doesn't change between different miners at the same time. The main thing that changes is the Merkle root, a hash of a data structure of all of the transactions to be included in the block. Assuming everyone knows about and is willing to include all transactions, most of the data doesn't change between different miners. What does change is the generation transaction: everyone uses an address of his own in it. Also, there's an "extra nonce" in this transaction which can be chosen freely. Hash functions being what they are, every such change alters the Merkle root in ways you can't imagine. Thus, the Merkle root can be for all purposes chosen randomly.
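To see the effect, here's a toy sketch: a two-leaf Merkle tree (not the real serialization) where bumping a hypothetical "extra nonce" inside the generation transaction re-randomizes the root:

Code:
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    # Toy Merkle fold: pair up hashes and re-hash until one root remains.
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last hash if odd
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

normal_tx = sha256d(b"some ordinary transaction")
for extra_nonce in (0, 1):
    coinbase = sha256d(b"generation tx, extra nonce: "
                       + extra_nonce.to_bytes(8, "little"))
    print(extra_nonce, merkle_root([coinbase, normal_tx]).hex())
# The two roots share no visible pattern: one extra-nonce increment
# re-randomizes the whole header for another full nonce scan.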

This will all become so much clearer to you if you hang around http://blockexplorer.com/ for a while.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 15, 2011, 03:29:38 PM
 #10

Well now that makes a lot more sense :P  Btw, I somehow missed that wiki page; I mostly read the FAQ and some technical explanations on, I think, wikipedia.  But since I didn't know the size of the data involved, I assumed that since a block can be solved in about 10 minutes by a pool instead of like 100 years, trying random numbers over again in that relatively short period of time, when you consider all the computers involved, would be inefficient as a whole, with a huge percentage of repeats.

But I would think everyone would use an incrementing function instead of a random one, cuz there's got to be many times more math involved in generating a random number than just adding 1 to the last number.  I mean, it like reads a value off the clock and does some whole big equation thing and then splits it out based on the range you're looking for and then hashes it.  That would throw my hash rate out the window, not to mention I doubt the clock's value interval would be small enough to actually change between hashes at, let's say, 250 million hash calculations per second.  I dunno, maybe it's based on 1 billionth of a second.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 15, 2011, 03:57:52 PM
 #11

Quote from: Desolator
But I would think everyone would use an incrementing function instead of a random one, cuz there's got to be many times more math involved in generating a random number than just adding 1 to the last number.  I mean, it like reads a value off the clock and does some whole big equation thing and then splits it out based on the range you're looking for and then hashes it.  That would throw my hash rate out the window, not to mention I doubt the clock's value interval would be small enough to actually change between hashes at, let's say, 250 million hash calculations per second.  I dunno, maybe it's based on 1 billionth of a second.
That's true, generating a new pseudorandom number for every hash wouldn't be very efficient. But the clock granularity has little to do with it; it's just used as a seed. Once you have the seed you can generate as many pseudorandom numbers in a sequence as you want.
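For illustration, with Python's standard library (not miner code): the clock is read exactly once, as the seed, and every number after that is pure arithmetic:

Code:
import random
import time

rng = random.Random(time.time_ns())   # clock touched exactly once, as the seed
# Each subsequent number costs only a little integer math, so clock
# granularity never limits how fast candidates can be produced:
print([rng.getrandbits(32) for _ in range(5)])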

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
bitcrazy
Newbie
*
Offline Offline

Activity: 6
Merit: 0



View Profile
July 15, 2011, 11:50:51 PM
 #12

I heard that the maximum number of bitcoins will be 21 million. So what is the incentive to keep mining after this amount has been reached?
randomguy7
Hero Member
*****
Offline Offline

Activity: 527
Merit: 500


View Profile
July 15, 2011, 11:58:55 PM
 #13

Transaction fees.
nafai
Member
**
Offline Offline

Activity: 112
Merit: 10



View Profile
July 16, 2011, 03:45:55 AM
 #14

When they say you don't "make progress", it means that the rules of probability have no memory when it comes to independent events. You can buy 100 scratch off tickets, scratch them off and lose 100 times, and the 101st ticket still has the same odds as before you scratched any. It doesn't matter if you keep going 24/7 or take breaks and stop then restart. The odds are the same for each ticket (hash) no matter what you've already calculated in the past (assuming you aren't duplicating work, which they don't).

The odds of a coin coming up tails are 0.5 or 50%. But if you flip a coin and it comes up heads 100 times in a row, what are the odds that the next time it'll come up tails?  Yep, still 50%.  You weren't "making progress" towards successfully getting a tails by doing a bunch of failed attempts. The attempts are independent, probabilistically, from each other, regardless of success or failure.
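You can simulate this memorylessness directly; with a per-try success chance p, 100 prior failures leave the next try at exactly p (toy numbers below):

Code:
import random

p = 1e-3            # chance that any single independent try succeeds
hits = given = 0
for _ in range(200_000):
    if all(random.random() >= p for _ in range(100)):   # 100 failures first...
        given += 1
        hits += random.random() < p                     # ...then the 101st try
print(hits / given)   # ~0.001: the failed attempts bought no progress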

1HQiS9PLHPcoQMgN8ZdcGwhoMHWh2Hp37p
macharborguy
Newbie
*
Offline Offline

Activity: 42
Merit: 0


View Profile
July 16, 2011, 04:09:48 AM
 #15

I just posted a "simple question" thread and then read this one.

Keep in mind that the Nonce is not the only changing value that is hashed within the attempts to create the next block.  The "timestamp" also changes, presumably every second in the case of a UNIX timestamp.

Let's assume we have a mining rig that can check 1 gigahash per second.  Within that 1 second, all of those hashes are tested with the exact same timestamp value as part of their computation.  The next second, the next gigahash of attempts is tested with a +1 second change to the timestamp, and we all know that even a small difference like that, between a 10 and an 11, can have vastly different outcomes in the resulting hash.  Example:

the MD5 hash value of '10' is : d3d9446802a44259755d38e6d163e820
the MD5 hash value of '11' is : 6512bd43d9caa6e02c990b0a82652dca

the SHA-256 hash value of '10' is : 4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5
the SHA-256 hash value of '11' is : 4fc82b26aecb47d2868c4efbe3581732a3e7cbcc6c2efb32062c08170a05eeb8

And this is just hashing two numbers, one of which, numerically and Spinal Tappingly speaking, is 1 louder than the other.  The actual block contains far more information than a simple 10 or 11.
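(If you want to reproduce those digests or poke at the avalanche effect yourself, a few lines of Python's hashlib will do it:)

Code:
import hashlib

for s in ("10", "11"):
    print("MD5    ", s, hashlib.md5(s.encode()).hexdigest())
    print("SHA-256", s, hashlib.sha256(s.encode()).hexdigest())
# One character of difference scrambles every bit of the output;
# nothing about the '10' digest hints at what the '11' digest will be.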
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 16, 2011, 04:21:25 AM
 #16

Oh yeah, I forgot timestamps were involved :(  That does in fact throw it off, I guess, cuz there's a new pool of values every single time then.  Now I get what you all were saying :P

You're wrong about the lottery tickets though :P  If there are 100,000 of one type of scratch-off and 5,000 are winners, and you buy one and lose at it, there are now 99,999 tickets and 5,000 winners.  And if you lose again, there's 99,998 and 5,000 winners.  And you're probably gonna keep going on that pattern cuz the odds are freakin terrible rofl.

Coin flips though, yeah :P
macharborguy
Newbie
*
Offline Offline

Activity: 42
Merit: 0


View Profile
July 16, 2011, 04:34:13 AM
 #17

One thing to keep in mind with the "coin flip 50/50 chance" thing is that there are more factors involved than simply the two sides of the coin.  Slight differences like how it is flipped, wind speed, imperfections of the coin, etc. all change the chances and the outcome.  Although that imperfect 50/50 chance of a coin flip does fit very well with Bitcoin, since those slight differences are part of Bitcoin as a whole.
RandyFolds
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250



View Profile
July 16, 2011, 05:09:14 AM
 #18

Quote from: nafai
You can buy 100 scratch off tickets, scratch them off and lose 100 times, and the 101st ticket still has the same odds as before you scratched any. It doesn't matter if you keep going 24/7 or take breaks and stop then restart. The odds are the same for each ticket (hash) no matter what you've already calculated in the past (assuming you aren't duplicating work, which they don't).

This isn't a good comparison, as there are a finite number of lottery tickets printed. Each one that is consumed reduces the overall volume and affects the odds. Say you have 1000 tickets with 20 jackpots and you win one; then the odds of the next ticket being a jackpot are 19/999. If you lose on 100 tickets in a row after getting that jackpot, the odds are 19/899. Obviously the tickets are spread out temporally, you aren't the only one playing, and there are millions and millions of them, so the effect is extremely minute, but it is there.

Either way, lottery tickets serve to tax the poor and the stupid...but as I tell my super-Jew ladyfriend who scoffs at my gambling, "You're never going to win the jackpot. I might..."
Desolator (OP)
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250



View Profile
July 16, 2011, 06:11:33 AM
 #19

Well, since we're on the topic... well, off the topic :P  Umm, I have a theoretical question that's been bugging me, because I suck at remotely complicated probability calculations.

If you were to win, let's say, $51 on a $1 scratch-off for a profit of $50, and because you're dumb, you decide to take the payout in 100% more lottery tickets :D ;D

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world the payout-vs-odds scaling is slightly uneven on more expensive tickets, but at least sort of close, so ignore that.  In fact, let's say the average overall win on the $10 one is 500x the card's value, with overall average odds of 1 in 1000.  The $2 tickets have an average overall win of 50x the card's value and overall average odds of 1 in 100.  So per dollar they look identical, cuz the $2 one has 10x better odds of winning a smaller prize.  BUT the $2 choice results in more tries, and all those tries are going after the same finite pool of high-value jackpots, so they're not independent tries, they're sequential ones: the odds get better the more tickets you have, which makes them appear superior to buying fewer cards of a higher value.  So would the $2 ones be a significantly better choice, or the $10 ones?
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
July 16, 2011, 06:05:07 PM
 #20

Quote from: Desolator
Well, since we're on the topic... well, off the topic :P  Umm, I have a theoretical question that's been bugging me, because I suck at remotely complicated probability calculations.

If you were to win, let's say, $51 on a $1 scratch-off for a profit of $50, and because you're dumb, you decide to take the payout in 100% more lottery tickets :D ;D

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world the payout-vs-odds scaling is slightly uneven on more expensive tickets, but at least sort of close, so ignore that.  In fact, let's say the average overall win on the $10 one is 500x the card's value, with overall average odds of 1 in 1000.  The $2 tickets have an average overall win of 50x the card's value and overall average odds of 1 in 100.  So per dollar they look identical, cuz the $2 one has 10x better odds of winning a smaller prize.  BUT the $2 choice results in more tries, and all those tries are going after the same finite pool of high-value jackpots, so they're not independent tries, they're sequential ones: the odds get better the more tickets you have, which makes them appear superior to buying fewer cards of a higher value.  So would the $2 ones be a significantly better choice, or the $10 ones?
If you assume that:
1. There are few tickets in total, so buying many tickets has a noticeable effect on your odds, and
2. You stop buying tickets if you find a winning one,
Then there's an advantage to the $2 tickets.

Regardless, you also need to consider the variance. With the $10 tickets you have more variance; whether that's good or bad depends on your perspective. If you don't want variance, you're better off not buying any tickets at all.
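As a rough illustration of the variance point, here's a quick simulation under the idealized odds above, treating every ticket as independent (which ignores point 1):

Code:
import random
import statistics

def play(n_tickets, price, odds, multiplier, trials=100_000):
    # Simulate buying n_tickets, each winning price*multiplier
    # with probability 1/odds; return mean and std dev of winnings.
    outcomes = []
    for _ in range(trials):
        won = sum(price * multiplier
                  for _ in range(n_tickets) if random.random() < 1 / odds)
        outcomes.append(won)
    return statistics.mean(outcomes), statistics.stdev(outcomes)

print(play(5, 10, 1000, 500))   # five $10 tickets
print(play(25, 2, 100, 50))     # twenty-five $2 tickets
# Expected value is the same ($25 either way); the $10 option just
# concentrates it into a rarer, bigger payout -- i.e. more variance.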

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond