Bitcoin Forum
Author Topic: why not measure difficulty in bits?  (Read 3373 times)
adam3us (OP)
Sr. Member
expert
Activity: 404
Merit: 317
in bitcoin we trust
May 01, 2013, 10:16:54 PM
 #1

Why is bitcoin difficulty not expressed in bits?

With hashcash I always used bits, e.g. 20 bits.  I think bitcoin is currently at 55.26 bits, as bitcoin mining extends hashcash to allow fractional bits: rather than finding k leading 0 bits, you find a hash below 1/2^k (viewing the hash as a fraction), where k can be fractional - that is the same thing when k is a whole number.

You can convert difficulty into bits with log2(difficulty) + 32, i.e. log(difficulty)/log(2) + 32.

(+32 because 2^32 is the original or minimal difficulty in bitcoin and is excluded from the difficulty number).

I find this page unnecessarily complex for what is actually a very simple problem: https://en.bitcoin.it/wiki/Difficulty
(Current difficulty 10,076,293 from http://bitcoindifficulty.com/).
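A quick sanity check of that conversion in Python (using the difficulty figure quoted above):

```python
import math

difficulty = 10076293              # current difficulty, quoted above
bits = math.log2(difficulty) + 32  # log(difficulty)/log(2) + 32
print(round(bits, 2))              # -> 55.26
```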

By comparison bits are very easy to read, even by hand.  If you look at the hash output in hex, just multiply the count of leading 0 nibbles by 4, then examine the next nibble: >7 adds 0 bits, >3 adds 1 bit, >1 adds 2 bits, and exactly 1 adds 3 bits (0 would of course be another leading 0 nibble).  QED: trivial, human-comprehensible difficulty that can be hand-checked.  That was part of the design aim for hashcash - to simplify the computation, programming and human verification.

And when you see a bitcoin block hash in hex you can visually see those 55 bits.  This is the latest hash from the block explorer:

http://blockexplorer.com/block/00000000000000e3d3268e05a9901759c1452590d0838a80aeb8abaea59f1e9f

and bingo, I can count the 0s (14 of them), multiply by 4 (bits per hex nibble) and I know that is a 56-bit hash collision.  (You get lucky and find an extra 1 bit half the time, 2 bits 1/4 of the time, etc.)
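The hand rule is easy to mechanize. A small sketch (the function name is mine), applied to the block hash above:

```python
def approx_bits(hexhash):
    """4 bits per leading 0 nibble, plus 0-3 bits from the next nibble."""
    zeros = len(hexhash) - len(hexhash.lstrip("0"))
    bits = 4 * zeros
    nibble = int(hexhash[zeros], 16)
    if nibble > 7:
        pass            # +0 bits
    elif nibble > 3:
        bits += 1
    elif nibble > 1:
        bits += 2
    else:
        bits += 3       # nibble == 1
    return bits

h = "00000000000000e3d3268e05a9901759c1452590d0838a80aeb8abaea59f1e9f"
print(approx_bits(h))   # -> 56: 14 zero nibbles * 4, and the next nibble (e) adds 0
```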

Adam

hashcash, committed transactions, homomorphic values, blind kdf; researching decentralization, scalability and fungibility/anonymity
TierNolan
Legendary
Activity: 1232
Merit: 1014
May 01, 2013, 11:07:40 PM
 #2

It would mean that the steps in difficulty would have to be factors of 2.

If the current period takes 20 days, you would have to adjust down to 10 days (or leave it at 20).

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
adam3us (OP)
May 01, 2013, 11:29:44 PM
 #3

Quote
It would mean that the steps in difficulty would have to be factors of 2.

If the current period takes 20 days, you would have to adjust down to 10 days (or leave it at 20).

No, I mean fractional bits.  Hashcash worked on whole bits only, and there it was only possible to double or halve the work, as you said.  Bitcoin needed more fine-grained control, and so extended hashcash with fractional bits: the challenge is not technically to find a hash with 55 leading 0 bits, but to find a hash less than 1/2^55.26, where the exponent can be fractional and the hash is viewed as a 256-bit fixed-point fraction.  (Those are the same thing when the bits are whole.)

So 55.26 bits means numbers < 1/2^55.26, which viewed in hex is any hash < .00000000000001AA3E...

(I use bc -l: bits=l(10076293)/l(2)+32; obase=16; e(-bits*l(2)) - note that bc's ^ operator truncates fractional exponents, so use e() and l().)
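The same check can be done without bc; a Python sketch of the fractional-bit expansion (variable names are mine):

```python
import math

bits = math.log2(10076293) + 32  # ~55.2645

# expand the fraction 2**-bits into hex digits, one nibble at a time
frac = 2.0 ** -bits
digits = []
for _ in range(18):
    frac *= 16
    d = int(frac)
    digits.append("0123456789ABCDEF"[d])
    frac -= d
print("." + "".join(digits))     # -> .00000000000001AA3E
```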

Anyway, my point is that for human understanding you can mostly ignore or estimate the fractional bits and be within 10% of right.  And it's easy, meaningful (cipher and hash security is measured in bits), and visually checkable: around 55 bits = 13 or 14 leading 0 nibbles.  You can even approximate the fractional bits as I was saying.

For comparison, the EFF's 1998 $250,000 DES crack machine broke a 56-bit DES key in an expected 112 hours.  The bitcoin network does something approximately analogous every 20 minutes Smiley

Now if we could persuade the EFF to make another such miner, but for bitcoin, they could fund their own donations.  I did suggest it to the people on the DES crack team...

Adam

odolvlobo
Legendary
Activity: 3724
Merit: 2666
May 01, 2013, 11:48:38 PM
 #4

Quote
Why is bitcoin difficulty not expressed in bits?

I don't have any problem with either notation, but there are pros and cons.  The advantage of expressing the difficulty as an exponent is that the number is smaller and more digestible.  The disadvantages are that it can't be represented exactly, and people will generally have a difficult time relating to the values.

adam3us (OP)
May 02, 2013, 12:08:12 AM
 #5

Quote
Why is bitcoin difficulty not expressed in bits?

I don't have any problem with either notation, but there are pros and cons. The advantage of expressing the difficulty as an exponent is that the number is smaller and more digestible. The disadvantages are that it can't be represented exactly, and people generally will have a difficult time relating to the values.

OK, here's another version for you.  What does difficulty even mean, specifically?

Read this and tell me if you can figure it out: https://en.bitcoin.it/wiki/Difficulty

I tried and it was pretty confusing.  Snippets of C code, definitions in terms of other undefined things.  Mixing in 600 seconds in some places and not others.  Does difficulty include the 2^32 or not?  Is it adjusted for 600 seconds or not?  That whole page is extra confusing.

Whereas I know what 55 bits means, as with hashes and ciphers: it means you had to try 2^55 times to get this (on average).  And I can convert bits = log2(diff) + 32, not so hard on any scientific calculator.

And therefore if I can search 1 GH/sec then I know that is about 30 bits/sec, so I'm going to need to search for about 2^25 seconds on average.

I think part of the problem is that difficulty is actually divided by 2^32.  So it's not really the number of expected tries.  And 2^32 isn't 1G, it's 4G.
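Putting those numbers together (a sketch; 1 GH/sec is taken as 2^30 ≈ 1.07 G hashes/sec):

```python
import math

bits = math.log2(10076293) + 32    # ~55.26 bits of work per block
hashrate_bits = 30                 # 2^30 hashes/sec, i.e. ~1 GH/sec
seconds = 2 ** (bits - hashrate_bits)
days = seconds / 86400
print(round(days))                 # roughly 466 days at 1 GH/sec
```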

Adam

Zeilap
Full Member
Activity: 154
Merit: 100
May 02, 2013, 12:37:56 AM
Last edit: May 02, 2013, 02:41:55 AM by Zeilap
 #6

It basically comes down to Satoshi's over-reliance on OpenSSL. The whole thing is a hack around the function BN_bn2mpi() (which converts a bignum to a sequence of bytes) so that only the 3 most significant bytes of the number are stored and the rest discarded.

EDIT: After re-reading, I seem to be answering a different question to the one asked. I thought the question was asking why the 'bits' field is an encoding of the target rather than simply an encoding of the (fractional) number of bits.
deepceleron
Legendary
Activity: 1512
Merit: 1011
May 02, 2013, 01:39:04 AM
Last edit: May 02, 2013, 02:09:40 AM by deepceleron
 #7

The difficulty is actually in bits: the field in the block that contains it is called 'bits', and it is decoded into the target.

Here's the current target, a hash has to be smaller than this number:
00000000000001AA3D0000000000000000000000000000000000000000000000

or in binary:
0000000000000000000000000000000000000000000000000000000110101010001111010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

The difficulty is easy to comprehend and is what is presented to users. What you are proposing is harder:

>>> math.log(2**256/int('00000000000001AA3D0000000000000000000000000000000000000000000000',16),2)
55.26448364017038

What "bit" difficulty would be 10% harder?

>>> math.log(2**256/((int('00000000000001AA3D0000000000000000000000000000000000000000000000',16))/1.10),2)
55.40198716392032

Satoshi got it right - not everyone uses Python as their calculator. Use a base difficulty, where 1 = 1 block found per ~4295032833.0 hashes on average, and higher difficulties are multipliers of that.
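That base constant follows from the difficulty-1 target (compact bits 0x1D00FFFF, i.e. mantissa 0xFFFF); a short derivation:

```python
# difficulty-1 target: 0xFFFF followed by 52 zero hex digits
max_target = 0xFFFF << 208
tries = 2 ** 256 / max_target   # expected hashes per block at difficulty 1
print(round(tries, 1))          # -> 4295032833.0  (= 2^32 / 0.FFFFh)
```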
TierNolan
May 02, 2013, 09:07:10 AM
 #8

Quote
No I mean fractional bits.  Hashcash worked on whole bit only, and there it was possible only to double or halve bits, like you said.  Bitcoin needed more fine-grained control, and so extended hashcash with fractional bits and there the challenge is not technically to find a hash with 55 leading bits, but to find a hash that is less than 1/2^55.26 bits where bits is fractional and the hash is viewed as a 256-bit precision fixed point decimal.

It actually uses a floating-point system.  However, I see what you mean.

A 16-bit number would cover the entire 256-bit range with 256 fractional-bit levels.

Btw, when doing the difficulty update, do they use the floating-point form, or the full difficulty?  Is the value in the block the actual difficulty or just a summary?

adam3us (OP)
May 02, 2013, 12:46:13 PM
 #9

Quote
The difficulty is actually in bits, the field in the block that contains it is called bits, which is decoded into the target.

Here's the current target, a hash has to be smaller than this number:
00000000000001AA3D0000000000000000000000000000000000000000000000

Isn't it interesting that the hextarget isn't the same as what I calculated - maybe not so simple as deepceleron declares Wink  Starting from difficulty 10,076,293 (http://bitcoindifficulty.com/) I get .00000000000001AA3EA9EBE..., so there is a .0015% discrepancy.  Clearly the target is the correct value, as it is the value actually used.  Looking around, it seems that difficulty is actually the multiple of hardness relative to the minimum difficulty, which is not 32 leading 0 bits (expected 2^32 tries) but rather x < 0.FFFFh/2^32 (i.e. x < .00000000FFFF0000h), with expected tries 2^32/0.FFFFh = 4295032833 (100010001h instead of 100000000h).

So converting from target to difficulty, and difficulty to bits, is even messier:

Code:
scale=80
define pow(x,p) { return e(p*l(x)); }
define log(b,x) { return l(x)/l(b); }
define log2(x) { return log(2,x); }

# target from http://blockexplorer.com/q/hextarget
# (with ibase=16, 2^100 means 2^0x100 = 2^256)
ibase=16
target=1AA3D0000000000000000000000000000000000000000000000/2^100
mindiff=FFFF/2^10  # 0.FFFFh, the source of the .0015% discrepancy (2^10h = 2^16)
ibase=A

tries=2^32/mindiff     # expected hashes at difficulty 1

diff=1/target/tries    # the reported difficulty
bits=log2(diff*tries)  # bits of work; equals -log2(target)
cbits=-log2(target)

gdiff=diff*4/mindiff   # expected work in gigahashes (2^32 = 4*2^30)
nhash=70.48*1024       # network hash rate: 70.48 TH/s in GH/s
time=gdiff/nhash       # expected seconds per block

I think my unnecessary-complexity issue with this page (https://en.bitcoin.it/wiki/Difficulty), and with the measure chosen for difficulty, is not so much that it is log2 scale or not - I can handle that.  It is that difficulty is not even the expected number of hashes (or gigahashes etc).  To an approximation it is number of hashes / 2^32.  Now 2^32 is not a nice number in either base (log2 scale or log10 scale); 2^30 is a nice number.  That would be a nicer way to report difficulty IMO, as that's a GH, and you'll notice ALL the miners report power in GH or MH, and the network hash rate is in TH (not difficulty chunks, which are the former divided by 2^32).  But on top of that, for full accuracy it is not even hashes/2^32 but difficulty = hashes/2^32*0.FFFFh.  And that is harder to test at discrete (whole number) difficulties, which is why pool shares are not an exact multiple of difficulty but rather use a trailing-FFF difficulty to counteract this issue.

You know I once knew a crypto math/hacker guy who used to think human huffman encoding was fun.  Satoshi?  Hmmm Smiley

Quote
>>> math.log(2**256/int('00000000000001AA3D0000000000000000000000000000000000000000000000',16),2)
55.26448364017038

What  "bit" difficulty would be 10% harder?

Well, that wasn't exactly my point (my point was that you can get a ballpark, order-of-magnitude answer with your eyes and mental arithmetic using bits).  But to answer your question: log2(1.1) = .1375 (call it .14, remember that), so 10% harder is 55.26 + .14 = 55.40.

Quote
Use a base difficulty, where 1 = 1 block find per ~4295032833.0 hashes on average, and higher difficulties are multipliers of that.

I don't find 2^32/0.FFFFh a particularly meaningful number.  I know the discrepancy is small, but why even bother... just simplify and use trailing-FFF difficulty.

Sorry but simplicity does matter.

Anyway, untangling and ignoring the .0015% discrepancy, you can convert difficulty into approximate gigahashes by multiplying by 4: difficulty * 4 = 40305172 GH.  And network hash rate = 70.48 TH/s, so expected time = 40305172/(70.48*1024) = 558s.  Close enough - the network hash rate has grown since that difficulty was calculated.  (Or in log2 scale: difficulty is 55.26 bits and network hash rate is 46.14 bits/sec, so ~2^9 seconds, i.e. a bit over 500 seconds.)
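That closing arithmetic can be checked directly (figures from the post):

```python
difficulty = 10076293
work_gh = difficulty * 4   # expected work in GH (2^32 = 4 * 2^30)
net_gh = 70.48 * 1024      # network rate: 70.48 TH/s in GH/s
seconds = work_gh / net_gh
print(round(seconds))      # -> 558
```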

Adam

adam3us (OP)
May 02, 2013, 01:01:25 PM
 #10
 #10

Quote
It actually uses a floating point system.

It depends how you treat it - you could also consider it a 256-bit big int.  However I was thinking of it as fixed point, because I found the fractional comparison easier to think about.  (Fixed point means only the mantissa is used; the exponent is fixed to 0, or at least to a fixed value.  People used to abuse integer CPU instructions to do fixed-point arithmetic in the days before floating-point units were included in CPUs, or when integer arithmetic was much faster than the FP coprocessor.)

Quote
Btw, when doing the difficulty update, do they use the floating point, or the full difficulty?  Is the value in the block the actual difficulty or just a summary?

I don't know, but let me guess based on the difficulty algorithm: my guess is it uses integer math only.

Adam

TierNolan
May 02, 2013, 01:18:41 PM
Last edit: May 02, 2013, 03:27:52 PM by TierNolan
 #11

Quote
It depends how you treat it

I was thinking about how the block treats it.

In summary

This is how the number is recalculated.  The bits / floating-point number is the actual target encoding.  The difficulty is computed from that.

Code:
if (prev.height + 1 mod 2016 != 0) then return prev.bits

first = prev
first = first.prev (2015 times)

// first is the first block in the last difficulty period
// prev is the last block in the last difficulty period

actual_time = prev.blockTime - first.blockTime

actual_time = limit(actual_time, 3.5 days, 56 days)

big_number = decompact(prev.bits) * actual_time / 14 days

if (big_number > max_allowed)
 big_number = max_allowed

return compact(big_number)

The compact/decompact function is a floating-point conversion.

Compact form: ab_uvwxyz (each letter is a hex digit)

Int = hex2Int(uvwxyz) << (8 * (hex2Int(ab) - 3))

The compactor is given here
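The compact conversion above is easy to sketch in Python (ignoring the sign-bit quirk of Bitcoin's actual encoding, where a high mantissa bit forces an extra exponent byte):

```python
def decompact(bits):
    # compact form ab_uvwxyz: ab is the size in bytes, uvwxyz the mantissa
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def compact(target):
    # keep only the 3 most significant bytes of the target
    size = (target.bit_length() + 7) // 8
    return (size << 24) | (target >> (8 * (size - 3)))

bits = 0x1A01AA3D                 # encodes the target quoted earlier in the thread
target = decompact(bits)
assert compact(target) == bits    # round-trips
```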

adam3us (OP)
May 02, 2013, 03:26:35 PM
 #12
 #12


Quote
I was thinking about how the block treats it.

In summary

OK yes, that's what I was referring to with the "human huffman encoder" comment.  So I see what you mean too.  I just meant that there would be (my guess) no floating point in the sense of calls to CPU floating-point instructions, and that seems to be the case Smiley

It's simple as such things go (I've seen worse), and is just a way of encoding between 16 and 23 bits of precision plus an 8-bit exponent.  (Kind of odd that the precision depends on how close the difficulty is to an 8-bit boundary, but there you go.)  It could have made better use of the 8-bit exponent, e.g. by treating it as bits instead of bytes, as the number is anyway definitionally a 256-bit number.

If optimized it could probably have been a 16-bit encoding (8-bit exponent + 8-bit mantissa) if the bits in the exponent were used as bits rather than bytes.  Or certainly a 24-bit one.  But maybe that's my turn to over-optimize - the difficulty precision is not that critical anyway.

adam3us (OP)
May 02, 2013, 05:37:19 PM
 #13
 #13

Quote
the difficulty precision is not that critical anyway.

btw, what I was thinking there is that the difficulty precision ideally needs to be a few bits more than log2(interval = 2016).  So 8 bits might be a bit low; 16 bits (plus the 8-bit exponent) would be ample.

If that were not the case (e.g. 8 bits of precision), consider a pool holding 50%: when it sees that difficulty is getting close to rolling over another lsb digit, it can back off (stop mining) to delay the next block and prevent the roll-over.  That makes difficulty a fraction f easier, 1/256 < f < 1/128, for the next two weeks - specifically f = 1/m for difficulty mantissa m, 128 < m < 256.  By holding off, the pool loses 1 block and stands to make a reward r = 2016/m*50% for that action.  (More generally, holding off for b blocks costs b blocks while still gaining r = 1008/m, which makes sense so long as r > b; with 50% that holds for b up to somewhere between 3 and 7, depending on the mantissa.)  Now of course, as the 50% pool mines faster than difficulty predicted, the 2-week period goes by in slightly under 2 weeks, but it is still 2016 coins by definition - and the pool actually gets slightly more than 50% of those coins, because it is now working at full power after slacking off briefly before.

So minimally 8 + log2(7) bits ≈ 11 bits kills this weak attack.  And bitcoin's minimum is 16 bits.  Coincidence?  Probably not.
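A back-of-envelope check of those numbers (labels are mine, following the post's r = 2016/m*50% reward model):

```python
import math

# 8-bit mantissa m: a 50% pool that stalls the roll-over gains r blocks
# of easier work, at the cost of b withheld blocks; profitable while r > b
for m in (128, 255):
    r = 2016 / m * 0.5
    print(m, r, int(r))   # maximum profitable b ranges from ~7 down to ~3

# precision that kills the attack: 8 + log2(7) bits, rounded up
print(math.ceil(8 + math.log2(7)))   # -> 11
```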

Perhaps something for a slow altcoin to think about (everyone seems to go for faster blocks for some reason).  Though the unfair advantage gained is slim even then.

Adam
