Author Topic: If you just SUM the inverse-hashes of valid blocks ..?  (Read 202 times)
spartacusrex (OP)
Hero Member
Activity: 718 | Merit: 545
June 13, 2018, 12:05:39 PM
#1

..I'm working on something.

The inverse hash is MAX_HASH_VALUE minus the hash value, so the lower the hash of the block, the more difficult the block and the more you add to the total.

I'm wondering what summing the inverse-hashes of the blocks, as opposed to summing their difficulties, would give you.

You would still need to find a nonce that satisfies the block difficulty, as usual, but beyond that, the lower your hash, the more your block is worth.

The chain with the most hash-rate would on average still have the highest sum of inverse-hashes.

Any ideas what would happen?

(It just seems that some of the PoW available is being left unused...)
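
For concreteness, here is a minimal Python sketch of the weighting described above. It is only an illustration of the idea, not anything that exists in Bitcoin; the header bytes and the big-endian integer interpretation are simplifying assumptions.

Code:
import hashlib

MAX_HASH_VALUE = 2**256 - 1  # largest possible 256-bit hash

def block_hash(header: bytes) -> int:
    # Double-SHA256 of a serialized block header, read as an integer.
    # (Real Bitcoin byte order differs, but that doesn't matter here.)
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big")

def inverse_hash(header: bytes) -> int:
    # The proposed block weight: MAX_HASH_VALUE minus the block's hash.
    # The lower (harder) the hash, the larger the weight.
    return MAX_HASH_VALUE - block_hash(header)

def chain_weight(headers: list[bytes]) -> int:
    # Sum of inverse-hashes over a chain, instead of summed difficulty.
    return sum(inverse_hash(h) for h in headers)

def best_chain(a: list[bytes], b: list[bytes]) -> list[bytes]:
    # Fork choice under this rule: the branch with the larger sum wins.
    return a if chain_weight(a) >= chain_weight(b) else b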

Life is Code.
monsterer2
Full Member
Activity: 351 | Merit: 134
June 13, 2018, 12:26:46 PM
#2

For anyone wondering what he's asking, here's some clarification:

He's talking about what would happen if you changed the longest chain rule (LCR) from sorting branches by cumulative difficulty to sorting by the lowest achieved hash relative to the target value. Sometimes miners mine blocks with a hash quite a lot lower than the target value, and on average these numerically lower hashes are harder to mine, so does it increase the security of the chain to sort by them?
tromp
Legendary
Activity: 977 | Merit: 1077
June 13, 2018, 01:42:43 PM
Last edit: June 13, 2018, 04:13:40 PM by tromp
Merited by suchmoon (5), HeRetiK (1)
#3

Quote from: monsterer2
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem with this proposal, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions. 6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.
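
One rough way to see the instability tromp describes: if valid hashes are modelled as uniform below the target, the "luck factor" target/hash has a heavy tail, with P(target/hash > k) = 1/k, so roughly one block in six carries as much surplus work on its own as six blocks that only just met the target. A quick Monte Carlo check of that tail (this uses the work-equivalent reading of the proposal, target/hash, rather than the literal MAX_HASH - hash sum):

Code:
import random

def luck_factor() -> float:
    # How much harder a valid block's hash was than strictly required.
    # Valid hashes are modelled as uniform in (0, target]; the factor is
    # target / hash, so 1.0 means the block only just met the target.
    return 1.0 / (1.0 - random.random())  # 1 - random() is uniform in (0, 1]

def tail_probability(k: float, trials: int = 1_000_000) -> float:
    # Estimate P(luck factor > k): one block "outweighing" k minimal blocks.
    return sum(luck_factor() > k for _ in range(trials)) / trials

for k in (2, 6, 100):
    print(k, tail_probability(k))
# Prints roughly 0.5, 0.167 and 0.01: blocks lucky enough to undo
# several confirmations would not be rare under such a weighting.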
spartacusrex (OP)
Hero Member
Activity: 718 | Merit: 545
June 13, 2018, 02:33:01 PM
#4

Quote from: tromp
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.

Yes, this is the issue: one lone hash can wipe out many normal/smaller hashes.

I have a system where you delay this. So initially a block is worth its normal difficulty, as per usual.

But after 10,000 blocks, well beyond any re-org, you let it be worth what it's actually worth. This has some nice benefits (in my case, for pruning).
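
A rough sketch of that delayed rule, under one possible reading (the 10,000-block maturity depth is from the post; the Block fields and the lack of unit normalisation between difficulty and inverse-hash are hypothetical simplifications):

Code:
from dataclasses import dataclass

MAX_HASH_VALUE = 2**256 - 1
MATURITY_DEPTH = 10_000  # "beyond a re-org", per the post

@dataclass
class Block:
    height: int
    hash_value: int  # header hash as an integer
    difficulty: int  # the usual per-block difficulty

def block_weight(block: Block, tip_height: int) -> int:
    # Recent blocks count for their normal difficulty, as per usual.
    # Once buried MATURITY_DEPTH deep, a block counts for what it is
    # "actually worth": its inverse-hash.
    # NB: difficulty and inverse-hash are in different units; a real
    # rule would have to normalise them somehow.
    if tip_height - block.height < MATURITY_DEPTH:
        return block.difficulty
    return MAX_HASH_VALUE - block.hash_value

def chain_weight(chain: list[Block]) -> int:
    tip_height = chain[-1].height
    return sum(block_weight(b, tip_height) for b in chain)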

Life is Code.
Carlton Banks
Legendary
Activity: 3430 | Merit: 3071
June 13, 2018, 03:36:15 PM
Last edit: June 13, 2018, 05:44:21 PM by Carlton Banks
#5

Quote from: tromp
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold; there is no logic in Bitcoin block validation that behaves in any way differently depending on how low the hash value of a new block is. Either a block is the lowest-hash solution for the next block, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".


Let's remove this "problem" you've identified, lol: only the longest chain wins. We then have a "new" problem: how to resolve chain forks when two blocks are found before either block has 100% acceptance, so that different parts of the network accept different blocks. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.


Quote from: tromp
For the same reason, this will make selfish mining all the more effective.

But you were wrong, so it actually makes zero difference.

Edit: I misinterpreted, sorry tromp

Vires in numeris
monsterer2
Full Member
Activity: 351 | Merit: 134
June 13, 2018, 04:12:22 PM
Last edit: June 13, 2018, 05:18:38 PM by monsterer2
#6

Quote from: Carlton Banks
Let's remove this "problem" you've identified, lol: only the longest chain wins. We then have a "new" problem: how to resolve chain forks when two blocks are found before either block has 100% acceptance, so that different parts of the network accept different blocks. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.

I don't think you've read this thread correctly. No one is suggesting we remove the cumulative difficulty rule; the OP was suggesting that hashes with a lower numerical value than the target are on average harder to mine than hashes at the target value, and therefore that there might be some merit in using this to order blocks.

@tromp rightly pointed out that this would cause wild reorgs when a miner gets lucky with a very low hash, thereby increasing double-spend risk.
HeRetiK
Legendary
Activity: 2912 | Merit: 2080
June 13, 2018, 04:42:14 PM
#7

Quote from: Carlton Banks
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions. 6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold; there is no logic in Bitcoin block validation that behaves in any way differently depending on how low the hash value of a new block is. Either a block is the lowest-hash solution for the next block, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".

[...]

I'm not sure you read tromp's post correctly (well, or maybe I didn't), but the way I understand it is this:

1) Bitcoin follows the chain with the greatest cumulative work, based on the difficulty at which each block was mined.

2) spartacusrex suggests calculating the cumulative work based not on the difficulty (which is the same for each block within a given difficulty period) but rather on the amount of work that went into a block beyond this difficulty (i.e. how far the hash falls below the difficulty threshold).

3) tromp then points out that this would make the logic for following the greatest cumulative work less stable, as a single "hard" block (i.e. far below the difficulty threshold) could supersede multiple "easy" blocks (i.e. within, but close to, the difficulty threshold). This would lead to both (a) more orphans and (b) some confirmations being worth more than others (i.e. 6 confirmations by "easy" blocks could be nullified by a single "hard" block), making it hard to reliably assess transaction finality.

While I wouldn't go as far as claiming that this makes selfish mining easier, pointing out that block equality within a given difficulty period is important for a stable network, and thus for network security, seems to me both correct and important. Feel free to correct me though.
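
To make point 3 concrete, a toy comparison of the two orderings, with surplus work measured as target/hash and the hashes made up: six blocks that barely beat the target lose to one block that beat it by 10x under the surplus-work rule, but win under today's cumulative-difficulty rule.

Code:
TARGET = 2**240  # an arbitrary example target

# Branch A: six "easy" blocks, each only just below the target.
branch_a = [TARGET - 1] * 6
# Branch B: a single "hard" block, ten times below the target.
branch_b = [TARGET // 10]

def cumulative_difficulty(hashes):
    # Today's rule: every block that met the target counts the same.
    return len(hashes)

def surplus_work(hashes):
    # Weight each block by how far its hash beat the target.
    return sum(TARGET / h for h in hashes)

print(cumulative_difficulty(branch_a) > cumulative_difficulty(branch_b))  # True: A wins
print(surplus_work(branch_a) > surplus_work(branch_b))                    # False: B wins, reorg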

Carlton Banks
Legendary
Activity: 3430 | Merit: 3071
June 13, 2018, 05:42:17 PM
#8

Oh, I get it now: tromp was saying "if that were true", which explains his edit.

Vires in numeris
odolvlobo
Legendary
Activity: 4298 | Merit: 3209
June 13, 2018, 05:55:39 PM
Last edit: June 13, 2018, 06:13:58 PM by odolvlobo
Merited by Foxpup (2)
#9

I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true, because the amount of work expended to generate a hash is the same regardless of the hash value.

I believe that the result of such a system would simply make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think of any benefit of adding a random factor to the difficulty that would outweigh the problems.

Join an anti-signature campaign: Click ignore on the members of signature campaigns.
PGP Fingerprint: 6B6BC26599EC24EF7E29A405EAF050539D0B2925 Signing address: 13GAVJo8YaAuenj6keiEykwxWUZ7jMoSLt
spartacusrex (OP)
Hero Member
Activity: 718 | Merit: 545
June 13, 2018, 06:42:34 PM
#10

P2Pool uses a system a little bit like this.

In P2Pool, you search for a block every 10 seconds and publish it, at that difficulty level, on the P2Pool network. But if you find a block that is 60x harder (one every 10 minutes), enough to be a valid Bitcoin block, you then publish it on the mainnet. The cumulative PoW is preserved, in a smaller number of blocks.

In this scenario, multi-difficulty blocks have a great use case.

I am trying to leverage this.

In Bitcoin, you search for a block every 10 minutes and publish it, at that difficulty level, on the mainnet. But if you find a block that is 60x harder (one every 10 hours), enough to be a valid Super Block (let's say), then you can publish that on a Super Block network, where you only find one block for every 60 normal Bitcoin blocks. The cumulative PoW is preserved, in a smaller number of blocks.
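
A small sketch of that tiered idea, with made-up numbers (a share target 60x easier than the block target, and a hypothetical Super Block target 60x harder); a single hash can qualify at several tiers at once:

Code:
BLOCK_TARGET = 2**224              # hypothetical mainnet target (~10 minutes)
SHARE_TARGET = BLOCK_TARGET * 60   # ~10 seconds: P2Pool-style share
SUPER_TARGET = BLOCK_TARGET // 60  # ~10 hours: "Super Block"

def classify(hash_value: int) -> list[str]:
    # Every Super Block is also a valid block, and every valid block
    # is also a valid share: the same PoW just clears more bars.
    tiers = []
    if hash_value < SHARE_TARGET:
        tiers.append("share")
    if hash_value < BLOCK_TARGET:
        tiers.append("block")
    if hash_value < SUPER_TARGET:
        tiers.append("Super Block")
    return tiers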

The point is that, since this works for dual-difficulty blocks, what happens if you take it to the extreme and simply use the inverse-hash as the block weight?

It STILL seems to work (the chain with the most PoW wins), but the fluctuations are far bigger.

I'm trying to see if those fluctuations can be better controlled.

Life is Code.
aliashraf
Legendary
Activity: 1456 | Merit: 1174
Always remember the cause!
June 13, 2018, 07:36:54 PM
#11

Quote from: odolvlobo
I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true, because the amount of work expended to generate a hash is the same regardless of the hash value.

I believe that the result of such a system would simply make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think of any benefit of adding a random factor to the difficulty that would outweigh the problems.

Agreed. No matter what the outcome is, you are working against the target difficulty and, in the long run, you hit blocks in proportion to your hashpower; no hashpower is being left unused, contrary to what the OP suggests.