NPOW: A Neural Proof-of-Work
JamesWilliamFletcher (OP)
April 18, 2021, 02:31:07 PM
Last edit: July 10, 2021, 06:27:11 PM by JamesWilliamFletcher

You can now skip to the white paper: npow_whitepaper.pdf

Abstract. A simple proposal to apply machine learning as a replacement for proof-of-work in electronic cash systems such as Bitcoin.

1. Introduction
The proposal of machine learning problems to replace the nonce problem in traditional proof-of-work is not an original concept; many individuals before this document have come up with very similar ideas, such as Sergey Surkov (https://medium.com/@sergey_surkov/random-a-marriage-between-crypto-and-ai-1a3f4aa752ca).

A block producer is any entity that decides to submit a proposal for a block. In theory, this could be any node in the network.

The concept is that every block producer has to train a small binary classification network to signal 1 when the producer's own block hash is provided as input and 0 when hashes from other block producers' submissions are input.

The block producer hash is a combination of their block submission hash and the hash of the last block, which is used as the input to a new hash; this ensures that new neural network weights have to be trained to identify new hashes for each block producer every time a new block is to be produced.
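A minimal sketch of deriving this producer hash, assuming SHA-256 over raw bytes (the choice of hash function is left open here):

Code:
import hashlib

def block_producer_hash(prev_block_hash: bytes, block_submission_hash: bytes) -> bytes:
    # Combine the last block hash with this producer's block submission hash
    # and re-hash, so the classification target changes for every new block.
    return hashlib.sha256(prev_block_hash + block_submission_hash).digest()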

Once all proofs have been submitted for the current block (within some set time by which the next block must be created), every node averages all of the submitted proof weights into one set of binary classification network weights. Every entrant block producer hash for the current block is then tested against this averaged network, and the hash with the highest output value wins the right to have its block processed into the current chain.

To prevent random or garbage weights from being entered, each submitted proof must produce an output of more than some tolerance value, say 0.5, when the block producer hash of the submitter is used as input to the network.
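A rough sketch of this selection step, assuming each proof is a flat int8 weight vector and a hypothetical classify(weights, producer_hash) routine that evaluates the classification network on a hash:

Code:
import numpy as np

TOLERANCE = 0.5  # minimum self-activation a proof must reach to be accepted

def select_winner(submissions, classify):
    # submissions: list of (producer_hash, int8 weight vector) pairs.
    # Discard garbage proofs that do not activate on their own producer hash.
    valid = [(h, w) for h, w in submissions if classify(w, h) > TOLERANCE]
    if not valid:
        return None

    # Average all valid proof weights into one set of network weights.
    avg_weights = np.mean([w.astype(np.float32) for _, w in valid], axis=0)

    # Test every entrant hash against the averaged network; the highest
    # activation wins the right to have its block appended to the chain.
    return max(valid, key=lambda hw: classify(avg_weights, hw[0]))[0]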

In cases where the whole network does not share the same view of the block submissions, that is, where some nodes miss submissions that others received, the theory here is that these nodes would eventually have to submit to the majority consensus of the network over which chain is correct. This could also be seen as a problem that traditional proof-of-work does not need to be concerned about.

The obvious flaw in this and similar solutions is that essentially unidentifiable submissions are taken into account. In traditional proof-of-work systems, every node races to be the first to find the solution to a problem, and the first to find it has its block cemented into the blockchain provided it passes other basic verification checks. Not only is that a more efficient solution for the peer-to-peer network to deal with; it also avoids the problem where a malicious actor can submit multiple solutions to leverage the odds that their block will become the winner.

Malicious block producers cannot be prevented from submitting multiple proofs for the same block, which could be crafted to give their block better odds of winning. Submissions of identical blocks with different proofs/network weights also cannot be prevented, because there is no reliable way to identify which submission was the first or original and keep it over the other submitted duplicates. So the debate comes down to this: are all the auxiliary verification checks on submitted blocks performed by each node enough to mitigate any damage such potential malicious block producers could cause? A debate I leave open to speculation.

Another issue is that every node having the potential to take part in block submission also gives nodes an incentive to suppress relaying submissions from other nodes, since those nodes are competitors whose proofs bring down their own activation in the overall averaged network. It is questionable how much impact a refusal to propagate submissions from other nodes would have in a highly connected peer-to-peer network; it could prove quite hard to successfully suppress submissions from the wider network or even cause any significant propagation delay.

2. Implementation Proposal
Each block producer trains a one-dimensional convolutional neural network that outputs a binary classification as described in this document. Network weights are int8 to reduce network bandwidth, convolutional filters/kernels are 3 bytes each (4 bytes including the bias), and there are 5 network layers in total. The number of filters per layer would be:

Layer 1 – 32 kernels — 128 bytes
Layer 2 – 16 kernels — 64  bytes
Layer 3 – 8  kernels  — 32  bytes
Layer 4 – Global Average Pooling
Layer 5 – Sigmoid output

That’s a total of 224 bytes for each set of weights provided per submitted proof.
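For illustration, a sketch of such a classifier in Keras (the framework, the ReLU activations, and the final dense unit that collapses the pooled output to a single sigmoid value are assumptions on my part; note also that the byte counts above count only 3 weights plus a bias per kernel, which standard multi-channel Conv1D layers would exceed):

Code:
from tensorflow.keras import layers, models

def build_npow_classifier(hash_length=32):
    # hash_length: assumed size of the block producer hash in bytes.
    return models.Sequential([
        layers.Conv1D(32, 3, padding="same", activation="relu",
                      input_shape=(hash_length, 1)),              # layer 1: 32 kernels
        layers.Conv1D(16, 3, padding="same", activation="relu"),  # layer 2: 16 kernels
        layers.Conv1D(8, 3, padding="same", activation="relu"),   # layer 3: 8 kernels
        layers.GlobalAveragePooling1D(),                           # layer 4
        layers.Dense(1, activation="sigmoid"),                     # layer 5: sigmoid output
    ])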

3. Conclusion
The solution proposed here has several drawbacks compared with traditional proof-of-work, such as:

Drawbacks.
1    Inability to identify malicious actors submitting multiple proofs.
2    Inability to ignore duplicate blocks with different proofs.
3    Higher network bandwidth and compute time for verification of the winning submission.
4    An unbounded list of entrants exposes a denial-of-service vector.
5    Incentive to suppress propagation of other nodes' proof submissions.

Benefits.
1    Less power and fewer resources are used to submit proofs; computers don't work endlessly to find a solution to a nonce.
2    How proofs are generated is flexible: as the training of the submitted weights is not bound by some constant algorithm, block producers can train weights using any method desired.

The benefits are subjective. One could argue that simply having an alternative form of proof-of-work is advantageous, but with respect to present-day expectations of alternatives, such as barring GPUs from entry, the use of machine learning offers no kudos here, as GPUs are designed to excel at exactly these types of machine learning workloads.

4. Solution
The major drawback in the system described above is the inability to prevent multiple submissions from the same actor; the solution to this is as follows.

In this solution, the problem is to solve for the latent variables of a neural network that takes the last block hash as input and must output the new block hash, where the new hash is a hash of the previous block hash concatenated with the hash of the new block. The first to find the solution by training the latent variables of the neural network wins the right to cement the new block into the blockchain. The structure of the neural network's latent variables is a matter of preference but must be constant among all peers. When the network needs to adjust the difficulty of training the neural network, it can increase or decrease the number of latent variables that need to be trained and/or the size of the output hash.
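A minimal verification sketch for this scheme, assuming SHA-256 and a hypothetical run_network(latent_vars, input_hash) routine that evaluates the agreed network structure:

Code:
import hashlib

def verify_proof(prev_block_hash: bytes, new_block_hash: bytes,
                 latent_vars, run_network) -> bool:
    # The target the trained network must reproduce: a hash of the previous
    # block hash concatenated with the hash of the new block.
    target = hashlib.sha256(prev_block_hash + new_block_hash).digest()
    # The proof is valid if the submitted latent variables map the previous
    # block hash onto that target.
    return run_network(latent_vars, prev_block_hash) == target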

The drawbacks and benefits now parallel those of traditional proof-of-work. The only distinguishing difference with this solution is that nodes have more flexibility over how they generate a proof. This means the problem is no longer focused purely on who has more raw computing power; it now gives an edge to anyone with a better algorithm for training the latent variables of the neural network. It also means the solution is no longer a 4-byte nonce value submitted with each new block but the latent variables of some sizeable neural network, which could be anything from 128 bytes to a few megabytes.

5. Implementation Proposal #2
Continuing from the solution described in (4.), a fully connected network of some configuration is trained, taking a hash as input and outputting a hash. The difficulty is how many blocks into the past the network needs to be trained on, that is, how many inputs the network must be trained on to produce different outputs.

If the difficulty is 0, the network only needs to be trained to output the new block hash when the previous block hash is used as input. If the difficulty is 1, the network must be trained to output not only the hash of the new block on input of the previous block hash, but also the previous block hash when the hash of the block before the previous block is input.
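A sketch of how the training targets scale with difficulty, assuming block_hashes is an ordered list of block hashes ending with the hash of the block being produced (the helper below is purely illustrative):

Code:
def build_training_pairs(block_hashes, difficulty):
    # Difficulty 0 yields one pair (previous hash -> new block hash);
    # each extra unit of difficulty adds one more pair reaching further back,
    # mapping a block hash onto the hash of the block that follows it.
    pairs = []
    for i in range(difficulty + 1):
        pairs.append((block_hashes[-2 - i], block_hashes[-1 - i]))
    return pairs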

The main problem with this is that it offers no significant benefit over traditional proof-of-work. Having the ability to customize how the network weights are trained is not necessarily a benefit because of how temperamental neural networks can be; humans often need to fine-tune network hyper-parameters in order to get a neural network to train correctly, which requires manual attention. Miners generally want an automated process, and while the process could be somewhat automated using random hyper-parameters within a min/max range set by the miners, the system overall looks a lot more finicky than just the odds of finding the correct nonce in traditional proof-of-work. The neural network proof has more vectors to be gamed than its predecessor, which is a far more predictable and robust system. Such a new system would need significant benefits over the original, and the neural proof seems to offer none. NPOW could be deployed into an operable chain today and it would function as expected, but the new learning curve for miners would have questionable popularity, new block creation would probably stall often, and many miners would likely become discouraged as the better network training methods cut into their profits and would most likely be kept as closed-source solutions by their competitors.

6. Summary
NPOW comes with the cost of an increased proof size for each submitted block and added complexity in generating proofs, making it arguably less future-proof and less predictable than traditional proof-of-work, yet it has no defining beneficial features to compensate for either of these costs.