NPOW: A Neural Proof-of-Work

1. Introduction

2. Implementation Proposal

Layer 1: 32 kernels (128 bytes)
Layer 2: 16 kernels (64 bytes)
Layer 3: 8 kernels (32 bytes)
Layer 4: Global Average Pooling
Layer 5: Sigmoid output
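The five layers above can be sketched as a forward pass in plain NumPy. This is a minimal illustration under stated assumptions: the source lists only kernel counts, so the 3×3 kernel size, ReLU activations, single-channel input, and 16×16 input resolution are all assumptions, not part of the proposal.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution, stride 1, followed by ReLU.
    x: (H, W, C_in); kernels: (k, k, C_in, C_out)."""
    k = kernels.shape[0]
    H, W, _ = x.shape
    C_out = kernels.shape[-1]
    out = np.zeros((H - k + 1, W - k + 1, C_out))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = x[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)  # ReLU (assumed activation)

def global_avg_pool(x):
    # Layer 4: average each feature map down to a single value.
    return x.mean(axis=(0, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16, 1))            # assumed input shape
w1 = rng.normal(size=(3, 3, 1, 32)) * 0.1   # Layer 1: 32 kernels
w2 = rng.normal(size=(3, 3, 32, 16)) * 0.1  # Layer 2: 16 kernels
w3 = rng.normal(size=(3, 3, 16, 8)) * 0.1   # Layer 3: 8 kernels
w_out = rng.normal(size=(8,)) * 0.1         # Layer 5 weights

h = conv2d(conv2d(conv2d(x, w1), w2), w3)   # Layers 1-3
pooled = global_avg_pool(h)                 # Layer 4
score = sigmoid(pooled @ w_out)             # Layer 5: scalar in (0, 1)
```

In a proof submission, the weight tensors `w1`..`w_out` would be the data a block producer publishes; the sigmoid score is what verifiers recompute.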

3. Conclusion


  1. Inability to identify malicious actors submitting multiple proofs.
  2. Inability to ignore duplicate blocks carrying different proofs.
  3. Higher network bandwidth and compute time to verify the winning submission.
  4. An unbounded list of entrants exposes a denial-of-service vector.
  5. An incentive to suppress the propagation of other nodes' proof submissions.


  1. Less power and fewer resources are spent submitting proofs; computers don't grind endlessly searching for a nonce solution.
  2. How proofs are generated is flexible: because training the submitted weights is not bound to a fixed algorithm, block producers can train them with any method they choose.
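For contrast, the endless nonce search that point 1 refers to is the classic hashcash-style loop below. This is a toy sketch of conventional proof-of-work, not part of NPOW; the `difficulty_bits` parameter and byte layout are illustrative assumptions.

```python
import hashlib

def find_nonce(block_data: bytes, difficulty_bits: int, max_tries: int = 1_000_000):
    """Brute-force a nonce so that SHA-256(block_data || nonce) has
    `difficulty_bits` leading zero bits. Returns None if not found."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_tries):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None

nonce = find_nonce(b"example block", difficulty_bits=12)
```

Every miner repeats this same fixed hashing loop; NPOW's pitch is that training weights, unlike hashing, leaves the method of producing a proof open.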

4. Solution

5. Implementation Proposal #2

6. Summary

7. Proof-of-Concept

8. White Paper



