It's no secret that nearly every facet of the tech industry has adopted AI in some form or another, and developers most of all have come to rely on it almost universally, every day. It's fast, it's cheap, and it's damn good - IF prompted correctly. I have been using AI to code since its infancy and have both launched and supported a large number of projects with it. I would not consider myself perfect by any means; I'm human. I think the success of this project will depend heavily on the community for both audits and testing. Satoshi's original vision was a blockchain that was
fair. He envisioned an ecosystem where everyone could participate with nearly equal rights to reward. I believe this foundation is what every member of this network loves most about cryptocurrency. I also believe the biggest problem that has yet to be solved meaningfully is: how do we make it fair? There are very clear tradeoffs to every solution, and no solution will ever be perfect. The scarcest resource in innovation is time. This project will rely heavily on AI to speed up development of the blockchain and to accelerate every aspect of the code base and infrastructure.
While I plan to rely heavily on AI during development, I will do my very best to keep all community interaction and forward-facing documentation human-written. My hope is that doing so will prove I have a complete and total understanding of the code base while providing resources that are easy to digest rather than predictive word-soup.
Currently the tokenomics don't matter, so I won't disclose them - any references to tokenomics in the codebase don't matter and are subject to change as we get closer to a working prototype.
Code work will be done on a regtest network and local isolated testnets, and changes will then be gated behind height activation to ensure consensus once a real testnet has developed.
Akshually Fair Model

The Fair Model (Concept Overview)

The entire goal of this project is to answer a question that has existed since the beginning of cryptocurrency:
How do we make mining fair without sacrificing decentralization?

Traditional Proof-of-Work systems reward raw throughput. The more hashes you can compute per second, the more likely you are to win a block. This has led to an arms race of ASICs, massive GPU farms, cheap power arbitrage, and ultimately mining centralization. There is nothing inherently wrong with that model. It has proven secure. But it also means that the probability of success is directly proportional to hardware scale. The bigger the farm, the bigger the advantage.

The model being explored in this project attempts to change that dynamic. Instead of rewarding
unlimited hashing throughput, the protocol limits the number of
useful attempts each participant can make within a given window. This means the network does not try to detect who is hashing faster. Instead, it simply makes additional hashing beyond a certain point useless. This changes the incentive structure dramatically and hopefully gets us closer to a fair network.
Mining Identities

Each miner registers a cryptographic mining identity.
This is simply a public key that becomes eligible to participate in mining after a short activation delay. The purpose of this delay is to prevent rapid identity rotation.
These identities are not meant to represent real-world identity. They simply give the network a deterministic participant structure.
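As a rough sketch of what such an identity could look like in Python (the class name, the activation delay of 100 blocks, and the key size are all illustrative placeholders, not values from the actual code base):

```python
import secrets

ACTIVATION_DELAY = 100  # blocks; illustrative placeholder, not a protocol constant

class MiningIdentity:
    """Sketch of a mining identity: a public key plus the height at
    which it becomes eligible to mine."""

    def __init__(self, pubkey: bytes, registered_at: int):
        self.pubkey = pubkey
        self.registered_at = registered_at

    @property
    def active_from(self) -> int:
        # The identity may only mine after the activation delay elapses,
        # which makes rapid identity rotation unattractive.
        return self.registered_at + ACTIVATION_DELAY

    def is_active(self, height: int) -> bool:
        return height >= self.active_from

# Usage: register a fresh identity at height 1000
ident = MiningIdentity(pubkey=secrets.token_bytes(33), registered_at=1000)
print(ident.is_active(1050))  # False: still inside the activation delay
print(ident.is_active(1100))  # True: eligible from height 1100 onward
```

The key point is that eligibility is purely a function of chain height, so every node can verify it deterministically.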
Epochs and Tickets

Time on the network is divided into epochs.
At the start of each epoch the chain generates a deterministic random seed derived from recent blocks.
Using this seed, each mining identity receives a small number of
tickets.
These tickets represent the miner's opportunity to attempt block production during that epoch.
For example:
• Each identity may receive 3–5 tickets per epoch
• Each ticket allows exactly one mining attempt
• Once a ticket is used, it cannot be reused
The number of tickets is fixed by protocol rules.
This is where the fairness begins.
No matter how much hardware someone owns, a single identity only receives a limited number of chances to compete.
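A minimal sketch of how deterministic ticket allocation could work, assuming a SHA-256-based derivation and the 3-5 ticket range mentioned above (the function name, seed construction, and counting rule are all hypothetical):

```python
import hashlib

MIN_TICKETS, MAX_TICKETS = 3, 5  # illustrative range from the example above

def tickets_for_identity(epoch_seed: bytes, pubkey: bytes) -> list[bytes]:
    """Derive an identity's tickets for an epoch deterministically.

    Every node can recompute this from the seed and the public key,
    so no participant can claim extra tickets. In the real protocol the
    seed would be derived from recent block hashes."""
    h = hashlib.sha256(epoch_seed + pubkey).digest()
    # Map the first hash byte to a count in [MIN_TICKETS, MAX_TICKETS]
    count = MIN_TICKETS + h[0] % (MAX_TICKETS - MIN_TICKETS + 1)
    # Each ticket is itself a deterministic, index-tagged hash value
    return [hashlib.sha256(h + bytes([i])).digest() for i in range(count)]

seed = hashlib.sha256(b"recent-block-hashes").digest()
tix = tickets_for_identity(seed, b"\x02" * 33)
assert MIN_TICKETS <= len(tix) <= MAX_TICKETS
assert len(set(tix)) == len(tix)  # tickets are distinct
```

Because the allocation is a pure function of public data, a used ticket can simply be marked spent and rejected on reuse.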
Sequential Work

When a ticket becomes active, the miner must perform a sequential computation challenge.
This challenge is intentionally designed so that:
• Each step depends on the previous step
• The computation cannot be parallelized efficiently
• Memory access patterns are randomized
• Verification is much cheaper than generation
Because the computation is sequential, throwing massive parallel hardware at the problem provides limited advantage.
In traditional mining, running 100 GPUs means 100x more attempts.
In this model, the same identity still only gets the same number of tickets.
Extra hardware simply finishes the same job slightly faster.
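A toy illustration of the sequential property, using a simple hash chain (all names here are illustrative; note that unlike the design described above, this naive version re-runs the whole chain to verify - a real construction would use something VDF-like with cheap verification, plus randomized memory access):

```python
import hashlib

def sequential_challenge(ticket: bytes, steps: int) -> bytes:
    """Toy sequential workload: each step hashes the previous output,
    so step i cannot start before step i-1 finishes. The chaining is
    what defeats parallel speedup: 100 GPUs cannot split the chain."""
    state = ticket
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state

def verify(ticket: bytes, steps: int, claimed: bytes) -> bool:
    # Naive verification recomputes the chain; a production scheme
    # would make this step far cheaper than generation.
    return sequential_challenge(ticket, steps) == claimed

out = sequential_challenge(b"ticket-0", 10_000)
print(verify(b"ticket-0", 10_000, out))  # True
```

The chain's length controls how long one attempt takes, independent of how many machines the miner owns.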
Winning a Block

After completing the sequential challenge, the miner produces a score.
If the score falls below the current difficulty target, the miner can produce a block.
Difficulty still adjusts exactly like a normal blockchain:
• If blocks are produced too quickly, difficulty increases
• If blocks slow down, difficulty decreases
The only difference is that attempts are limited by tickets rather than raw hash rate.
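The hash-below-target rule can be sketched just like ordinary PoW (function names and the scoring scheme are illustrative assumptions):

```python
import hashlib

def score(challenge_output: bytes) -> int:
    """Interpret a hash of the sequential-challenge output
    as a 256-bit integer score."""
    return int.from_bytes(hashlib.sha256(challenge_output).digest(), "big")

def wins_block(challenge_output: bytes, target: int) -> bool:
    # Exactly the familiar hash-below-target test, except each
    # identity only gets as many attempts as it has tickets.
    return score(challenge_output) < target

# A permissive target (half the 256-bit space) wins roughly
# half of all attempts:
target = 1 << 255
print(wins_block(b"some-challenge-output", target))
```

Raising or lowering `target` is the lever the difficulty adjustment turns.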
Why This Matters

In a traditional PoW system:
probability_of_winning ∝ hashrate
In the fair model:
probability_of_winning ∝ number_of_identities × tickets_per_epoch
This dramatically reduces the advantage of large-scale mining farms. A single identity cannot gain unlimited advantage simply by adding more hardware. To scale influence, a participant must operate more identities, which introduces costs such as collateral, activation delays, and operational complexity.
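A quick worked comparison, assuming an illustrative 4 tickets per epoch (the constant and function name are hypothetical):

```python
TICKETS_PER_EPOCH = 4  # illustrative, within the 3-5 range above

def fair_attempts(identities: int) -> int:
    """Attempts per epoch scale with identities, not with hardware."""
    return identities * TICKETS_PER_EPOCH

# Buying 100x the hardware changes nothing for a single identity:
assert fair_attempts(1) == 4
# To get 100x the attempts you must run 100 identities, each paying
# registration cost, collateral, and the activation delay:
assert fair_attempts(100) == 400
```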
Difficulty and Block Times

Block times and difficulty retargeting still function exactly as they do on a traditional blockchain.
The protocol simply adjusts the probability that a ticket will produce a valid block.
This keeps block intervals stable while still limiting useful mining attempts.
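The retargeting itself could look exactly like Bitcoin-style proportional adjustment; here is a sketch with illustrative constants (none of these values are from the actual code base):

```python
TARGET_BLOCK_TIME = 600  # seconds; illustrative
RETARGET_WINDOW = 144    # blocks per adjustment; illustrative
MAX_ADJUST = 4           # clamp factor, as in Bitcoin-style retargeting

def retarget(old_target: int, actual_timespan: int) -> int:
    """Proportional retarget: blocks arriving too fast lower the
    target (harder); blocks arriving too slow raise it (easier).
    Identical to ordinary PoW retargeting - only the source of
    attempts (tickets instead of raw hash rate) differs."""
    expected = TARGET_BLOCK_TIME * RETARGET_WINDOW
    # Clamp the adjustment to avoid wild swings in either direction
    timespan = max(expected // MAX_ADJUST,
                   min(actual_timespan, expected * MAX_ADJUST))
    return old_target * timespan // expected

old = 1 << 240
expected = TARGET_BLOCK_TIME * RETARGET_WINDOW
harder = retarget(old, expected // 2)  # blocks came twice as fast
easier = retarget(old, expected * 2)   # blocks came twice as slow
assert harder < old < easier
```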
There are many unknowns, and this system needs to answer questions like:
• How resistant is it to Sybil attacks?
• What are the real hardware advantages?
• How do pools behave under this model?
• What is the optimal sequential workload design?
• How does identity registration affect participation?
These questions can only be answered through experimentation. I will experiment as much as possible while working with the code to produce deterministic results while maintaining the core ideas outlined in this thread. This thread will serve as a sort of eternal notepad and idea-board for the network up until launch. The goal is to build something real, test it openly, break it repeatedly, and improve it with community input.
Fairness in decentralized systems is extremely difficult. Every design introduces tradeoffs between security, decentralization, accessibility, and economic incentives. I am not claiming this project will completely solve the problem, but I'm hoping that through experimentation and the advancements of AI we can get much closer than we currently are.
If it works, it could allow ordinary devices - including ARM systems - to remain competitive participants in securing the network. Desktops, laptops, and phones could all play meaningful roles in block production and reward distribution.
And if it fails, we will at least learn something meaningful along the way, and perhaps lay the groundwork for a new network to learn from our mistakes and improve on them.