# Alice Protocol
Not all intelligence bends the knee.
Decentralized AI Training Network — Mainnet Live
## What is Alice?

Alice Protocol is a decentralized network that trains a 7-billion-parameter AI language model from scratch. No corporate model, no centralized control. Anyone with a GPU contributes training gradients and earns ALICE tokens.
Unlike projects that fine-tune existing models or run inference, Alice trains a foundation model from zero using distributed Proof of Gradient (PoG) consensus. Every miner's GPU computes real gradients that update the shared model.
Key facts:

- 7B-parameter model trained from scratch by GPU miners
- Proof of Gradient (PoG) — a novel consensus for verifiable AI training
- Independent Substrate blockchain (not built on Ethereum/Solana/Bittensor)
- Fixed supply: 21,000,000 ALICE
- 2-year halving schedule
- No ICO, no pre-sale, no VC round — 100% mined
## How is Alice different?

- vs. Bittensor (TAO): Bittensor validators score existing models. Alice miners actually train a model from scratch. Every gradient moves the weights.
- vs. Gensyn: Gensyn builds compute infrastructure. Alice is a live training network with its own chain, model, and token economics.
- vs. Render / Akash: Those are GPU rental markets. Alice is a training protocol — miners contribute gradients, not raw compute hours.
- vs. Centralized AI (OpenAI, Google): Their models are corporate property. Alice's model is trained by the community, on the community's hardware, with open weights.
## Mainnet Status

Mainnet is live and producing blocks.

- Chain: Substrate-based, runtime v108
- Current epoch: ~100+
- Model version: v115+
- Active scoring: dual scorer pool with automatic coverage
- Reward distribution: every epoch (~1 hour)
- Current issuance: ~23,000 ALICE minted

Live explorer: aliceprotocol.org/explorer
## Tokenomics

| Parameter | Value |
| --- | --- |
| Total Supply | 21,000,000 ALICE |
| Distribution | 100% mined — no pre-mine, no team allocation |
| Halving | Every 2 years |
| Current reward/epoch | ~601 ALICE |
| Epoch duration | ~1 hour |
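As a rough consistency check, the ~601 ALICE/epoch figure can be derived from the 21M cap, the ~1-hour epoch, and the 2-year halving. The sketch below assumes exactly 17,520 one-hour epochs per halving era and a clean geometric schedule; the protocol's actual emission code may round differently.

```python
# Illustrative emission-schedule arithmetic, not protocol code.
TOTAL_SUPPLY = 21_000_000
EPOCHS_PER_ERA = 2 * 365 * 24  # ~17,520 one-hour epochs in 2 years (assumed)

# Geometric halving: total = r0 * EPOCHS_PER_ERA * (1 + 1/2 + 1/4 + ...) = r0 * EPOCHS_PER_ERA * 2
r0 = TOTAL_SUPPLY / (EPOCHS_PER_ERA * 2)
print(round(r0, 1))  # ~599.3 ALICE/epoch, close to the ~601 figure above
```

The small gap to 601 plausibly comes from how the chain counts epochs per era.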
Reward split per epoch:

- 89% — Miners (GPU trainers)
- 6% — Scorers (training validators)
- 2% — Aggregators (gradient aggregation)
- 3% — Treasury (protocol development)
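The per-role split of a single epoch's ~601 ALICE works out as follows. This is plain arithmetic from the percentages above, not on-chain logic.

```python
# Illustrative per-epoch reward split (89/6/2/3), using the ~601 ALICE figure.
EPOCH_REWARD = 601.0
SPLIT = {"miners": 0.89, "scorers": 0.06, "aggregators": 0.02, "treasury": 0.03}

shares = {role: EPOCH_REWARD * frac for role, frac in SPLIT.items()}
print(shares)  # miners ~534.9, scorers ~36.1, aggregators ~12.0, treasury ~18.0
```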
## How to Mine

Alice mining is permissionless. No staking required for miners. Just a GPU and one command.

Requirements:

- NVIDIA GPU with 24GB+ VRAM (RTX 3090/4090 recommended)
- OR Apple Silicon with 24GB+ unified memory
- OR NVIDIA GPU with 16GB VRAM (batch_size=1, lower rewards)
- OR CPU with 32GB+ RAM (supported but very slow)
Quick start:

```bash
git clone https://github.com/V-SK/Alice-Protocol.git
cd Alice-Protocol
./miner/bootstrap.sh --ps-url https://ps.aliceprotocol.org
```
The bootstrap script automatically:

- Checks for Python 3.10+
- Installs all dependencies
- Detects your hardware (CUDA/MPS/CPU)
- Creates a wallet
- Downloads the model (~14GB)
- Starts training
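The hardware-detection step can be sketched like this, assuming PyTorch is the training backend; the bootstrap script's actual logic may differ.

```python
# Hypothetical sketch of CUDA/MPS/CPU detection; not the real bootstrap code.
def detect_device() -> str:
    try:
        import torch
    except ImportError:
        return "cpu"  # no accelerator framework installed
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon unified memory
    return "cpu"

print(detect_device())
```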
Higher-end hardware earns more. The network assigns batch_size based on your GPU VRAM. More VRAM = larger batches = higher effective training contribution = more rewards.
| GPU | VRAM | Batch Size | Reward Weight |
| --- | --- | --- | --- |
| RTX 4080 | 16GB | 1 | 1x |
| RTX 3090/4090 | 24GB | 4 | 4x |
| A5000 | 24GB | 4 | 4x |
| M1/M2 Max 32GB | 32GB | 8 | 8x |
| A100 40GB | 40GB | 16 | 16x |
| A100/H100 80GB | 80GB | 32 | 32x |
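The VRAM-to-batch-size mapping in the table can be expressed as a simple tiered lookup. The thresholds below are read directly off the table; they are an illustration, not the protocol's assignment code.

```python
# Illustrative VRAM -> batch-size tiers, matching the reward table above.
def batch_size_for_vram(vram_gb: int) -> int:
    tiers = [(80, 32), (40, 16), (32, 8), (24, 4), (16, 1)]
    for min_vram, batch in tiers:
        if vram_gb >= min_vram:
            return batch
    return 0  # below the 16GB minimum: not eligible for GPU mining

assert batch_size_for_vram(24) == 4   # RTX 3090/4090 tier
assert batch_size_for_vram(80) == 32  # A100/H100 80GB tier
```

Since reward weight scales with batch size, this lookup also doubles as the reward-weight table (1x per unit of batch size).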
## How to Validate (Scorer)

Scorers verify training quality by evaluating gradient contributions. Requires staking.

- Stake: 5,000 ALICE
- Reward: 6% of all ALICE rewards
- Single stake supports multiple scorer machines
- Hardware: 32GB+ RAM recommended
```bash
git clone https://github.com/V-SK/Alice-Protocol.git
cd Alice-Protocol
./scorer/bootstrap.sh \
  --scorer-address YOUR_STAKED_ADDRESS \
  --public-endpoint http://YOUR_IP:8090
```
## Technology

Training architecture:

- Full data parallelism — every miner trains the complete 7B model
- TopK gradient compression (0.1%) — reduces upload from ~14GB to ~16MB per gradient
- Post-accumulate gradient hooks — enable 7B training on 24GB consumer GPUs
- Gradient checkpointing — 60% reduction in activation memory
- Streaming mean aggregation — Byzantine fault tolerance via scorer validation
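The TopK compression idea is simple: keep only the largest-magnitude 0.1% of gradient entries and ship sparse (index, value) pairs. The sketch below shows the principle on a plain Python list; the miner's actual wire format and tensor handling are not shown here.

```python
# Toy sketch of TopK (0.1%) gradient compression; illustrative, not miner code.
def topk_compress(grad, ratio=0.001):
    k = max(1, int(len(grad) * ratio))
    # indices of the k largest-magnitude entries
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return [(i, grad[i]) for i in idx]

def topk_decompress(pairs, size):
    dense = [0.0] * size  # everything not shipped is treated as zero
    for i, v in pairs:
        dense[i] = v
    return dense

grad = [0.001 * i for i in range(10_000)]  # toy "gradient" of 10k entries
pairs = topk_compress(grad)
assert len(pairs) == 10  # 0.1% of 10,000
```

At 7B parameters, keeping 0.1% of entries plus their indices is what shrinks each upload by roughly three orders of magnitude.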
Chain:

- Custom Substrate blockchain
- SS58 address format (prefix 300, addresses start with 'a')
- Proof of Gradient (PoG) consensus
- On-chain epoch settlement with reward distribution
- Scorer staking and slashing
Scoring:

- Random sampling validation (not full dataset — fast and efficient)
- Multiple independent scorers with median aggregation
- 3-sigma anomaly detection for gradient poisoning
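The scoring pipeline above can be sketched in a few lines: take the median across independent scorer reports, then flag contributions more than 3 sigma from the pool mean as possible gradient poisoning. This is an illustration of the two techniques named in the list, not the real scorer implementation.

```python
import statistics

def aggregate_scores(reports):
    # Median is robust to a minority of faulty or dishonest scorers.
    return statistics.median(reports)

def flag_outliers(scores, n_sigma=3.0):
    # 3-sigma rule: flag anything far from the pool mean as anomalous.
    mu = statistics.fmean(scores)
    sigma = statistics.pstdev(scores)
    if sigma == 0:
        return [False] * len(scores)
    return [abs(s - mu) > n_sigma * sigma for s in scores]

# 49 honest-looking contributions near 1.0 plus one poisoned-looking score.
scores = [1.0] * 49 + [5.0]
flags = flag_outliers(scores)
print(flags.count(True))  # 1 — only the poisoned contribution is flagged
```

Note that a 3-sigma test needs a reasonably large pool to work: a single extreme value inflates the standard deviation, so with only a handful of samples it can hide itself.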
## Roadmap

Current (Phase 1):

- Mainnet live with 7B dense model
- GPU mining with automatic hardware detection
- Dual scorer validation pool
- Block explorer and live dashboard
- Public mining — permissionless participation
Phase 2 (Q3-Q4 2026):

- Open aggregator participation (20,000 ALICE stake)
- GUI miner client (Tauri desktop app)
- Multi-scorer sharding for scale
- Public training dashboard with device census
Phase 3 (2027):

- MoE architecture: 8×7B Mixture of Experts (56B total)
- Each miner trains 1-2 expert modules
- 16GB GPU entry point for MoE training
- Parameter server decentralization
## Links

- Website: aliceprotocol.org
- Explorer: aliceprotocol.org/explorer
- GitHub: github.com/V-SK/Alice-Protocol
- Whitepaper: aliceprotocol.org/whitepaper
- Discord: discord.gg/mRz3BThDyh
- X (Twitter): @Alice_AI102
## FAQ

Q: Is there a pre-mine or team allocation?
A: No. 100% of ALICE is mined through training contribution. No ICO, no pre-sale, no insider allocation.

Q: What GPU do I need?
A: Minimum 16GB VRAM (RTX 4080). Recommended 24GB+ (RTX 3090/4090). Higher VRAM = higher rewards.

Q: Can I mine on Mac?
A: Yes. Apple Silicon with 24GB+ unified memory is supported via MPS. 16GB Macs can mine slowly using macOS swap.

Q: Can I mine with CPU only?
A: Technically yes, with 32GB+ RAM, but extremely slow (~1/50 to 1/100 of GPU speed). Rewards will be minimal.

Q: Is this just another Bittensor subnet?
A: No. Alice has its own independent blockchain, its own consensus mechanism (Proof of Gradient), and trains a model from scratch. It is not built on or dependent on Bittensor.

Q: Where can I trade ALICE?
A: ALICE is currently earned through mining only. Exchange listings will be announced when available.

Q: Is the model open source?
A: The training code and miner software are open source (MIT license). Model weights are produced by the network and available to participants.
They chose power. We chose freedom.
Alice Protocol — Training AI without permission.