A blockchain that trains AI instead of hashing.
Every block makes the model smarter.
| | Bitcoin | ResonanceNet |
|---|---|---|
| Proof | SHA-256 < target | val_loss < prev_val_loss |
| Output | Heat | Trained AI model |
| Useful work | No | Yes |
| Block time | 10 min | 10 min |
| Supply cap | 21M BTC | 21M RNET |
| Inference | — | 150K tokens/sec |
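The proof condition is just a block-validation predicate: a block is valid only if its model beats the previous block's validation loss. A minimal sketch in Python (the `Block` fields and `tolerance` parameter are hypothetical, not the protocol's actual schema; real validators would re-evaluate the checkpoint on a shared validation set rather than trust the claimed number):

```python
from dataclasses import dataclass

@dataclass
class Block:
    prev_hash: str
    model_checkpoint: str  # reference to the weights produced during this block
    val_loss: float        # validation loss claimed by the miner

def is_valid_proof(block: Block, prev_val_loss: float, tolerance: float = 0.0) -> bool:
    """Accept the block only if training measurably improved the model,
    mirroring Bitcoin's SHA-256 < target check."""
    # In practice, validators re-run evaluation on block.model_checkpoint
    # against a shared validation set instead of trusting block.val_loss.
    return block.val_loss < prev_val_loss - tolerance
```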
The model architecture is MinGRU, from ["Were RNNs All We Needed?"](https://arxiv.org/abs/2410.01201) (Feng et al., 2024).
| Feature | MinGRU | Transformer |
|---|---|---|
| Parameter efficiency | ~388x | 1x |
| Inference memory | O(1) fixed state | O(seq_len) KV cache |
| Context length | Infinite | Limited (8K-128K) |
| Tokens/sec (RTX 5080) | ~150,000 | ~3,000-5,000 |
| Runs on phone (INT4) | Yes | Barely |
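The O(1) inference memory falls out of the MinGRU recurrence itself: the update gate and candidate state depend only on the current input, so the hidden state is the entire carried context. A minimal NumPy sketch of one inference-mode cell (illustrative only; the network's actual layer stack, dimensions, and training-time parallelization are not shown):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinGRUCell:
    """One inference-mode MinGRU step (Feng et al., 2024):
        z_t  = sigmoid(W_z x_t)             # update gate, input-only
        h~_t = W_h x_t                      # candidate state, input-only
        h_t  = (1 - z_t) * h_{t-1} + z_t * h~_t
    The hidden state h is the only thing carried between tokens: fixed size,
    versus a transformer KV cache that grows with sequence length."""

    def __init__(self, d_in: int, d_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W_z = rng.normal(0.0, d_in ** -0.5, (d_hidden, d_in))
        self.W_h = rng.normal(0.0, d_in ** -0.5, (d_hidden, d_in))

    def step(self, x: np.ndarray, h: np.ndarray) -> np.ndarray:
        z = sigmoid(self.W_z @ x)
        h_tilde = self.W_h @ x
        return (1.0 - z) * h + z * h_tilde

cell = MinGRUCell(d_in=16, d_hidden=32)
h = np.zeros(32)
for x in np.random.default_rng(1).normal(size=(100, 16)):
    h = cell.step(x, h)  # state size never grows, no matter the sequence length
```

Because neither the gate nor the candidate depends on `h_{t-1}`, training can replace this sequential loop with a parallel prefix scan, which is the paper's central trick.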
The model starts small and grows with the network; the growth rate is bounded by real GPU compute.
| Timeline | Model Size (params) | Transformer Equivalent (params) |
|---|---|---|
| Month 1 | ~50M | ~19B |
| Year 1 | ~160M | ~62B |
| Year 3 | ~700M | ~272B |
| Year 5+ | ~4.9B | ~1.9T |
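The "Transformer Equivalent" column appears to be the MinGRU size multiplied by the ~388x parameter-efficiency factor from the table above (an assumption about how these figures were derived). A quick sanity check:

```python
# Reproduce the growth table's right-hand column from the ~388x factor
# (assumed derivation; not an official formula from the project).
EFFICIENCY = 388
for mingru_params in (50e6, 160e6, 700e6, 4.9e9):
    equiv = mingru_params * EFFICIENCY
    print(f"{mingru_params / 1e6:,.0f}M MinGRU -> {equiv / 1e9:,.0f}B transformer-equivalent")
# 50M -> 19B, 160M -> 62B, 700M -> 272B, 4,900M -> 1,901B (~1.9T)
```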