Bitcoin Forum
Author Topic: kangaroo-wild with precomputed (and chosen) DP  (Read 297 times)
kTimesG
Full Member
Activity: 770
Merit: 236
March 12, 2026, 05:34:42 PM
#21

Quote
To generate a DB 77 takes about 12 days, and a lot of RAM and SSD space is needed.

Or you can just store DPs continuously as they are found, with zero RAM overhead, and simply improve the DB every time a new DLP is solved. If you need a lot of RAM and 12 days, you have a poor understanding or a poor implementation of the algorithm. And if the purpose is to build a static DB of "best DPs", it is only "best" for the solver parameters (computing power) that you have in mind; it would not help at all for solving higher ranges, or for solving with something orders of magnitude faster (such as a large-scale GPU cluster).
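The "store DPs continuously, zero RAM overhead" idea can be sketched as an append-only file. This is a minimal illustration, not the actual kangaroo-wild format; the record layout (32-byte point x-coordinate plus a 10-byte traveled distance, echoing the "Dist bytes: 10" line in the tame_phase log) is an assumption.

```python
# Hypothetical record layout: 32-byte x-coordinate + 10-byte distance.
REC_SIZE = 32 + 10

def append_dp(db_path, x_bytes, distance):
    # Append one DP as soon as it is found -- O(1) RAM, no hash table.
    with open(db_path, "ab") as f:
        f.write(x_bytes.ljust(32, b"\x00") + distance.to_bytes(10, "little"))

def iter_dps(db_path):
    # Stream the DB back record by record, again with O(1) RAM.
    with open(db_path, "rb") as f:
        while (rec := f.read(REC_SIZE)):
            yield rec[:32], int.from_bytes(rec[32:], "little")
```

A 10-byte distance covers values up to 2^80, enough for any walk in a 2^77 range.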

Off the grid, training pigeons to broadcast signed messages.
oddstake
Newbie
Activity: 69
Merit: 0
March 12, 2026, 06:13:49 PM
#22

Everyone optimizes based on the resources they actually have. Not everyone is running a GPU cluster or has unlimited hardware. Different setups require different trade-offs, and what looks inefficient in one configuration can make perfect sense in another.

The problem isn’t how you build the database; the scale is simply too large for current hardware.
Who do you think will win: a perfectly optimized program with limited hardware, or unlimited hardware running poorly optimized software?

Anyway, I have no intention of solving puzzle 135 with a single CPU; I'm just experimenting with stuff.
arulbero (OP)
Legendary
Activity: 2154
Merit: 2526
March 12, 2026, 06:20:14 PM
#23

Quote
I will try to generate a DB 77, but it will take a lot of time. I will let you know when it finishes and make some tests.

Explain to me why you need so much RAM and time to generate 2^29 DP points. 12 days??
oddstake
Newbie
Activity: 69
Merit: 0
March 12, 2026, 06:45:31 PM
#24


-n 2147483648 = 2^31 DPs (not 2^29).

Look at the progress:
DPs: ... / 6,442,450,944

6,442,450,944 = 3 × 2^31 — scored mode generates 3× more candidate DPs, then selects the best 2^31 at the end.
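The "generate 3×, keep the best" step can be sketched as a top-n selection. This is an illustration only: the actual scoring function tame_phase uses is not shown in the thread, so a random score stands in for it here.

```python
import heapq
import random

def select_best(candidates, n):
    # Keep only the n highest-scoring candidates; heapq.nlargest streams
    # through the input while holding at most ~n items at a time.
    return heapq.nlargest(n, candidates, key=lambda c: c[0])

random.seed(42)
# 3x overgeneration, as in scored mode: 3n candidates, best n survive.
cands = [(random.random(), i) for i in range(3 * 8)]
best = select_best(cands, 8)
```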

-n (final target DPs):       2^31 = 2.1B
Generated DPs (scored, 3×):  3 × 2^31 = 6.4B
Your DB 75 (-n):             2^29 = 536M

The DB 77 I'm trying to generate now is 4× larger than the default DB 75 (2^31 vs 2^29 entries).


DB 77 should find puzzle 85 4 times faster than your DB 75, if my math is correct.
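The 4× figure follows from the DP counts alone, under the assumption (standard for precomputed-table kangaroo) that expected search work scales roughly inversely with the number of stored DPs, constants ignored:

```python
# DB sizes from the thread: default DB 75 holds 2^29 DPs, DB 77 holds 2^31.
db75_dps = 2 ** 29
db77_dps = 2 ** 31
print(db77_dps // db75_dps)  # 4
```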

Code:
./tame_phase -n 2147483648 -r 76 77 -g 17 -w 384 -b 256 --scored --checkpoint 30
============================================================
  TAME PHASE - Scored DP Database Generator
============================================================
  Range:          2^76 - 2^77
  Global bits:    17 (DP mask: 0x1ffff)
  Target DPs:     2147483648 (2147.5M)
  Scored mode:    YES
  Workers:        384 x 128 = 49152 kangaroos
  Jump table:     512 entries (bits=9, seed=42)
  Escape table:   128 entries (mult=2000)
  History:        4
  Local buf1:     2048
  Vita (life):    8388608
  Dist bytes:     10 (trunc=0)
  Hash index:     28 bits (268435456 buckets)
  Prefix:         77_scored_17
  Midpoint:       0x18000000000000000000
============================================================
[INIT] Building jump table...
[JUMP_TABLE] target_dp=2147483648 gap=1.76e+13 opt=48592007999 (2^35.5)
[INIT] Allocating hash table: 17179869184 slots (512.00 GB)
[HUGEPAGE] 512.00 GB on 1GB hugepages (512 pages)
[RUN] Starting 384 workers...
[CHECKPOINT] Auto-save every 30 minutes to 77_scored_17_checkpoint.bin
[PROGRESS] DPs: 58876 / 6442450944 (0.0%) | 1531.59M steps/s | 11715 DP/s | fill: 0.0% | ETA: 549924s | elapsed: 5s
[PROGRESS] DPs: 117383 / 6442450944 (0.0%) | 1532.74M steps/s | 11701 DP/s | fill: 0.0% | ETA: 550571s | elapsed: 10s
[PROGRESS] DPs: 175487 / 6442450944 (0.0%) | 1532.87M steps/s | 11621 DP/s | fill: 0.0% | ETA: 554385s | elapsed: 15s
[PROGRESS] DPs: 233868 / 6442450944 (0.0%) | 1532.72M steps/s | 11676 DP/s | fill: 0.0% | ETA: 551749s | elapsed: 20s
[PROGRESS] DPs: 291662 / 6442450944 (0.0%) | 1532.49M steps/s | 11559 DP/s | fill: 0.0% | ETA: 557350s | elapsed: 25s
kTimesG
Full Member
Activity: 770
Merit: 236
March 12, 2026, 06:54:22 PM
#25

Quote
The problem isn’t how you build the database; the scale is simply too large for current hardware.
Who do you think will win: a perfectly optimized program with limited hardware, or unlimited hardware running poorly optimized software?

The winner is the one who puts more computing power to the most efficient use possible, which is independent of the hardware itself and only depends on cost per watt times throughput. So it doesn't matter if you have a perfectly optimized program if it's running on a toaster. You can also have a perfectly optimized program that runs on 5000 GPUs and solves a single DLP, simply because that is the optimal quantity of computing power required; otherwise the costs are too high due to the expected time and resources. Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

So yeah: having a perfect toaster that solves 135 will cost MORE than having a cluster of 5000 GPUs working on the same problem. And as a bonus, one may not live long enough for the perfect toaster to spit out the solution.

Off the grid, training pigeons to broadcast signed messages.
oddstake
Newbie
Activity: 69
Merit: 0
March 12, 2026, 07:11:36 PM
#26

Quote
Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

lmao, I was talking about resources, not my intelligence.
You must be proud of yourself, or maybe very frustrated. Good luck, dude, and happy hunting!
arulbero (OP)
Legendary
Activity: 2154
Merit: 2526
March 12, 2026, 07:15:52 PM
Last edit: March 12, 2026, 07:32:23 PM by arulbero
#27

Quote
-n 2147483648 = 2^31 DPs (not 2^29).

Look at the progress:
DPs: ... / 6,442,450,944

6,442,450,944 = 3 × 2^31 — scored mode generates 3× more candidate DPs, then selects the best 2^31 at the end.

-n (final target DPs):       2^31 = 2.1B
Generated DPs (scored, 3×):  3 × 2^31 = 6.4B
Your DB 75 (-n):             2^29 = 536M

The DB 77 I'm trying to generate now is 4× larger than the default DB 75 (2^31 vs 2^29 entries).


DB 77 should find puzzle 85 4 times faster than your DB 75, if my math is correct.

Code:
./tame_phase -n 2147483648 -r 76 77 -g 17 -w 384 -b 256 --scored --checkpoint 30
============================================================
  TAME PHASE - Scored DP Database Generator
============================================================
  Range:          2^76 - 2^77
  Global bits:    17 (DP mask: 0x1ffff)
  Target DPs:     2147483648 (2147.5M)
  Scored mode:    YES
  Workers:        384 x 128 = 49152 kangaroos
  Jump table:     512 entries (bits=9, seed=42)
  Escape table:   128 entries (mult=2000)
  History:        4
  Local buf1:     2048
  Vita (life):    8388608
  Dist bytes:     10 (trunc=0)
  Hash index:     28 bits (268435456 buckets)
  Prefix:         77_scored_17
  Midpoint:       0x18000000000000000000
============================================================
[INIT] Building jump table...
[JUMP_TABLE] target_dp=2147483648 gap=1.76e+13 opt=48592007999 (2^35.5)
[INIT] Allocating hash table: 17179869184 slots (512.00 GB)
[HUGEPAGE] 512.00 GB on 1GB hugepages (512 pages)
[RUN] Starting 384 workers...
[CHECKPOINT] Auto-save every 30 minutes to 77_scored_17_checkpoint.bin
[PROGRESS] DPs: 58876 / 6442450944 (0.0%) | 1531.59M steps/s | 11715 DP/s | fill: 0.0% | ETA: 549924s | elapsed: 5s
[PROGRESS] DPs: 117383 / 6442450944 (0.0%) | 1532.74M steps/s | 11701 DP/s | fill: 0.0% | ETA: 550571s | elapsed: 10s
[PROGRESS] DPs: 175487 / 6442450944 (0.0%) | 1532.87M steps/s | 11621 DP/s | fill: 0.0% | ETA: 554385s | elapsed: 15s
[PROGRESS] DPs: 233868 / 6442450944 (0.0%) | 1532.72M steps/s | 11676 DP/s | fill: 0.0% | ETA: 551749s | elapsed: 20s
[PROGRESS] DPs: 291662 / 6442450944 (0.0%) | 1532.49M steps/s | 11559 DP/s | fill: 0.0% | ETA: 557350s | elapsed: 25s

I generated 6×2^29 DP points; I don't know the exact time, it was over multiple runs, maybe 100 hours ...

EDIT: in my opinion, instead of generating a 77 DB, if your goal is optimizing the 85 range, try the 85 range directly; you can't achieve in the 85 range the same performance that I achieved in 75 (normal mode), but generating 2^30 points in the 85 range is better than generating 2^30 points in the 77 range.
oddstake
Newbie
Activity: 69
Merit: 0
Today at 04:28:06 AM
#28

I will try your idea after it finishes generating DB 77, in about 7 days  Grin



But :

DB 75 → extend to 85:      2^(85-75) = 1024 partitions
DB 77 → extend to 85:      2^(85-77) = 256 partitions

4× fewer partitions  →  4× faster

                    Partitions for puzzle 85                    Median time for puzzle 85
DB 75                     1024                                    ~1 min
DB 77                      256                                    ~15 sec


So: DB 77 with 2^31 > DB 85 with 2^30 (2× faster)

You are right that, at equal entry counts, DB 85 is better than DB 77: zero partition overhead.

But I have 2^31 entries in DB 77, so double.
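The partition arithmetic above can be sketched in a few lines, under the assumption stated in the thread: a DB built for a 2^r-wide range is applied to a larger 2^t target by splitting the target into 2^(t-r) partitions and trying them one by one.

```python
def partitions(db_bits, target_bits):
    # Number of sub-intervals of the target range that the DB can cover.
    return 2 ** (target_bits - db_bits)

p75 = partitions(75, 85)  # DB 75 extended to puzzle 85
p77 = partitions(77, 85)  # DB 77 extended to puzzle 85
speedup = p75 // p77      # a quarter of the partitions to try
```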
kTimesG
Full Member
Activity: 770
Merit: 236
Today at 08:50:12 AM
#29

Quote
Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

lmao, I was talking about resources, not my intelligence.
You must be proud of yourself, or maybe very frustrated. Good luck, dude, and happy hunting!

Why would I be frustrated? You basically put an equal sign between "more resources" and "unoptimized".

When you factor in things like problem size, time, performance, and costs, the conclusion is that more resources are required in order to reach the optimal (minimal) cost, irrespective of algorithm efficiency.

This also applies to your assumption that you need tons of RAM (false) to build some DB that you expect to solve things N times faster (again, false). Maybe read Bernstein's cube-root paper before attempting anything else; that is basically what arulbero's program implements.

Off the grid, training pigeons to broadcast signed messages.
arulbero (OP)
Legendary
Activity: 2154
Merit: 2526
Today at 11:50:07 AM
Last edit: Today at 12:51:01 PM by arulbero
#30
Quote
I will try your idea after it finishes generating DB 77, in about 7 days  Grin



But :

DB 75 → extend to 85:      2^(85-75) = 1024 partitions
DB 77 → extend to 85:      2^(85-77) = 256 partitions

4× fewer partitions  →  4× faster

                    Partitions for puzzle 85                    Median time for puzzle 85
DB 75                     1024                                    ~1 min
DB 77                      256                                    ~15 sec


So: DB 77 with 2^31 > DB 85 with 2^30 (2× faster)

You are right that, at equal entry counts, DB 85 is better than DB 77: zero partition overhead.

But I have 2^31 entries in DB 77, so double.

I tried saving the first 2^30 DPs instead of the first 2^29 DPs (still the 74-75 range): 2 GB + 8 GB instead of 2 GB + 4 GB.

workers × batch = 192×60, on an AWS machine, a test of only 30 keys in the 84-85 range:

==========================================================
   SUMMARY
==========================================================
   Total keys:     30
   Solved:         30
   Failed/Aborted: 0
   Total time:     886.2s
   Min:            2.531s | Max: 102.924s
   Mean time/key:  29.541s
   >>> MEDIAN:     18.446s <<<
   Mean steps/key: 28566.83M
   Median steps:   17692.83M
   StdDev:         26402.86M
   95% CI (mean):  [19118.68M, 38014.97M] steps/key
==========================================================

Is that okay? Half of the keys in 18 s or less.

I modified kangaroo-wild so it can work in -ssd mode: only 2 GB in RAM, with the db_file on SSD.

I wonder how fast it could be if I save the first 2^31 or the first 2^32 DPs ...  Grin

----------------------------------------------------------------------------------------------------------------------
EDIT:

I tried to find a few keys in the 89-90 range; I was very lucky with the first key:

[!!!] KEY FOUND (f1 m1/1 delta=1454 partition=29075): 0x7d4ac6516f11e2b19fc
[SOLVED 1/5] Key: 0x3c64fd4ac6516f11e2b19fc (part 29075) | Steps: 109917.3M | Time: 114.930s

but not for the second one:

[!!!] KEY FOUND (f2 m1/1 delta=1353 partition=11682): 0x78b3fbfe9d14076b71b
[SOLVED 2/5] Key: 0x2b68b8b3fbfe9d14076b71b (part 11682) | Steps: 1911265.6M | Time: 2011.398s

[!!!] KEY FOUND (f1 m1/1 delta=1522 partition=29993): 0x787b34cce1255fb4266
[SOLVED 3/5] Key: 0x3d4a787b34cce1255fb4266 (part 29993) | Steps: 524447.9M | Time: 555.927s

[!!!] KEY FOUND (f1 m1/1 delta=1285 partition=30439): 0x4a5b3be06a8e939e486
[SOLVED 4/5] Key: 0x3db9ca5b3be06a8e939e486 (part 30439) | Steps: 47841.3M | Time: 48.957s

[!!!] KEY FOUND (f1 m1/1 delta=173 partition=15753): 0x5c4a1bd78b791cbf856
[SOLVED 5/5] Key: 0x2f625c4a1bd78b791cbf856 (part 15753) | Steps: 250001.7M | Time: 261.958s


==========================================================
   SUMMARY
==========================================================
   Total keys:     5
   Solved:         5
   Failed/Aborted: 0
   Total time:     2993.7s
   Min:            49.058s | Max: 2011.499s
   Mean time/key:  598.735s
   >>> MEDIAN:     262.061s <<<
   Mean steps/key: 568694.78M
   Median steps:   250001.73M
   StdDev:         772622.91M
   95% CI (mean):  [0.00M, 1245928.62M] steps/key
==========================================================


I think that the real average time must be over 900 s; with only 5 keys and such a large standard deviation, the sample mean is not reliable.

In any case, under an hour using only CPUs and a database built for the 74-75 bit range is not bad at all.

----------------------------------------------------------------------------------------------------------------------

My GitHub account is OK now; I'm planning to update kangaroo-wild with the -ssd option and to add to the readme file the links to

75_scored_16_30_tame_db.bin
75_scored_16_30_bucket_offsets.bin
75_scored_16_30_fingerprints.bin
75_training_16_30_params.bin

where:

75 means: 74-75 range
16 means: DP mask of 16 bits
30 means: 2^30 DPs saved
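The naming convention above can be captured in a tiny parser. This is a sketch; the field meanings come from the post, while the helper name itself is made up.

```python
def parse_db_name(name):
    # e.g. "75_scored_16_30_tame_db.bin"
    parts = name.split("_")
    return {
        "range_top_bits": int(parts[0]),  # 75 -> keys in the 74-75 range
        "mode": parts[1],                 # "scored" or "training"
        "dp_mask_bits": int(parts[2]),    # 16 -> DP mask of 16 bits
        "saved_dp_bits": int(parts[3]),   # 30 -> 2^30 DPs saved
    }
```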