Author Topic: kangaroo-wild with precomputed (and chosen) DP  (Read 331 times)
kTimesG
March 12, 2026, 05:34:42 PM
 #21

Quote
Generating a DB 77 takes about 12 days, and it also needs a lot of RAM and SSD space.

Or you can just store DPs continuously as they are found, with zero RAM overhead, and simply improve the DB every time a new DLP is solved. If you need a lot of RAM and 12 days, you have a poor understanding or a poor implementation of the algorithm. And if the purpose is to build a static DB of "best DPs", it is only "best" for the solver parameters (computing power) that you have in mind; it would not help at all in solving higher ranges, or in solving with something orders of magnitude faster (such as a large-scale GPU cluster).
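A minimal sketch of this append-only approach (the record layout is illustrative, not kangaroo-wild's actual format): each DP goes straight to disk as a fixed-size record as soon as it is found, so RAM usage stays constant no matter how many DPs accumulate, and the file can be sorted/merged offline later.

```python
import os, struct, tempfile

# Illustrative record layout (not kangaroo-wild's actual format):
# 8-byte truncated DP x-coordinate + 10-byte traveled distance.
REC = struct.Struct("<Q10s")

def append_dp(path, dp_x, dist):
    """Append one DP record; no in-memory table is kept at all."""
    with open(path, "ab") as f:
        f.write(REC.pack(dp_x & 0xFFFFFFFFFFFFFFFF,
                         dist.to_bytes(10, "little")))

def load_dps(path):
    """Stream records back (e.g. to sort/merge them offline later)."""
    out = []
    with open(path, "rb") as f:
        while chunk := f.read(REC.size):
            x, d = REC.unpack(chunk)
            out.append((x, int.from_bytes(d, "little")))
    return out

path = os.path.join(tempfile.mkdtemp(), "dps.bin")
append_dp(path, 0x1FFFF0000ABC, 12345)
append_dp(path, 0x1FFFF0000DEF, 67890)
recs = load_dps(path)
```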

Off the grid, training pigeons to broadcast signed messages.
oddstake
March 12, 2026, 06:13:49 PM
 #22

Everyone optimizes based on the resources they actually have. Not everyone is running a GPU cluster or has unlimited hardware. Different setups require different trade-offs, and what looks inefficient in one configuration can make perfect sense in another.

The problem isn't how you build the database; the scale is simply too large for current hardware.
Who do you think will win? A perfectly optimized program with limited hardware, or unlimited hardware running poorly optimized software?

Anyway, I have no intention of solving puzzle 135 with a single CPU, just experimenting with stuff.
arulbero (OP)
March 12, 2026, 06:20:14 PM
 #23

Quote
I will try to generate a DB 77, but it will take a lot of time. I will let you know when it finishes and I have made some tests.

Explain to me why you need so much RAM and time to generate 2^29 DP points. 12 days??
oddstake
March 12, 2026, 06:45:31 PM
 #24


-n 2147483648 = 2^31 DPs (not 2^29).

Look at the progress:
DPs: ... / 6,442,450,944

6,442,450,944 = 3 × 2^31 — scored mode generates 3× more candidate DPs, then selects the best 2^31 at the end.

                            Value
-n (final target DPs)       2^31 = 2.1B
Generated DPs (scored 3×)   3 × 2^31 = 6.4B
Your DB 75 (-n)             2^29 = 536M

The DB 77 which I'm trying to generate now is 4× larger than the default DB 75 (2^31 vs 2^29).


DB 77 should find puzzle 85 4 times faster than your DB 75, if my math is correct.
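The selection step can be sketched like this (the actual scoring criterion is not shown in this thread, so the score below is a placeholder; with a streaming top-N selection the peak memory is bounded by the final DB size, not by the 3× candidate count):

```python
import heapq, random

def select_best_dps(candidates, n, score):
    """Keep only the n highest-scoring DPs out of all candidates.
    heapq.nlargest streams, so peak memory is O(n), not O(len(candidates))."""
    return heapq.nlargest(n, candidates, key=score)

random.seed(1)
N = 4
# 3×N candidate DPs, as in scored mode: (dp_point, traveled_distance)
cands = [(random.getrandbits(64), random.getrandbits(40)) for _ in range(3 * N)]
# Placeholder score: prefer DPs with a smaller traveled distance
# (the real criterion used by the program is not documented here).
best = select_best_dps(cands, N, score=lambda c: -c[1])
```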

Code:
./tame_phase -n 2147483648 -r 76 77 -g 17 -w 384 -b 256 --scored --checkpoint 30
============================================================
  TAME PHASE - Scored DP Database Generator
============================================================
  Range:          2^76 - 2^77
  Global bits:    17 (DP mask: 0x1ffff)
  Target DPs:     2147483648 (2147.5M)
  Scored mode:    YES
  Workers:        384 x 128 = 49152 kangaroos
  Jump table:     512 entries (bits=9, seed=42)
  Escape table:   128 entries (mult=2000)
  History:        4
  Local buf1:     2048
  Vita (life):    8388608
  Dist bytes:     10 (trunc=0)
  Hash index:     28 bits (268435456 buckets)
  Prefix:         77_scored_17
  Midpoint:       0x18000000000000000000
============================================================
[INIT] Building jump table...
[JUMP_TABLE] target_dp=2147483648 gap=1.76e+13 opt=48592007999 (2^35.5)
[INIT] Allocating hash table: 17179869184 slots (512.00 GB)
[HUGEPAGE] 512.00 GB on 1GB hugepages (512 pages)
[RUN] Starting 384 workers...
[CHECKPOINT] Auto-save every 30 minutes to 77_scored_17_checkpoint.bin
[PROGRESS] DPs: 58876 / 6442450944 (0.0%) | 1531.59M steps/s | 11715 DP/s | fill: 0.0% | ETA: 549924s | elapsed: 5s
[PROGRESS] DPs: 117383 / 6442450944 (0.0%) | 1532.74M steps/s | 11701 DP/s | fill: 0.0% | ETA: 550571s | elapsed: 10s
[PROGRESS] DPs: 175487 / 6442450944 (0.0%) | 1532.87M steps/s | 11621 DP/s | fill: 0.0% | ETA: 554385s | elapsed: 15s
[PROGRESS] DPs: 233868 / 6442450944 (0.0%) | 1532.72M steps/s | 11676 DP/s | fill: 0.0% | ETA: 551749s | elapsed: 20s
[PROGRESS] DPs: 291662 / 6442450944 (0.0%) | 1532.49M steps/s | 11559 DP/s | fill: 0.0% | ETA: 557350s | elapsed: 25s
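As a sanity check on the log above: with a 17-bit DP mask each DP costs on average 2^17 group operations, so total candidate DPs times 2^17, divided by the ~1532M steps/s throughput, should reproduce the printed ETA (roughly 5.5×10^5 s, about 6.4 days, consistent with the log's 549924-557350 s):

```python
# Sanity-check the ETA in the log: with a 17-bit DP mask, each DP costs
# on average 2**17 steps, so total steps / throughput ≈ the printed ETA.
dp_bits = 17
target_dps = 6_442_450_944        # 3 × 2^31 scored candidates
steps_per_sec = 1_532e6           # ~1532M steps/s from the log

total_steps = target_dps * 2**dp_bits
eta_seconds = total_steps / steps_per_sec
eta_days = eta_seconds / 86_400
```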
kTimesG
March 12, 2026, 06:54:22 PM
 #25

Quote
The problem isn't how you build the database; the scale is simply too large for current hardware.
Who do you think will win? A perfectly optimized program with limited hardware, or unlimited hardware running poorly optimized software?

The winner is the one who puts the most computing power to the most efficient use possible, which is independent of the hardware and depends only on cost per watt times throughput. So it doesn't matter if you have a perfectly optimized program if it's running on a toaster. You can also have a perfectly optimized program that runs on 5,000 GPUs and solves a single DLP, simply because that is the optimal amount of computing power required; otherwise the costs are too high due to the expected time and resources. Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

So yeah: having a perfect toaster that solves 135 will cost MORE than having a 5,000-GPU cluster working on the same problem. And as a bonus, one may not live long enough to see the perfect toaster spit out the solution.

oddstake
March 12, 2026, 07:11:36 PM
 #26

Quote
Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

lmao, I was talking about resources, not my intelligence.
You must be proud of yourself, or maybe very frustrated. Good luck dude and happy hunting!
arulbero (OP)
March 12, 2026, 07:15:52 PM
Last edit: March 12, 2026, 07:32:23 PM by arulbero
 #27


Quote
-n 2147483648 = 2^31 DPs (not 2^29).

Look at the progress:
DPs: ... / 6,442,450,944

DB 77 which I'm trying to generate now is 4× larger than default DB 75 (2^31 vs 2^29).

DB 77 should find puzzle 85 4 times faster than your DB 75 if my math is correct.


I generated 6×2^29 DP points; I don't know the exact time, it was over multiple runs, maybe 100 hours ...

EDIT: in my opinion, instead of generating a 77 DB, if your goal is optimizing the 85 range, try the 85 range directly; you can't achieve in the 85 range the same performance I achieved in 75 (normal), but generating 2^30 points in the 85 range is better than generating 2^30 points in the 77 range.
oddstake
Today at 04:28:06 AM
 #28

I will try your idea after it finishes generating DB 77 in about 7 days  Grin



But :

DB 75 → extend to 85:      2^(85-75) = 1024 partitions
DB 77 → extend to 85:      2^(85-77) = 256 partitions

4× fewer partitions  ->  4× faster

                    Partitions for puzzle 85                    Median time for puzzle 85
DB 75                     1024                                    ~1 min
DB 77                      256                                    ~15 sec


So .... DB 77 with 2^31      >     DB 85 with 2^30     (2× faster)

You are right that, at equal entries, DB 85 is better than DB 77, with zero overhead.

But I have 2^31 in DB 77, so double.
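The partition arithmetic above, as a sketch (assuming, as in this thread, that the cost of extending a DB to a larger range scales linearly with the number of partitions):

```python
def partitions(db_bits, target_bits):
    """Number of DB-sized slices the target range splits into."""
    return 2 ** (target_bits - db_bits)

def relative_speed(db_a, db_b, target_bits):
    """How much faster DB `db_b` is than DB `db_a` on the target range,
    assuming cost proportional to the number of partitions."""
    return partitions(db_a, target_bits) / partitions(db_b, target_bits)

p75 = partitions(75, 85)              # 1024 partitions
p77 = partitions(77, 85)              # 256 partitions
speedup = relative_speed(75, 77, 85)  # 4.0
```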
kTimesG
Today at 08:50:12 AM
 #29

Quote
Having more computing power is not inversely proportional to a person's IQ, which is what your comment sounds like.

lmao , I was talking about resources, not my intelligence.
You must be proud of yourself, or maybe very frustrated. Good luck dude and happy hunting  !

Why would I be frustrated? You basically put an equal sign between "more resources" and "unoptimized".

When you factor in things like problem size, time, performance, and costs, the conclusion is that having more resources is required in order to reach the optimal (minimal) cost, irrespective of algorithm efficiency.

This also applies to your assumption that you need tons of RAM (false) to build some DB that you expect to solve things N times faster (again, false). Maybe read Bernstein's cube-root paper before attempting anything else; that is basically what arulbero's program implements.

arulbero (OP)
Today at 11:50:07 AM
Last edit: Today at 02:25:21 PM by arulbero
 #30

Quote
I will try your idea after it finishes generating DB 77 in about 7 days  Grin

DB 75 → extend to 85:      2^(85-75) = 1024 partitions
DB 77 → extend to 85:      2^(85-77) = 256 partitions

So .... DB 77 with 2^31      >     DB 85 with 2^30     (2× faster)

You are right but at equal entries, DB 85 is better than DB 77, zero overhead.

But I have 2^31 in DB 77, so double.

I tried saving the first 2^30 DPs instead of the first 2^29 DPs (same 74-75 range), 4 GB + 8 GB instead of 2 GB + 4 GB.

workers × batch = 192×60, AWS machine, test of only 30 keys in range 84-85

==========================================================
   SUMMARY
==========================================================
   Total keys:     30
   Solved:         30
   Failed/Aborted: 0
   Total time:     886.2s
   Min:            2.531s | Max: 102.924s
   Mean time/key:  29.541s
   >>> MEDIAN:     18.446s <<<
   Mean steps/key: 28566.83M
   Median steps:   17692.83M
   StdDev:         26402.86M
   95% CI (mean):  [19118.68M, 38014.97M] steps/key
==========================================================

Is that okay? Half of the keys in 18 s or less.

I modified kangaroo-wild so it can work in -ssd mode: only 2 GB in RAM, with the db_file on SSD.

I wonder how fast it could be if I save the first 2^31 or the first 2^32 DPs ....  Grin
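The summary statistics above can be reproduced with a few lines. I'm guessing the program's 95% CI is the usual normal approximation (mean ± 1.96·sd/√n) with the lower bound clamped at 0, which would explain a [0.00M, ...] line on a small, heavy-tailed sample. The times below are made-up examples, not the actual data:

```python
import math, statistics

def summarize(times):
    """Median, mean and a normal-approximation 95% CI for the mean.
    (Guessing the program's CI formula from its output; the lower bound
    is clamped at 0, as heavy tails can push mean - 1.96*sd/sqrt(n) < 0.)"""
    n = len(times)
    mean = statistics.mean(times)
    sd = statistics.stdev(times)
    half = 1.96 * sd / math.sqrt(n)
    return {"median": statistics.median(times),
            "mean": mean,
            "ci95": (max(0.0, mean - half), mean + half)}

# Heavy-tailed solve times (seconds), like the runs above: a few
# unlucky keys dominate the mean, so the median is the better metric.
s = summarize([2.5, 5.0, 10.0, 18.4, 30.0, 102.9])
```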

----------------------------------------------------------------------------------------------------------------------
EDIT:

I tried to find a few keys in the 89-90 range; I was very lucky with the first key:

[!!!] KEY FOUND (f1 m1/1 delta=1454 partition=29075): 0x7d4ac6516f11e2b19fc
[SOLVED 1/5] Key: 0x3c64fd4ac6516f11e2b19fc (part 29075) | Steps: 109917.3M | Time: 114.930s

but not for the second one:

[!!!] KEY FOUND (f2 m1/1 delta=1353 partition=11682): 0x78b3fbfe9d14076b71b
[SOLVED 2/5] Key: 0x2b68b8b3fbfe9d14076b71b (part 11682) | Steps: 1911265.6M | Time: 2011.398s

[!!!] KEY FOUND (f1 m1/1 delta=1522 partition=29993): 0x787b34cce1255fb4266
[SOLVED 3/5] Key: 0x3d4a787b34cce1255fb4266 (part 29993) | Steps: 524447.9M | Time: 555.927s

[!!!] KEY FOUND (f1 m1/1 delta=1285 partition=30439): 0x4a5b3be06a8e939e486
[SOLVED 4/5] Key: 0x3db9ca5b3be06a8e939e486 (part 30439) | Steps: 47841.3M | Time: 48.957s

[!!!] KEY FOUND (f1 m1/1 delta=173 partition=15753): 0x5c4a1bd78b791cbf856
[SOLVED 5/5] Key: 0x2f625c4a1bd78b791cbf856 (part 15753) | Steps: 250001.7M | Time: 261.958s


==========================================================
   SUMMARY
==========================================================
   Total keys:     5
   Solved:         5
   Failed/Aborted: 0
   Total time:     2993.7s
   Min:            49.058s | Max: 2011.499s
   Mean time/key:  598.735s
   >>> MEDIAN:     262.061s <<<
   Mean steps/key: 568694.78M
   Median steps:   250001.73M
   StdDev:         772622.91M
   95% CI (mean):  [0.00M, 1245928.62M] steps/key
==========================================================


I think that the real average time must be over 900 s.

In any case, under 1 hour using only CPUs and a database built for the 74-75 bit range is not bad at all.

----------------------------------------------------------------------------------------------------------------------

My GitHub account is OK now. I'm planning to update kangaroo-wild with the -ssd option and to add to the readme file the links to

75_scored_16_30_tame_db.bin
75_scored_16_30_bucket_offsets.bin
75_scored_16_30_fingerprints.bin
75_training_16_30_params.bin

where:

75 means:  74-75 range
16 means:  DP mask bits
30 means:  2^30 DPs saved
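A small helper to decode that naming scheme (it only handles the 4-field prefix; suffixes like _params.bin are left to the caller):

```python
def parse_db_prefix(prefix):
    """Decode the naming scheme, e.g. '75_scored_16_30_':
    range upper bit, generation mode, DP mask bits, log2 of saved DPs."""
    bits, mode, dp_mask, log2_dps = prefix.strip("_").split("_")
    return {"range": (int(bits) - 1, int(bits)),
            "mode": mode,
            "dp_mask_bits": int(dp_mask),
            "saved_dps": 2 ** int(log2_dps)}

info = parse_db_prefix("75_scored_16_30_")
```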
oddstake
Today at 03:49:38 PM
 #31

The DB lookup is random access, and SSD random reads are ~100× slower than RAM; random access is the worst case for an SSD.

With the DB in RAM you get, for example, ~1300M steps/s; with an SSD it would drop to maybe 10-50M steps/s depending on the NVMe model.

Still useful for huge DBs that don't fit into RAM, but expect a 30-100× slowdown compared to RAM.

My tames generation script https://github.com/providiu/tames-gen already has a --disk option for the hash table, but it's about 30× slower than RAM.

DB generation that takes 1 day in RAM would take about 32 days on disk.

How much are you paying for 192 cores on AWS?
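Whether the SSD actually becomes the bottleneck in the solve phase depends on the lookup rate, not just on raw latency: there is roughly one DB lookup per wild DP, i.e. one random read every 2^dp_bits steps. A quick back-of-envelope (illustrative numbers, not measurements):

```python
# Random reads per second the DB must sustain to keep up with the
# walkers: one lookup per DP hit, i.e. every 2**dp_bits steps.
def required_iops(steps_per_sec, dp_bits):
    return steps_per_sec / 2**dp_bits

need = required_iops(1_300e6, dp_bits=16)   # ~20K random reads/s
```

At ~1300M steps/s and a 16-bit mask this is only about 20K random reads/s, which a modern NVMe can sustain, so the solve-phase slowdown may be far smaller than for generation, where a hash table is probed and updated on every insert.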



oddstake
Today at 03:56:38 PM
 #32

Quote
Why would I be frustrated? You basically put an equal sign between "more resources" and "unoptimized".

When you factor in things like problem size, time, performance, and costs, the conclusion is that having more resources is a necessity that is required in order to reach the optimum (minimal) costs, irrespective of algorithm efficiency.

This applies also to your assumptions that you need tons of RAM (false) to build some DB that you expect to solve things N times faster (again, false). Maybe read Bernstein's paper on cube root before attempting anything else, which is what this arulbero's program implements, basically.


I think this discussion is moving in a better direction now compared to the previous one. Earlier it felt a bit more confrontational, but your current explanation is clearer.

Maybe I didn't express myself very well before; my point wasn't that more RAM automatically means faster results. What I meant is that, given the hardware I already have at home, I'm trying to optimize the program so it can make use of as much of the available memory as possible instead of leaving most of it idle.

If the program can benefit from memory-heavy structures like larger tables or databases, then it makes sense to leverage the RAM that's already there.
arulbero (OP)
Legendary
*
Online Online

Activity: 2156
Merit: 2526


View Profile
Today at 04:01:25 PM
 #33

I updated the program:

1) kangaroo_wild (-ram and -ssd option)

2) links to 3 databases (2^28, 2^29, 2^30 DP points)

https://github.com/arulbero/kangaroo-wild  


Quote
The DB lookup is random access and SSD random reads are 100x slower than RAM, which is the worst case for SSD.

With DB in RAM you get for example ~1300M steps/s, with SSD it would drop to maybe 10-50M steps/s depending on the NVMe model.
[...]


No, in my case there is little difference; try it yourself with my new kangaroo_wild (maybe a 3% difference).
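A minimal sketch of the general technique the file names (a data file plus bucket_offsets and fingerprints) suggest: keep only a small per-bucket offset table in RAM and serve each lookup with one bounded read from the memory-mapped DB file. The record layout and bucket size here are illustrative, not kangaroo-wild's actual format:

```python
import mmap, os, struct, tempfile

REC = struct.Struct("<QQ")          # illustrative record: (dp_key, distance)
BUCKET_BITS = 4                     # tiny for the demo; a real DB uses many more

def bucket_of(key):
    return key >> (64 - BUCKET_BITS)

def build_db(path, records):
    """Write records sorted by key; keep only per-bucket offsets in RAM."""
    records = sorted(records)
    offsets, k, nb = [], 0, 2**BUCKET_BITS
    with open(path, "wb") as f:
        for b in range(nb):
            offsets.append(f.tell())
            while k < len(records) and bucket_of(records[k][0]) == b:
                f.write(REC.pack(*records[k])); k += 1
        offsets.append(f.tell())    # sentinel: end of file
    return offsets

def lookup(mm, offsets, key):
    """Scan only the key's bucket in the memory-mapped file."""
    b = bucket_of(key)
    for pos in range(offsets[b], offsets[b + 1], REC.size):
        k, dist = REC.unpack(mm[pos:pos + REC.size])
        if k == key:
            return dist
    return None

path = os.path.join(tempfile.mkdtemp(), "db.bin")
recs = [(0x1234 << 48, 111), (0xF00D << 48, 222), (0xBEEF << 48, 333)]
offsets = build_db(path, recs)
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    hit = lookup(mm, offsets, 0xBEEF << 48)
    miss = lookup(mm, offsets, 0xDEAD << 48)
    mm.close()
```

With this layout each lookup touches one contiguous slice of the file, which keeps SSD access close to its best case even though the keys themselves arrive in random order.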
oddstake
Today at 05:10:15 PM
 #34

With this build my median is better than in the last test, but still slower than yours.
Maybe the AWS servers are better optimized than my machine.


Workers: 192 | Batch: 60 | 75_scored_16_30_
==========================================================
   SUMMARY
==========================================================
   Total keys:     30
   Solved:         30
   Failed/Aborted: 0
   Total time:     1212.8s
   Min:            1.408s | Max: 127.952s
   Mean time/key:  40.426s
   >>> MEDIAN:     32.620s <<<
   Mean steps/key: 45211.76M
   Median steps:   36718.18M
   StdDev:         36613.76M
   95% CI (mean):  [32109.70M, 58313.83M] steps/key
==========================================================

I had to modify some parameters, like the file names, which are hard-coded (line 45) into the binary (the default is 75_scored_16_29_), and the worker threads.


This error happens if someone downloads 75_scored_16_28_ or 75_scored_16_30_:
./kangaroo_wild gen_test 30 84 85
[ERROR] Cannot open training_params file: 75_training_16_29_params.bin
[ERROR] training_params file is required.


I also made a test with Workers: 384 | Batch: 60 on the same 75_scored_16_30_; slower than yours as well.

==========================================================
   SUMMARY
==========================================================
   Total keys:     30
   Solved:         30
   Failed/Aborted: 0
   Total time:     1002.6s
   Min:            0.816s | Max: 101.054s
   Mean time/key:  33.418s
   >>> MEDIAN:     27.726s <<<
   Mean steps/key: 49671.03M
   Median steps:   41585.53M
   StdDev:         37383.42M
   95% CI (mean):  [36293.54M, 63048.51M] steps/key
==========================================================
arulbero (OP)
Today at 05:20:04 PM
 #35


Quote
I had to modify some parameters like file names which are hard-coded (line 45) into the binary (default is 75_scored_16_29_) and worker threads.

This error happens if someone downloaded 75_scored_16_28_ or 75_scored_16_30_:
./kangaroo_wild gen_test 30 84 85
[ERROR] Cannot open training_params file: 75_training_16_29_params.bin
[ERROR] training_params file is required.


Of course you have to set the correct name.

Quote
With this build my median is better than the last test but still slower than yours.
Maybe AWS servers are better optimized than my machine.

Workers: 192 | Batch: 60 | 75_scored_16_30_  ->  MEDIAN: 32.620s
Workers: 384 | Batch: 60 | 75_scored_16_30_  ->  MEDIAN: 27.726s

I repeat: tests with 30 keys give very different results every time; 30 keys is too few.

With 192×60 I get around 1000 M steps/s; and you?
oddstake
Today at 05:33:47 PM
 #36

About 1120 M steps/s.

Quote
[CONFIG] Workers: 192 | Batch: 60 | C: 11520 | VITA: 8388608 | Storage: SSD
[EXTEND] Range 2^84 - 2^85 | 1024 partitions
[KEYS] Loaded 30 public keys from public_keys.txt

[KEY 1/30] Searching (extend 2^84-2^85, 1024 parts): 038a23e1c6bec1e366b724bd434c94111dab8bf0aa8af1d55361bfb84a92f84901
[EXTEND 1/30] 3394.7M steps | 1128.4M/s | 3s elapsed
[EXTEND 1/30] 6782.0M steps | 1128.5M/s | 6s elapsed
[EXTEND 1/30] 10164.4M steps | 1128.0M/s | 9s elapsed
[EXTEND 1/30] 13540.4M steps | 1127.3M/s | 12s elapsed
[EXTEND 1/30] 16912.5M steps | 1126.5M/s | 15s elapsed
[EXTEND 1/30] 20280.2M steps | 1125.8M/s | 18s elapsed
[EXTEND 1/30] 23642.5M steps | 1125.0M/s | 21s elapsed
[EXTEND 1/30] 26998.6M steps | 1124.2M/s | 24s elapsed
[EXTEND 1/30] 30346.8M steps | 1123.3M/s | 27s elapsed
[EXTEND 1/30] 33688.9M steps | 1122.3M/s | 30s elapsed

[!!!] KEY FOUND (f2 m1/1 delta=242 partition=403): 0x7d4ac6516f11e2b19fc
[SOLVED 1/30] Key: 0x164fd4ac6516f11e2b19fc (part 403) | Steps: 34771.1M | Time: 31.019s

[KEY 2/30] Searching (extend 2^84-2^85, 1024 parts): 029272869314de81e44e604c1c0da5b9db5b728c87db5a9173e157c62be458ef92
[EXTEND 2/30] 3390.3M steps | 1128.1M/s | 3s elapsed
[EXTEND 2/30] 6771.2M steps | 1127.4M/s | 6s elapsed
[EXTEND 2/30] 10146.9M steps | 1126.5M/s | 9s elapsed
[EXTEND 2/30] 13515.3M steps | 1125.5M/s | 12s elapsed
[EXTEND 2/30] 16873.7M steps | 1124.2M/s | 15s elapsed
[EXTEND 2/30] 20225.6M steps | 1123.0M/s | 18s elapsed
[EXTEND 2/30] 23569.1M steps | 1121.7M/s | 21s elapsed
[EXTEND 2/30] 26903.4M steps | 1120.4M/s | 24s elapsed
[EXTEND 2/30] 30228.9M steps | 1119.0M/s | 27s elapsed
[EXTEND 2/30] 33541.5M steps | 1117.5M/s | 30s elapsed
[EXTEND 2/30] 36846.1M steps | 1116.0M/s | 33s elapsed
[EXTEND 2/30] 40147.2M steps | 1114.7M/s | 36s elapsed
[EXTEND 2/30] 43445.2M steps | 1113.5M/s | 39s elapsed