Author Topic: Pollard's kangaroo ECDLP solver  (Read 56646 times)
arulbero
Legendary
May 02, 2020, 09:24:36 AM
Last edit: May 02, 2020, 12:32:13 PM by arulbero
 #41

This is a more up-to-date article:

https://www.ams.org/journals/mcom/2013-82-282/S0025-5718-2012-02641-X/S0025-5718-2012-02641-X.pdf

Quote
A natural choice for the wild set is the set of equivalence classes coming from elements of the form hg^x with −N/2 < x ≤ N/2. Note that the size of the wild set now depends on the discrete logarithm problem: if h = g^0 = 1 then the wild set has 1 + N/2 elements, while if h = g^(N/2) then the wild set has N elements. Even more confusingly, sampling from the wild set by uniformly choosing x does not, in general, lead to uniform sampling from the wild set. This is because the equivalence class {hg^x, (hg^x)^−1} can arise in either one or two ways, depending on h. To analyse the algorithm it is necessary to use a non-uniform version of the birthday paradox (see, for example, Galbraith and Holmes [223]). The main result of [228] is an algorithm that solves the DLP in heuristic average-case expected (1.36 + o(1))√N group operations.


The number of bad collisions is also linked to the kangaroo density.

A note: probably half of your tames start from an even position (+24G, +1466G, ...) and the other half from an odd position (37G, 54365G, ...), and if you use the jumps 1G, 2G, 4G, 8G, 16G, ... they are (except one) all even, so you actually have 2 groups that move in distinct areas. The same holds for the wild kangaroos.
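Just to make the parity point concrete, a toy check with plain integers and an all-even jump table (the extreme case; nothing specific to this program):

Code:
import random

jumps = [2, 4, 8, 16, 32]            # all-even toy jump table
for start in (24, 1466, 37, 54365):  # the example starting offsets above
    pos = start
    for _ in range(10000):
        pos += random.choice(jumps)
    assert pos % 2 == start % 2      # the walk can never change the parity of its start

print("even-start and odd-start kangaroos stay on disjoint tracks")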

It is as if you worked in 2 subintervals, which is not good for the collision probability. At the very least you should concentrate all the wild kangaroos in a single subinterval (all even or all odd).

see this:


Quote
Four Kangaroo Method. We can make a further improvement. The starting points of W1 and W2,
namely x and −x, are integers of the same parity. Hence, to obtain wild-wild collisions more quickly one
should take walks whose jumps are all even length. However, now there is the possibility that the wild walks
will never collide with the tame walk. The solution is to have two tame kangaroos, T1 and T2, starting on
adjacent integers (one even and one odd). There is no possibility of a useless tame-tame collision, but there
is a chance of tame-wild collisions, as desired (though with only one of the tame kangaroos).
...
We now estimate the complexity of the method.
...

Hence, using 4 kangaroos gives an algorithm for the DLP in an interval whose heuristic average case
expected running time is (1.714 + o(1))√N group operations.
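To make the idea concrete, here is a toy pure-integer simulation of that four-kangaroo setup (wilds at x and -x, all-even jumps, two tames on adjacent integers). It is only a sketch with made-up parameters, not the method of the paper nor of this program:

Code:
import random

N = 1 << 20                          # toy interval size
secret = random.randrange(N)         # the unknown discrete log u in [0, N)
x = secret - N // 2                  # recentred secret in [-N/2, N/2)

jumps = [2 << i for i in range(8)]   # even jump lengths only: 2, 4, ..., 256

def jump(pos):
    # deterministic jump chosen from the current position (the "group element")
    return jumps[pos % len(jumps)]

# each kangaroo: [kind, sign, position, travelled]; a wild sits at sign*x + travelled,
# and only 'travelled' would be known to an attacker
roos = [["tame", 0, N // 2, 0], ["tame", 0, N // 2 + 1, 0],
        ["wild", +1, x, 0], ["wild", -1, -x, 0]]

seen = {}                            # position -> (kind, sign, travelled)
steps, found = 0, None
while found is None:
    for r in roos:
        d = jump(r[2])
        r[2] += d
        r[3] += d
        steps += 1
        kind, sign, pos, trav = r
        if pos in seen:
            okind, osign, otrav = seen[pos]
            if kind != okind:                      # tame-wild collision
                s, wd = (sign, trav) if kind == "wild" else (osign, otrav)
                found = s * (pos - wd) + N // 2
            elif sign != osign:                    # useful wild-wild collision
                found = (trav - otrav) // (osign - sign) + N // 2
            if found is not None:
                break
        else:
            seen[pos] = (kind, sign, trav)

print("steps:", steps, "recovered:", found, "correct:", found == secret)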
Jean_Luc (OP)
Sr. Member
May 02, 2020, 01:08:43 PM
 #42

I did a test using dp 0 (much slower):
The algorithm runs in 2^20.554 = ~1.47 sqrt(N) (1000 trials) as announced in the paper, so it seems ok, but I have to figure out why it is not the case for dp > 0. There is a relationship between dp and range size (m√2/3πθ) that I missed.
Will try to understand how to choose this m.

arulbero
Legendary
May 02, 2020, 01:19:01 PM
Last edit: May 02, 2020, 01:29:17 PM by arulbero
 #43

I did a test using dp 0 (much slower):
The algorithm runs in 2^20.554 = ~1.47 sqrt(N) (1000 trials) as announced in the paper, so it seems ok, but I have to figure out why it is not the case for dp > 0. There is a relationship between dp and range size (m√2/3πθ) that I missed.
Will try to understand how to choose this m.

Is this the difference?

Quote
To detect small cycles we stored the previous 30 group elements in the walk in the case N ≈ 2^34 (respectively, 35, 45 group elements for N ≈ 2^40, 2^48). Each new step in the walk was compared with the previous 30 (respectively, 35, 45) group elements visited. If this group element had already been visited then the walk is in a cycle. We used a deterministic method to jump out of the cycle (using a jump of distinct length from the other jumps used in the pseudorandom walk) so that the pseudorandom walk as a whole remained deterministic. The cost of searching the list of previous group elements is not included in our experimental results, but our count of group operations does include the “wasted” steps from being in a cycle.
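In code the mechanism would look roughly like this (plain integers instead of group elements, and a deliberately cycle-prone jump table just to show the escape; the window size and jump values are not the ones from the paper):

Code:
from collections import deque

jumps = [3, -3, 5, -5]          # toy symmetric jump table, chosen so small cycles appear often
ESCAPE = 23                     # escape jump, distinct from every other jump length

def walk(start, n_steps):
    pos = start
    recent = deque([pos], maxlen=30)   # the last 30 visited "group elements"
    escapes = 0
    for _ in range(n_steps):
        nxt = pos + jumps[pos % len(jumps)]
        if nxt in recent:              # new step was visited recently: we are in a small cycle
            nxt = pos + ESCAPE         # deterministic escape, so the whole walk stays deterministic
            escapes += 1
        recent.append(nxt)
        pos = nxt
    return pos, escapes

print(walk(123456, 100000))     # (final position, number of cycle escapes)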

With these data:
N = 2^40
m = 2^11
θ = 2^−5


still 2^21?
COBRAS
Member
May 02, 2020, 01:23:11 PM
 #44

@Jean_Luc

In your examples:

0
FFFFFFFFFFFFFF
02E9F43F810784FF1E91D8BC7C4FF06BFEE935DA71D7350734C3472FE305FEF82A

[22.67 MKey/s][GPU 13.04 MKey/s][Count 2^29.06][Dead 0][28s][89.1MB]


Key# 0 Pub:  0x02E9F43F810784FF1E91D8BC7C4FF06BFEE935DA71D7350734C3472FE305FEF82A
       Priv: 0x378ABDEC51BC5D

Done: Total time 29s

In this example from your README on GitHub the result took only 29 sec (!!!), this is much faster than using a range like this:

Start:49DCCFD96DC5DF56487436F5A1B18C4F5D34F65DDB48CB5E0000000000000000
Stop :49DCCFD96DC5DF56487436F5A1B18C4F5D34F65DDB48CB5EFFFFFFFFFFFFFFFF

And I think even faster, because there is no need to search for RANGES(!)

I think the PUBKEY cannot be > PRIVKEY, so our range is not from 0.....FFFFFFFFFFFFFF, but from 0....(FFFFFFFF minus unknown X? (maybe minus PUBKEY??))

COBRAS
Member
May 02, 2020, 01:34:39 PM
 #45

@Luc

Can you implement modification of ranges in your code?


From:

Start-dfggsdgdsfh1111111
End- FFFFFABC112222233

To:
O

ABC888888

Etc...




Biggest thank you.


Jean_Luc (OP)
Sr. Member
May 02, 2020, 01:48:32 PM
 #46

With these data:
N = 2^40
m = 2^11
θ = 2^−5


still 2^21?

I have to try, I'll let you know...

@COBRAS
I'll release the code ASAP ;)
COBRAS
Member
May 02, 2020, 02:20:02 PM
 #47



@COBRAS
I'll release the code ASAP ;)

This will be great Luc.

If we reduce the number of calculations by reducing the size of the range in length and bits, we can get closer to a positive result, I think. So I think that handling calculation ranges is also very important, Luc.

Biggest thanks. :)


MrFreeDragon
Sr. Member
May 02, 2020, 02:45:18 PM
 #48

-snip-
@MrFreeDragon
You need a large number of tests on keys uniformly distributed in the range to get a correct estimation, at least 1000 trials.
Did you try with a jump table size of 32 or 64? NB_JUMP in GPUEngine.h:24?

I added the file I use for stats, it contains 1000 40-bit keys uniformly distributed.
https://github.com/JeanLucPons/Kangaroo/blob/master/VC_CUDA8/in40_1000.txt

I tried with NB_JUMP 64 (as it was by default).

As for the 1000 40-bit keys, I guess that 40 bits is a very small range for GPU tests. I started your example file with 1000 keys, and even though it shows an average operations number of 2^21.6, the GPU performs 2^25-2^26 operations in just 5-7 seconds and finds the key (in many cases it even shows Count 2^-inf). So the result is not meaningful. The GPU is only starting up, but already solves the key... and I do not see its exact speed during such tests. Or change the grid size... for test purposes.

A larger key is required, but with a longer time of course.

There should be some range with a good balance between test results and time required. With 5 min per key we need 3 days for the test; with 1 min per key only 16-17 hours. On a 1080 Ti a 70-bit key can be found in about 1 minute with your program. Maybe a 70-bit key is that balance.

Jean_Luc (OP)
Sr. Member
May 02, 2020, 03:16:46 PM
 #49

Yes, the GPU is not good on a 40-bit range; this 40-bit file is for making stats on the CPU.
To make accurate statistics you need to do it on the CPU, the granularity of the counting on the GPU is too large, but I will try to find something in order to make tests on the GPU.

Anyway I did a test with dp 3 and it also converges to 2^21 (same algorithm, no change of the range as a function of dp).
This is tricky, there is something I need to understand with those random walks...

I have the feeling that this algorithm wins in average complexity by favoring keys in the middle of the range, as the density of wild kangaroos is increased in the center. It has a worse 'worst case' and a much better 'best case' than the classic algorithm.
arulbero
Legendary
May 02, 2020, 03:41:11 PM
 #50

Anyway I did a test with dp 3 and it also converges to 2^21 (same algorithm, no change of the range as a function of dp).

This is tricky, there is something I need to understand with those random walks...

The data of the article are:

N = 2^40
m = 2^11
θ = 2^−5

with  1.47*2^20, very strange...

Anyway I learned a lot of things in these 2 days, thank you!


I found that distinguished points are not all equal from the point of view of probability; if you want to precompute them you have to choose them carefully (don't simply reuse the DPs you found in one run).

https://eprint.iacr.org/2012/458.pdf

Quote
Choosing the most useful distinguished points. Instead of randomly generating T distinguished points, we propose generating more distinguished points, say 2T or 10T or 1000T, and then keeping the T most useful distinguished points.
(This presumably means storing 2T or 10T or 1000T points during the precomputation, but we follow standard practice in distinguishing between the space consumed during the precomputation and the space required for the output of the precomputation. As an illustrative example in support of this practice, consider the precomputed rainbow tables distributed by the A5/1 Cracking Project [26]; the cost of local RAM used temporarily by those computations is much less important than the network cost of distributing these tables to users and the long-term cost of storing these tables.)
The natural definition of “most useful” is “having the largest number of ancestors”. By definition the ancestors of a distinguished point are the group elements that walk to this point; the chance of a uniform random group element walking to this point is exactly the number of ancestors divided by L.
Unfortunately, without taking the time to survey all L group elements, one does not know the number of ancestors of a distinguished point. Fortunately, one has a statistical estimate of this number: a distinguished point found by many walks is very likely to be more useful than a distinguished point found by fewer walks.
This estimate is unreliable for a distinguished point found by very few walks, especially for distinguished points found by just one walk.
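A rough sketch of that selection rule (toy numbers and a placeholder walk instead of a real kangaroo walk; only to show the "keep the T most-visited DPs" idea):

Code:
from collections import Counter
import random

M = 1048573                        # toy group size (a prime), not a real curve order
def step(x):
    return (x * x + 1) % M         # placeholder mixing step, not a real kangaroo jump
def is_dp(x):
    return x % 256 == 0            # "distinguished" here = value divisible by 256

def walk_to_dp(start, max_steps=1 << 14):
    x = start
    for _ in range(max_steps):
        if is_dp(x):
            return x
        x = step(x)
    return None                    # rare: walk trapped in a DP-free cycle, discard it

T = 200
ends = [walk_to_dp(random.randrange(M)) for _ in range(10 * T)]
hits = Counter(e for e in ends if e is not None)      # how many walks ended on each DP
kept = [dp for dp, _ in hits.most_common(T)]          # keep the T most-visited DPs
print("distinct DPs found:", len(hits), "- kept:", len(kept))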

I have the feeling that this algorithm wins in average complexity by favoring keys in the middle of the range, as the density of wild kangaroos is increased in the center. It has a worse 'worst case' and a much better 'best case' than the classic algorithm.

Maybe a hybrid approach, with some precomputed DPs loaded into the hashtable (precomputed DPs only near the extremes of the range), to compensate.
ricardosuperx
Member
May 03, 2020, 12:06:13 AM
 #51

But with both signs (+-) or not?

Could you try with an average of 2^21 - 2^22 instead?

Yes, at the beginning I used the y sign to select a positive or negative jump, then I tried another set of random negative jumps without success.
It seems that this "brownian motion" is rather tricky to tune.

Tomorrow, I'll try other things...


Only negative jumps? I would try with positive/negative jumps and a greater average (my feeling).
To me it looks promising. 1.50·sqrt(N) would be amazing!
Another thing to pay attention to: when we translate our interval, we have only 2^(n-1) x-coordinates, and therefore half the DPs. For the tuning it is as if we worked in a 2^(n-1) space.


I was thinking about another thing:
if you want to overcome/avoid the brownian-motion tuning problem,

go back and put together

1) the linearity (and therefore the simplicity) of the classic approach (that you used in the release 1.3)
2) the power of the symmetry

Exploiting the symmetry means that we use equivalence classes to increase the probability of a collision (same number of tries, half the 'points'); in our case, given a point P(x,y), 'x' decides the width of the jump and 'y' only the direction.
In secp256k1 there are 2 points that share the same 'x': (x,y) and (x,-y).

In an interval of consecutive points all x-coordinates are different, but if we move the interval like we did, it becomes symmetric.

The drawback of our approach is that a kangaroo, instead of going in only one direction, keeps going back and forth, creating many loops. Besides, not only do we not know where a wild kangaroo is (obviously we don't know its position / private key), we don't even control the direction of its motion (and often it goes outside the interval).


But this curve has another symmetry that respects the linearity (unlike the negation map, which overturns the interval and the kangaroo algorithm): the endomorphism.

There are 3 points with the same 'y'; that means this curve has about 2^256 points and
 
1) 2^256 / 2 different x-coordinates
2) 2^256 / 3 different y-coordinates

Why don't we use the symmetry that maintains the linearity of the interval?

We can work on the y-coordinates, and a jump should depend on 'y' and not on 'x'.
We set equivalence classes where [P] = {P, Q, R} if and only if  yP = yQ = yR

What is the relation between the private key of these points P, Q, R?

If P=k*G, then Q=k*lambda*G and R =k*lambda^2*G (endomorphism)
and P=(x,y) -> Q=(beta*x, y) -> R=(beta^2*x,y)

observation:

if P=k*G, i.e. k is the private key of P with respect to the generator G, then
lambda*P = k*lambda*G, i.e. the private key of P'=lambda*P with respect to the generator G'=lambda*G is always k.

In other words, if we know that k lies in [a,b], solving the problem P=k*G is equivalent to solving the problem P'=k*G'
The interval is the same (from the scalar's point of view) but there are actually 3 different intervals of points. They are all consecutive from a different generator's point of view:

           point                                scalar
[a*G,   (a+1)*G, ....,  b*G]     <-> [a,b]   interval 1
[a*G',  (a+1)*G', ....,  b*G']    <-> [a,b]   interval 2  where G'=lambda*G
[a*G'', (a+1)*G'', ...., b*G'']   <-> [a,b]   interval 3  where G''=lambda^2*G

We could work on 3 "spaces" simultaneously:

P=k*G, with all points with generator G   -> interval 1     jumps (1*G, 2*G, ,,,2^(n-1)*G)

P'=k*G' with all points with generator G' -> interval 2     jumps(lambda*G, lambda*2*G, .... lambda*2^(n-1)*G)

P''=k*G'' with all points with generator G'' -> interval 3  jumps(lambda^2*G, lambda^2*2*G, .... lambda^2*2^(n-1)*G)
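As a quick sanity check of the relation lambda*P = (beta*x, y), here is a naive pure-Python verification with the usual secp256k1 constants (textbook affine arithmetic, illustration only, nothing to do with the Kangaroo code):

Code:
P_MOD = 2**256 - 2**32 - 977       # secp256k1 field prime
N_ORD = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
LAM  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
BETA = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE

def add(P, Q):
    # textbook affine point addition on y^2 = x^3 + 7 (None = point at infinity)
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % P_MOD == 0: return None
    if P == Q:
        l = 3 * P[0] * P[0] * pow(2 * P[1], -1, P_MOD) % P_MOD
    else:
        l = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, P_MOD) % P_MOD
    x = (l * l - P[0] - Q[0]) % P_MOD
    return (x, (l * (P[0] - x) - P[1]) % P_MOD)

def mul(k, P):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

k = 123456789
P = mul(k, G)
Q = mul(LAM * k % N_ORD, G)                  # Q = (lambda*k)*G = lambda*P
assert Q == (BETA * P[0] % P_MOD, P[1])      # same y, x multiplied by beta
print("endomorphism check OK")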

If the movements of a kangaroo depend only on the y-coordinate, that means that if a kangaroo reaches a point T in the first interval and another kangaroo reaches a point W in the second interval with the same y, we will have a collision!
From that point they will walk together along the same path (across the same equivalence classes) until they reach a DP (in this case a y-coordinate with some zero bits), and then we detect the collision.

EDIT:
We work in this way in a space of 3*2^n points (3 intervals), but with only 2^n y-coordinates, so we should have a collision with a higher probability.  WRONG

I just wanted to understand this: https://bitcointalk.org/index.php?topic=5245379.0


COBRAS
Member
May 03, 2020, 04:29:18 AM
 #52

@Luc, so will you provide a release? :)

Br.

Jean_Luc (OP)
Sr. Member
May 03, 2020, 05:25:25 AM
Last edit: May 03, 2020, 05:51:04 AM by Jean_Luc
 #53

Hi,

I must say I'm a bit lost, they announced an algorithm in 1.36√N and they measured 1.47√N. They say that the complexity is actually (1.36 + epsilon)√N and they estimate this epsilon at ~0.1 without being clear about it.
https://www.iacr.org/archive/pkc2010/60560372/60560372.pdf

If we compute the complexity for a symmetric set, we indeed get: 2(2−√2)√π/√2 · √N ≈ 1.468√N !
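Numerically: 2(2−√2) ≈ 1.1716 and √π/√2 ≈ 1.2533, and 1.1716 × 1.2533 ≈ 1.468, so the constant checks out.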

So I would rather say that the complexity is (1.47 + epsilon)√N, with an epsilon which depends on the number of kangaroos, dp and N, as for the standard version.

I'm still fighting with collisions; when using dp > 0, it seems that the symmetry generates a lot of wrong collisions.

arulbero
Legendary
May 03, 2020, 08:17:31 AM
Last edit: May 03, 2020, 10:39:38 AM by arulbero
 #54

I must say I'm a bit lost, they announced an algorithm in 1.36√N and they measured 1.47√N. They say that the complexity is actually (1.36 + epsilon)√N and they estimate this epsilon at ~0.1 without being clear about it.
https://www.iacr.org/archive/pkc2010/60560372/60560372.pdf

If we compute the complexity for a symmetric set, we indeed get: 2(2−√2)√π/√2 · √N ≈ 1.468√N !

So I would rather say that the complexity is (1.47 + epsilon)√N, with an epsilon which depends on the number of kangaroos, dp and N, as for the standard version.

Very interesting.

I'm still fighting with collisions; when using dp > 0, it seems that the symmetry generates a lot of wrong collisions.

From your tests the algorithm converges in 1.47·sqrt(N) with dp=0:

I did a test using dp 0 (much slower):
The algorithm runs in 2^20.554 = ~1.47 sqrt(N) (1000 trials) as announced in the paper, so it seems ok

but 2·sqrt(N) with dp=3:

I did a test with dp 3 and it also converges to 2^21 (same algorithm, no change of the range as a function of dp).
This is tricky, there is something I need to understand with those random walks...

Assuming you used the same jumps, there is only one explanation: the algorithm produces a useful collision in 1.47·sqrt(N) steps, but you take a long time to detect it because the walks are too short; they collapse into a loop before reaching a DP. It is not a problem of bad collisions between different tames or between different wilds, but "between" the same tame / the same wild (loops). So a wild and a tame meet frequently (first collision after 1.47·sqrt(N)) but they die immediately afterwards. This issue can be fixed (in my opinion), because these bad collisions (I call them "bad collisions 2" = collision with itself = loop) are distinguishable from the others and you can lower them without increasing the other ones.

DP = 3 means that the length of many walks is less than 8!!! It takes +36% (1.47 -> 2) to produce a collision followed by a DP!!!

You have to change your jumps. But choose them in an "intelligent" way; I think it is not too difficult.
"Intelligent" means that a human can't do this by hand, let the probability do it. I think that the main problem is that you want to choose the jumps by yourself. Why?

You should study separately a set of jumps such that the probability of coming back before 2^DP steps is low. This study depends only on the jumps, DP and m (the average jump), not on points/coordinates/N!

For example, with these jumps [+-1, +-2, +-4, +-8, +-16] you generate many sequences (starting from 0). If the number of sequences with length < 2^DP (or < k*2^DP) is too high, change the jumps randomly and retry. Length of a sequence: the number of steps before getting back to 0. Deal simply with integer numbers.

This setting phase should be done automatically at the start of the program. In this way, for each DP, you will have configured the jumps so as to get a lower probability of generating loops. Something like this:
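(Rough sketch with plain integers and randomly chosen signs, a simplified model of the symmetric walk, not the program's actual walk function.)

Code:
import random

def short_loop_fraction(jumps, dp, trials=10000):
    # fraction of walks that return to their start in fewer than 2^dp steps,
    # i.e. that would die before reaching a distinguished point
    limit = 1 << dp
    short = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while steps < limit:
            pos += random.choice(jumps)
            steps += 1
            if pos == 0:          # the walk came back to its starting point: a loop
                short += 1
                break
    return short / trials

jumps = [s * (1 << i) for i in range(5) for s in (+1, -1)]   # +-1, +-2, +-4, +-8, +-16
for dp in (3, 5, 8):
    print(dp, short_loop_fraction(jumps, dp))

If the fraction is too high for the DP you want to use, perturb the jump table and run it again.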
COBRAS
Member
May 03, 2020, 09:35:00 AM
 #55

@Luc

Can you implement modification of ranges in your code?


From:

Start-dfggsdgdsfh1111111
End- FFFFFABC112222233

To:
O

ABC888888

Etc...


Biggest thank you.


@Luc, can you implement this in code?

Pleeease?
Jean_Luc (OP)
Sr. Member
May 03, 2020, 10:44:00 AM
 #56

In fact I have removed the negative jumps.
There is no need to do complex things in order to benefit from the symmetry; the main trick is just to translate the public key by -N/2.G and search for a private key in [0,N/2]. The symmetry is 100% free.
So I spread my tames over [0,N/2.G] and the wilds over [P-N/4.G, P+N/4.G], and it converges to the expected average of 1.47sqrt(N) using dp=0.
As I have a large number of kangaroos, each kangaroo does not move a lot. The average distance covered in sqrt(N/2) steps is (N/2)/numberOfKangaroo.

With dp=0 there are not many bad collisions, no more than expected.
Strangely, when using dp > 0, lots of dead kangaroos appear and probably cycle, even with a small number of kangaroos. I have to understand why, maybe a bug somewhere...
arulbero
Legendary
May 03, 2020, 11:10:44 AM
Last edit: May 03, 2020, 12:04:45 PM by arulbero
 #57

In fact I have removed the negative jumps.
There is no need to do complex things in order to benefit from the symmetry; the main trick is just to translate the public key by -N/2.G and search for a private key in [0,N/2]. The symmetry is 100% free.

If you have removed the negative jumps you are not exploiting the symmetry! If a tame and a wild hit 2 symmetric points (same x-coordinate), they don't collide!

As I have a large number of kangaroos, each kangaroo does not move a lot. The average distance covered in sqrt(N/2) steps is (N/2)/numberOfKangaroo.

With dp=0 there are not many bad collisions, no more than expected.
Strangely, when using dp > 0, lots of dead kangaroos appear and probably cycle, even with a small number of kangaroos. I have to understand why, maybe a bug somewhere...

Loops are not possible if you always go in one direction.
Jean_Luc (OP)
Sr. Member
May 03, 2020, 11:21:28 AM
 #58

You still exploit the symmetry. Instead of searching in [0,N] you search in [0,N/2], and there are a few calculations to make when a collision on tame.x and wild.x is reached in order to recover the correct key. I did a test and it works as expected.
I think the trick may be that after sqrt(N) total steps the kangaroos start to overlap each other, still investigating...
arulbero
Legendary
May 03, 2020, 11:58:52 AM
Last edit: May 03, 2020, 12:50:07 PM by arulbero
 #59

You still exploit the symmetry. Instead of searching in [0,N] you search in [0,N/2], and there are a few calculations to make when a collision on tame.x and wild.x is reached in order to recover the correct key. I did a test and it works as expected.
I think the trick may be that after sqrt(N) total steps the kangaroos start to overlap each other, still investigating...


Not at all. You are simply searching in the second half of the interval, and you concentrate all the tames there.


In fact I have removed the negative jumps.
 ... you search for a private key in [0,N/2]. I spread my tames over [0,N/2.G] and the wilds over [P-N/4.G, P+N/4.G] and it converges to the expected average of 1.47sqrt(N) using dp=0.

You would have exactly the same result (with DP > 0) as this:

given [a,b], translate to [0,N] and search in [N/2, N]

tames start from [N/2, N]
wilds from [(P-N/2) - N/4, (P-N/2) + N/4] = [P - 3N/4, P - N/4]

you don't need the symmetry to do this. You are using the old algorithm with different start ranges.

Instead of searching in [0,N] you search in [0,N/2], and there are a few calculations to make when a collision on tame.x and wild.x is reached in order to recover the correct key.

With dp=0 there are not many bad collisions, no more than expected.
Strangely, when using dp > 0, lots of dead kangaroos appear and probably cycle, even with a small number of kangaroos. I have to understand why, maybe a bug somewhere...

Bug found!

If tame.x = wild.x, how can you detect the collision? They move from (-Q, Q) to (-Q+k*G, +Q+k*G):
same x, same jump, different paths.
If you don't use negative jumps (+kG and -kG), this is not possible.

This explains perfectly why, when you set DP > 0, you go from 1.47·sqrt(N) to 2·sqrt(N), because:

    -Q     !=     Q
 tame.x  = wild.x  could be a collision, but at next step:

    -Q     !=     Q            -->       -Q+k*G   !=  Q+k*G     you have now 2 separate walks
 tame.x  = wild.x                         tame.x  !=   wild.x

then the tame and the wild don't reach a common DP, unless you turn -Q and Q into distinguished points by setting DP=0. With DP>0, tame.x = wild.x (with -Q != Q) becomes a collision that is not detectable.
Jean_Luc (OP)
Sr. Member
May 03, 2020, 02:34:52 PM
 #60

It works with dp=0 because the random walks are not really important, they just generate random sequences, and you don't care about the path, so it correctly solves the key (the distance is also stored in the hashtable), but with dp>0 the paths are of course important.
You're right, there is probably a bug in my storage of symmetric points.
