Bitcoin Forum
  Show Posts
Pages: « 1 ... 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 [70] 71 72 73 »
1381  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 06:31:01 AM
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?

Perhaps "How Python OpenCL (poclbm) is mining inefficiently"
1382  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 04:17:41 AM
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.

It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.

So you have 4 winning tickets, and these tickets would make you eligible to win the grand prize of 50 bitcoins, but only 1 of the 4 tickets is the grand prize winning ticket.
If I choose to quit looking in box 1 after finding just 1 ticket, and do the same for box 2, I only find half the tickets.  The grand-prize-winning ticket might have been left behind in the boxes, but I might equally get lucky and win the grand prize.

I don't know about you, but I'd like to find all 4 tickets, not half of them.
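The raffle analogy above can be sketched as a toy simulation (my own illustration, with made-up toy numbers, not real difficulty): every nonce is an independent "ticket", so no nonce is luckier than another, but scanning only half the space finds, on average, only half the shares.

```python
import random

random.seed(1)
SPACE, P_SHARE = 100_000, 0.0001          # toy nonce space and share odds
shares = [n for n in range((SPACE)) if random.random() < P_SHARE]
scanned_half = [n for n in shares if n < SPACE // 2]
# On average the second count is about half the first: equal odds per
# nonce, but you only collect the tickets you actually draw.
print(len(shares), len(scanned_half))
```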
1383  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 03:38:29 AM

I have those numbers, but I'm not interested to make fancy GUI to provide this. I can publish database dump if you're interested.

I would love to do some stats on a DB dump. PM me the link (or post it). Thank you :)

Btw I think we're slightly offtopic here.

Only slightly.
1384  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 03:12:15 AM
Say I get 50 million hashes a second on my GPU.
2^32 / 50,000,000 = 86 seconds to process an entire keyspace.
If my askrate is set to 5 seconds, I'm only checking 5.82% of each keyspace before moving on and assuming the getwork holds no answers.

You're potentially ignoring 94.18% of answers. Numbers obviously vary based on the speed of the GPU, but for a 5 second askrate to be effective, you would need 859 Million Hashes/s to process the keyspace of a single getwork, and even then, the way m0mchil's code is written, once it finds the first answer it moves on to the next getwork anyway. This is flawed.
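The arithmetic above can be checked directly. This is just a sketch of the post's own numbers (the `coverage` helper is mine, not part of any miner): the fraction of a getwork's 2^32 nonce space a miner scans before the askrate forces it onto new work.

```python
NONCE_SPACE = 2 ** 32

def coverage(hashrate_hps, askrate_s):
    """Fraction of one getwork's nonce space scanned before new work is requested."""
    return min(1.0, hashrate_hps * askrate_s / NONCE_SPACE)

print(round(coverage(50_000_000, 5) * 100, 2))   # 5.82 (% of the keyspace)
print(round(NONCE_SPACE / 50_000_000))           # 86 (seconds to scan it all)
print(round(NONCE_SPACE / 5 / 1e6))              # 859 (MH/s needed for a 5 s askrate)
```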


Exactly.  The slower the GPU and the lower the askrate, the worse your efficiency will be, because more possible hashes are being ignored.
1385  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 03:10:31 AM
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on.  My kHash/s doesn't change just because I'm on a different getwork.

You can call it whatever, but with a long getwork period, you are hashing shit for a large % of the time :-).

No, I get the same number of accepted as I do with the normal miner. :)

Quote
Quote
You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part (the skipped hashes in a getwork) of that puzzle.

Well, getwork is not a puzzle. It is a random walk where you hit a valid share from time to time. Nonces are just numbers. It's irrelevant whether you are trying to hash 0xaaaa or 0xffff. The probability that you hit a valid share is still the same.

But if I get to 0xcccc, find an answer and stop looking, I *could be missing* more answers.

Quote
Quote
1) Not ignoring nonces of the getwork when a hash is found

Well, this is the only point which makes sense. Diablo already implemented this, and if it isn't in m0mchil's, it would be nice to implement it too. But it's definitely m0mchil's decision, not ours.

That's why I posted in his thread.

Quote
Also, sorry for some impatient responses, but I'm answering these questions for pool users almost every day and it becomes a little boring ;). It isn't anything personal. To be honest, it wasn't long ago that I had very similar questions to the ones you have right now. But thanks to m0mchil, Diablo and a few other people on IRC, I now know how wrong I was ;).

Maybe I'm totally wrong in thinking that ignored POSSIBLE answers COULD BE *THE* ANSWER for the block....since I've already found 10 blocks for the pool. ;)
If Diablo Miner does look through the entire 2^32 possible answers, then it is being 100% efficient.  I'd like to see the same with m0mchil's miner, so I made the changes I wanted to see, and it bothered me when I realized it was ignoring possible answers.
1386  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 03:06:29 AM
If 4 getworks are requested without having valid answers to submit back, and then on the 5th, it finds one answer and submits it back, then moves on without checking the remaining keyspace for more answers, you have a 20% efficiency.

Maybe I'm wrong, but you are not paid for a higher getwork/submit efficiency; you are paid for finding valid blocks. So you are optimizing the wrong thing ;). Maybe you can get a 100% getwork/submit ratio, but you are crunching old jobs. But it is your choice and your hashing power.


Yes, I will be crunching old jobs for about 30 seconds.  I'm already working on a mod to check my local bitcoind between "work" units (32 of them) in the python code for the current block.  You and I both know what our bitcoinds can do. ;)  This way we can stop within 1 second on a block update and get a new getwork.  So quit thinking in terms of OLD JOBS.  Whether it's old or new, I'm talking about the ability to search all 2^32 hashes.

Your server just happens to be the only server running a public pool, hence it might feel like I'm picking on it...but I'm not.  All these changes help increase the probability of the pool as a whole finding the block in the getwork, instead of ignoring most of the getwork when just a single answer is found.  Maybe Diablo is doing it differently (I personally hate Java, so I haven't even looked at the code), but m0mchil's is ignoring part of the 2^32 POSSIBLE answers after finding just 1.

More blocks == more pay for everyone

OK Slush, do this. 

In your server stats, I want you to list:
1) The number of get requests for the CURRENT round
2) The number of submitted hashes (both ACCEPTED and INVALID/STALE listed separately) for the CURRENT round.

If you wanted to increase the accuracy of this, separate the INVALID/STALE hashes based on the reason they were rejected, ie (WRONG BLOCK) or (INVALID/ALREADY SUBMITTED).

Then take (# of accepted + invalid(already submitted) shares this round) / (# of getworks this round) * 100 and publish that number in real time.
That's how you check the efficiency of the pool's ability to search all hashes for each getwork sent out.

This will show whether you really get that 1:1 ratio of getwork/solved hashes.

Can you do that?  Publish those numbers on the server stats page? 
Then we all can see what the efficiency of getwork/solved hashes is.
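The stat being requested can be written out as a one-liner (the function name is my own; this is just the ratio the post describes, matching the earlier "1 answer across 5 getworks = 20% efficiency" example). Duplicate/already-submitted rejects are counted because they still prove the nonce space was searched; wrong-block stales would be excluded.

```python
def round_efficiency(getworks, accepted, dup_invalid):
    """Shares returned per getwork handed out this round, as a percentage.

    accepted    -- shares the pool accepted
    dup_invalid -- rejects of the 'already submitted' kind (not wrong-block stales)
    """
    return (accepted + dup_invalid) / getworks * 100 if getworks else 0.0

print(round_efficiency(5, 1, 0))   # 20.0 -- the '20% efficiency' example above
```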

Also, you can't make up for the inefficiency of quitting the search after 1 submitted answer, or after the askrate trigger, by increasing the speed at which you get work.
That only increases the speed of your inefficiency; it doesn't solve the problem of not looking for more answers.

1387  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:46:30 AM
This brings up the question; If some getwork() simply do not have answers, is this due to a 2^32 keyspace not being an actual equal portion of the block, or is this due to overlapping 2^32 getworks?

There is no reason why there should be a valid share in every getwork.

OK.

Quote
Quote
Do we share an overlap of 2^16 (arbitrary figure for the sake of example) in our respective keyspaces?

No overlapping; every getwork is unique. Read more about how getwork() works, especially the extranonce part.

That's what I thought.  Just wanted to make sure.

Quote
Quote
Meaning, am I getting invalid or stale because there are multiple people working on the same exact portions of keyspace? If so, isn't that an issue with the getwork patch?

No. It may be because another bitcoin block was introduced in the meantime, between getwork() and submit. A share from the old block cannot be a candidate for the new bitcoin block. Read my last posts in the pool thread. By the way, this is not pool/m0mchil-miner related; it is how bitcoin works.

Not true.  Look again.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

3 were accepted, then the 4th was invalid.  So if this was invalid because the block count went up by one (and work from old blocks is now discarded as invalid), why was the 5th answer from this now *old* work accepted?  Your logic doesn't work here BECAUSE the 5th answer was accepted!  And the miner doesn't know that the block has increased until it makes the next getwork request. ;)
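The point can be checked mechanically against the log excerpt above: the "invalid or stale" nonce is a byte-for-byte duplicate of the first accepted one, so the reject reason cannot be a new block arriving (the 5th share from the same getwork was still accepted afterwards). A quick sketch of that check:

```python
# Nonce/status pairs transcribed from the GW:291 log excerpt above.
log = [
    ("41b15022", "accepted"),
    ("c597d2b5", "accepted"),
    ("9babdadf", "accepted"),
    ("41b15022", "invalid or stale"),
    ("1ba08127", "accepted"),
]

seen = set()
for nonce, status in log:
    if nonce in seen and status != "accepted":
        # The reject repeats an earlier nonce: a duplicate, not stale work.
        print(f"{nonce}: rejected as a duplicate, not as stale work")
    seen.add(nonce)
```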


Quote
Quote
I've also asked m0mchill about the askrate, and it seems his answer to why the client fixes the askrate is basically a "fuck it if it doesn't find it quick enough". Although, he speaks more eloquently than that.

And he was right. Longer askrate, more hashing overhead.

I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on.  My kHash/s doesn't change just because I'm on a different getwork.


Quote
Quote
He has also stated that, yes, we are ignoring large portions of the keyspace because we submit the first hash and ignore anything else in the keyspace, whether it's found in the first 10% of the keyspace, or the last 10% of the keyspace. He believes that this is trivial though, since you are moving on to another keyspace quick enough.

By skipping some nonce space, you don't cut your probability of finding a valid share/block. There is the same probability of finding a share/block crunching any nonce.

I agree that the probability of finding the share/block is the same for every getwork....IF YOU LOOK AT THE ENTIRE GETWORK.  Skipping over half the possibilities when just the first hash is found screws with your idea for a perfect 1:1 ratio for getwork/found hashes.  You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part (the skipped hashes in a getwork) of that puzzle.

Quote
Quote
So, we're not searching the entire keyspace in a provided getwork. What if one of those possible answers you're ignoring is the answer to the block? You just fucked the pool out of a block.

Definitely not. It is only a nice optimization for the pool to continue hashing the current job; it saves some network roundtrips, but it basically does not affect the pool's success rate.

That is true! 
I'm seeing the same amount of accepted hashes in a given hour with both miners.  The only thing this does is increase efficiency by:
1) Not ignoring nonces of the getwork when a hash is found
2) Not stopping part way through the getwork because the askrate was triggered.

1388  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: February 01, 2011, 01:19:15 AM
Anyone else care to jump in on this side thread, please do.
http://bitcointalk.org/index.php?topic=1334.msg42992#msg42992
1389  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 01:18:06 AM

False. there are zillions of possible blocks/hashes to find and no one ever duplicates any searches.


Ever?
Duplicates will never-ever-ever be found in the getwork searches?
Each getwork is unique, and no result from getwork A could possibly match getwork B?

Just trying to make sure I understand that is what you're saying.


Assuming it is working as designed. Bugs throw out all bets.

Hashes are 256 bits. The odds of finding two random hashes the same would be much less than the odds of getting struck by lightning on the same day you buy a single ticket and win the lottery. I call that never.


Assuming it is working as designed........that is indeed the question I'm raising.

So why is it when I process the ENTIRE getwork request (vs just finding the first answer and then moving on to the next getwork), I'm seeing a much different result, and quite frequently at that.

Notice that this getwork got 5 answers, all of which were reported to slush's server and accepted.
29/01/2011 01:41:42, Getting new work.. [GW:29]
29/01/2011 01:41:57, 9d5069e9, accepted at 40.625% of getwork()
29/01/2011 01:41:58, 09fc7161, accepted at 43.75% of getwork()
29/01/2011 01:41:59, 27af8f3c, accepted at 46.875% of getwork()
29/01/2011 01:42:02, ad105798, accepted at 56.25% of getwork()
29/01/2011 01:42:12, 0e920ae8, accepted at 75.0% of getwork()

This sample shows that 3 answers were accepted, 1 was invalid, then 1 more was accepted.  Notice the invalid answer is the same as the 1st accepted answer.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

This sample shows 4 getworks, all without a single answer, then just a single answer for 1 getwork.
31/01/2011 06:04:21, Getting new work.. [GW:212]
31/01/2011 06:04:54, Getting new work.. [GW:213]
31/01/2011 06:05:26, Getting new work.. [GW:214]
31/01/2011 06:05:59, Getting new work.. [GW:215]
31/01/2011 06:06:32, Getting new work.. [GW:216]
31/01/2011 06:06:43, d08c8f3d, accepted at 31.25% of getwork()
31/01/2011 06:07:04, Getting new work.. [GW:217]

These outputs are from a modified version of m0mchil's miner.

Changes include:
- Does not stop when first answer/hash/nonce is found.  It continues searching until all possible hashes have been searched.
- Removes the hard coded limit of 10 seconds for the askrate.  (My card does a single getwork in 30 seconds, so I just set the askrate to 3600 and forget it)

The advantages of these modifications are:
- The entire getwork request is processed, finding all possible answers to a getwork.
- Raises efficiency of getwork/answers to 100%.  Right now it's between 20-30%.
- Lowers the amount of requests and bandwidth used by pool servers.
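The first change above can be sketched in a few lines. This is a minimal illustration of the behavior, not m0mchil's actual code: `scan_hashes` and `submit_share` are hypothetical stand-ins for the GPU kernel call and the pool submission. The point is simply that the loop keeps going after a hit instead of returning.

```python
def mine_full_getwork(work, scan_hashes, submit_share, chunk=2 ** 20):
    """Scan the entire 2^32 nonce space of one getwork, submitting every share."""
    nonce = 0
    while nonce < 2 ** 32:
        # scan_hashes returns all valid nonces found in [nonce, nonce + chunk)
        for solution in scan_hashes(work, nonce, chunk):
            submit_share(work, solution)   # don't stop at the first answer
        nonce += chunk

# Toy demo: pretend exactly two nonces in the whole space are valid shares.
valid = {123_456, 3_000_000_000}
found = []
mine_full_getwork(
    "work-blob",
    lambda w, start, n: [x for x in valid if start <= x < start + n],
    lambda w, s: found.append(s),
)
print(sorted(found))   # both shares submitted, none discarded
```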

Right now, I believe, huge portions of the hashes in getwork requests are not being processed: when just 1 answer is found, the miner moves on to the next getwork, and likewise when the askrate is reached.  Also, comparing the results over time of the original miner and this modified version, they find the same number of answers in the same amount of time (on average).  The original miner seems to generate about 260% more traffic for the pooled mining server and discards well over 60% of the getwork when a single answer is found or the askrate is reached.

SO....

If the chances are 1 in a zillion of finding the same answer for different getwork requests, then how is this miner finding multiple answers for a SINGLE getwork, some valid, some DUPLICATES?  How is this happening?

If all getwork requests are unique, and possible answers are overlooked when the first answer is found, how does the pool/bitcoin handle searching the rest of those discarded hashes?  Does it?

Is this a problem with the miner, or the getwork patched version of bitcoin?

Have YOU searched the entire 2^32 possible hashes of a getwork before, or do you just trust the miner knows best when it stops short of checking all 2^32 possibilities?

If I'm correct in my findings, how many more blocks could the pool be finding by searching all of the 2^32 hashes per getwork?? 
How many blocks have we missed because we didn't search the entire getwork??

That's your food for thought.  Please think about this and provide good answers.
I'm expecting to release the modified client in the next week or so after I get the pre-compiled versions and sources wrapped up.

Looking forward to feedback on this one.. ;)
1390  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 08:18:36 AM

I do not see any problems with that. Those who slow to submit work are late for dinner.


Then most CPUs are going to be starving.  And I thought the pool would be the only way to go with a CPU......I guess CPUs will get the short end of the stick on this one.
1391  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: January 31, 2011, 07:41:56 AM

False. there are zillions of possible blocks/hashes to find and no one ever duplicates any searches.


Ever?
Duplicates will never-ever-ever be found in the getwork searches?
Each getwork is unique, and no result from getwork A could possibly match getwork B?

Just trying to make sure I understand that is what you're saying.
1392  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: January 31, 2011, 12:29:29 AM
Is every getwork unique?  If so, how did I get the same answer twice?

Hi FairUser,

'invalid or stale' for unique nonces can be explained by the pool update. The pool now doesn't accept shares when a new bitcoin block comes (see the pool thread for more). Now it works the same way as standalone mining.

About the same getworks: it looks like network troubles, which are more and more common. I hope it will be solved by a protocol change which significantly reduces network communication. It will be introduced this week.

This is not a network communications problem.
I record my getwork request and answers.
The two getworks were ENTIRELY different requests, with different data and midstates, yet produced the same answers.
1393  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 12:27:19 AM
Today's pool update introduced a small change in counting shares. Only submitted shares which are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, it isn't counted if a new Bitcoin block arrived in the meantime, because your share can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check that your miners do not use a custom getwork timeout. The default getwork period (typically 5 seconds) is the best setting. This way, you should get 'stale' shares with only ~1% probability.

Please note that other miner settings can also affect the time between getwork() and share submit. For example, the "-f 1" parameter in Diablo miner raised the latency between getwork and submit significantly (from <5s to >10s for an ATI 5970). I solved this with "-f 5".
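A back-of-envelope check on the ~1% stale figure (my own estimate, not slush's math): blocks arrive roughly every 600 seconds on average, so work held for t seconds goes stale with probability about 1 - e^(-t/600). For a 5-second-old getwork that comes out just under 1%, consistent with the number quoted above.

```python
import math

def stale_probability(age_s, block_interval_s=600):
    """Chance a block arrived during age_s seconds, assuming ~600 s average blocks."""
    return 1 - math.exp(-age_s / block_interval_s)

print(round(stale_probability(5) * 100, 2))    # 0.83 (% for a 5 s old getwork)
print(round(stale_probability(30) * 100, 2))   # ~5% for a 30 s askrate
```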

Please look @ http://nullvoid.org/bitcoin/statistix.php
Look at the 100 block duration. 

SEVERAL BLOCKS ARE FOUND IN LESS THAN 60 SECONDS.

You should change the system to accept submitted answers for both the last block and the current block.  That way, anyone who submits old work from the previous block doesn't get an invalid.

I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
1394  Economy / Marketplace / Re: Buying Bitcoins for XIPWire Cash on: January 30, 2011, 11:44:12 PM
XIPWire has recently changed policy and only allows 2 Cards to be linked to your account.
Their terms and fees page still says you can have 5.

NOT COOL.

Why not allow people to have many, many accounts?  It's no skin off their backs to let people do that.
If anything, allowing more than 5 accounts would encourage people to use their services.  But instead they want to restrict users and lie to newcomers by providing misinformation.

1395  Bitcoin / Mining / Mining inefficiency due to discarded work on: January 30, 2011, 11:32:43 PM
I keep seeing these "invalid or stale" a couple of times a day.

30/01/2011 15:26, bea41297, accepted
30/01/2011 15:26, bea41297, invalid or stale
30/01/2011 15:27, b9c26c57, accepted
30/01/2011 15:27, fb27e19a, invalid or stale

Notice the two lines at the top in bold.  
Could there possibly be more than one answer per getwork?
As you can see, I got the answer previously, then got the same answer again.
Then on the last line, it just came back invalid...probably because someone else got the answer to the getwork before I did.

Is every getwork unique?  If so, how did I get the same answer twice?

UPDATE:  Got two more
30/01/2011 15:30, e67d85ec, invalid or stale
30/01/2011 15:36, f1f124d9, invalid or stale
1396  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 24, 2011, 05:02:52 AM
Slush,
Something strange is going on.
This is the second day now that I've woken up, checked my stats, and it shows that I've produced 0 shares....even though my miners are running and finding hashes!

So I restart my miner, and still, no shares showing up.
So again I have to restart my miner, and nothing is showing up.
Just restarted it a third time, and finally it's showing up with submitted shares.

Slush, the two workers I'm seeing this with are:
fairuser.desktop
fairuser.ATI5770

How can I be producing hashes, and the server is not recording them.
Because of this, my payout has been dramatically cut...which is not cool.

Since this is happening on two different machines, I'm certain it's a server-side issue.

Any ideas?  Anyone else seeing this?


It just happened again at the beginning of round with block #104271.
Shares were being "accepted", but not showing any credit on the website, and I had to restart my miner before the server started showing them.
Very strange.

UPDATE:  So we just started a new round again (#467), and my miner is still running, says it's at ~175,000 khash/s, but doesn't get any new "accepted" after the round begins.  This is very strange.  I shouldn't have to restart my miners at the beginning of every new round!!!

UPDATE 2: My miner is no longer getting any active shares/hashes since I rebooted my PC and restarted my miner.  It appears to be getting the "getwork" requests, but after 5 minutes I have not gotten a single share.  I also just found the solution for the block on one of my miners that's working. w00t!

Slush, did you change something?

UPDATE 3:  I created and started using a different account and worker on my 3rd PC w/ the ATI 5770.  Right away I started getting "accepted" shares that are being counted on the server.  So, Slush, why is my account with 3 active workers having problems?  My two workers seem to be running fine, but the third worker is just getting a bunch of "getwork" requests that don't have answers......which leaves my card doing a lot of work for nothing.  Did you add some type of new restriction?  Because if you did, you just broke my setup......which ONLY HAS 3 MINERS! :(


What gives?
1397  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 24, 2011, 01:08:18 AM
Slush,
Something strange is going on.
This is the second day now that I've woken up, checked my stats, and it shows that I've produced 0 shares....even though my miners are running and finding hashes!

So I restart my miner, and still, no shares showing up.
So again I have to restart my miner, and nothing is showing up.
Just restarted it a third time, and finally it's showing up with submitted shares.

Slush, the two workers I'm seeing this with are:
fairuser.desktop
fairuser.ATI5770

How can I be producing hashes, and the server is not recording them.
Because of this, my payout has been dramatically cut...which is not cool.

Since this is happening on two different machines, I'm certain it's a server-side issue.

Any ideas?  Anyone else seeing this?
1398  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 19, 2011, 04:42:12 AM
BTW, my modified Linux clients can maintain 1024 active connections (yes, I modified the code...452 connections at the moment) to other users bitcoind processes, so when a new block is announced it should be propagated to most of the other clients within a second or two (Fiber-optic internet connection...very low latency and lots of bandwidth). Hence why I'm still trying to figure out exactly how the Mining Pool server AND my windows client didn't get updated when my Linux client did.  Very very strange.
with 6 minutes difference, it's not only strange, it smells fishy.

6 minutes was the highest I've seen with my own eyes.
There were a couple of days when it was about a minute or two.
However, it doesn't appear to be happening now.  I'll post next time I see it with more stats/time stamps.
1399  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 19, 2011, 03:22:05 AM
Does that mean my clients get updates within seconds of a new block? :)
What would happen to the "work" my miner is currently working on when a "push" is received?
Would my miner stop working on its current getwork and start working on the new getwork right away? :O
Would my current work be discarded or counted?
Even bigger though, what would the miner clients have to update to get this all working correctly.

I've gotten 7 blocks so far, and anything that helps increase those chances would be cool.

The notion of "wasted" work is not applicable to mining. You are constantly (300,000,000 times per second on my 5850) trying to find a valid hash that is between difficulty 1 and the current difficulty. Basically, you're playing the lottery many times per second, and you immediately know whether or not you won. When a new block is found, you are actually "wasting" work until the next time you GetWork, or have it pushed to you.

Yeah, I get that, and I understand how mining works.  Been doing it since the server came online. :)
I never said anything about "wasted" work either.  Read with care please.

I'm just asking lots of questions because a week or so ago I saw the block count increase by two on my modified Linux client, but not on the server or my windows client....and this lasted for 6 minutes.  Kinda strange.  If a new and better method for distributing getwork requests could be achieved, that'd be great.  The existing system seems fairly decent, since I've only noticed very few hash submissions arriving after the block had rolled over to the next in line.  In those cases, the getwork hash didn't need to be submitted because the block count had increased by 1 (or 2 on quickly found blocks).  I'm just wondering if it could be improved.

BTW, my modified Linux clients can maintain 1024 active connections (yes, I modified the code...452 connections at the moment) to other users bitcoind processes, so when a new block is announced it should be propagated to most of the other clients within a second or two (Fiber-optic internet connection...very low latency and lots of bandwidth). Hence why I'm still trying to figure out exactly how the Mining Pool server AND my windows client didn't get updated when my Linux client did.  Very very strange.
1400  Bitcoin / Development & Technical Discussion / Re: bitcoind command help (extended descriptions) on: January 18, 2011, 09:52:55 AM
How does one list the accounts and addresses known to bitcoind?  How does the GUI generate the account list?  I can only see how to return information about a specific account or address.

BUMP

Anyone got an answer?