Bitcoin Forum
Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016 then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 2032139 times)
thezerg
Legendary
Offline

Activity: 1246
Merit: 1010


View Profile
July 05, 2015, 01:31:40 AM
 #28121

If the txn input states that block B is the UTXO then the invalid proof is simply to supply B, right?
That's one way to do it; however, even this can be shortened.

Right now, with all blocks under 1 MB, it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it would be nice not to require the entire block, especially as blocks get larger.

By adding a rule that transactions in new blocks must be ordered by their hash, you don't need to supply the entire block to prove that a transaction doesn't exist: the two adjacent transactions that bracket the missing hash are enough.

It would be good to have that ordering requirement in place before blocks are allowed to grow, to make sure that fraud-proof size stays bounded.

Makes sense... I'd recommend a quick line or two in your blog to explain that:

"In order to reduce the size of the fraud proof needed to show that a transaction input does not exist, additional information must be added to Bitcoin blocks to indicate the block which is the source of each outpoint used by every transaction in the block.

A node can provide the source block to the SPV client to prove or disprove the existence of this transaction.  But with a few more changes we can provide a subset of the source block.  This may become very important if block sizes increase.
"

TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 01:35:33 AM
Last edit: July 05, 2015, 02:30:24 AM by TPTB_need_war
 #28122

The block size debate is iatrogenesis — any 'cure' is worse than the illness.


Freemarket, in 2010 everyone could buy thousands of Bitcoin for almost nothing. What hindered it, besides being relatively unknown at that point in time, was that few people actually believed cryptocurrencies could be a thing. With Monero it's almost the same, the difference being that it's swimming in a sea of shitcoins and not many can see its potential. It's the second CryptoNote coin (the first was heavily premined), and it was launched under an MIT licence, so there is absolutely no merit to claims that Monero stole anything; that's like saying Ubuntu stole code from Debian, or that Apple stole from FreeBSD. So even though Monero's market cap is low, few people will actually bother buying a large stack because it is not a 100% certain bet. But it's clear there is nothing close to Monero, as Zerocash/Zerocoin is vaporware and Bitcoin sidechains are like dragons.

My point, in employing the word "clusterfuck" to describe the hundreds of CryptoNote clones, was that Monero's marketing (on these forums) to some extent had to vilify the other CN clones in order to assert its dominance over them; I would instead have preferred adding features to CN that would naturally assert that dominance. It felt to me like Monero used strong-armed community tactics to gain more critical mass than the other CN clones, rather than capabilities innovation (though there was a lot of refinement, which I assume includes many fine-grained performance improvements). And I am nearly certain this lack of outstanding capabilities, other than the on-chain rings, is why Monero is not more widely adopted and will stunt Monero's growth (and I say this with specific knowledge of capabilities that I think will subsume Monero very soon). And that is precisely why I would not prematurely release those features in a whitepaper for thousands of clones to implement simultaneously. And yet people criticize me for not spilling the beans before the software is cooked.

The marketing battle is not against the other "shitcoins", i.e. differentiating Monero from shit. Rather, the battle is against Bitcoin Core over who is going to own the chain that most of the BTC migrates to.

Also, most of the interest in altcoins is not ideological but speculative. We are in a down market until BTC bottoms this October, so Monero is mostly getting ideological investment, not speculative fever. This will turn after October, but it might be too late for Monero depending on the competition that might arise in the interim. However, I tend to think Monero will get a big boost after October in spite of any new competition, because it is a more stable codebase. As smooth pointed out, the greatest threat of breakage is implementation error. It would behoove Monero to be the first CN coin to apply my suggested fix to ensure combinatorial analysis of partially overlapping rings can't occur.

P.S. CN is very important.

Peter R
Legendary
Offline

Activity: 1162
Merit: 1007



View Profile
July 05, 2015, 01:46:56 AM
 #28123

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
ssmc2
Legendary
Offline

Activity: 2002
Merit: 1040


View Profile
July 05, 2015, 01:50:41 AM
 #28124

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

He sounds bitter.
laurentmt
Sr. Member
Offline

Activity: 384
Merit: 258


View Profile
July 05, 2015, 01:53:15 AM
Last edit: July 05, 2015, 02:30:20 AM by laurentmt
 #28125

What this shows is that since the subtracted term, τ (1- Pvalid), is strictly positive, the miner's expectation of revenue, <V>, is maximized if the time to verify the previous block is minimized (i.e., if τ is as small as possible).
Actually, <V> is also maximized if Pvalid == 1 (or Pvalid is as close as possible to 1).
How to reach this result? My humble proposal: make a deal with a few mining pools. Participants never push invalid blocks to other participants, and blocks received from the cartel aren't checked before hashing on a new block.

Conclusion: As the average blocksize gets larger, the time to verify the previous block also gets larger. This means that miners will be motivated to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks or that they will be more motivated to trick the system.
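A toy illustration of the two levers (my sketch; it takes the quoted subtracted term at face value, i.e. assumes <V> = R(T - τ(1 - Pvalid))/T with R the block reward):

Code:
def expected_revenue(R, T, tau, p_valid):
    # <V> = R * (T - tau*(1 - p_valid)) / T, per the quoted subtracted term
    return R * (T - tau * (1.0 - p_valid)) / T

R, T = 25.0, 600.0                                     # reward (BTC), 10-min blocks
print(expected_revenue(R, T, tau=30.0, p_valid=0.99))  # honest, slow verification
print(expected_revenue(R, T, tau=30.0, p_valid=1.00))  # cartel blocks never invalid
print(expected_revenue(R, T, tau=0.0,  p_valid=0.99))  # cartel: skip checking entirely

Either driving τ to zero or Pvalid to 1 recovers the full reward, which is exactly what the cartel arrangement buys.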

EDIT:
Quote from: Peter R
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?
Well, he seems a bit upset for now ;) but I think his message is close to what I've tried to suggest with my comment.
We must analyze all the possibilities before jumping to a conclusion that backs our initial hypothesis. That point is valid for all of us, whatever our opinion on this blocksize issue.
thezerg
Legendary
Offline

Activity: 1246
Merit: 1010


View Profile
July 05, 2015, 02:06:05 AM
 #28126

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

It's a strategy (implemented unconsciously by many) to limit participation to a select few.  Unfortunately it tends to create a situation where only similar personalities contribute, which is where we are today with the core devs, Gavin excepted.


I read his 21 ms validation number, but it's weird, because just weeks ago I was wondering why it was taking so long to sync a measly week of blockchain data, and I came to the conclusion that either the P2P code is complete garbage (compared to BitTorrent, for example) OR the validation cost is high (given my fan speed, I assumed it was validation).  And if validation is so fast, why would these pools have custom code to skip it?

It will be interesting to look at the stats-gathering mode he mentions.
Peter R
Legendary
Offline

Activity: 1162
Merit: 1007



View Profile
July 05, 2015, 02:17:54 AM
 #28127

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.
It's a strategy (implemented unconsciously by many) to limit participation to a select few.  Unfortunately it tends to create a situation where only similar personalities contribute, which is where we are today with the core devs, Gavin excepted.

Thanks.  That makes me feel better.

Quote
I read his 21 ms validation number, but it's weird, because just weeks ago I was wondering why it was taking so long to sync a measly week of blockchain data, and I came to the conclusion that either the P2P code is complete garbage (compared to BitTorrent, for example) OR the validation cost is high (given my fan speed, I assumed it was validation).  And if validation is so fast, why would these pools have custom code to skip it?

This is the empirical fact that motivated me to perform the analysis. I was surprised how many "defensive blocks" we were seeing.  Furthermore, based on these numbers:

...in the last 27,027 blocks (basically since jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139
For antpool, this is 4506 / 246.  

it looks like the average time these pools are mining empty blocks is 16 seconds (F2Pool) and 35 seconds (AntPool) before switching to non-empty blocks.  Like you said, why are these numbers so big if processing the blocks is so fast?
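The back-of-envelope behind those times (my arithmetic, assuming the empty-block fraction equals the fraction of each average 600 s interval spent SPV mining):

Code:
T = 600.0   # average block interval in seconds
for pool, total, empty in [("F2Pool", 5241, 139), ("AntPool", 4506, 246)]:
    t = T * empty / total        # empty/total ≈ t/T  =>  t ≈ T * empty/total
    print(f"{pool}: ~{t:.0f} s of empty-block (SPV) mining per block")
# F2Pool -> ~16 s; AntPool -> ~33 s, roughly the figures quoted above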

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 02:38:54 AM
 #28128

actually, there's 2 names for what's he's done.

one, from a moral standpoint, and one from a legal standpoint.  i'll let you figure out what those names are.

Unethical, and extortion? (My pops is an attorney; perhaps I inherited some of it ... and now you know why I won't go on Skype with you ...)

cloverme
Legendary
Offline

Activity: 1512
Merit: 1054


SpacePirate.io


View Profile WWW
July 05, 2015, 02:57:50 AM
 #28129

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?:

Quote
...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

I agree with part of the quote. Also, the blocksize debate needs to go away; people should stop throwing their two cents into a problem Gavin has a good solution for.

What we really need solutions for...
-Greater adoption with ease of use (Generation Xers and earlier are still using PayPal because it's easier)
-Chinese mining pools that are greed-based and harm Bitcoin and the blockchain
-The perception problem: when people think of bitcoins, they assume drugs and other shady activities

Enough.... sorry, no idea who you are, so no offense, but I'm tired of seeing these blocksize posts show up on Reddit.
TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 03:03:28 AM
 #28130

Let τ be the time it takes to verify a typical block and let T be the average block time (10 min).  The fraction of time the miner does not know whether the most recent block was valid is clearly τ / T; the fraction of the time the miner does know is 1 - τ / T = (T - τ) / T.  We will assume that every miner applies the same policy of producing empty SPV blocks before they've verified, and blocks of size S' after they've verified.  

Under these conditions, the expectation value of the blocksize is equal to the expectation value of the blocksize during the time a miner doesn't know, plus the expectation value of the blocksize during the time he does know:

    S_effective = ~0 · (τ / T)  +  S' · [(T - τ) / T]
                = S' [(T - τ) / T]                          (Eq. 1)

The time, τ, it takes to process a block is not constant, but rather depends linearly** on the size of the block.  Approximating the size of the previous block as S', we get:

    τ = k S'

....

QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol-enforced limits.
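Completing the elided maximization step (my reconstruction of the algebra, assuming the linear model τ = kS' and Eq. 1 quoted above):

Code:
S_{\mathrm{eff}} = S'\,\frac{T - kS'}{T}, \qquad
\frac{dS_{\mathrm{eff}}}{dS'} = \frac{T - 2kS'}{T} = 0
\;\Rightarrow\; S' = \frac{T}{2k}, \qquad
S_{\mathrm{eff}}^{\max} = \frac{T}{4k}

So, under this simplified model, verification time alone caps the effective average blocksize at T/(4k), with no protocol limit needed.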

Your egregious mathematical error (myopia) of course is that you assume k is the same for all miners. And this is why you totally miss the centralization caused by your nonproof.

TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 03:14:58 AM
Last edit: July 05, 2015, 03:26:36 AM by TPTB_need_war
 #28131

The fraction of time the miner does not know whether the most recent block was valid is clearly τ / T, which means the fraction of the time the miner does know is 1 - τ / T = (T - τ) / T.

You are attempting to develop an equation for the orphan rate relative to the propagation delay (which includes verification delay), but this can't be done without the context of the distribution of computation in the network, which some argue is modeled by a Poisson distribution.

... miners will be motivated to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks.

I had argued upthread that bandwidth limitation (propagation delay) is the only justification for not expending more resources on verification instead of forming 0-txn blocks (when blocks are mostly funded by txn fees, which may not be the case now):

This delay is a form of propagation delay and thus drives up the orphan rate for miners with fewer resources. Afaik, proportional increases in orphan rate are more costly than proportional decreases in hashrate, because the math is compounded (but diminishing) on each subsequent block of the orphaned chain. Thus this action doesn't appear to make economic sense unless it is explained as a lack of bandwidth rather than a lack of desire to apply more of their resources to processing the txns than to hashrate. If bandwidth is the culprit, then it argues against larger block sizes.
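To put numbers on the orphan-rate claim (a standard Poisson-arrival approximation, not a formula from this thread): if a miner's block takes Δ seconds of verification plus propagation to reach the rest of the network, the chance a competing block is found in that window is about 1 - e^(-Δ/T).

Code:
import math

def orphan_prob(delta_s: float, T: float = 600.0) -> float:
    """Chance a competing block appears during delta_s of delay."""
    return 1.0 - math.exp(-delta_s / T)

for d in (2, 15, 60):
    print(f"delay {d:>2} s -> orphan risk ~ {orphan_prob(d):.1%}")
# 2 s -> ~0.3%, 15 s -> ~2.5%, 60 s -> ~9.5%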

This poll is inaccurate because voters can't change their vote!! Peter R posts nonsense, then the Yes votes go bonkers and they can't change their vote after Peter R has been thoroughly refuted.

Peter R
Legendary
Offline

Activity: 1162
Merit: 1007



View Profile
July 05, 2015, 03:23:52 AM
 #28132

Quote
Your egregious mathematical error (myopia) of course is that you assume k is the same for all miners. And this is why you totally miss the centralization caused by your nonproof.

I agree that in reality, k will not be constant across miners. What I posted was a simplified model so that the effect can be easily isolated and understood. It would be interesting to try to model a distribution of verification times, among other details. In any case, I suspect the important result will be the same.

I'd love to see you show with mathematics that under certain assumptions the mining will centralize. Why don't you give it a try?

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 03:31:27 AM
 #28133

I'd love to see you show with mathematics that under certain assumptions the mining will centralize. Why don't you give it a try?

Just model the relative ROI, even assuming the same costs per hash (or exacerbate it by applying the selfish-mining attack math), with the orphan rate higher for those who don't have as much bandwidth and verification resources as centralized mining (which can amortize those fixed per-block costs over greater hashrate). If you want, insert IBLT to rectify that orphan-rate disparity, which is obfuscated centralization (e.g. in terms of your upthread definition of "entity" and not "node").
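A sketch of that suggested comparison (toy numbers of mine, reusing the Poisson approximation above): equal cost per hash, but the worse-connected miner carries a larger per-block delay, so its kept-block fraction, and hence ROI per hash, is strictly lower; layering the selfish-mining math on top would only widen the gap.

Code:
import math

def kept_fraction(delta_s: float, T: float = 600.0) -> float:
    return math.exp(-delta_s / T)   # 1 - orphan probability

big   = kept_fraction(2.0)    # well-connected pool, amortized verification
small = kept_fraction(20.0)   # miner with less bandwidth / slower verification
print(f"ROI(small)/ROI(big) = {small / big:.4f}")   # ~0.97, compounding every block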

solex
Legendary
Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
July 05, 2015, 04:01:12 AM
 #28134

it looks like the average time these pools are mining empty blocks is 16 seconds (F2Pool) and 35 seconds (AntPool) before switching to non-empty blocks.  Like you said, why are these numbers so big if processing the blocks is so fast?

I think these metrics should specifically exclude the two pools above, as they are the ones who were SPV mining and not switching to a validated block header as soon as possible.

Don't get put off posting by Greg's feedback; it is easy to be silenced by someone-who-knows-best and to let them run the show.
My big takeaway from his comment is how you got him arguing from a position that big blocks are OK (handclap and kudos to you). In fact, I have now learnt that even 7.5GB blocks are today theoretically tolerable in at least one respect (validation where tx are pre-validated), although I suspect not in quite a number of other crucial respects.
Wasn't Gavin's doubling end-point 8GB in 2036? Effectively the same end-point!

I am reminded of Mike Hearn's words of wisdom:
Quote
Scaling Bitcoin can only be achieved by letting it grow, and letting people tackle each bottleneck as it arises at the right times. Not by convincing ourselves that success is failure.

One less bottleneck to expect with larger blocks.


TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 04:26:29 AM
 #28135

I am reminded of Mike Hearn's words of wisdom:
Quote
Scaling Bitcoin can only be achieved by letting it grow, and letting people tackle each bottleneck as it arises at the right times. Not by convincing ourselves that success is failure.

One less bottleneck to expect with larger blocks.

This is an example of myopic, one-dimensional 'thinking' (illogic) that is promulgated widely by n00bs.

Centralization isn't a bottleneck. Monopolies don't restrict degrees-of-freedom. Neither is an outcome of increasing the orphan rate by increasing blocksize relative to the block period.

generalizethis
Legendary
Offline

Activity: 1750
Merit: 1036


Facts are more efficient than fud


View Profile WWW
July 05, 2015, 04:55:50 AM
Last edit: July 05, 2015, 05:48:13 AM by generalizethis
 #28136

And yet people criticize me for not spilling the beans before the software is cooked.


That's a wrong assertion of why people (at least me and some observable others) are criticizing you. Maybe someone is, but certainly not everyone....

Gene Siskel, the better half of the movie-critic duo Siskel and Ebert, was asked if he ever wanted to direct, since he was so knowledgeable and his love of films was evident. He said that he did want to direct, but thought that as long as he was a critic any directing efforts would be a conflict of interest. To put it more simply, he knew he could game his fans into believing his techniques were the best techniques, and didn't want the potential to delude himself or the audience--even if his efforts were a sincere and earnest attempt at making a great film.

Rarely do good critics make better artists. ATM Ben Jonson, the poet/critic, is the only example I can think of, but that is more of a tie than him being better at one or the other.

TPTB_need_war
Sr. Member
Offline

Activity: 420
Merit: 257


View Profile
July 05, 2015, 06:11:01 AM
Last edit: July 05, 2015, 06:41:02 AM by TPTB_need_war
 #28137

Rarely do good critics make better artists.

Except I've already proven three times in my history (twice as main author, and the last and most successful time as the ONLY* author) that I am also a builder of popular software.

https://www.google.com/search?q=neocept+wordup (late 80s and early 90s)
http://relativisticobserver.blogspot.com/2012/01/illustrated-evolution-of-painter-ui.html (mid 90s)
https://www.google.com/search?q=3Dize+coolpage (late 90s and early 00s)

I fell off the cliff in the mid-00s due mostly to what I can summarize as "Philippines" and family background. The details include an eye gone blind (it sees light & dark only), a lost marriage, a mid-life crisis, a severe STD infection in the last week of May 2006 (leading to M.S. by now), the murder of my only non-step sister in that same last week of May 2006, etc.

I really hate being incited to talk about myself. I know people are going to use this against me. Besides, I am interested in creating new things, not the past.

P.S. Armstrong's 8.6-year cycle (1000 × π days) added to the last week of May 2006 gives Feb 2015. That was exactly when I began coding a social network, which I did ship: the first software I've shipped since 2006. Note that in the 2008-2011 period I went off learning new programming languages such as Haxe, Haskell, and Scala, and completely new ways to think about programming. In that period I messed around with numerous projects without shipping any, which maybe was a required gestation period for the new insights and to recharge artistic inspiration. I also got acutely ill (ER, ICU) in May 2012 and have been chronically ill since.

Edit: I am not really acting as a critic, i.e. I haven't changed what I've always done. I am doing engineering analysis for design work. I just happen to share it, which turns out to be critical against inferior engineering.

* except for the 2-3 weeks of coding for the Objects window, for which I paid $30,000 in 2001 (inflation-adjust that!) to a former programmer of Borland C (because I was face down in the bed with gas in my eye to hold the 100% detached retina in place so the lasered joins could solidify).

sickpig
Legendary
Offline

Activity: 1260
Merit: 1008


View Profile
July 05, 2015, 06:44:22 AM
 #28138

interesting, isn't it?

We will continue do SPV mining despite the incident, and I think so will AntPool and BTC China.

Another very good reason people should not mine on Chinese pools.  This is EXTREMELY bad for the Bitcoin network.

Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
generalizethis
Legendary
Offline

Activity: 1750
Merit: 1036


Facts are more efficient than fud


View Profile WWW
July 05, 2015, 06:54:33 AM
Last edit: July 05, 2015, 07:07:03 AM by generalizethis
 #28139

Quote from: TPTB_need_war
Rarely do good critics make better artists.
Except I've already proven three times in my history (twice as main author, and the last and most successful time as the ONLY* author) that I am also a builder of popular software.
...

...inciting to talk about myself--LOL.

Until you offer your own coin for peer review, you're going to sound like that know-it-all brat on the basketball court who points out the flaws in everyone else's game, goes on endlessly about how great he is, but never picks up a ball and backs up the talk. Since you're American, you've probably heard this: put up or shut up.

Again, it's not that you are criticizing or refusing peer review, it's that you are simultaneously criticizing and refusing to put your project up for peer review--it may be a good (or even correct) strategy, but don't bitch when others point out its annoyance factor.

Peter R
Legendary
Offline

Activity: 1162
Merit: 1007



View Profile
July 05, 2015, 07:28:42 AM
 #28140

We just need someone to figure out how to constantly feed them invalid blocks. ;)

I'm surprised to hear this from you, Holliday.  Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad?  It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).  
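A toy expectation of why it maximizes profit (my numbers, not Peter R's): during the τ seconds of validation, hashing on the unvalidated header wins the subsidy with probability ≈ Pvalid if a block is found in that window, versus nothing while idling; the only cost is the fee income an empty block forgoes.

Code:
R, tau, T = 25.0, 15.0, 600.0   # subsidy (BTC), validation time (s), block time (s)
p_valid = 0.9999                # relayed headers are almost always valid
ev_spv = (tau / T) * p_valid * R
print(f"+{ev_spv:.4f} BTC expected per block interval vs idling")  # ~0.62 BTC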

Run Bitcoin Unlimited (www.bitcoinunlimited.info)