Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016 then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 1953274 times)
cypherdoc (Legendary) | July 29, 2015, 07:32:48 PM | #29461

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?
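making that arithmetic explicit (a minimal sketch, assuming spam cost scales linearly with the bytes you have to fill, and using the ~40BTC/day and $290/BTC figures above):

Code:
# Rough cost to keep N-MB blocks full of spam, scaled linearly from the
# ~40 BTC/day in fees harvested at the peak of the last attack.
FEES_AT_1MB_BTC_PER_DAY = 40.0   # figure cited above
USD_PER_BTC = 290.0              # figure cited above

def spam_cost(block_mb, days=30):
    btc_per_day = FEES_AT_1MB_BTC_PER_DAY * block_mb
    usd_per_day = btc_per_day * USD_PER_BTC
    return btc_per_day, usd_per_day, usd_per_day * days

btc, usd_day, usd_month = spam_cost(20)
print(f"20 MB: ~{btc:.0f} BTC/day, ~${usd_day:,.0f}/day, ~${usd_month:,.0f}/month")
# -> ~800 BTC/day, ~$232,000/day, ~$6,960,000/month (the ~$7M figure above)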

NewLiberty (Legendary) | July 29, 2015, 07:36:30 PM | #29462

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

awemany (Newbie) | July 29, 2015, 07:36:56 PM | #29463


1-3mb are ok.  8mb is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10mins to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100kiB at the same time the bigger block hard fork comes into effect. Shouldn't this very much remove your worries about an increase in blocksize?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.
iCEBREAKER (Legendary) | July 29, 2015, 07:37:10 PM | #29464

The introduction of sidechains, LN and whatever other solutions is a complicated solution. It's also a solution motivated by a desire to fix a perceived economic issue, rather than sticking to the very simple issue at hand. It is the very opposite of what you are claiming to be important, that software should be kept simple.

That is a contradiction.

The simple solution is to remove the artificial cap. A cap that was put in place to prevent DDOS.

Your reference of CVE-2013-2292 is just a distraction. It is a separate issue, one that exists now and would continue to exist with a larger block size.

Bloating Layer 1 is a complicated solution; scaling at Layer 2+ is an elegant one.

You still don't understand Tanenbaum's maxim.  Its point isn't 'keep software simple FOREVER NO MATTER WHAT.'  That is your flawed simpleton's interpretation.

"Fighting features" means ensuring a positive trade-off in terms of security and reliability, instead of carelessly and recklessly heaping on additional functionality without the benefit of an adversarial process which tests their quality and overall impact.

One does not simply "remove the artificial cap."  You may have noticed some degree of controversy in regard to that proposal.  Bitcoin is designed to strenuously resist (i.e. fight) hard forks.  Perhaps you were thinking of WishGrantingUnicornCoin, which leaps into action the moment anyone has an idea and complies with their ingenious plan for whatever feature or change they desire.

Like DoS, CVE-2013-2292, as an issue that exists now, is fairly successfully mitigated by the 1MB cap.  It is not a separate concern, because larger blocks exacerbate the problem in a superlinear manner.  You don't get to advocate 8MB blocks, but then wave your hands around eschewing responsibility when confronted with the immediate entailment of a purposefully constructed 8MB tx taking 64 times longer to process than a 1MB one (signature hashing scales quadratically with tx size, and 8^2 = 64).  The issue is intrinsic to larger blocks, which is why Gavin proposed a 100k max tx size be married to any block size increase.
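As a rough illustration of that quadratic blow-up (a minimal sketch; the per-input size is an assumed round number, not a figure from the CVE writeup):

Code:
# Sketch of quadratic signature hashing: a worst-case tx that fills a whole
# block re-hashes roughly the entire tx once per input (SIGHASH_ALL).
BYTES_PER_INPUT = 180  # assumed rough size of one signed input

def hashed_bytes(tx_size_bytes):
    n_inputs = tx_size_bytes // BYTES_PER_INPUT
    return n_inputs * tx_size_bytes  # each input's sighash covers ~the whole tx

one_mb = 1_000_000
for mb in (1, 8, 20):
    ratio = hashed_bytes(mb * one_mb) / hashed_bytes(one_mb)
    print(f"{mb} MB worst-case tx hashes ~{ratio:.0f}x the data of a 1 MB one")
# -> ~1x, ~64x, ~400x: the 8^2 = 64 factor mentioned above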

Fully parsed, what you are claiming is

Quote
The simple solution is to remove the artificial cap, i.e. hard fork Bitcoin.

Do you realize how naive that makes you look?

cypherdoc (Legendary) | July 29, 2015, 07:50:22 PM | #29465

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

b/c i'm assuming it would be cost prohibitive for you?  and for pretty much anyone else, i might add, although i'm sure you'd argue it wouldn't be for a gvt; which actually might be true, and which is why i advocate NO LIMIT.  that would throw a huge amount of uncertainty onto just how successfully one could jack the mempool in that scenario, since it removes the user disruption and thus greatly increases the financial risk ANY attacker would be subject to.

Quote
But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

i know: your normal-sized, convoluted multi-input block.  that would, btw, have to be constructed from non-standard tx's and self-mined by a miner, which makes it unlikely, b/c you'd have to believe a miner would be willing to risk his reputation and financial viability by attacking the network in a way that has repercussions.  that's also good news in that it rules out a regular spammer or gvt attacker (non-miner) doing this.  and finally, that type of normal-sized but convoluted, slow-to-validate block will propagate VERY slowly, which risks orphaning, which makes that attack also unlikely.
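as a rough illustration of that orphaning risk (a minimal sketch, assuming block discovery by the rest of the network is a Poisson process with a 600-second mean interval):

Code:
# Orphan risk from extra propagation/validation delay, assuming the rest of
# the network finds blocks as a Poisson process with a 600 s mean interval.
import math

def orphan_probability(delay_seconds, mean_interval=600.0):
    # chance another block appears before yours finishes propagating
    return 1.0 - math.exp(-delay_seconds / mean_interval)

for delay in (2, 30, 120, 600):
    print(f"{delay:>4} s extra delay -> ~{orphan_probability(delay):.1%} orphan risk")
# -> ~0.3%, ~4.9%, ~18.1%, ~63.2%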
cypherdoc (Legendary) | July 29, 2015, 07:52:01 PM | #29466


1-3mb are ok.  8mb is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10mins to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100kiB at the same time the bigger block hard fork comes into effect. Shouldn't this very much remove your worries about an increase in blocksize?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.


he's backed off the 100kB tx limit in favor of limiting the # of sigops and simplifying the hashing process.  for details, someone provide the link to his ML post.
awemany (Newbie) | July 29, 2015, 07:58:50 PM | #29467

he's backed off the 100kB tx limit in favor of limiting the # of sigops and simplifying the hashing process.  for details, someone provide the link to his ML post.

Thanks for the update! The effect on transaction validation time should be essentially the same, though... it should be enough to make 'overly complicated' blocks impossible for the time being.
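For illustration, a minimal sketch of the per-block accounting such a rule implies (both limits below are placeholders, not the actual numbers from Gavin's draft):

Code:
# Per-block accounting for a sigop / bytes-hashed cap instead of a tx size cap.
MAX_BLOCK_SIGOPS = 20_000               # placeholder limit, not Gavin's number
MAX_BLOCK_BYTES_HASHED = 1_300_000_000  # placeholder limit, not Gavin's number

def block_within_limits(txs):
    """txs: iterable of (sigops, bytes_hashed) pairs, one per transaction."""
    total_sigops = sum(s for s, _ in txs)
    total_hashed = sum(h for _, h in txs)
    return total_sigops <= MAX_BLOCK_SIGOPS and total_hashed <= MAX_BLOCK_BYTES_HASHED

# many ordinary txs pass; one monster quadratic-hashing tx does not
print(block_within_limits([(2, 50_000)] * 2000))       # True
print(block_within_limits([(10_000, 2_000_000_000)]))  # False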
brg444 (Hero Member) | July 29, 2015, 08:03:16 PM | #29468

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009727.html

Greg's reply.

This is entertaining  Cheesy

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
iCEBREAKER (Legendary) | July 29, 2015, 08:08:23 PM | #29469

#rekt

   >It was _well_ .... understood that the users of Bitcoin would wish to protect its decenteralization by limiting the size of the chain to keep it verifyable on small devices.

No it wasn't. That is something you invented yourself much later. "Small devices" isn't even defined anywhere, so there can't have been any such understanding.

Hearn #rekt confirmed:

Quote
In the above statement you're outright backwards-- there was a clear
expectation that all who ran nodes would mine.

Gmax nailed this: Hearn's understanding (only miners run nodes) of Satoshi's original design (all nodes are miners) is completely bungled.

Now it's Frap.doc's turn to get rekt:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy

thezerg (Legendary) | July 29, 2015, 08:21:55 PM | #29470

anyone concerned about the BTC lower high?
cypherdoc (Legendary) | July 29, 2015, 08:22:56 PM | #29471

Now it's Frap.doc's turn to get rekt:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy

i find it amusing that you bash Satoshi on the one hand but when you find a morsel quote of his (which you've misread) you latch onto it as gospel.

if you read the actual link you posted, Satoshi was talking about how Bitcoin should not be combined with BitDNS.  he's not even talking about tx's.  he wanted them to be separate with their own fates.  he also drew an extreme example of how BitDNS might want to include other huge datasets while Bitcoin might want to keep it small as an example of how the decision making might diverge btwn the two.  not that Bitcoin users wanted to keep a small blockchain.
brg444 (Hero Member) | July 29, 2015, 08:28:21 PM | #29472

Now it's Frap.doc's turn to get rekt:

Piling every proof-of-work quorum system in the world into one dataset doesn't scale.

Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Cheesy

i find it amusing that you bash Satoshi on the one hand but when you find a morsel quote of his (which you've misread) you latch onto it as gospel.

if you read the actual link you posted, Satoshi was talking about how Bitcoin should not be combined with BitDNS.  he's not even talking about tx's.  he wanted them to be separate with their own fates.  he also drew an extreme example of how BitDNS might want to include other huge datasets while Bitcoin might want to keep it small as an example of how the decision making might diverge btwn the two.  not that Bitcoin users wanted to keep a small blockchain.


Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


 Huh

Are you really trying to spin this one again?

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
cypherdoc (Legendary) | July 29, 2015, 08:31:43 PM | #29473


Greg has made it abundantly clear that he is a Bear on Bitcoin.  he should step down as a core dev.

and what's this about him not thinking Bitcoin can be used on small devices?  smartphones are KEY to Bitcoin's long term success.
brg444 (Hero Member) | July 29, 2015, 08:34:45 PM | #29474


Greg has made it abundantly clear that he is a Bear on Bitcoin.  he should step down as a core dev.

and what's this about him not thinking Bitcoin can be used on small devices?  smartphones are KEY to Bitcoin's long term success.

Yeah... maybe you need to read this again. What did I tell you about putting words in people's mouths?

"A bear on Bitcoin"  Cheesy So much non sense!

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
Adrian-x (Legendary) | July 29, 2015, 08:36:13 PM | #29475

Lightning network and sidechains are not magical things, they're orthogonal and not alternatives to relay improvements. But since you bring them up: They're both at a _further_ level of development than IBLT at the moment; though given the orthogonality it's irrelevant except to note how misdirected your argument is...

I appreciate the further detail that you describe regarding the relay situation. So it seems that you are not optimistic that block propagation efficiency can have a major benefit for all full-nodes in the near future. Yet block propagation overhead is probably the single biggest argument for retaining the 1MB cap as long as possible, to cap volume growth to a certain extent.

I'm not sure about that.  My opinion is that the biggest argument for retaining the cap (or, at least, not introducing a 20x plus exponential increase or something like that) is the burden on full nodes due to (cumulative) bandwidth, processing power and, last but not least, disk space required.  Of course, for miners the situation may be different - but from my own point of view (running multiple full nodes for various things but not mining) the block relay issue is not so important.  As gmaxwell pointed out, except for reducing latency when a block is found it does only "little" by, at best, halving the total bandwidth required.  Compare this to the proposed 20x or 8x increase in the block size.

Bandwidth is always a much bigger concern than blockchain usage on disk. TB disks are very cheap, v0.11 has pruning.
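For rough numbers behind that trade-off (a minimal sketch assuming consistently full blocks and ignoring headers and relay duplication):

Code:
# Chain growth per year if blocks were consistently full at a given size
# (ignores headers, relay duplication, and today's well-under-capacity blocks).
BLOCKS_PER_DAY = 144  # ~one block every ten minutes

def chain_growth_gb_per_year(block_mb):
    return block_mb * BLOCKS_PER_DAY * 365 / 1000.0

for size_mb in (1, 8, 20):
    print(f"{size_mb} MB blocks -> ~{chain_growth_gb_per_year(size_mb):.0f} GB/year")
# -> ~53, ~420, ~1051 GB/year; pruning (v0.11) trims the disk footprint, but a
#    listening full node still has to download, and re-serve, all of that data.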

Block (1) validation time as well as (2) propagation time are both issues for mining security and avoiding excessive forks/orphans which prolong effective confirmations.
They are separate issues however, and you can have either problem without the other in different blocks, or both together.

Both of these are exacerbated by block size.

As eager as I am for a block size increase for scalability, I'm not convinced that it is yet warranted given the risks...  The developers are aware of the issues.  They aren't goofing off.  It is just better to get it right than to get it done.
Validation time, in itself, is not an issue; miners have approximately 10 minutes to validate.  the issue is not the time, it's that you can hack the protocol and validate nothing, or force your competitor to waste time validating to gain an advantage (this is also a temporary hack and, if anything, highlights where development should occur)

Propagation time is a feature, not an issue (a symbiotic byproduct) that adds to the incentives that make Bitcoin work.

there are hypothetical issues with how these features could be abused; the problem is that the developers are not separating the hypothetical issues from the features and working on those.  some are working to change the features instead.

Odalv (Legendary) | July 29, 2015, 08:38:43 PM | #29476

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



your willingness to connect two dots is astounding
cypherdoc (Legendary) | July 29, 2015, 08:42:11 PM | #29477


Greg has made it abundantly clear that he is a Bear on Bitcoin.  he should step down as a core dev.

and what's this about him not thinking Bitcoin can be used on small devices?  smartphones are KEY to Bitcoin's long term success.

Yeah... maybe you need to read this again. What did I tell you about putting words in people's mouths?

"A bear on Bitcoin"  Cheesy So much non sense!

dude, you need to read Satoshi's post again.  he's arguing to keep Bitcoin and BitDNS separate b/c either one of them might want to increase their size:

The networks need to have separate fates.  BitDNS users might be completely liberal about adding any large data features since relatively few domain registrars are needed, while Bitcoin users might get increasingly tyrannical about unreasonably limiting the size of the chain so it's easy for lots of users and small devices.

bolded part mine.
cypherdoc (Legendary) | July 29, 2015, 08:47:50 PM | #29478

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



your willingness to connect two dots is astounding

notice how i used that graph as part of a series of graphs and data to support my supposition.  unlike you, who are here to troll and cherrypick.
Adrian-x (Legendary) | July 29, 2015, 08:48:18 PM | #29479

This is a disaster.

On the one hand, we have a person who's actually telling the truth and also is the same guy who tried to break Tor, add blacklists, and other assorted nasty features.

On the other hand we have someone who tends to invent legitimately useful things, and also lie when it suits his purposes.



There's nothing worse than having to agree with the turd sandwich, because he's the only one telling the truth.


it's not as bad as a turd sandwich, is it?

it looks to me like no one let him make a turd sandwich, and now he isn't pushing it, as he knows it's not a good idea.

on the other hand, Gmax is going through the lessons MikeH learned with blacklists; it just happens that MikeH has a stronger character.

smooth (Legendary) | July 29, 2015, 08:57:50 PM | #29480

decision to force a fee market is a centralized solution

On its face this is a nonsense argument, since any development decisions are centralized in the same manner.

Increase the blocksize, decrease the blocksize, or leave it alone, they are all (centralized) development decisions.

It's also false that anything is really centralized about it because if there were truly a consensus for change (over the objections of the 'centralized' developers) there would be a successful fork.


Yes all dev decisions are essentially centralized, including the decision to NOT do something.  Since that is trivially true, I am talking about the effect of the decision.  And in one case miners can optimize their profitability by choosing to include transactions while in another case they are artificially limited.

Listen to New Liberty, he got this completely right. Whether miners can optimize their profitability is beside the point, because in doing so they also influence others' costs, and they are most certainly not optimizing that.

The idea of a sensible market for block size arising in the current structure without the consensus block size rule (which is the only mechanism for the "others" in the previous paragraph to participate in such a market) is a fantasy.