Bitcoin Forum
  Show Posts
Pages: « 1 ... 103 [104] 105 ... 288 »
2061  Bitcoin / Development & Technical Discussion / Re: How to Create a Bitcoin Address from a Coin Flip on: February 06, 2015, 07:35:14 PM
I will argue that 256 coin flips from random.org is the best random number possibility available.  And assuming that you push the results through an offline computer using brainwallet offline, you will have a VERY SAFE, VERY RANDOM private key.
LOL.  A "VERY SAFE" number which is trivially known to a third party.  Is someone at "random.org" paying you to encourage people to have them generate their private keys, or did you come by this cluelessness naturally?

I haven't looked recently, but last I checked random.org's methods were secret and not peer reviewed. So not only may the results be trivially and maliciously logged (by the site operators, anyone who's compromised their system, or the operators of the VPSes they use (Rackspace cloud)), they're probably also more likely to be accidentally flawed, because their methods are not reviewed.
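To make the contrast concrete, here is a minimal sketch (not a wallet, purely illustrative) of generating candidate key material from the local OS CSPRNG rather than bits fetched from a third-party website; the curve order shown is the public secp256k1 group order:

```python
# Sketch: derive a candidate secp256k1 private key from 256 locally
# generated bits, instead of bits fetched from a third party who can
# log them. Illustrative only, not wallet code.
import secrets

# Order of the secp256k1 group (a public curve parameter).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def local_private_key() -> int:
    """256 bits from the OS CSPRNG, rejection-sampled into [1, N-1]."""
    while True:
        candidate = secrets.randbits(256)
        if 1 <= candidate < N:
            return candidate

key = local_private_key()
assert 1 <= key < N
```

Nothing leaves the machine, and no operator (or anyone who has compromised them) ever sees the bits.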
2062  Alternate cryptocurrencies / Altcoin Discussion / Re: Trustless, Tradable, Reality-Based Contracts on cryptonote coins on: February 06, 2015, 06:44:35 PM
As a general bit of advice, people tend to throw writeups in the trash when they begin with a number of strongly worded incorrect comparative claims.

The most visible (and seemingly most widely used) approach to binary external-information contracts in Bitcoin is the Reality Keys approach: it has the oracle selectively reveal a private key based on an outcome. This is consistent ('reality based'), and one can happily use multiple oracles in a threshold, which meets your (IMO misleading) definition of trustless. The contracts are also tradable with the cooperation of all the parties (but not the oracles), which is indeed an area that could be improved. The key-reveal approach also has other useful properties, like being private (used well, no one who isn't a party to the contract-- even the oracle-- can tell a contract happened).
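A toy sketch of the key-reveal idea, using scalar arithmetic mod the group order as a stand-in for elliptic-curve point addition (real schemes combine public keys as EC points; the numbers here are made up for illustration):

```python
# Toy sketch of key-reveal contracts: the oracle precommits to one key
# per outcome and reveals the matching private scalar when the outcome
# is known; the winning party can then spend with party_key + oracle_key.
# Scalar arithmetic mod N stands in for EC point addition here.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def combine(party_priv: int, oracle_priv: int) -> int:
    # Additive homomorphism: the combined *public* key can be computed
    # from the two public keys alone, before either secret is revealed.
    return (party_priv + oracle_priv) % N

alice_key, oracle_yes_key = 12345, 67890   # toy secrets
combined = combine(alice_key, oracle_yes_key)
# Once the oracle reveals oracle_yes_key, Alice (and only Alice, since
# alice_key stays private) can reconstruct the combined secret.
assert combined == (alice_key + oracle_yes_key) % N
```

No one observing the chain sees anything but an ordinary key, which is where the privacy property comes from.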

In future writeups you should limit your comparison to a very specific implementation to avoid turning people off with "wtf, that's not right"; or better: focus on describing the positive qualities of your proposal rather than criticizing other things. Smiley

On a more technical note, ... are you really putting a URL fetch into a consensus rule in the system? What happens when a malicious site gives random results? WRT "trust", at least in Bitcoin the trust in miners is generally narrower than you think, and the consequences of violating that trust are more limited. (Also keep in mind you're still single-threading your trust on that URL.)

Cheers,
2063  Bitcoin / Development & Technical Discussion / Re: I thought trxid is unique on: February 06, 2015, 07:03:00 AM
I don't trust blockchain.info as much as before.
BC.i frequently shows outright incorrect information. Some of it is because some of the data they show is "unphysical", i.e. it's a synthesis of data in the blockchain, and the actual operation of the system is not well matched to the model the site presents, so the corner cases produce weird results. E.g. there was (and still is? unsure) a bunch of "addresses" showing negative "balances". It's less surprising to see errors like that when you understand that there is nothing really like an "address balance" in the Bitcoin system itself.
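To illustrate what "synthesized" means here: an "address balance" is not a field stored anywhere in the system; an explorer derives it by scanning outputs, as in this toy sketch over a hypothetical ledger:

```python
# Illustration: an "address balance" is not stored anywhere in Bitcoin;
# a block explorer synthesizes it by summing outputs paying an address
# and skipping the ones later spent. Toy, hypothetical data.
outputs = [  # (txid, index, address, amount)
    ("tx1", 0, "addrA", 50),
    ("tx2", 0, "addrA", 30),
    ("tx2", 1, "addrB", 20),
]
spent = {("tx1", 0)}  # outpoints consumed by later transactions

def synthesized_balance(address: str) -> int:
    return sum(amt for (txid, idx, addr, amt) in outputs
               if addr == address and (txid, idx) not in spent)

assert synthesized_balance("addrA") == 30  # derived, not a ledger field
```

Any mismatch between this derived model and the system's actual rules (reorgs, nonstandard scripts, double-counting) shows up as the weird corner cases described above.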
2064  Bitcoin / Bitcoin Discussion / Re: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... on: February 06, 2015, 05:15:42 AM
Why couldn't MAX_BLOCK_SIZE be self-adjusting?
That's very vague... based on what? The hard rules of the protocol are what protect the users and owners of Bitcoins from miners whose interests are only partially aligned. Sadly, miners have substantial censoring power over data that goes into the blockchain. I suppose it's useful to have an in-protocol way of coordinating rather than depending on potentially non-transparent back-room dealing; but almost anything in the network would be easily gameable by miners. There are some things that I think are preferable to just having no effective limit (e.g. having a rolling median, requiring mining at higher difficulty to move the needle for your own blocks, and requiring difficulty to not be falling overall for the size to go up), but these don't address half the concerns and potentially add a fair bit of complexity (which has its own risks).
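The rolling-median idea mentioned above can be sketched like this (window size, floor, and multiplier are illustrative assumptions, not a proposal):

```python
# Sketch of a rolling-median size limit: the limit for the next block
# is a multiple of the median size of the last W blocks, so no single
# miner can move it, and moving it at all requires sustained behavior.
# All constants here are made-up illustrations.
from statistics import median

def next_size_limit(recent_sizes, floor=1_000_000, multiple=2):
    return max(floor, int(multiple * median(recent_sizes)))

assert next_size_limit([400_000] * 11) == 1_000_000   # floor binds
assert next_size_limit([800_000] * 11) == 1_600_000   # median moves it
```

A real scheme would also need the higher-difficulty and non-falling-difficulty conditions mentioned above, which is exactly the added complexity being warned about.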
2065  Bitcoin / Bitcoin Discussion / Re: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... on: February 05, 2015, 10:29:43 PM
The chance of orphan blocks should provide some competition for space.
Centralized miners suffer much lower orphan rates if the orphan rate is macroscopic and driven by actual propagation time. If you're in a regime where one would want to do something to lower their orphan rate, the optimal income-maximizing strategy is to centralize, not to reduce sizes.

Though at least fundamentally we know there is no need for the orphan rate to increase in proportion to block size, if miners use more efficient relaying mechanisms that take advantage of the transactions having already been sent in advance.
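A sketch of why relay cost need not scale with block size: if peers already have the transactions, a block can be announced as a list of short transaction identifiers plus only the transactions the peer is missing (the idea behind later compact-relay schemes; the 6-byte id width here is an illustrative assumption):

```python
# Sketch: announce a block by short ids instead of full transactions.
# Peers that already hold the transactions reconstruct the block from
# ~6 bytes per tx; only genuinely missing transactions are re-sent.
import hashlib

def short_id(txid: str) -> bytes:
    return hashlib.sha256(txid.encode()).digest()[:6]  # 6 bytes vs full tx

def announce(block_txids, peer_mempool):
    ids = [short_id(t) for t in block_txids]
    missing = [t for t in block_txids if t not in peer_mempool]
    return ids, missing  # bandwidth ~ 6 bytes/tx + only the missing txs

ids, missing = announce(["a", "b", "c"], peer_mempool={"a", "b"})
assert missing == ["c"] and len(ids[0]) == 6
```

Announcement bandwidth then grows with the number of transactions the peer *hasn't* seen, not with the block size itself.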
2066  Bitcoin / Bitcoin Discussion / Re: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... on: February 05, 2015, 10:17:26 PM
Do not forget that the hard-coded fees constants fix should be addressed simultaneously with this issue since they are inter-linked .... or we'll be back arguing about that eventually also.
We don't have hardcoded fees in Bitcoin Core... except very low ones for relay permission, which have been, in practice, below typical fees. They're kind of ugly, and I'm generally opposed to hardcoded fees, but if they're below behavior in practice they don't cause much harm (and are very helpful at preventing resource-exhaustion attacks). Bitcoin Core 0.10 has an automatic fee system based on the transactions in the mempool and recent blocks, where you can set a target number of blocks to wait and it will pay based on recent history.

The "constrained" block size scenario makes necessary the ability for ordinary users to increase the fee. Users will want to update the fee on their unconfirmed tx to manage the instability in confirmation times, otherwise their tx can remain stuck in cyberspace, and they are helpless.
This is relatively straightforward to support. When a new transaction comes into the mempool, if it pays at least $increment more fees per KB than the conflicting already-mempooled transaction, replace it and forward it on. Then you just need fairly simple wallet support to revise a transaction. Petertodd (IIRC) already wrote "replace by fee" code that does this. The catch is that, implemented this way, it makes zero-confirmation transactions less safe, since double spends would succeed more often. This can be addressed by narrowing the set of allowed replacements (e.g. all outputs must be equal or greater), but AFAIK no one has bothered implementing it.
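The replacement rule described above, as a sketch (the increment and fee figures are illustrative, not actual relay policy):

```python
# Sketch of fee-based replacement: accept a conflicting transaction
# only if it pays a higher feerate by at least some increment; the
# stricter variant additionally requires every output of the old
# transaction to be preserved or increased. Numbers are illustrative.
INCREMENT = 1000  # hypothetical per-kB relay increment

def accept_replacement(old, new, require_outputs_preserved=False):
    if new["fee_per_kb"] < old["fee_per_kb"] + INCREMENT:
        return False  # not enough extra fee to justify re-relay
    if require_outputs_preserved:
        for addr, amt in old["outputs"].items():
            if new["outputs"].get(addr, 0) < amt:
                return False  # a recipient would be cheated
    return True

old = {"fee_per_kb": 5000, "outputs": {"addrA": 10}}
new = {"fee_per_kb": 7000, "outputs": {"addrA": 10, "addrB": 3}}
assert accept_replacement(old, new, require_outputs_preserved=True)
assert not accept_replacement(old, {"fee_per_kb": 5500, "outputs": {}})
```

With the outputs-preserved restriction turned on, a replacement can only add fee, not redirect payments, which is what blunts the double-spend concern.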

Quote
Certainly, protocol block limits should not be hit unless all wallets first support the updating of fees on unconfirmed tx.
Chicken and egg. Without fee pressure there is no incentive to work on software to do that. Most non-Bitcoin-Core wallets just set rather high hardcoded fees (even constant ones that aren't related to the txsize metric that miners use to prioritize transactions into blocks).

Unfortunately over-eager increases of the soft-limit have denied us the opportunity to learn from experience under congestion and the motivation to create tools and optimize software to deal with congestion (fee-replacement, micropayment hubs, etc).

Look at the huge abundance of space-wasting uncompressed keys (it takes ~one line of code to compress a Bitcoin pubkey) on the network to get an idea of how little pressure exists to optimize use of the blockchain public good right now.
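The "one line" in question: a compressed public key is just the 32-byte x coordinate prefixed with 0x02 or 0x03 depending on the parity of y, for 33 bytes instead of the 65-byte uncompressed form (0x04 + x + y). Toy coordinates below, since the point is the encoding:

```python
# Pubkey compression in essentially one line: keep x, encode only the
# parity of y in the prefix byte (y is recoverable from the curve
# equation). 33 bytes instead of 65.
def compress_pubkey(x: int, y: int) -> bytes:
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

compressed = compress_pubkey(x=7, y=8)                  # toy coordinates
assert len(compressed) == 33 and compressed[0] == 0x02  # even y
assert compress_pubkey(7, 9)[0] == 0x03                 # odd y
```

Every uncompressed key on the chain wastes 32 bytes that this one-liner would have saved.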

Because they are economically rational and facing different prices for bandwidth and electricity in their respective neighborhoods, they all set different minimum-fee policies.
With correctly set up software there is no relationship between your bandwidth or electricity costs as a miner and the transactions you accept into your blocks, and any slight residual relation can be divided down to nothing by pooling with N other miners (centralizing the consensus in the process) in order to have 1/Nth the bandwidth/CPU costs. As a miner you maximize your personal income by accepting all available fee-paying transactions that fit. It's best for you when other miners reject low-fee transactions to encourage people to pay high fees, but you don't; instead you hoover up all the fees they passed up. They bear the cost of encouraging users to pay higher fees; you defect and take the benefit.
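The income-maximizing policy described above amounts to a greedy fill by feerate, sketched here with made-up transactions:

```python
# Sketch of the defecting miner's policy: sort by feerate and take
# everything that fits, regardless of your own bandwidth/electricity
# costs and regardless of what fees other miners try to encourage.
def fill_block(mempool, max_size):
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"],
                     reverse=True):
        if used + tx["size"] <= max_size:
            chosen.append(tx)
            used += tx["size"]
    return chosen

mempool = [{"fee": 10, "size": 100}, {"fee": 1, "size": 100},
           {"fee": 50, "size": 400}]
picked = fill_block(mempool, max_size=600)
assert sum(t["fee"] for t in picked) == 61  # the low-fee tx gets in too
```

Note that the miner's own costs appear nowhere in the decision, which is the point: nothing ties minimum-fee policy to local electricity or bandwidth prices.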

A more detailed explanation is forthcoming.
Sounds good, but hopefully you can understand that some people are not very comfortable betting Bitcoin's future on not-yet-public theorems (which sound like they must be at odds with the best understanding available from the active technical community _and_ academia...). There have been many "bitcoin scaling" ideas that accidentally turned out to have no security or to imply extreme centralization once considered more carefully. There are a few ideas which I think will someday help a lot, but they're not practical yet and it's not clear when they will be.
2067  Bitcoin / Bitcoin Discussion / Re: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... on: February 05, 2015, 08:49:46 PM
All you've done here is reinforce the fact that the design of the P2P network is broken and should be fixed, which is indeed an argument I am making, with a side order of red herring regarding the issuance schedule.
The difference between us is that I don't accept a permanently broken P2P network as a given and conclude that we should employ broken economics as a work around.
The broken economics of having a block size limit, and the broken P2P network should both be fixed.
I was already assuming a perfectly idealized P2P network with no overhead and sub-linear scaling. I've done as much to explore the space of efficiency gains in this kind of system as any two other people here combined, come on. Please don't try to play it off that I don't know how the system works. Decentralization has inherent costs. You're not saying anything to escape that. It's not good enough to just say "broken, broken" when reality doesn't behave like you wish it did. I also wish there weren't a tradeoff here, but wishing doesn't make it so. Sad  (And to be clear, I think there is some amount where the costs are insignificant and not a concern, and that cutoff changes over time; it's only the unlimited view which I think is clearly at odds with strong decentralization and risks disenfranchising the actual holders and users of Bitcoin; people who weren't signing up for a system controlled by and operated at the complete whim of a few large banks ('miners'/pools).)
2068  Bitcoin / Bitcoin Discussion / Re: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... on: February 05, 2015, 07:53:29 PM
A non-scarce good is one that does not require allocation because the available supply at a price of zero exceeds the maximum achievable demand at that price.
No good that requires time or energy to deliver can be non-scarce.
Including transactions in a block will always require both time and energy, therefore the space in a block will be scarce.
Because space in a block is scarce, miners will need to allocate the inclusion of transactions into a block, and there exists a price below which they will not do so.
We can't calculate ahead of time what the equilibrium price of a transaction will be in the future, because that depends on the future actions and preferences of millions of other people.
This is a bit unhinged. The _inherent_ cost of transactions is roughly size_of_data * decentralization_level (actually there is a quadratic component in a decentralized network too, but let's ignore that; good design can make it small). In a free market for transaction capacity based purely on the inherent costs, competition can drive decentralization down to lower costs. With a completely centralized system the cost at almost any imaginable scale is basically nothing (e.g. a single <$2000 host on a sub-gigabit network connection is able to process a hundred thousand transactions per second).

So effectively one can replace the fee market with a market that favors the most centralization, as the most centralized have the lowest costs (since that's all network income would pay for). This may be true, but it's not interesting-- since if a highly centralized system were desirable there are more efficient and secure ways to achieve one.

I believe you're making a false comparison. None of the market participants have a way to express their preference for a decentralized network except by defining Bitcoin to be one through the rules of the system. Absent that, someone who doesn't care and just wants to maximize their short-term income can turn the decentralization knob all the way down (as we've seen with the enormous amount of centralization in mining pools) and maximize their income-- regardless of what the owners of bitcoins or the people making the transactions prefer. You could just as well argue that miners should be able to freely print more bitcoins without limit and, magically, if the invisible-pink-hand decides it doesn't want Bitcoin to inflate, "the market" will somehow prevent it (in a way that doesn't involve just defining it out of the system).

Of course, 28 minutes is still long. That is based on 2013 data.
This data is massively outdated... it's from before signature caching and ultraprune, each of which was easily an order of magnitude (or two) improvement in the transaction-dependent parts of propagation delay. It's also prior to the block relay network, not to mention the further optimizations proposed but not yet written.

I don't actually think hosts are faster; I'd take a bet that they're slower on average, since performance improvements have made it possible to run nodes on smaller hosts than were viable before (e.g. crazy people running bitcoind on an RPi). But we've had software improvements which massively eclipsed anything you would have gotten from hardware improvements. Repeating that level of software improvement is likely impossible, though there is still some room to improve.

There are risks around massively increasing orphan rates in the short term with larger blocks (though far, far lower than what those numbers suggest), indeed... that's one of the unaddressed things in current larger-block advocacy, though the block relay network (and the possibility of efficient set reconciliation) more or less shows that the issues there are not fundamental, though maybe practically important.
2069  Bitcoin / Development & Technical Discussion / Re: Is bitcoin v0.10's new libsecp256k1 safe & without mathematical backdoors? on: February 05, 2015, 07:44:07 AM
I guess we are just setting returns to negatives to represent errors?
This is clearly documented in the interface for the function:

Code:
 *  Returns: 1: correct signature
 *           0: incorrect signature
 *          -1: invalid public key
 *          -2: invalid signature

Quote
Code:
void bench_scalar_sqr(void* arg) {
    int i;
    bench_inv_t *data = (bench_inv_t*)arg;

    for (i = 0; i < 200000; i++) {
        secp256k1_scalar_sqr(&data->scalar_x, &data->scalar_x);
    }
}
Why 200,000?
Sorry, just trying to understand the code better.
It's a benchmark. Not part of the library itself. As typical for benchmarks, it runs enough times to make the measurements have reasonable resolution.
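The pattern in the benchmark above, sketched generically: repeat a cheap operation enough times that the total elapsed time is large relative to the timer's resolution, then divide. The iteration count (200,000 there) is arbitrary beyond that:

```python
# Sketch of the benchmark pattern: amortize timer resolution over many
# iterations of a cheap operation. Any sufficiently large count works.
import time

def bench(op, iterations=200_000):
    start = time.perf_counter()
    for _ in range(iterations):
        op()
    return (time.perf_counter() - start) / iterations  # seconds per op

per_op = bench(lambda: 123456789 * 987654321)
assert per_op > 0
```

With too few iterations the measurement is dominated by clock granularity and loop overhead, which is why benchmark harnesses pick counts in the hundreds of thousands for fast primitives.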
2070  Bitcoin / Development & Technical Discussion / Re: What is a Bitcoin soft fork? (in laymen's terms) on: February 04, 2015, 11:00:00 PM
Hi, I was wondering if anyone can describe what a soft fork is and how it is implemented, in a very easy-to-understand way?
I know what a hard fork is. That's where you modify the source code and get more than 50% of the nodes to adopt it.
This is incorrect in that a hard fork with just 50% is a system failure, the network would split... and coins could be spent twice (on each new network). A working hardfork needs an overwhelming support of ~all the participants, and everyone else just isn't a participant anymore.

Quote
How does this differ from a soft fork? Does a soft fork require modifying the bitcoin source code, or only the clients?
I'm not sure what you think is the difference between "the bitcoin source code, or only the clients". Bitcoin clients are the network, and they're all software.


A soft fork is a change to the rules enforced in the blockchain which is a strict narrowing. Nothing previously invalid becomes permitted, but blocks/transactions which were previously valid may be denied. This is more powerful than you might guess at first blush, because Bitcoin was designed to be forward-extensible and there are many conditions where you can create transactions which say "do nothing, anyone can spend" but a later soft fork can carve a new feature out of that. E.g. a whole new script system can be introduced this way (and we more or less did with BIP16); it just has to look like "anyone can spend" to old nodes.

Think of those 'anyone can spend' parts as blocks of marble out of which new features can be chiseled.
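The "strict narrowing" property can be sketched as a relationship between two validity predicates (rules here are toy stand-ins, not actual consensus rules):

```python
# Sketch of soft-fork narrowing: every block valid under the new rules
# is also valid under the old rules, so non-upgraded nodes keep
# following the chain. Rule contents are illustrative.
def old_rules(block):
    return block["size"] <= 1_000_000

def new_rules(block):
    # Soft fork: all the old rules, plus an extra restriction carved
    # out of previously "anyone can spend" space.
    return old_rules(block) and block.get("extra_rule_ok", False)

blocks = [{"size": 500_000, "extra_rule_ok": True},
          {"size": 500_000, "extra_rule_ok": False}]
for b in blocks:
    if new_rules(b):           # anything new nodes accept...
        assert old_rules(b)    # ...old nodes accept too, never the reverse
```

A hard fork is exactly the case where this implication fails: some block valid under the new rules is invalid under the old ones, and the two populations diverge.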
2071  Other / Archival / Re: delete on: February 03, 2015, 03:15:46 PM
A possible attack scenario would be to shoot down mining pools so that others are favored. Netsplits also become a lot easier; this is a serious bug in my humble opinion.
Mining pools hide their private mining nodes from the network, so it's not quite so simple.

Quote
I am just thinking on how to disclose it, because I would like to have my time honored in some manner.
If someone would promise me, to honor my time in a proper way in case the bug really works, I would disclose it (to you privately if preferred) immediately.
I would be also willing to donate all my bitcoins to the bitcoin foundation in case my DOS is not working ;-)

I have a proof of concept script, that will shoot down your local (or any other node that you can reach by its ip) in a manner of microseconds. Ready when you are.
If it's really as simple as sending a few messages to crash a node, and it affects 0.10, then I agree it needs to be fixed right away... You'd be credited in the commit for the fix (and likely a CVE, if it's an outright crash), like anyone else who has reported a similar issue. This is the reasonable and customary way things are handled in open source projects, and the only reasonably scalable one (even if you put in 'a lot' of time, it pales in comparison to the thousands of hours put in by others; besides, who do you think can afford that? Non-technical people don't give a crap about this stuff... they think the software is magic). I'd also remove the negative trust I have against you here on the forum, since you'd have made good; and not harass you in the future about initially asking for a huge out-of-the-norm bounty in this case. That's all I can offer. Otherwise, if something exists here that is unknown, it'll have to wait until someone else rediscovers it.
2072  Other / Archival / Re: delete on: February 03, 2015, 08:32:25 AM
You are right, I was not always transparent, not always right, and not very communicative. But I was working day and night to understand every single part of the software and the protocol; sometimes I was right, sometimes I was wrong. Anyways... I am preparing a video for you right now demonstrating the DOS on a stock Bitcoin 0.9 node (of mine) and will send it to you in private.
Why use year old software? I'm not sure what a video is supposed to prove. The bogus ECDSA "cracker" had a proof video too.
2073  Other / Archival / Re: delete on: February 03, 2015, 07:33:59 AM
Maybe I would have acted differently if you would have reacted differently back then, meaning facing my ideas with interest (even if they were wrong, as you correctly pointed out) instead of immediate negative trust.
Immediate? Only your continued deceptive behavior earned you that negative trust. Your post was on January 18th, the down-rating was on March 18th, and in between there were a half dozen posts by me. You never even backed out your deceptive claims.
2074  Other / Archival / Re: delete on: February 03, 2015, 02:38:06 AM
I guess you didn't learn after your prior stunts resulting in negative trust?  (For some context Evil-Knievel incorrectly (and seemingly dishonestly) claimed to have compromises for ECDSA in the past and tried charging for them; conduct which he currently bears negative trust for.)

If you believe you have some DOS attack, please report it responsibly to bitcoin-security@lists.sourceforge.net (or feel free to report it encrypted privately to any of the Bitcoin Core committers if you think it's super critical), just like anyone else does. We consider DOS attacks to be important, but fundamentally you cannot prevent all DOS, because an attacker can just exhaust your bandwidth; instead DOS is mitigated by not exposing your critical infrastructure to the public network directly. We usually fix several DOS-ish issues in each release, and it may be that anything you know about is already known and a coordinated fix is in progress. In any case, you'll be credited for your contribution. Demanding an enormous bounty for what sounds like something that is not terribly concerning is unreasonable and isn't likely to happen (it would be incredibly counterproductive to pay you when other people have done _far_ more work and found far more serious issues in the past).

If your actions caused foreseeable and preventable harm to others you may find yourself subject to civil litigation by the harmed parties. I would strongly encourage you to behave responsibly.
2075  Bitcoin / Development & Technical Discussion / Re: Is bitcoin v0.10's new libsecp256k1 safe & without mathematical backdoors? on: February 01, 2015, 09:40:56 AM
To be fair, OpenSSL has a much wider goal. It's an apples and oranges comparison in that sense; but we don't need those extra parts.
2076  Bitcoin / Development & Technical Discussion / Re: Is bitcoin v0.10's new libsecp256k1 safe & without mathematical backdoors? on: January 31, 2015, 12:19:17 PM
If the squaring bug referenced is not a concern for Bitcoin implementations, then why was a new library required? It sounds like the bug affects things other than bitcoin, but bitcoin is safe from it.
Independent of any particular bugs, OpenSSL is maintained in a way which is unsafe for consensus ( http://sourceforge.net/p/bitcoin/mailman/message/33221963/ ). OpenSSL is also on the order of >>300k lines of messy, difficult-to-review code (even for someone who is familiar with the algorithms in use), which-- for Bitcoin's narrow use-- can be replaced with <10k lines and a 6-8x speed-up at the same time (and 21% of that 10k is testing code, compared to 0.9% in OpenSSL-- another reason to believe, coupled with the basically 100% branch coverage of the libsecp256k1 tests, that it is well tested). OpenSSL also has huge timing/cache side-channel leaks (http://eprint.iacr.org/2014/161.pdf), and can't be used with best-practice derandomized DSA without moving part of the low-level cryptographic code into your own application. The point about the BN squaring bug wasn't that that particular issue was a problem for Bitcoin (though it narrowly dodged being a trivial attack to fork the network), but that the tests for libsecp256k1 finding it without even trying to test OpenSSL is some level of evidence that the library may already be better tested in practice.
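A minimal sketch of what "derandomized DSA" means: derive the nonce k deterministically from the private key and message, so a broken RNG can never cause nonce reuse across different messages. (RFC 6979 does this with a more involved HMAC-DRBG construction; the single-HMAC reduction below is a simplification for illustration.)

```python
# Sketch of deterministic nonce derivation (simplified stand-in for
# RFC 6979): k is a function of (private key, message hash), so it is
# repeatable for the same message and distinct across messages, with
# no RNG in the loop at signing time.
import hashlib, hmac

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deterministic_nonce(priv: int, msg_hash: bytes) -> int:
    digest = hmac.new(priv.to_bytes(32, "big"), msg_hash,
                      hashlib.sha256).digest()
    # Real RFC 6979 loops to reject 0 and out-of-range values.
    return int.from_bytes(digest, "big") % N

h1 = hashlib.sha256(b"message one").digest()
h2 = hashlib.sha256(b"message two").digest()
assert deterministic_nonce(42, h1) == deterministic_nonce(42, h1)  # repeatable
assert deterministic_nonce(42, h1) != deterministic_nonce(42, h2)  # per-message
```

The point of the OpenSSL complaint above is that its API gives you no clean hook to inject such a k without reaching into its low-level signing code.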
2077  Bitcoin / Development & Technical Discussion / Re: Is bitcoin v0.10's new libsecp256k1 safe & without mathematical backdoors? on: January 31, 2015, 03:50:20 AM
Here's a counter-point:
The author of that page seems not to understand that characteristic-2 curves and GLV-capable prime curves are entirely different things; it's not helpful that both get named after Koblitz. Their argument mostly reduces, in effect, to "use only things from heavily NSA-influenced standards bodies"-- I assume you can understand why some people may not find that very persuasive. Smiley

According to this comparison: http://safecurves.cr.yp.to/ there are curves with either smaller, similar and larger key sizes, with 100% non-rigged constants (e.g. Curve1174, Curve25519, E-222, E-382, E-521, M-221, M-383, M-511) that pass certain safety criteria that secp256k1 doesn't. I understand these are no direct threat to the way secp256k1 is used in Bitcoin, but still.

Or was it purely Satoshi's consideration of ECDSA efficiency (algorithm speed) to choose secp256k1?
None of these were available in signature systems when Bitcoin was created; many of them didn't exist at all, and the few that did weren't mature or widely used. Some of the safety criteria listed there are not terribly interesting, as has been discussed several times on the forum; e.g. some relate to how simple the fastest constant-time arithmetic is to write (but that's somewhat moot once it's already written), or to details which change security by a bit or two (but then it ignores the curve simply being smaller and thus losing several bits). The curves that pass the criteria there fail a different criterion for "safety of implementation" which has arguably been of more practical importance... they have a cofactor (which both lowers their discrete-log security and makes broken protocols more likely). To be clear, they're also generally good choices, but the page leans a little bit too much towards marketing, IMO. Insanely slow Brainpool curves, or NIST curves with mystery-meat, suspicious seeding, remain worse options no matter which way you cut it, at least for our purposes.

At the time Bitcoin was created, secp256k1 was the only curve in widely available software that didn't have magic constants. With a good specialized implementation it's still one of the fastest curves available at anywhere near its security level, and more secure than many other curves in the same size range given the best known information about discrete-log security. I can only imagine that it didn't see wider adoption because specialized high-speed software for it wasn't available, and because one of the more interesting performance techniques for speeding it up is potentially patented.
2078  Bitcoin / Development & Technical Discussion / Re: Reused R values again on: January 31, 2015, 03:29:16 AM
I just need one important question answered: why did Satoshi or whoever decide to use this highly vulnerable signature scheme?
LOL. What would you expect to be used instead?

There is nothing "highly vulnerable" here. The software getting hit is _extremely incompetently written_. Incompetent implementations of cryptosystems are almost universally insecure.

That DSA requires state/randomness is an extra thing to get right, and it would be preferable if that weren't so... but there isn't a reasonable alternative to some kind of DSA signature even now-- and certainly there wasn't when Bitcoin was created... nor is one needed, when coupled with competent software; and without competent software you are already doomed.
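For the record, the algebra behind why nonce reuse (the "reused R values" of the thread title) is fatal, with toy numbers; in real ECDSA r would be the x coordinate of k*G, but the recovery works from the signing equations alone:

```python
# Why reusing the DSA nonce k across two signatures leaks the key.
# Signing: s = k^-1 * (h + r*d) mod N. Two signatures with the same k
# (hence the same r) give two equations in the two unknowns k and d.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
inv = lambda a: pow(a, N - 2, N)  # modular inverse; N is prime

d, k, r = 12345, 6789, 1111           # private key, reused nonce, toy r
h1, h2 = 222, 333                     # two message hashes
s1 = (inv(k) * (h1 + r * d)) % N      # first signature's s
s2 = (inv(k) * (h2 + r * d)) % N      # second signature's s

# Anyone observing both signatures solves for k, then for d:
k_rec = ((h1 - h2) * inv(s1 - s2)) % N
d_rec = ((s1 * k_rec - h1) * inv(r)) % N
assert (k_rec, d_rec) == (k, d)       # nonce and private key recovered
```

This is purely an implementation failure (reusing k), not a weakness in the signature scheme, which is the point being made above.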
2079  Bitcoin / Development & Technical Discussion / Re: Did satoshi not know that public key is recoverable from ECDSA signature? on: January 30, 2015, 04:57:09 AM
Honestly the wire protocol is very poorly done.
I suspect you don't have much experience with protocols. Variable-length encodings are obnoxious to deal with and are a frequent source of security vulnerabilities, especially in cases where future parsing is conditional on the data being read. Bitcoin already arguably overuses variable-length encodings (and has had some sources of problems arising from them); using a constant-length version identifier is a sound decision and consistent with many other protocols.
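An example of the variable-length encodings in question: Bitcoin's CompactSize integer. The parser must branch on the first byte, and everything downstream depends on the value read, which is exactly the conditional parsing that breeds vulnerabilities; a fixed 4-byte version field has none of these branches. A minimal decoder sketch (a real parser must also check buffer bounds and reject non-canonical encodings):

```python
# Sketch of decoding Bitcoin's CompactSize variable-length integer:
# one byte for values < 0xFD, otherwise a marker byte followed by a
# 2-, 4-, or 8-byte little-endian value. Bounds/canonicality checks
# omitted for brevity -- which is exactly where real parsers go wrong.
def read_compact_size(buf: bytes, pos: int = 0):
    first = buf[pos]
    if first < 0xFD:
        return first, pos + 1
    width = {0xFD: 2, 0xFE: 4, 0xFF: 8}[first]
    value = int.from_bytes(buf[pos + 1:pos + 1 + width], "little")
    return value, pos + 1 + width

assert read_compact_size(bytes([0x05])) == (5, 1)
assert read_compact_size(bytes([0xFD, 0x00, 0x01])) == (256, 3)
```

Contrast a fixed-width version field: one unconditional 4-byte read, with nothing for a malformed length prefix to confuse.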

There are potential patent complications related to public key recovery, it also requires a more CPU expensive verification. I would vigorously oppose using it in the protocol even today. One can define a compression format for long sequences of blocks that uses pubkey recovery to reduce the size without ever having them be the committed data and thus forcing other people to deal with them.

DeathAndTaxes' points are fine, though keep in mind there is a cost to peeling back the black box of cryptographic primitives too much. With the distorting benefit of hindsight many people miss how well Bitcoin was designed overall (go look at the orgy of failures and hardfork frenzy among altcoins that were complete rewrites); time spent discovering that DER could be safely stripped (or the like) would likely have meant less time refining the rest. Eight bytes or so of overhead isn't the end of the world, especially for something that can be mooted by new soft-fork-added checksig operators.
2080  Bitcoin / Development & Technical Discussion / Re: New HD wallet that tolerates leakage of some child private keys on: January 29, 2015, 01:39:23 PM
So I think it would be an obvious improvement and might well be worth an increase in the resulting master public key size just for additional robustness, but I don't know that in practice it would safely permit intentional use of it.

To say that public key size is the problem here seems kind of vague. I think the main deficiency is that you need to perform m elliptic curve exponentiations to derive the next pubkey, instead of a single exponentiation. So I'm not sure if the tradeoff between the extra complexity and the supposed better security makes sense (with the non-hierarchical variant); it depends on whether the security improvement is significant in practical scenarios.
You use multi-exp, which is not N times slower, and wNAF with some big tables on each of your points, so it's only a couple of adds even if your coefficients are big. libsecp256k1 on a fast laptop does something like 70k ECDSA verifies per second, and that involves a multi-exp on two points (and a number of other expensive operations: a modinv for s and a sqrt to recover the pubkey). So I don't see why you'd consider that an issue even with hundreds of points. The reason I cited size is that the only advantage of homomorphic derivation over independent keys is size, and having to grow the pubkey linearly with use (to be secure in the worst case) erodes that improvement.