1481  Bitcoin / Development & Technical Discussion / Re: Does SegWit require any change in using send/receive API? on: March 18, 2016, 04:03:32 PM
I am using the BlockCypher send/receive API in certain services and accept transactions with 1+ confirmations. Post-SegWit, will I need to make any changes at my end?
No; a change would only be needed if that API did something weird.
1482  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 18, 2016, 09:25:16 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures, up to a completely non-normative ordering of the transferred data. Segwit could just as well place the data in the same position in the serialized transactions when sending them, but it's cleaner not to.
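To make that concrete, here's a minimal Python sketch assuming the marker/flag layout that was eventually specified in BIP144; `serialize_tx` takes pre-serialized inputs and outputs for brevity:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def compact_size(n: int) -> bytes:
    """Bitcoin's variable-length integer encoding."""
    if n < 0xfd:
        return n.to_bytes(1, "little")
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def serialize_tx(version, vins, vouts, locktime, witnesses=None):
    """vins/vouts: lists of already-serialized inputs/outputs.
    witnesses: one serialized witness per input, or None to strip."""
    body = (compact_size(len(vins)) + b"".join(vins) +
            compact_size(len(vouts)) + b"".join(vouts))
    head = version.to_bytes(4, "little")
    tail = locktime.to_bytes(4, "little")
    if witnesses is None:                      # witness-stripped form
        return head + body + tail
    # BIP144 extended form: marker 0x00, flag 0x01, witnesses at the end
    return head + b"\x00\x01" + body + b"".join(witnesses) + tail

def txid(version, vins, vouts, locktime) -> str:
    # Signatures live in the witness, so tweaking them can't change this id.
    return sha256d(serialize_tx(version, vins, vouts, locktime))[::-1].hex()
```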

Quote
* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  
This would be no greater, and it would have _no_ security at all. The clients would be _utterly_ beholden to randomly selected third-party servers to tell them correct information, with no way to verify it.

I normally don't expect people advocating Bitcoin Classic to put security first, but completely tossing it out is a new turn. I guess it's consistent with the latest validation removal changes in classic.

Quote
* Pruning signature data from old transactions can be done the same way.
Has been for years.
1483  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 18, 2016, 09:21:30 AM
So far I haven't seen this desirability in itself argued,
Please read the fine thread here.
1484  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 17, 2016, 05:53:37 AM
This is a networked society; I don't think a hard fork is as difficult as you said. Ethereum just had one and no one complained.
You're getting caught up on terms, thinking that all hard forks are the same. They aren't. Replacing the entire Bitcoin system with Ethereum, complete with Ethereum's infinite inflation schedule, would just be a hardfork... but it's not the same thing as, say, increasing the Bitcoin block size, which in turn is not the same as allowing coinbase transactions to spend coinbase outputs...

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade, those who don't care, their software will just not be able to talk to the network and the transactions will be dropped.
That isn't like a soft fork: soft forks don't kick anyone off the network. And you seem to have missed what I said: because of nlocktimed transactions, changing the transaction format would effectively confiscate some people's bitcoins.

Quote
When a large bank upgrades its systems, its users cannot access banking services for hours, or even a whole night or weekend, and no one complains.
Yes, banks are centralized systems-- ones which usually only serve certain geographies and aren't operational 24/7. Upgrading them is a radically different proposition than upgrading a decentralized system. A Bitcoin hard fork is a lot more like switching from the English system to metric, except worse, because no one values measurement systems based on how immune to political influence they are.

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m
Your usage of the term full node has been inconsistent with the Bitcoin community's since at least 2010. A pruned node is a full node. You can invent new words if you like, but keep in mind the purpose of words is to communicate, and when you make up new meanings just to argue that you're right, you are just wasting time.

You claim to be concerned with validating, but I do not see you complaining that classic has functionality so that miners will skip validation: https://www.reddit.com/r/Bitcoin/comments/4apl97/gavins_head_first_mining_thoughts/

Quote
So… changing these incentives was _the_ ray of light that convinced "lots of people" (assuming Blockstream here) that a capacity increase could be had, fascinating. Before your email became the Core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.
No, not Blockstream people (go look: there are several blocksize hardfork proposals from Blockstream people). Because of the constant toxic abuse, most of us have backed away from Bitcoin Core involvement in any case.

Quote
Not surprising, segwit was designed with the "side" benefit of making sig heavy settlement tx cheaper, and a main benefit of fixing malleability which LN requires.
Fixing this was a low enough priority that we canceled work on BIP62 before soft-fork segwit was invented. In spite of this considerable factual evidence, you're going to believe what you want; please don't waste my time like this again:

Quote
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

luke-jr told me it takes 2 extra bytes per tx and 1 byte per vin using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%; if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman: I just care about the numbers.
Luke told you what the Bitcoin Core segwitness implementation stores; for ease of implementation it stores the flags that way. Any implementation could do something more efficient to save the tiny amount of additional space there. Core probably won't bother-- it's not worth the engineering effort because the amount is so small.

Part of what segwitness does is facilitate signature system upgrades. One of the proposed upgrades now saves an average of 30% on current usage patterns-- I linked it in an earlier response. It would save more if users did whole block coinjoins. The required infrastructure to do that is exactly the same as coinjoin (because it is a coinjoin), with a two round trip signature-- but the asymptotic gain is only a bit over 41%.  It'll be nice for coinjoins to have lower marginal fees than non-coinjoins; but given the modest improvement possible over current usage, it isn't particularly important to have whole block joins with that scheme; existing usage gets most of the gains.
1485  Bitcoin / Development & Technical Discussion / Re: LevelDB reliability? on: March 17, 2016, 05:38:38 AM
That's precisely what we did with Monero. We abstracted our blockchain access subsystem out into a generic blockchainDB class,
That's exactly how Core has done it for years.

Though we don't consider it acceptable to have 32-bit and 64-bit hosts fork with respect to each other, and so prefer not to take risks there!
1486  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 17, 2016, 01:44:50 AM
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality were abandoned (e.g. one that only works for a subset of script types, rather than all of them without question) and a new cryptographic signature system (something that provides unique signatures; not ECC signatures) were deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62, where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to later discover that there were additional forms. If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation, there is a near-endless source of ways to get it wrong.

Segregation removes that problem. Scripts using segwitness achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWit here if even such an obvious thing (one would think) gets contradictory answers. Wink
Knightdk will tell you to defer to me if there is a conflict on such things.

But here there isn't really, I think-- we're answering different statements. I was answering "The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature".

Knightdk is responding about verifying loose transactions: there is no "verify the transaction ID", because no ID is even sent. You have nothing to verify against. All you can do is compute the ID.

I was referring to processing blocks. Generally the first step in validating a block, after connecting it to a chain, is checking the proof of work. The second step is hashing the transactions in the block to verify that the block hash is consistent with the data you received. If it is not, the information is discarded before performing further processing. Unlike with a loose transaction, you have a block header, and can actually validate against something.
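A minimal sketch of that hash check, assuming the standard Bitcoin Merkle construction (txids in internal byte order, last hash duplicated on odd levels):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """txids: 32-byte transaction hashes in internal byte order."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def block_consistent(header_root: bytes, txids: list) -> bool:
    # Cheap hash comparison performed before any signature validation;
    # on mismatch the block data is discarded without touching ECDSA.
    return merkle_root(txids) == header_root
```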

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will-- no need for so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power does a rollback).
No, you can't-- not if you live in a world with other people in it. The spherical-cow view that "hardforks can change anything" ignores the fact that a hardfork which requires all users to shut down the Bitcoin network, destroys all in-flight transactions, and invalidates presigned transactions (thus confiscating some amount of coins) will simply never be deployed.

Last year I tried proposing an utterly technically simple hard fork to fix the time-warp vulnerability and provide extranonce in the block header using the prev-hash bits that are currently always forced to zero (often requested by miners and ASIC makers-- and important for avoiding hardcoding block logic in ASICs), and it was _vigorously_ opposed by Mike Hearn and Gavin Andresen-- because it would have required that smartphone wallets upgrade to fix their header checks and difficulty calculation... and that was for something that would be a well-contained four or five lines of code changed.

I hope that that change eventually happens; but given that it was attacked so aggressively by the two biggest advocates of "hard forks are no big deal", I can't imagine a radical backwards incompatible change to the transaction format happening; especially when the alternative is so easy and good that I'd prefer to use it for increased similarity even in an explicitly incompatible system.

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not reflect well the costs of a transaction to the system. This has created a misalignment of incentives which has been misused before (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte by twiddling around with dust-spam (known private keys)).

At the end of the day, signatures are transmitted at most once to a node and can be pruned, but data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the resources needed to run a node. The fact that the size limit doesn't reflect this true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of a blocksize increase, e.g. http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in RAM; no version of Bitcoin Core has ever done that-- that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

At Scaling Bitcoin in Montreal, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement on a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse, then many of the concerns related to capacity increases would be satisfied. So I guess it's no shock to see avowed long-time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, CPU, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter: optimizing its capacity required a discount, which achieved the dual effect of also fixing the misaligned costing.
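For concreteness, the corrected costing as it was eventually specified in BIP141 counts witness bytes at a quarter of the cost of base bytes; a toy sketch (the sizes in the example are illustrative, not exact):

```python
def tx_weight(base_size: int, total_size: int) -> int:
    """BIP141 cost: base (non-witness) bytes count 4x, witness bytes 1x.
    base_size  = serialized size without witness data
    total_size = serialized size with witness data"""
    witness_size = total_size - base_size
    return 4 * base_size + witness_size

# Illustrative only: a spend whose serialization is 230 bytes total,
# 107 of them witness data, weighs 4*123 + 107 = 599 weight units
# (~150 vbytes). The same 230 bytes carried entirely in the scriptSig
# would weigh 4*230 = 920; witness data is discounted, while data that
# grows the UTXO set (which lives in base bytes) is not.
print(tx_weight(123, 230))   # -> 599
```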

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
(3) Blockstream has no plans to make any money from running Lightning on Bitcoin in any case; we started funding some work on Lightning because we believed it was long-term important for Bitcoin, because Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per one-in/one-out transaction; in another you appeared to be claiming 800 bytes. In any case, even your formula depends on what serialization is used; one could choose one where it was smaller, not bigger. The actual amount of true entropy added is on the order of a couple of bits per transaction (whether segwit coins are being spent and which script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing with factually incorrect criticisms of software and designs you don't care to spend a minute learning the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
1487  Bitcoin / Development & Technical Discussion / Re: address balances at specific block? on: March 16, 2016, 10:56:19 PM
Since nothing in the Bitcoin consensus algorithm works with balances, using them for comparison would be potentially unwise-- it's perfectly possible to have an incorrect 'balance' that is actually a latently corrupted state. Bitcoin Core's gettxoutsetinfo will give you a hash of the serialized UTXO set for this kind of diagnostic purpose, though there is no text specification for the particular serialization it uses; you'd have to extract that from the implementation.
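For example, over the JSON-RPC interface (a sketch; the URL and credentials are placeholders, and the exact name of the hash field has varied across Bitcoin Core versions):

```python
import requests

def gettxoutsetinfo(url="http://127.0.0.1:8332/",
                    auth=("rpcuser", "rpcpassword")):
    """Ask a local Bitcoin Core node for its UTXO-set digest; two nodes
    at the same height should report the same serialized-UTXO hash."""
    payload = {"jsonrpc": "1.0", "id": "cmp",
               "method": "gettxoutsetinfo", "params": []}
    r = requests.post(url, json=payload, auth=auth)
    r.raise_for_status()
    return r.json()["result"]

info = gettxoutsetinfo()
print(info["height"], info["hash_serialized"])
```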
1488  Bitcoin / Development & Technical Discussion / Re: Segwit details? segwit wastes precious blockchain space permanently on: March 16, 2016, 10:22:57 PM
I was told by gmax himself that a node that doesn't validate all signatures should call itself a fully validating node.
A node not verifying signatures in blocks buried under years of PoW during the initial block download is not at all equivalent to not verifying signatures _at all_.

I agree it is preferable to verify more-- but we live in the real world, not black-and-white land, and offering multiple trade-offs is essential to decentralized scalability. If there are only two choices-- run a thin client and verify _nothing_, or run a maximally costly node and verify EVERYTHING-- then a large amount of decentralization will be lost, because everyone who cannot justify or afford the full cost will have no option but to not run a full node at all. This makes it essential to support half steps: it's better to allow people to choose to save resources and not verify months-old data-- which is very likely correct unless the system has failed-- since the alternative is them verifying nothing at all.

Quote
Also, I am making an optimized Bitcoin Core, and one of these optimizations is rejecting a tx whose contents don't match the txid. The thinking being that if the hashes don't match, there is no point in wasting time checking the signature.
Every piece of Bitcoin software does this. It is a little obnoxious that you spend so much time talking about these optimizations you're "adding" which are basic behaviors that _every_ piece of Bitcoin software ever written has always had, as if you're the only person to have thought of them or as if they distinguish this hypothetical node software you claim to be writing.
                                                                                                                                            
Quote
However, with such drastic assumptions I can (and have) already saved lots more space without adding a giant amount of new protocol and processing.
The saved space you claimed earlier on the list (10GB) was already five times larger than what Bitcoin Core already achieves-- another case of failing to understand the state of the art while thinking that some optimization you just came up with is vastly better when it's actually inferior.
                                                                                                                                            
Segwit is not about saving space for plain full nodes; that space is already saved in Core (if the user chooses to save it). As you note, local space savings can be achieved purely locally. Segwit increases flexibility, fixes design flaws, saves space for nodes acting as SPV servers, and saves _bandwidth_; none of these can be done as purely local changes.

Quote
I still claim that:
N + 2*numtx + numvins > N
As I pointed out, that is purely a product of whatever serialization an implementation chooses to store the data.

Quote
However, on the benefits claims, one of them is that the UTXO dataset becomes a lot more manageable. This is irrelevant, as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed.
Taking a hint from your earlier pedantry... it sounds like you have a long way to go: Bitcoin Core uses 0 bytes of RAM per UTXO. By comparison, the unreleased implementation you are describing is embarrassingly inefficient-- Bitcoin Core is infinity-fold better. Smiley

What I still don't understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? Are there no attack vectors? What about in zeroconf environments? How does a full relaying node mine a block with segwit inputs? Or do existing full nodes cease to be able to mine blocks after the segwit softfork?
jl777, I already responded to pretty much this question directly just above. It seems like you are failing to put in any effort to read these things, disrespecting me and everyone else in this thread; it makes it seem like responding to you further is a waste of time. Sad

The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.
If you don't understand the concept of transaction standardness, you can learn about it from a few minutes of reading the Bitcoin developer guide: https://bitcoin.org/en/developer-guide#non-standard-transactions and by searching around a bit.

This is a really good explanation, thanks for taking the time to write it up. My understanding of Bitcoin doesn't come direct from the code (yet!) I have to rely on second hand information. The information you just provided has really deepened my understanding of the purpose of the scripting system over and above "it exists, and it makes the transactions work herp" which probably helps address your final paragraph...
[...]

Indeed it does. I am sincerely sorry for being a bit abrasive there: I've suffered too much exposure to people who aren't willing to reconsider positions-- and I was reading a stronger argument into your post than you intended-- and this isn't your fault.

Quote
I'm trying not to get (too) sucked into the conspiracy theories on either side, I'm only human though so sometimes I do end up with five when adding together two and two.

A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It would be a perfectly reasonable question, if there were indeed a compromise here.

If segwit were to be a hardfork, what would it be?

Would it change how transaction IDs are computed, like Elements Alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable, even as a hardfork: it would break all software-- web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions and all transactions in flight. It would add more than 20 lines of code in having to handle the flag day. So while that design might be 'cleaner' conceptually, the deployment would be so unclean as to be basically inconceivable. Functionally it would be no better; in flexibility it would be no better. No one has proposed doing this.

Would it instead do the same as it does now, but put the commitment someplace else in the block rather than in a coinbase transaction OP_RETURN-- at the top of the hashtree? This is what Gavin Andresen proposed in response to segwit. It would be deployable as a lite-client-compatible semi-hardfork, like the blocksize increase. Would this be more elegant?

In that case... all that changes is the position of the commitment: writing the 32-plus-a-few extra bytes of data in one place in the block rather than another. It would not change the implementation except for some constants about where it reads from. It would not change storage; it would not change performance. It wouldn't even be the most logical and natural way to deploy it (the undeployable method above would be).

Because it would be a hard fork, all nodes would have to upgrade for it at the same time. So if you're currently on 0.10.2 because you have business-related patches against that version which are costly to rebase-- or just because you are prohibited from upgrading without a security audit-- you'll be kicked off the network under the hard fork model when you don't upgrade by the flag day. Under the proposed deployment mechanism you can simply ignore it at no cost to you (beyond the general costs of being on an older version) and upgrade whenever it makes sense to do so-- maybe against 0.14, when there finally are some new features that you feel justify your upgrade-- rather than paying the upgrade costs multiple times.

One place versus the other doesn't make a meaningful difference in functionality, though I agree the top 'feels' a little more orderly. But again, it doesn't change the functionality, efficiency, or performance, and it wouldn't make the implementation simpler at all. And there is other data that would make more sense to move to the top (e.g. STXO/UTXO commitments) which hasn't been designed yet, so if segwit were moved to the top now, that commitment would later need to be redesigned for these other things in any case. It's not clear that even greenfield this would be more elegant than the proposal, and the deployment-- while not impossible for this one-- would be much less elegant and more costly.

So in summary: the elegance of a feature must be considered holistically. We must think about the feature itself, how it interacts with the future, and-- critically-- the effect of deploying it. Considered together, the segwit deployment proposed is clearly the most elegant approach. If deployment were ignored, the Elements Alpha approach would be slightly preferable, but only slightly-- it makes no practical difference-- and it is so unrealistic to deploy in Bitcoin today that no one has proposed it. One person did propose changing the commitment location-- to a place that would only be possible in a hardfork-- but the location makes no functional difference for the feature and would add significant deployment cost and risk.
1489  Bitcoin / Development & Technical Discussion / Re: Segwit details? on: March 16, 2016, 09:02:30 PM
Segwit transactions are considered by old nodes as transactions which spent an anyonecanspend output and thus are treated with a grain of salt. The best course of action is to of course wait for confirmations as we already should still be doing now.
The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.

Quote
Ah *sigh of relief*; here comes somebody who actually knows what they are talking about.

Could you also let me know if I presented any misinformation? I have been trying my best not to, and to make jl777 understand why he is wrong, but I may have accidentally (either due to misunderstanding the BIPs or just really bad typing) given him false information.
At least the above was the only minor correction I've seen so far.

Quote
Since Pieter and no one on IRC responded either, I will ask this again. Will there be a full write-up (preferably before segwit's release) of all of the changes that segwit entails, so that wallet developers can get working on implementing segwit? AFAIK the segwit implementation contains omissions and changes from what was specified in the BIPs.
If that was you asking in #bitcoin-dev earlier, you need to wait around a bit for an answer on IRC-- I went to answer but the person who asked was gone.  BIPs are living documents and will be periodically updated as the functionality evolves. I thought they were currently up to date but haven't checked recently; make sure to look for pull reqs against them that haven't been merged yet.


my reaction was based on the answers I was getting, and clearly it is a complex issue. segwit is arguably more changes to bitcoin than all prior BIPs combined. I don't think anybody would say otherwise.
I'll happily say otherwise. It's a change of somewhat more complexity than P2SH, and certainly less than all prior changes combined. The implementation is smaller than the BIP101 implementation (comparing with tests removed). The Bitcoin community is getting better at documenting changes, so there is more documentation written about this than for many prior ones. Conceptually segwit's changes are very simple: based on signaling in the scriptPubkey, scriptsigs can be moved to the ends of transactions, where they are not included in the txid. An additional hashtree is added to the coinbase transaction to commit to the signatures. The new scriptsigs begin with a version byte that describes how the scripts are interpreted; two kinds are defined now, and the rest are treated as "return true".
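A rough sketch of that scriptPubkey signaling as it ended up being specified in BIP141 (a version opcode followed by a single 2-to-40-byte push; details simplified):

```python
OP_0, OP_1, OP_16 = 0x00, 0x51, 0x60

def witness_program(script_pubkey: bytes):
    """Return (version, program) if this scriptPubKey is a witness
    program, else None."""
    if not 4 <= len(script_pubkey) <= 42:
        return None
    v = script_pubkey[0]
    if v != OP_0 and not OP_1 <= v <= OP_16:
        return None
    if script_pubkey[1] != len(script_pubkey) - 2:   # one direct push
        return None
    version = 0 if v == OP_0 else v - OP_1 + 1
    return version, script_pubkey[2:]

# version 0 + 20-byte program: pay-to-witness-pubkey-hash (P2WPKH)
# version 0 + 32-byte program: pay-to-witness-script-hash (P2WSH)
# any other version: "return true" for now, reserved for upgrades
```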

Quote
Now please ignore the space savings for nodes that are not full nodes. I am assuming that to bootstrap a node it will need to get the witness data from somewhere, right? So it is needed permanently and thus part of the permanent HDD requirement.
You can't "please ignore" major parts of the system scalability and hope to pose a discussion worth reading, if one is willing to ignore all the facts that disagree with them they can prove anything.  None the less, no-- right now existing full nodes do not verify signatures in the far past, but currently have to download them. Under segwit they could skip downloading them.  If you're not going to check it, there is no reason to download it-- but the legacy transaction hashing structure forces you to do so anyways; segwit fixes that.

Quote
I still don't fully understand how the size of the truncated tx + witness data is as small as 2 bytes per tx + 1 byte per vin. But even if that is the case, my OP title is accurate, as N+2*numtx+numvins is more than N.
There is no such thing as "size"-- size is always a product of how you serialize it.  An idiotic implementation could store non-segwit transactions by prepending them with a megabyte of zeros-- would I argue that segwit saves a megabyte per transaction? No.  

It's likely that implementations will end up using an extra byte per scriptsig to code the script version, though they could do that more efficiently some other way... but who cares about a byte per input? It certainly doesn't deserve an ALL CAPS forum post title-- you can make some strained argument that you're pedantically correct; that doesn't make you any less responsible for deceiving people, quite the opposite because now it's intentional. And even that byte per input exists only for implementations that don't want to do extra work to compress it (and end up with ~1 bit per transaction).

Meanwhile, that version byte makes it easy to safely deploy upgrades that reduce transaction sizes by ~30%.  What a joke that you attack this. God forbid that 'inefficient' implementations might store a byte for functionality that makes the system much more flexible and will allow saving hundreds of bytes.

Quote
Also, I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesn't match, it is invalid rawbytes. Are you saying that we don't need to verify that the transaction hashes match? As you know, verifying signatures is very time consuming compared to verifying the txid. So if verifying the txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Before I go making new threads about that, let us wait for some clarity on this issue.

I think if the witness data is assumed to be there permanently, then we don't increase the CPU load 10x or more by having to validate sigs vs validating the txid, so it would be a moot point.
You are still deeply confused. With segwit the witnesses-- the part containing the signatures-- are not part of the transaction ID. They _must_ not be for malleability to be strongly fixed, and they really shouldn't be for optimal scalability. This in no way increases the amount of signature validation anyone does.

(Nor does it decrease the amount of signature validation anyone does, though while you've been ranting here-- the people you're continually insulting went and shipped code that makes signature validation more than 5x faster.)

That leads me to another question I've been having that hasn't been answered as far as I know. If segregating the signatures out of the tx leads to a stable txid (malleability fixed), then why can't we simply fix malleability independently by simply ignoring the signatures when hashing the txid?
This is what segwit effectively does, among other improvements. The first version of segwit, created for Elements Alpha, does _EXACTLY_ that, but there was no way to deploy that design in Bitcoin because it would deeply break every piece of Bitcoin software ever written-- all nodes, all lite wallets, all thin clients, all hardware wallets, all web front ends, all block explorers, all presigned nlocktimed transactions, even many pieces of mining hardware. We learned how impactful doing that was with Elements Alpha, when it was very difficult getting existing software working with it... and for a while we didn't see any realistic way to deploy it short of rebooting the whole blockchain in a great big flag day (which would inevitably end up unintentionally confiscating some people's coins)-- not just a hard fork but an effective _rewrite_. The clever part of segwit was reorganizing things a bit: the scriptSig field is still part of the txid, but we don't use it for signatures anymore; we use a separate set of fields stapled onto the end to achieve exactly the same effect without blowing everything up.
1490  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 16, 2016, 07:15:16 PM
Wow. The deceptive misinformation in this thread is really astonishing.

Contrary to the claims here, segwit doesn't increase transaction sizes (as was noted, it adds a single coinbase commitment per block).

all this seems to be above and beyond what would be needed for a normal transaction, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like, instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx 50 bytes.

Maybe it's 32 + 4 + 1 + 1 + 4, so 42 bytes?

jl777, to be blunt, and to offer some unsolicited advice: you have almost no chance of actually writing that Bitcoin full node you say you're working on when you are so unwilling to spend more than a second reading, or to take any time at all to understand how existing Bitcoin software works. Virtually every post of yours contains one or another fundamental misunderstanding of the existing system/software-- and your abrasive and accusatory approach leaves other people disinterested in spending their time educating you. Even here, I am not responding for your benefit-- as I would otherwise-- but because other people are repeating the misinformation you've unintentionally generated out of ignorance. Please take a step back: Bitcoin is not "bitcoin dark", "nxt", or the other altcoins you've worked on in the past, where an abusive/armwaving style that leans heavily on native intelligence while eschewing study will itself establish you as an expert. Bitcoin is full of really remarkably intelligent people, so simply being smarter than average doesn't make you a shining star as it may in some places.

The text you are quoting is instructions for computing a hash. None of the data involved in it is stored, any more than the sighash data-- tens of times the transaction size on a large transaction-- is stored.

If the carefully constructed, peer reviewed specifications are not to your liking; you could also spend some time studying the public segnet testnet. Given that there are both specifications and a running public network, the continued inquisitory "needs to be answered" conspiracy theory nonsense-- even after being given a _direct_ and specific answer ("segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block")-- is highly inappropriate. Please do not subject other contributors to this forum to that kind of hostility.  

Quote
My assumption is that for a segwit compatible full relaying node to be able to relay the full blockchain it would need to have ALL the data, original blockchain and witness data.
Your lack of understanding about how Bitcoin is structured and operates today works against you. A full node does not need to store "ALL the data": in Bitcoin Core today you can set an option and run a full node with only about 2GB of storage. Configured in this pruned manner, the node relays transactions and blocks, fully validates everything, etc. This is the state _today_.

Segwit improves scaling in several ways as was already explained in this thread:
Quote
  • The extra space used by witnesses is more prunable than normal block space, as it's not needed by non-validating clients.
  • Has less effect on bandwidth, as light clients don't need the witness data.
  • Has no effect on the UTXO set, so does not contribute to database growth and/or churn.
  • Enables script versions, which will make the introduction of Schnorr signatures much easier later on, which are more space efficient than what we have now (even for simple single-key outputs/inputs).

For example, all the widely used full node software on the current network that I'm aware of does not validate signatures in the far-past chain. They just download them and, if pruning is enabled, throw them out. They can't verify the transaction hashes, make sure no inflation or other non-signature rule violations happened, and build their UTXO set without downloading the transactions... but downloading the signatures is pure waste. Segwit makes it possible for a node which isn't going to verify all the signatures in the far past to skip downloading them. Segwit greatly reduces the bandwidth required to service lite nodes for a given amount of transactions, and it increases capacity (in terms of transactions per block) without increasing the amount of UTXO growth per block... all this on top of the non-scaling-related improvements it brings.

This is why the technical space around Bitcoin is overwhelmingly in favor of it.

Script versioning is essentially about changing this consensus mechanism so that any change can be made without any consensus. Giving this control to anyone, even satoshi himself, entirely undermines the whole idea of bitcoin. *Decentralised* something something.
The content of your scriptPubkey, beyond the resource costs to the network, is a private contract between the sender of the funds and the receiver of the funds. It is only the business of those parties, no one else. Ideally, it would not be subject to "consensus" in any way, shape, or form-- it is a _private matter_. It is not any of your business how I spend my bitcoins; but unfortunately, script-enhancing softforks do require the consensus of at least the network hashpower.

Bitcoin Script was specifically designed because how the users contract with it isn't the network's business-- though it has limitations. And, fundamentally, even with those limitations it is already, at least theoretically, impossible to prevent users from contracting however they want. For example, Bitcoin has no Sudoku implementation in Script, and yet I can pay someone conditionally on them solving one (or any other arbitrary program).

Bitcoin originally had an OP_VER to enable versioned script upgrades. Unfortunately, the design of this opcode was deeply flawed-- it allowed any user of the network, at their unannounced whim, to hardfork the network between different released versions of Bitcoin. Bitcoin's creator removed it and in its place put in facilities for softforks. Softforks have been used many times to compatibly extend the system-- first by Bitcoin's creator, and later by the community. The segwit script versioning brings back OP_VER but with a design that isn't broken: it makes it faster and safer to design and deploy smart contracting/script improvements (for example, a recently proposed one will reduce transaction sizes by ~30% with effectively no costs once deployed), but it doesn't change the level of network consensus required to deploy softforks-- only, perhaps, the ease of achieving the required consensus, because the resulting improvements are safer.

If you're going to argue that you don't want a system where hashpower consensus can enable new script rules for users to voluntarily contract with, you should have left Bitcoin in 2010 or 2011 (though it's unclear how any blockchain cryptocurrency could _prevent_ this from happening). Your views, if not just based on simple misunderstandings, are totally disjoint from how Bitcoin works. I don't begrudge you the freedom to want weird or even harmful things-- and I would call denying users the ability to choose whatever contract terms they want, out of principle rather than considerations like resource usage, both weird and harmful-- but Bitcoin isn't the place for them. The restrictions you're asking for are deeply disjoint from Bitcoin's day-one and every-day-since design, which has a huge amount of complexity dedicated to user-determined (not consensus-determined) smart contracting, and in which softforks (hashpower consensus) have been frequently used to extend the system.
1491  Bitcoin / Development & Technical Discussion / Re: Signature aggregation for improved scalablity. on: March 14, 2016, 07:41:52 PM
I wasn't aware of BIP131 when I wrote that text, but aggregating on public key reuse is a perennial proposal which recurs every 6 to 9 months and is shot down each time.

Practical fungibility is an essential property of money, and privacy is an essential requirement for a financial transaction system. No widespread money system is in use without these properties. Bitcoin's ability to have them depends _strongly_ on the use of random pseudonymous addresses; whenever people don't use them, Bitcoin's status as money is degraded for everyone. Inserting a huge incentive to compromise fungibility and privacy into the system to get a modest capacity boost is a non-starter-- even more than it was a non-starter in 2011, when I first recall seeing it proposed. And yes, some people currently use Bitcoin in ways that damage the system-- it can take that-- but that in no way makes it acceptable to reward the harmful behavior.

(As an aside, the example you give is pretty broken-- if every customer pays to the same address, you cannot distinguish which of multiple concurrent payments has actually gone through; so that only works so long as your hotdog stand is a failure. As soon as you had multiple customers paying close together in time, it would turn into a mass of confusion.)


Quote
It seems to me that BIP131 pretty much solves zipping inputs into a single input in a very simple manner. [...]
All you have to do to enable the "wildcard" feature is to flip a bit in the version field.
When you lack understanding, many things which are not simple seem simple, and many things which are simple seem not simple.

No kind of aggregation can be done just by "flipping a bit in the version field", as that is utterly incompatible with the design of the system: it violates all the layering, and would be rejected as coin-theft by all the existing nodes.

Quote
I feel like I can explain BIP131 to a group of kindergartners and they'll pretty much know what I'm talking about. [...] Now all inputs that have the same scriptPubKey and are confirmed in a block earlier than the block of the input you did sign get spent by the transaction as well.
In fact, the way you're describing it here would result in _immediate_ funds loss, even absent an attacker. Imagine an additional payment shows up that you weren't expecting when you signed but happens to arrive first at miners, and the total value of that additional payment gets converted into fees! As you described it here, it would also be replay-vulnerable: someone could send the same transaction to the chain a second time to move new payments that have shown up since. This is why we don't have kindergartners design the Bitcoin protocol, I guess. That kind of design also results in a substantial scalability loss, as every node would need an additional search index over the UTXO set (or would have to perform a linear scan of it, which currently takes tens of seconds) in order to gather all the inputs with the same scriptPubkey.

What I've described here is actually very simple, straightforward to implement, and understood by many people... and it achieves its goal without the massive downside of forcing address reuse-- and in doing so avoids trashing fungibility and sensible business workflows; and as a bonus it isn't grievously broken.

If I've bamboozled you with my explanation, that is likely because I took the time to explain some of the history of the thinking in this space, and because my intended audience was other Bitcoin experts (not PhDs, I can assure you)-- who understood it just fine; not your conjectural kindergartners. Not to mention... good explanation is a difficult art, and generally only ideas which are too simple to actually be correct can be simply explained without considerable effort. When implementation moves forward, you can trust that simple and clear explanations will be provided.

1492  Bitcoin / Development & Technical Discussion / Re: LevelDB reliability? on: March 11, 2016, 01:17:40 AM
LevelDB being stupid is one of the major reasons that people have to reindex on Bitcoin Core crashes. There have been proposals to replace it but so far there are no plans on doing so. However people are working on using different databases in Bitcoin Core and those are being implemented and tested.

This is incorrect.

LevelDB needs a "filesystem interface layer". It doesn't come with one for windows; when leveldb is used inside Chrome it uses special chrome specific APIs to talk to the file system.  A contributor provided a windows layer for Bitcoin which is what allowed Bitcoin to use leveldb in the first place.

This windows filesystem interface layer was incorrect: it failed to flush to disk at all the points which it should. It was fixed rapidly as soon as someone brought reproduction instructions to Wladimir and he reproduced it.   There was much faffing about replacing it, mostly by people who don't contribute often to core-- in my view this was an example of bad cargo-cult "engineering" where instead of actual engineering people pattern-match buzzwords and glue black boxes together: "I HURD YOU NEED A DATABASE. SOMEONE ONCE TOLD ME THAT MYCROSAFT SEQUAL IS A GREAT DATABASE. IT HAS WEBSCALE". When the actual system engineers got engaged, the problem was promptly fixed.

This is especially irritating because leveldb is not a generic relational database; it is a highly specialized transactional key/value store. Leveldb is much more like an efficient disk-backed map implementation than it is like anything you would normally call a database. Most other "database" systems people suggest are not within three orders of magnitude in performance for our specific, very narrow use case. The obvious alternatives, like LMDB, have other limitations (in particular, LMDB must mmap the files, which basically precludes using it on 32-bit systems-- a shame, because I like LMDB a lot for the same niche leveldb covers; leveldb also has extensive corruption detection, important for us because we do not want to incorrectly reject the chain due to filesystem corruption).

I think it's more likely that Bitcoin Core would eventually move to a custom data structure than to another "database" (maybe a swap to LMDB if it ever supports non-mmap operation... maybe), as doing so would basically be a requirement for performant UTXO set commitments.

A large number of these corruption reports were also being caused by anti-virus software randomly _deleting_ files out from under Bitcoin Core. It turns out that there are virus "signatures" as short as 16 bytes... and AV programs avoid deleting random files all over the user's system through a set of crazy heuristics like extension matching, which failed to exclude the Bitcoin data (though I'm sure actual viruses have no problem abusing these heuristics to escape detection). Core implemented a whitening scheme that obfuscates the stored state in order to avoid these problems, and any other potential for hostile blockchain data to interact with weird filesystem or storage bugs.
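As I understand the shipped scheme, it is a simple XOR of every stored value against a random per-database key-- enough to keep naive pattern-matchers from "recognizing" blockchain bytes on disk. A minimal sketch of the idea:

```python
from itertools import cycle
import os

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key; the operation is its own inverse,
    # so the same call serves for both writing and reading.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

record = b"example chainstate value"   # stand-in for a UTXO entry
key = os.urandom(8)                    # stored alongside the database
on_disk = xor_obfuscate(record, key)   # what an AV scanner now sees
assert xor_obfuscate(on_disk, key) == record
```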

Right now it's very hard to corrupt the chainstate on Windows in Bitcoin Core 0.12+. There still may be some corner case bugs but they're now rare enough that they're hard to distinguish from broken hardware/bad drivers that inappropriately write cache or otherwise corrupt data-- issues which no sane key value store could really deal with. If you're able to reproduce corruption like that, I'd very much like to hear from you.

We've suffered a bit, as many other Open Source projects do -- in that comparatively few skilled open source developers use Windows (and, importantly, few _continue_ to use windows once they're hanging out with Linux/BSD users; if nothing else they end up moving to Mac)-- so we're extra dependent on _good_ trouble reports from Windows users whenever there is a problem which is Windows specific...

why use a DB for an invariant dataset?
After N blocks, the blockchain doesn't change, right?
Bitcoin Core does not store the blockchain in a database (or leveldb) and never has. The blockchain is stored in pre-allocated, append-only files on disk, as packed raw blocks in the same format they're sent across the network. Blocks that get orphaned are just left behind (there are few enough of them that it hardly matters).
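The framing is simple enough to sketch: each record in a blk*.dat file is the network magic, a little-endian length, then the raw block (the path below is a placeholder; the files end in preallocated zero padding):

```python
import struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")   # network message-start bytes

def read_raw_blocks(path):
    """Yield raw block blobs from a Bitcoin Core blk*.dat file."""
    with open(path, "rb") as f:
        while True:
            frame = f.read(8)
            if len(frame) < 8 or frame[:4] != MAINNET_MAGIC:
                break                        # hit the zero preallocation
            (size,) = struct.unpack("<I", frame[4:])
            yield f.read(size)

for raw_block in read_raw_blocks("blocks/blk00000.dat"):
    print(len(raw_block))
```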


[Lecture about generic reasons to use a RDBMS]
None of which are applicable to the storage of a disk backed map storing highly compressed state information at the heart of a cryptographic consensus algorithm, but good points generally.
1493  Bitcoin / Bitcoin Technical Support / Re: Gentoo Hardened eats 3 GB of memory after closing Bitcoin Core on: March 10, 2016, 09:20:43 AM
htop can sort by memory usage. Press M and see what process is using it...
1494  Bitcoin / Bitcoin Technical Support / Re: Gentoo Hardened eats 3 GB of memory after closing Bitcoin Core on: March 08, 2016, 10:15:31 PM
Same kernel on my laptop, never seen anything like what is being described here.

How are you measuring "eats 3GB memory"-- are you just getting confused by the page cache?
1495  Bitcoin / Bitcoin Discussion / Re: Precious Metals Leader JM Bullion Now Accepts BTC Payments 4% discount on: March 08, 2016, 10:12:45 PM
The "$10,000" limit for Bitcoin transactions that doesn't exist for checks or bank wires still leaves Bitcoin an inferior option... and simply unavailable if you want to-- say-- purchase a 1KG gold bar.
1496  Bitcoin / Bitcoin Discussion / Re: There is a problem with core development on: March 08, 2016, 01:14:34 AM
My response, not part of my original post, is that clients do not always do what is necessary to protect users and often have code bugs.
Yes, and there is _nothing_ that can be done about that in general. Even the example you gave, address checksums, is enforced purely in the client and never shows up in the Bitcoin protocol-- and no one is arguing that the txout size should be increased by 20% to accommodate them. You've already had something like seven developers (now eight, with me) tell you that these kinds of checks can only sanely be implemented in clients (and they already are implemented-- by me, in fact, in Bitcoin Core).
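For instance, the address checksum lives entirely in wallet code; a minimal sketch of Base58Check validation, none of which the consensus rules ever see:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_ok(addr: str) -> bool:
    """True if the trailing 4 checksum bytes match double-SHA256
    of the payload; this check exists only client-side."""
    n = 0
    for c in addr:
        n = n * 58 + B58.index(c)            # raises on invalid chars
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    raw = b"\x00" * (len(addr) - len(addr.lstrip("1"))) + raw
    payload, checksum = raw[:-4], raw[-4:]
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return digest[:4] == checksum

print(b58check_ok("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # -> True
```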

If the client software is broken (or worse, malicious), all bets are off-- no amount of consensus rules can make it safe. By adding arbitrary restrictions you might cover up one or two corner cases in incompetent clients, but at the cost of handicapping Bitcoin and making node implementations more complex and buggy, leaving us with strange economically significant parameters hardcoded into the protocol... and it would likely fail to prevent the broken clients from actually losing money. It's not a meaningful protection; and mishandling fees has never been a bug in any widely distributed wallet software that I'm aware of...

Quote
but it should not have been hidden from them by a list moderator.

They're not hidden; they're moved elsewhere in public-- according to the rules of the list, almost certainly by someone who isn't a Bitcoin Core developer-- presumably because you were simply repeating yourself in a non-productive way. As a reader of the list I'm thankful for that service. Just because a message is accepted doesn't mean a discussion should go on forever.

and somehow someone hacked a retail terminal to charge 300% fee
If someone is hacking your terminal, you likely have much greater things to worry about than them making you pay higher fees-- like them directing all payments to themselves. Software is not magic. Sprinkling around an endless series of handcuffs to close off implausible what-ifs creates a spiked trap of complexity that would undermine the survivability of the system while providing negligible to no protection. You're welcome to disregard concerns like that and add such things to your own software; you're not welcome to waste unbounded amounts of other people's time demanding they do it for you in theirs.
1497  Bitcoin / Bitcoin Discussion / Re: There is a problem with core development on: March 07, 2016, 10:50:29 PM
The community here could have pointed that out just as easily.
Yep. Though it often does a fairly inconsistent job.

OP failed to actually mention or link to any of the discussion and only wrote about it in vague terms. If I hadn't actually linked to the requests and quoted from the messages would you be saying "but it really is an unnecessary change" now?  If you only went on what the OP said, I think it would sound pretty bad...

Unfortunately, there has been a rash of misinformation where I looked at something and thought exactly as you suggested-- this isn't important, other people will handle it-- and then people didn't handle it, and now it's being continually repeated as fact from so many directions that it seems hopeless to correct (or the corrective effort would be so great that it would implicitly send the wrong message, and so it goes uncorrected).

In this case, a pretty clear response took about 12 mouse clicks to bring up all the relevant messages and then copy and paste some quotes... which allowed fully contextualizing the issue. This was easy for me since I saw but didn't participate in the original discussion-- in fact, I thought that the other respondents "had it"-- but seemingly not, since it has now spread here in a more accusatory meta-issue form.

In any case, I hope it was a good investment. I wish I could turn back time and do this in a number of other places.

1498  Bitcoin / Bitcoin Discussion / Re: There is a problem with core development on: March 07, 2016, 10:40:29 PM
Interesting that you had plenty of time to write that long winded response though. If you didn't have time then why do you have time now?
Because a direct response here doesn't waste any of the other developers' time; the time wasted for the Bitcoin dev community is at most the ten minutes I'm spending here-- and potentially a large amount of time _saved_, if the sunlight I shone on the issue prevents it from turning into another forest-fire FUD fest. Not to mention that anyone spending their time reading the general discussion here is by definition not trying to get something useful accomplished.  The bitcoin-dev list is a working forum, not a place for low impact casual discussion.
1499  Bitcoin / Bitcoin Discussion / Re: There is a problem with core development on: March 07, 2016, 10:12:29 PM
I think it's interesting that you've omitted any links to the discussion.

Let me help out:

You opened an issue asking Bitcoin Core to "hardfork"-- modify the rules of the Bitcoin Blockchain-- to prohibit transactions that pay over some arbitrary limit in fees.

https://github.com/bitcoin/bitcoin/issues/7638

Doing so might even effectively confiscate coins created by people who locked them up in precomputed nlocktime transactions. (Though this is unlikely, I hope it makes it more clear what a substantial change that would be!)
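To see why, consider a hypothetical (all numbers and names here are invented for illustration): someone presigned a time-locked payout years ago, deliberately paying a large fee, and has since destroyed the signing keys.

Code:
# Illustrative only: a presigned time-locked payout. If a new consensus
# rule later rejects its (deliberately large) fee, the coins it spends
# become unspendable-- effectively confiscated, since no other signed
# spend of those inputs exists.
from dataclasses import dataclass

@dataclass(frozen=True)
class PresignedTx:
    nlocktime: int  # earliest block height at which it can confirm
    inputs: int     # satoshis consumed
    outputs: int    # satoshis paid out; fee = inputs - outputs

vault_payout = PresignedTx(nlocktime=500_000,
                           inputs=10_000_000_000,   # 100 BTC in
                           outputs=9_000_000_000)   # 90 BTC out, 10 BTC fee

MAX_FEE_RULE = 100_000_000  # hypothetical retroactive 1 BTC fee cap

# Under the proposed rule this transaction could never confirm:
assert vault_payout.inputs - vault_payout.outputs > MAX_FEE_RULE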

Paveljanik, a lower activity contributor, pointed out that Bitcoin Core already has a "-maxtxfee=" configuration option, and that if that wasn't enough-- and you were insisting on the consensus rule change-- it should be taken to the mailing list. You indicated you would.

Wladimir responded pointing out the absurd-fee protection in Core, and that users may have sensible reasons to override it and 'pay' high fees in transactions, so precluding that in consensus would be unwise.  He agreed that the issue was the incorrect place to advocate for such a change. The issue was closed.
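For readers following along, the non-consensus protection being referred to is conceptually simple-- roughly the following sketch, where the names and the default are mine, not Bitcoin Core's:

Code:
# A wallet-layer fee guard; illustrative names and defaults only.
# Bitcoin Core's actual -maxtxfee handling differs in its details.
DEFAULT_MAX_FEE_SAT = 10_000_000  # 0.1 BTC; a purely local policy choice

def check_fee(total_in_sat: int, total_out_sat: int,
              max_fee_sat: int = DEFAULT_MAX_FEE_SAT) -> None:
    fee = total_in_sat - total_out_sat
    if fee < 0:
        raise ValueError("outputs exceed inputs")
    if fee > max_fee_sat:
        # Refuse to broadcast. A user with a legitimate reason to pay a
        # large fee can raise the cap; a consensus rule would allow no
        # override for anyone, ever.
        raise ValueError(f"absurdly high fee: {fee} > {max_fee_sat} sat")

Because the guard lives in the wallet, it protects against blunders without taking the choice away from users who mean it.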

You posted to the bitcoin-dev list:
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-March/012509.html

Five different people responded:

"I think there is no need to do a hardfork for this. Rather it should be implemented as a safety-mechanism in the client.",  (Henning Kopp)

"Bitcoin Core already implements this safety limit with the "absurd fee" limit of 10000 * the minimum relay fee", (Peter Todd)

"And it's the responsibility of the operators to make the wallet user friendly. Apart from that, there are legit use cases where one would want to "pay" a large transaction fee:", (Marco Falke)

"There's  an absurd fee (non-consensus) check already. Maybe that check can be improved, but probably the wallet layer is more appropriate for this.", (Jorge Timón)

"It would be a shame to prohibit someone from rewarding whoever mines their transaction" (Dave Scotese)

You responded to the fourth one with (entire message):

"A consensus rule however would protect users from a bug in the wallet  protection. Just like the checksum in a payment address does."
(https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/2016-March/000082.html)

This is almost a word for word repetition from your initial email: "Adding protections may help give confidence and there is precedence to  doing things to prevent typo blunders - a public address has a four byte  checksum to reduce the odds of a typo."; and it's clear that the people you were responding to were aware of that argument. Nothing here is hidden-- the moderation rejects for that list are all public.

I'm sorry you don't feel that your opinions are adequately heard here; but the community cannot spend unbounded time on any particular person's pet issue--  each post to the developer mailing list ends up consuming many man-hours to man-days of time across all the readers; it's counterproductive to continue a looping discussion.  It wasn't my call to not forward on your message (I'm not a moderator there), and I hope whoever did wrote an explanation to you-- but I think it was probably the correct call.

Ultimately there are thousands of ways poorly written software can cause losses for users-- it can leak private keys, use insecure nonces, munge scriptpubkey data, etc. Additional consensus rules make the system less flexible and more costly to maintain. Cutting down the flexibility of Bitcoin with limits that could only help to protect people against a very narrow class of software bugs is probably not a great idea right now-- at least not without a unique, compelling, well considered argument and a _concrete_ proposal.  There are too many other more important things going on.

But the fact that one community isn't giving you a free pulpit to argue your point isn't any reason you couldn't work on it elsewhere; you just don't have the right to demand that other people spend their time on it.  I'm not sure what experience you have with Open Source projects; but no large one survives without methods and processes to avoid an unbounded time loss from every wild idea and wish that comes along.
1500  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 08:51:22 PM
If I use the block 290000 checkpoint to skip verifying sigs prior to that, could I still claim to be a fully validating node? (without being inaccurate)
I think it would be inaccurate to call a node that cannot validate the complete original rules of the system a full node.

Also, Bitcoin Core plans to remove that mechanism entirely. It no longer provides a useful performance improvement on most systems, and right now it's really only preserved because it prevents some corner-case DoS attacks (ones unrelated to signatures).
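The shortcut being asked about boils down to something like this-- an illustrative sketch with stand-in types, not Bitcoin Core's actual validation code:

Code:
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    valid_structure: bool  # stand-in for the real structural checks
    sig_ok: bool           # stand-in for real script/signature verification

CHECKPOINT_HEIGHT = 290_000  # the trusted height from the question

def connect_block(txs: List[Tx], height: int) -> bool:
    for tx in txs:
        # Non-signature consensus checks always run.
        if not tx.valid_structure:
            return False
        # The shortcut: signature checks are skipped at or below the
        # checkpoint, so those scripts are never actually validated.
        if height > CHECKPOINT_HEIGHT and not tx.sig_ok:
            return False
    return True

A node running this never checks a single signature at or below the checkpoint height-- which is exactly why calling it a fully validating node would be inaccurate.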

I am assuming I don't have to worry about bitcoin reorganizing 100,000+ blocks. Do I?
You don't have to do anything. But a complete and correct implementation of Bitcoin's rules will handle this. Bitcoin Core can reorganize back to block 1 just fine (though it will refuse to do so while checkpoints are enabled).