Bitcoin Forum
261  Bitcoin / Project Development / Re: ChromaWallet (colored coins): issue and trade private currencies/stocks/bonds/.. on: December 28, 2013, 07:02:35 PM
Also I'd like to get comments from Peter Todd (the bitfield-tagged color kernel mentioned above is based on his ideas). He now works for Mastercoin, but I don't know whether that precludes him from helping us with coloring schemes.

In simple terms, I know a way to make both padding and scaling parametrizable at the transaction level, which will let us achieve nearly-optimal satoshi efficiency. Also, transactions will have a unique signature, which is important for the backward-scan traversal strategy.

I'm not sure whether we should use nSequence or OP_RETURN. nSequence is more efficient, but seems kinda hackish.

So the big tl;dr with nSequence vs any other scheme is that only nSequence is based on per-txin-scriptSig information, or basically it's the only scheme(1) that's CoinJoin-compatible. The reason why it's compatible is that all the information about where the "color" of the txin is meant to go is in the txin, rather than any other part of the transaction, which allows the rest of the transaction to be specified by other individuals.

I'm rather inclined to keep that feature simply because being able to easily hide who paid for a given colored coin is useful privacy, and increases the anonymity set for everyone else.

1) You can use the R-value nonce in the ECC signature too, but... ugh. Smiley Though it would have the advantage of making colored coin transactions completely indistinguishable from regular ones.
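To illustrate the CoinJoin-compatibility property - the tag living entirely in the txin - here is a hypothetical sketch. The encoding (marker prefix, field widths) is invented for illustration and is not the actual kernel under discussion; the point is only that each input carries its own routing information, so other participants can add their inputs and outputs without touching it.

```python
COLOR_PREFIX = 0x7C00  # hypothetical 16-bit marker in the upper half of nSequence

def encode_color_tag(output_index: int) -> int:
    # Tag this input's color as destined for the given output index,
    # using only the input's own nSequence field.
    return (COLOR_PREFIX << 16) | (output_index & 0xFFFF)

def decode_color_tag(nSequence: int):
    # Returns the destination output index, or None for untagged inputs
    # (including the default nSequence of 0xFFFFFFFF).
    if (nSequence >> 16) == COLOR_PREFIX:
        return nSequence & 0xFFFF
    return None
```

Because decoding needs nothing outside the txin, the rest of the transaction can be assembled by other CoinJoin participants.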
262  Bitcoin / Development & Technical Discussion / Re: Why TxPrev.PkScript is inserted into TxCopy during signature check? on: December 26, 2013, 01:31:21 AM
(the actual v0.1 release inserted OP_CODESEPARATOR explicitly prior to calling EvalScript, breaking the idea, so I'm not sure if Satoshi actually realized what could have been made possible)

It seems so needlessly complicated that it must have had a reason. I wonder if maybe he intended to add a 'sticky' flag that required conditions on an output to be concatenated onto any new outputs it was spent to. The whole thing is very confusing to me though, and I'm having great trouble understanding your post.

It's confusing to me too; I didn't clue into the soft-fork stuff until about the third or fourth time I looked at that code.

Keep in mind Bitcoin had pretty poor software engineering in v0.1, so finding features that turned out to be poorly thought through shouldn't be surprising.
263  Bitcoin / Development & Technical Discussion / Re: Why TxPrev.PkScript is inserted into TxCopy during signature check? on: December 24, 2013, 11:53:45 PM
I think that the point is to "mark" the transaction input that is being signed, to prevent one signature from being used for more than one input.

Doesn't the signature already apply to the hash of the previous transaction and the index in it? Why would you need any additional data at all?

Bitcoin prior to the v0.1 release had a system where the scriptSig and scriptPubKey were concatenated prior to evaluation; OP_CODESEPARATOR was included in scripts (or scriptSigs) explicitly to mark what part of that concatenated script was to be hashed. The mechanism was broken because OP_RETURN could be used in a scriptSig to return prematurely, but other than that oversight the idea was sound. (the actual v0.1 release inserted OP_CODESEPARATOR explicitly prior to calling EvalScript, breaking the idea, so I'm not sure if Satoshi actually realized what could have been made possible)
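A toy model of that pre-v0.1 mechanism (function names invented, opcodes as strings, heavily simplified): the scriptSig and scriptPubKey are concatenated, and the signature hash covers only what follows the last OP_CODESEPARATOR, which the scriptSig author could place explicitly.

```python
OP_CODESEPARATOR = "OP_CODESEPARATOR"

def script_to_be_hashed(concatenated: list) -> list:
    # The signature hash covered only the opcodes after the last
    # OP_CODESEPARATOR; by placing the separator explicitly, the
    # scriptSig author controlled how much of the combined script
    # was signed.
    last = -1
    for i, op in enumerate(concatenated):
        if op == OP_CODESEPARATOR:
            last = i
    return concatenated[last + 1:]

script_sig = ["<sig>", OP_CODESEPARATOR]
script_pubkey = ["<pubkey>", "OP_CHECKSIG"]
print(script_to_be_hashed(script_sig + script_pubkey))
# -> ['<pubkey>', 'OP_CHECKSIG']: only the scriptPubKey half is hashed
```

With no separator at all, the whole concatenated script would be covered, which is what makes the explicit placement interesting.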

Inserting this script into the txcopy scriptSig was the mechanism by which items in the scriptSig could be included in the signature. Something interesting about SignatureHash is that you can use non-standard hashtypes in a backwards-compatible way: nHashType is ANDed with the mask 0x1f, or 0b00011111, prior to testing against SIGHASH_SINGLE and SIGHASH_NONE, which means that if it is set to anything other than those two values the signature hash is calculated without modifying vout. Similarly, bits 6 and 7 of nHashType are completely ignored. Had the original design been kept, additional SignatureHash() flags could easily and efficiently have been added in a soft-fork, for instance:

scriptPubKey: <pubkey> OP_CHECKSIG
scriptSig: <additional hashed data> <signature>

OP_CHECKSIG has defunct code to remove the signature from the concatenated script prior to calling SignatureHash(), so the final concatenated script inserted into the txin scriptSig would be:

<additional hashed data> <pubkey> OP_CHECKSIG

That additional hashed data could have been, for instance, the hash of the values going into the transaction, to allow signatures to cover fees. Similarly it could have been used with SIGHASH_NONE to redefine how signatures worked. Though note that for this to be a soft fork, failure to match that additional data would have to be handled as an immediate fail, turning OP_CHECKSIG into OP_CHECKSIGVERIFY with respect to the new features.
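The masking behaviour described above can be sketched as a simplified model of the SignatureHash() dispatch (names simplified; this is not the real consensus code): unknown masked values, and any value with bits 6-7 set, fall through to the default "hash all outputs" path, which is what would have left room for soft-forked extensions.

```python
SIGHASH_ALL = 0x01
SIGHASH_NONE = 0x02
SIGHASH_SINGLE = 0x03

def output_handling(nHashType: int) -> str:
    # SignatureHash() compares (nHashType & 0x1f) against
    # SIGHASH_NONE/SIGHASH_SINGLE; bits 6 and 7 are never examined,
    # so any other value behaves like the default and leaves vout
    # unmodified.
    masked = nHashType & 0x1f
    if masked == SIGHASH_NONE:
        return "no outputs hashed"
    elif masked == SIGHASH_SINGLE:
        return "single matching output hashed"
    return "all outputs hashed"
```

So a hypothetical new flag like 0x44 would today hash exactly as SIGHASH_ALL does, which is the backwards-compatibility hook being discussed.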

Having said that, we can still do this, although it gets less efficient. Basically you just make an entire second signature, ~72 bytes worth, and have the special signature hash bits trigger the OP_CHECKSIG code to check it as well. For instance:

scriptPubKey: <pubkey> OP_CHECKSIG
scriptSig: <sig2> <sig1>

Where sig1 uses the old signature algorithm, and sig2 uses a new algorithm. For pre-soft-fork nodes sig2 is just useless data and does nothing, but for post-soft-fork nodes if sig2 is invalid the transaction fails.
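The soft-fork property of that scheme can be sketched as follows (all interfaces invented; `verify_old`/`verify_new` stand in for the two signature algorithms, and nothing here matches real consensus code):

```python
def checksig_old(pubkey, stack, verify_old):
    # Pre-soft-fork nodes pop sig1 off the top of the stack and never
    # look at sig2 beneath it - to them it is inert data.
    sig1 = stack[-1]
    return verify_old(pubkey, sig1)

def checksig_new(pubkey, stack, verify_old, verify_new):
    # Post-soft-fork nodes additionally require sig2 to validate under
    # the new algorithm. An invalid sig2 fails only on upgraded nodes,
    # so the change strictly tightens the rules: a soft fork.
    sig2, sig1 = stack[-2], stack[-1]
    return verify_old(pubkey, sig1) and verify_new(pubkey, sig2)
```

A transaction with a bad sig2 is accepted by old nodes but rejected by new ones, which is exactly the asymmetry a soft fork needs.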
264  Alternate cryptocurrencies / Altcoin Discussion / Re: MasterCoin: New Protocol Layer Starting From “The Exodus Address” on: December 24, 2013, 03:50:26 PM
Could someone explain how Mastercoin prevents double spending?

Killerstorm's reply is absolutely correct.

I also wrote a longer paper talking about why mining doesn't need to be verification as well: Disentangling Crypto-Coin Mining: Timestamping, Proof-of-Publication, and Validation
265  Bitcoin / Development & Technical Discussion / Re: Bitcoin source from November 2008. on: December 23, 2013, 08:39:54 PM
OP_CODESEPARATOR was in the first released version too. It was part of his broken method of running scripts via concatenation.

In the Bitcoin v0.1 release OP_CODESEPARATOR was always inserted between the scriptSig and scriptPubKey prior to calling EvalScript(). This pre-release source code implies that was not done automatically, which would have allowed scripts to take advantage of it to delegate signing authority after the fact. (though to fully take advantage of the idea you need the notion of an OP_CODESEPARATOR "stack")

The idea is "broken" only in that OP_RETURN originally could cause a script to return valid prematurely; now that OP_RETURN only fails a script prematurely an explicit OP_CODESEPARATOR design would work fine.
266  Bitcoin / Development & Technical Discussion / Re: Bitcoin source from November 2008. on: December 23, 2013, 08:07:38 PM
Also interesting is the "getmywtxes" command in ProcessMessage, which looks like it's designed to fetch a thin client's wallet transactions for it. It seems to retrieve all transactions related to the specified scriptPubKey hashes in a specific block.

There's also a "wtx" command which seems to add a CWalletTx to the node's local wallet! Bizarre - quite possibly just some code for testing.

AcceptBlock is fascinating; just look at the comment "Add atoms to user reviews for coins created" (?!)


That is the whole source.  There was no script.cpp at that time.  

You mean this is all he sent you; the source is obviously missing functions that are called and wouldn't have compiled. A pity - it's fascinating to see Bitcoin in this intermediate stage of development.
267  Bitcoin / Development & Technical Discussion / Re: Bitcoin source from November 2008. on: December 23, 2013, 07:47:45 PM
Is that all the source you have? No script.cpp?
268  Bitcoin / Development & Technical Discussion / Re: Bitcoin source from November 2008. on: December 23, 2013, 07:47:18 PM
Cool!

Something that I immediately noticed was how the scriptPubKeys all start with OP_CODESEPARATOR; for me that's fascinating to see, as I commented a few months ago on how it would have allowed signing authority on a transaction to be delegated.
269  Bitcoin / Development & Technical Discussion / Re: Bitcoin source from November 2008. on: December 23, 2013, 07:38:48 PM
Interesting.

What was the context of you getting them from Satoshi? (I assume)

edit: might be helpful to have the exact files rather than a cut-n-paste to get an "official" copy. Feel free to email them to me, I can put them up somewhere for you: pete@petertodd.org
270  Bitcoin / Development & Technical Discussion / Re: unlinkable public deterministic wallet addresses on: December 23, 2013, 03:21:03 PM
There is one additional advantage of sender derived addresses: the recipient has a global shared static address so it can act as a trust anchor to ward off diversion attacks in a simple and space efficient way without signatures.  It can act like an SSH TOFU (trust on first use) fingerprint.  Users can compare fingerprints, call up the company, expect the fingerprint advertised on all official emails, SSL static content web site, business cards, trust directories, PGP signed by key employees etc.  (Diversion attack meaning where someone hacks a server and replaces the addresses with their own).   Furthermore people can check that fingerprint in their offline wallet for investment level amounts.

The main reason I, and for that matter Amir Taaki, proposed "stealth addresses" in the first place was that I wanted to add them to OpenPGP keys - and later other CA systems too - as an additional user ID, so that wallet software could re-use that infrastructure to validate who you were trying to pay. I'd strongly suggest using that basic mechanism even if trust-on-first-use is used to validate rather than web-of-trust. One interesting thing about all this is that it suggests the ability to encode in the transaction a small amount of additional metadata that only the recipient can see - such as an account code to disambiguate payments - would be useful.

I also think this stuff ties into privacy models fairly tightly, so I want to do some formalization of that first. E.g. for SPV clients: what's your anonymity set of other transactions you are hidden in, and how do you make sure that set stays at the level you think it does?
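The sender-derived address idea above can be sketched conceptually. Caveat: real stealth addresses use ECDH on secp256k1; here classic Diffie-Hellman over a toy prime field stands in for the curve so the structure is visible (shared secret, then a one-time address, then recipient-only metadata such as an account code). Every parameter and function name is an illustrative assumption, and none of this is cryptographically secure.

```python
import hashlib

P = 2**61 - 1  # a Mersenne prime; toy group, illustration only
G = 3

def keypair(secret: int):
    # (private, public) pair in the toy group; the curve analogue
    # would be (d, d*G).
    return secret, pow(G, secret, P)

def shared_secret(my_secret: int, their_pub: int) -> bytes:
    # Both sides compute G^(d*e): the sender from the recipient's static
    # public key, the recipient from the sender's ephemeral public key.
    return hashlib.sha256(str(pow(their_pub, my_secret, P)).encode()).digest()

def one_time_address(secret_bytes: bytes) -> str:
    # The unlinkable payment address both parties independently derive.
    return hashlib.sha256(b"addr:" + secret_bytes).hexdigest()

def wrap_metadata(secret_bytes: bytes, account_code: int) -> int:
    # Metadata only the recipient can unwrap: XOR with a pad derived
    # from the shared secret (applying it twice round-trips).
    pad = int.from_bytes(secret_bytes[:4], "big")
    return account_code ^ pad
```

The recipient publishes one static public key (the TOFU-able fingerprint), yet every payment lands on a fresh address that outside observers cannot link to it.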
271  Bitcoin / Bitcoin Discussion / Re: Dark Wallet Certification on: December 10, 2013, 12:29:33 PM
The beginnings of organizing this on a wiki:
https://wiki.unsystem.net/index.php/DarkWallet/Certification

I'm also working on a summary document of my take on the certification requirements, including decentralization and security issues as well as privacy.
272  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 08:46:52 PM
Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST, but rather a symptom of high transaction rates.

Whether you use GHOST or longest-chain, at high rates you must either resort to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST makes the protocol secure at high rates -- the 50% attack can no longer be executed with less than 50% of the hash rate.

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scaling while staying decentralized. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

This also suggests why the Bitcoin community has talked about the underlying idea in your paper repeatedly(1) among themselves, but no-one has ever bothered to investigate it more fully - it's obviously a bad trade-off in the context of proof-of-work. (with the possible exception of applying the idea to the p2pool share-chain)

1) I myself proposed it for my zookeyv key-value global consensus system proposal - #bitcoin-wizards 2013-05-31 - though in the context of proof-of-sacrifice.
273  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 04:32:46 PM
I'm working on analyzing centralization incentives

I suggest that the value of a bitcoin is directly related to the trust in the decentralization of the system.
And the level of trust in Bitcoin is somewhat related to its (de)centralization. The higher the centralization, the lower the value per block.
I don't see any mention of that in your paper about 'centralization incentives'. Shouldn't that be part of the equation?

It's an externality. If your logic was right, coal power plants wouldn't exist, but they do.
274  Bitcoin / Development & Technical Discussion / Re: Is anybody working on pruning on the main client? on: December 07, 2013, 04:29:52 PM
Mike != the dev team.

Just because you're looking at a guy raping a girl, while doing nothing to stop it - it does not make you innocent in the crime.
Unless he was keeping a gun aimed at you, while you were watching it, was he?

Nah, we're just subtle about it, the kind of subtlety that involves orchestrating an angry mob to stop the rapist rather than doing so ourselves. More concretely, remind yourself again about who's been behind the latest interest in CoinJoin; I personally just spent a week at the DarkWallet hackathon.


Anyway, putting strife away and focusing on the job that eventually has to be done and will be done by someone, some day.

There is only one ultimate purging solution which does not affect the network's decentralization and addresses the scalability issues:
Start distributing snapshots of the UTXO database, with the snapshots' security protected by the blockchain and the miners.
But that's definitely too far-fetched an idea for this team - 25 years at least, 10 of which just to realize that all the other options suck...
I'll be using viagra, instead of the memory-impairing drug, by then Smiley

Look up "MMR TXO commitments" among other things - we're way ahead of you mate.

FWIW I've been hired by Mastercoin to work full-time on crypto-coin research - scalability will definitely be one of the focuses of my work.
275  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 07, 2013, 03:50:50 PM
Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and less orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually it makes the problem significantly worse, as a higher orphan rate gives a larger miner more opportunities to gain an advantage over a smaller one.

Note the equations for P(Q,L) in the paper I'm working on, analyzing centralization incentives - they all depend on the block interval in such a way that a smaller block interval makes the larger hashing power more significant.

I suggest you correct your post.
276  Bitcoin / Development & Technical Discussion / Re: Is anybody working on pruning on the main client? on: December 07, 2013, 03:38:33 PM
though, a more likely scenario is that within the next 8 people will forget about the original bitcoin dev team and start making/using their own mods.
it's actually already happening - it has happened while you were busy designing black and red lists during a conference with US financial "authorities", otherwise known as the core of the world's financial regime Smiley

Mike != the dev team.


Anyway, Litecoin contracted me to implement pruning, although there's no particular timeline on that contract; it depends on sipa's headers-first which has its own set of issues.
277  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 06, 2013, 11:07:56 AM
Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanism than a btctalk thread.

+1


Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?

Note how this idea can easily be implemented as a merge-mined coin, and if merge-mined with Bitcoin, that can naturally lead towards an eventual "pure" Bitcoin implementation; I'd recommend considering using the Bitcoin UTXO set as the basis for the coin's initial distribution. Of course, until the merge-mined chain gets >50% hashing power of Bitcoin as a whole it's in theory insecure, but that's IMO fine for an experiment that may lead to something concrete. (do it merge-mined with testnet first)
278  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 06, 2013, 10:51:59 AM
Speaking of, while the paper presents a solution preserving security guarantees, a quick skim doesn't seem to indicate that it takes into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as part of this high-speed propagation game, then even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

You make a really good point. Decentralization is hurt at higher transaction rates, and better incentives are needed. We do mention it as a problem that remains at the very end of the paper (issues related to decentralization). The problem seems inherent to both our suggestion and to Bitcoin's current implementation.

Let me just say, that with small block sizes (i.e., when full bloom filters are implemented) it will be very hard to distribute blocks faster than the network does already. IMO, this is less of a problem than people think.

Oh, good to hear you're thinking about this! I agree with you that there aren't easy answers yet.

Something to keep in mind is that the network distributing blocks != miners distributing blocks. Large miners and mining pools can and do peer to each other directly, so propagation delays on the global P2P bitcoin network don't affect them the same way as smaller miners who are not in a position to do that. When the block "interval" is getting down to just a few seconds, even stuff like physically locating your mining pool hashing power closer to other concentrations of hashing power may net you more revenue, with obvious problems for decentralization. You might have a small mining setup, and want to contribute that decentralized hashing power, but find you can't profitably because all the big mining pools already have peering agreements with each other and dedicated high-speed connections.

Also I'm not convinced that bloom filters are the way to do block distribution with only txid hashes; peers simply maintaining lists of what transactions each peer knows about and transmitting short tags instead is probably more efficient in terms of bandwidth as it is not probabilistic.
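A rough sketch of that deterministic alternative (the structures and tag width are invented, not a real wire protocol; short-tag collisions are ignored for brevity): track which txids the peer is known to have, and announce blocks with short per-tx tags, falling back to the full txid for unknown transactions.

```python
class PeerLink:
    def __init__(self):
        self.known = set()  # short tags of txids this peer already has

    def saw_tx(self, txid: bytes):
        # Record that the peer has seen this transaction (e.g. because
        # we relayed it to them, or they announced it to us).
        self.known.add(txid[:4])

    def announce_block(self, block_txids):
        # Send 4-byte tags for transactions the peer already knows and
        # full 32-byte txids otherwise. Unlike a bloom filter this is
        # deterministic: there are no false positives to recover from.
        return [txid[:4] if txid[:4] in self.known else txid
                for txid in block_txids]
```

In the steady state nearly every transaction in a block has already been relayed, so a block announcement shrinks to a list of short tags.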

Finally, I've got a proposal for a blockchain where blocks themselves could only contain tx hashes at a fundamental level; transactions don't get verified at all in the scheme and the blockchain is strictly for proof-of-publication. Seems to me that it'd be well-suited to your idea as the lack of verification makes it easy to have actual tx publication be something done after the fact. (though note my comments about issues when there is a lack of incentives to actually publish the data that is being committed)
279  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 06, 2013, 10:42:06 AM
I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces".


Speaking of, while the paper presents a solution preserving security guarantees, a quick skim doesn't seem to indicate that it takes into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as part of this high-speed propagation game, then even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

I'm not a developer, so excuse my ignorance. But wouldn't 1-second blocks essentially make solo mining much more plausible? It seems at that rate even lowly CPU miners could pull off solo mining. It might not be profitable per se, but the chances of at least mining a block solo would be greatly increased.

Yeah, but if the result of the solution is such that solo-mining earns you only 50% of the revenue that hashing in a large, 25%-hashing-power pool got you, your incentive would be to mine in the pool instead, leading to centralization. The problem is we know that pools already have higher revenue per unit hashing power; my suspicion is the solution proposed by the paper makes that problem even worse. But I've got a flight to catch in an hour or two and will get a chance to think about it all more carefully later. Smiley
280  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Trasaction Processing on: December 06, 2013, 10:26:21 AM
I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces".


Speaking of, while the paper presents a solution preserving security guarantees, a quick skim doesn't seem to indicate that it takes into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as part of this high-speed propagation game, then even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.