401  Bitcoin / Development & Technical Discussion / Re: Testnet script which does not follow basic chunking rules... on: October 21, 2013, 11:36:49 AM
There's nothing in Bitcoin that checks the contents of a scriptPubKey until someone tries to spend it.

Don't assume anything about what's in one.
402  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 21, 2013, 10:39:11 AM
killerstorm: https://github.com/petertodd/decentralized-consensus-systems.git <- github repo for that paper; I just added a bunch of todo notes on what needs writing.
403  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 21, 2013, 10:13:59 AM
I think 99% of these arguments would go away if we had a simple library and some out of the box tutorials on how to make a parasitic consensus system. So many of these insecure schemes boil down to "it's less coding if I copy namecoin"
404  Alternate cryptocurrencies / Altcoin Discussion / Re: MasterCoin: New Protocol Layer Starting From “The Exodus Address” on: October 21, 2013, 10:02:54 AM
You're on the right track, although be warned...
All the same, I hope this helps.
Wow!  This guy is huge.  We need him on board. 

Peter - if you give me a MSC address I'll kick down 50 MSC just for that very useful post.  Further, I'll challenge other MSCers to do the same next time we get this kind of excellent feedback which certainly makes this project go down a better path.

Thanks for the offer, but part of the deal with that anonymous patron was that I remain un-invested in Mastercoin and thus preserve my independence. It's the same thing I did with Litecoin for their 0.8 audit: I didn't accept any Litecoins as payment until my audit was complete, on the basis that if I found something bad, the readers of the audit would want me to have no incentive not to reveal the problems.
405  Alternate cryptocurrencies / Altcoin Discussion / Re: MasterCoin: New Protocol Layer Starting From “The Exodus Address” on: October 21, 2013, 09:55:50 AM
Another option is to ask the Bitcoin Foundation to allow MSC data transactions, like telcos buying cell phone frequencies from the government.

JR has stated a core goal of remaining censorship-resistant, which necessitates a design that does not require any permission/allowances (though of course it would be fantastic to have their support). As for the 'buying' aspect, we're paying fees just like any other user of the blockchain - we'd like a miner to include our transaction, so we pay them a fee to do so. As reward halving decreases the number of bitcoins mined per block, transaction fees are going to be an increasing part of a bitcoin miner's revenue stream, so more transactions for Bitcoin is not necessarily a bad thing if done the right way (which is exactly what we're trying to do).

Keep in mind that the Bitcoin Foundation does not control Bitcoin, right now at least. For example Eligius, at least 8.5% of the hashing power right now, won't accept transactions to SatoshiDice or the "correct horse battery staple" address. Eligius also uses non-standard fee rules and will mine transactions with output values below the dust limit. BTC Guild, 27% of the hashing power, ignores the dust rules too. I take advantage of this for timestamping all the time; there are enough nodes and miners out there ignoring the dust rules that all I have to do to get those transactions mined is disable the IsDust() test in my client.
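For reference, the check I disable looks roughly like this (a Python sketch of the reference client's IsDust() rule as I understand it circa 0.8; the byte counts and the 1000 satoshi/KB minimum relay fee are assumptions baked into the sketch):

Code:
# An output is "dust" if spending it would eat more than 1/3rd of its
# value in fees: the output's own size now, plus the ~148-byte txin
# needed to spend it later, priced at the minimum relay fee.
MIN_RELAY_TX_FEE = 1000  # satoshis per 1000 bytes

def is_dust(txout_value, txout_size=34, txin_size=148):
    total_size = txout_size + txin_size  # bytes this output costs the chain
    spend_cost = 3 * total_size * MIN_RELAY_TX_FEE // 1000
    return txout_value < spend_cost

# For a standard pay-to-pubkey-hash output that works out to the
# familiar 546 satoshi threshold:
assert is_dust(545) and not is_dust(546)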

The Bitcoin Foundation "officially" allowing mastercoin transaction may or may not have any effect at all on whether or not you can get them mined. Most likely IMO is you'll always be able to get them mined, regardless of what form they take, although it might take awhile and you may have to pay a fair bit to do so. It's possible that won't be true in the future though: Gavin has advocated that miners only accept blocks that contain nearly only transactions that they themselves would have mined, and further more contain nearly only the transactions that they were going to mine themselves. The logic behind this works on a number of levels: for instance if you only mine blocks that contain the transactions you yourself would have mined, you can make zero-conf transactions safer because miners who mine double-spends (even accidentally due to an attacker tricking them into doing it) will get their blocks rejected. Another aspect is how it can help shut out miners who mine "too few" transactions in their blocks, maybe because they have a low-speed internet connection and can only afford to mine the highest profitability, highest fee transactions. It also makes it easier to impose top-down rules on miners, perhaps because some transactions are considered "spam" or there's a desire to impose blacklists. (remember how you only need support of 50% of the hashing power to impose rules via block discouragement; right now that's BTC Guild, GHash.IO, and Eligius having control) Gavin also said he wants a very fast 6-9 month cycle of deliberate hard-forks, something that would very much centralize control around the Bitcoin Foundation.

More immediate inconveniences are how bare CHECKMULTISIG scriptPubKeys may be made non-standard in the near future in the reference client. (I've advocated this myself.) I think you can say most of the dev team supports the idea of keeping data out of the UTXO set and making it more expensive to get data into the blockchain at all. Gavin and Mike don't like that idea, last I heard, but don't assume their opinion actually counts for much. You should support stuffing Mastercoin data in P2SH transactions. (By that I mean in the scriptSig spending a P2SH txout.) Doing this also makes you immune to gmaxwell's P2SH^2 idea, and the overhead isn't big.

Speaking of which, a good way to think about this stuff is that there's always some cost to put data in the blockchain. If you are doing a standard transaction, the way Bitcoin intended, your cost is FeePerKB * 1. If you're doing something else, like a Mastercoin transaction where you need to go to a bunch of effort to encode it, your cost is FeePerKB * k. The overhead associated with, say, stuffing data into bare CHECKMULTISIGs makes your k higher, so you're basically paying more per transaction than if you were doing plain old Bitcoin transactions.
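To make k concrete, a toy calculation (the byte counts are rough estimates of mine, not exact serialized sizes):

Code:
# Fee paid per byte of actual payload. A plain transaction is all
# "payload", so k = 1; a bare 1-of-3 CHECKMULTISIG txout is ~114 bytes
# on the wire but carries only 66 usable bytes in its two fake 33-byte
# "pubkeys" (the third key is real, keeping the output spendable).
FEE_PER_KB = 10000  # satoshis

def embedding_cost(payload_bytes, k):
    return payload_bytes * k * FEE_PER_KB / 1000

print(embedding_cost(1000, 1))         # plain tx, k = 1:       10000 satoshis
print(embedding_cost(1000, 114 / 66))  # CHECKMULTISIG, k ~1.7: ~17273 satoshis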

Colored Coins have a real advantage here, because they are standard transactions and the data they need to represent matches that model perfectly. Mastercoin transactions, on the other hand, aren't, so you're paying a premium. Fortunately they're likely to be more valuable per transaction, so the premium is affordable, but you don't want to lock in a standard that isn't flexible.

Colored Coins have another big advantage too: they aren't global. In Mastercoin every transaction must be visible to every other Mastercoin user, which in turn means that every transaction is identifiable, and thus censorable with blacklists. Colored coins are independent of each other, so those blacklists need to include up-to-date genesis transactions for every colored coin out there, plus the logic to follow them. Even worse, some colored coin schemes hide which transactions are involved completely, preventing you from knowing anything at all unless you were a participant in the particular transaction. Now, if you used a commit-and-reveal scheme Mastercoin could be more censorship resistant, but there are a lot of issues with making such schemes work. (I'll try to cover them eventually in the paper that killerstorm mentioned.)

Finally, one last point: scalability. It'd be really convenient for some people in this community if we could set transaction costs to a "reasonable" level - perhaps a penny each - through top-down control (like I mentioned above), regardless of what transaction volume Bitcoin sees. The problem with this idea is that there will always be limits on maximum transaction volume due to the limits of technology, even without a blocksize limit. Stuff like Mastercoin, Colored Coins and timestamping is really scary to these people, because it involves use-cases where transactions can easily be far more valuable to their users than standard payment-oriented transactions; people routinely pay $20-per-trade brokerage fees when trading stocks without blinking an eye. Given that blockchain space is a market, it's easy to see how these very high-value uses could crowd out the low-value payments that some people in this community are more interested in. Heck, just look at how the Multibit and Android Bitcoin wallet developers refuse to even add the ability to set transaction fees, on the grounds that it's ridiculous to have people "fight" for block space and that miners should just create bigger blocks. (Even at the expense of their own profitability.) Every application of the Bitcoin system that isn't payment-oriented is unfortunately a threat to these people, which is probably why they push so hard for things like insecure merge-mining systems.
406  Bitcoin / Project Development / Re: [ANN] Bitmsg - A Proof-of-Sacrifice distributed messaging layer over Bitcoin on: October 21, 2013, 02:11:56 AM
You may find this post on generating valid-looking pubkeys that hide data interesting: https://bitcointalk.org/index.php?topic=265488.msg3377058#msg3377058
407  Alternate cryptocurrencies / Altcoin Discussion / Re: MasterCoin: New Protocol Layer Starting From “The Exodus Address” on: October 21, 2013, 01:11:42 AM
I ran up 20,000 compressed pubkeys from random, then used the last byte rotation method to try and make them valid ECDSA points.  

Seems to have worked for all 20,000 - but my ECDSA point validity testing is being done with code from the Casascius Bitcoin Address Utility and there is a line in there that is making me wonder whether the validatePoint() and other check functions are giving me the whole picture on whether the keys are valid:

Code:
// todo: ensure X and Y are on the curve

You can find the results here.  Tachikoma, when you're back and have a bit of time would you mind taking the '*RAW' file and running some of those pubkeys through your Mastercoin::Util.valid_ecdsa_point? function and see what results you get?  They are the corrected keys with the last byte rotated and all 20,000 are supposed to be valid.

You're on the right track, although be warned that the assumption that modifying a single byte until the X point is on the ECDSA curve is always possible may be invalid; my understanding is that there's no known upper bound on the gaps between valid points on the curve. Having said that, a decent assumption is, as you've already figured out, that they are randomly distributed and about half of all randomly chosen 256-bit integers are valid X points.
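If you want to test X coordinates yourself, without trusting someone else's validatePoint(), the whole check is one application of Euler's criterion (a minimal Python sketch, assuming secp256k1, where the curve is y^2 = x^3 + 7):

Code:
import random

P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def is_valid_x(x):
    # x is a valid X coordinate iff x^3 + 7 is a quadratic residue mod P
    y2 = (pow(x, 3, P) + 7) % P
    return pow(y2, (P - 1) // 2, P) in (0, 1)  # Euler's criterion

# About half of randomly chosen integers should pass:
hits = sum(is_valid_x(random.randrange(P)) for _ in range(10000))
print(hits / 10000)  # ~0.5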

Stepping back for a moment, keep in mind that censorship isn't as simple as just "block invalid X points" - censoring any data that doesn't look random is quite possible too. I've been told that some pools are already implementing this, with rules that block transactions where the embedded data is too "compressible" or has too many ASCII characters. What you need to do, at minimum, is encrypt the data you are embedding. As I suggested in the Bitmsg thread, a good way to do this is to take the first CTxIn's prevout.hash + prevout.n + nSequence + nLockTime, hash them all together to create an initialization vector, and then use that IV with some type of cipher. The resulting encrypted data is just as likely to pass randomness tests as any other pubkey, and doing so has no real performance impact. (Encryption is fast!) Of course, for Mastercoin this isn't encryption for secrecy's sake so much as encryption for the purpose of steganography, but for that purpose it's still effective.
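Concretely, the IV construction could look like this (a sketch; the field layout and the 16-byte truncation are my assumptions, not a settled standard):

Code:
import hashlib
import struct

def make_iv(prevout_hash, prevout_n, nsequence, nlocktime):
    h = hashlib.sha256()
    h.update(prevout_hash)                  # 32-byte txid of the first txin
    h.update(struct.pack('<I', prevout_n))  # its output index
    h.update(struct.pack('<I', nsequence))  # the first txin's nSequence
    h.update(struct.pack('<I', nlocktime))  # the transaction's nLockTime
    return h.digest()[:16]                  # e.g. an AES-CBC IV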

Once we've settled on using encryption, we can make use of it to get a very robust way of ensuring the data in question looks like valid public keys. Essentially what you want is a chained block cipher mode, but one where each PUSHDATA corresponds to one block; an equally valid variant is a stream cipher that is re-keyed on every PUSHDATA.

Now let's say we've split our Mastercoin message into a series of PUSHDATAs, forming plaintexts P_1 to P_n. We also have our block cipher E_k for key k, where k is well known. (It could be SHA256('MasterCoin Rocks!'); it's not a secret.) We compute the ciphertext as C_i = E_k(P_i XOR C_{i-1}), with C_0 = IV. (AKA CBC mode)

Now what we do is, for every C_i, test whether it's a valid pubkey; if not, we modify P_i until it is, just like you were doing above. Given that ~50% of candidate X values are valid and a one-byte iterator gives you 256 attempts, the probability of failing to find a valid pubkey is about 2^-256 - essentially impossible - and because the IV is different for every message there's no danger of an attacker finding a specific plaintext that doesn't work for everybody. (Say, by finding some region where a large number of X points are invalid and constructing some kind of asset where trading in it is likely to result in messages hitting that case.)

AES works on 128-bit blocks, while a compressed pubkey makes for a 256-bit block, so you'll have to put the iterator byte at the beginning of the block rather than the end. You'll also need a crypto library that can save state, allowing you to back up two blocks whenever you need to modify the iterator. That said, if you put the iterator at the start of the second 128-bit block you only need to back up a single block, and it's rather unlikely for a 128-bit prefix to have no 128-bit suffixes that are valid pubkeys; you'll probably get away with it, but be warned that I'm not a cryptographer.
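Putting those pieces together, a toy version of the encrypt-and-grind step might look like the following. (Assumptions: pycryptodome's AES, the is_valid_x() and P from the sketch above, the iterator byte at the start of each 32-byte chunk so a change propagates through both AES blocks, and chaining off the previous chunk's last 16 bytes; an illustration, not a spec.)

Code:
from Crypto.Cipher import AES

def grind_pubkey(key, prev_c, payload31):
    # key: a 16/24/32-byte AES key; prev_c: the previous 32-byte
    # ciphertext chunk (or IV material for the first chunk)
    for it in range(256):
        plaintext = bytes([it]) + payload31  # 32 bytes total
        c = AES.new(key, AES.MODE_CBC, iv=prev_c[-16:]).encrypt(plaintext)
        x = int.from_bytes(c, 'big')
        if x < P and is_valid_x(x):
            return b'\x02' + c               # a 33-byte fake compressed pubkey
    # each try succeeds ~50% of the time, so 256 misses is ~2^-256
    raise RuntimeError('no valid X point found')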

A more general "stuff-data-in-the-blockchain" solution would also allow for stuffing in any PUSHDATA, specifying which PUSHDATAs happened to be valid with, say, the low-order bits of the nValue in each CTxOut. In this case you'd want a rule that handles 33-byte PUSHDATA's specially, dropping the first byte and the iterator byte from the plaintext output. I've actually got code that does most of this; I might go publish it as a general purpose library for censorship-resistant embedding of data in the blockchain.

Also, in case you guys aren't already aware, the current IsStandard() rules consider any PUSHDATA from 33 bytes to 120 bytes as standard, and nothing checks that pubkeys are valid at all. (Including the first byte that determines compressed/uncompressed.) For instance my OpenTimestamps timestamper stuffed timestamps into bare CHECKMULTISIG scriptPubKeys, setting the first byte of each pubkey to zero to be sure it could never be spent.(1)

1) Note that because the IV is constructed with a one-way function, and CBC-mode encryption is also a one-way function, we can be sure that no attacker could ever come up with a way to get you to create a pubkey for which they knew the secret key. In theory an XOR stream cipher mode doesn't have this guarantee.
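For illustration, the kind of data-carrying CHECKMULTISIG scriptPubKey described above can be built by hand; nothing beyond the PUSHDATA sizes is checked (a sketch; the zeroed prefix byte is the unspendability trick, and the opcode values are standard Bitcoin script):

Code:
OP_1, OP_3, OP_CHECKMULTISIG = 0x51, 0x53, 0xae

def data_checkmultisig(chunks):
    # three 32-byte data chunks dressed up as 33-byte "pubkeys", each
    # prefixed with 0x00 so no real key can ever match
    assert len(chunks) == 3 and all(len(c) == 32 for c in chunks)
    script = bytes([OP_1])
    for c in chunks:
        script += bytes([33, 0x00]) + c  # direct push of 33 bytes
    return script + bytes([OP_3, OP_CHECKMULTISIG])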


Disclosure: Someone contacted me privately and offered to pay me to help you guys out, although they didn't realize at first how far along you guys were in understanding the problem. I assume they were a Mastercoin investor, but I don't know and they asked to remain anonymous. All the same, I hope this helps.
408  Bitcoin / Bitcoin Discussion / Re: Colored coins VS Mastercoins - Which one is better? on: October 20, 2013, 10:13:55 PM
The only thing Coloredcoin or Mastercoin have in common is that they both will die as failed experiments.

If colored coins don't pan out they won't take ~5,000BTC of investor money down with them; on a technical level both are decent ideas where a good implementation could be useful.
409  Bitcoin / Development & Technical Discussion / Re: Reducing UTXO: users send parent transactions with their merkle branches on: October 19, 2013, 11:40:17 PM
We were discussing this issue the other day on IRC actually; I've got an idea called TXO commitments that lets full nodes and mining require no disk storage at all, and pushes the cost of storing the chain to those whose wallets are involved. I'll write it up better when I get a bit of time, but in the meantime you might find the IRC logs interesting reading: https://s3.amazonaws.com/peter.todd/bitcoin-wizards-13-10-17.log
410  Bitcoin / Development & Technical Discussion / Re: CoinJoin: Bitcoin privacy for the real world on: October 19, 2013, 10:46:49 PM
It works pretty well, although currently there's a serious privacy problem because the pool uses compressed keys, while the blockchain.info client only uses uncompressed keys, even for sharedcoin change. To figure out where the money came from and went, you just have to match up uncompressed inputs to uncompressed outputs. On top of that the pool re-uses addresses and the client doesn't, so it's pretty easy to figure out which addresses are in the pool and use them to isolate the client addresses.

Having said that, both problems can be easily fixed.

411  Economy / Web Wallets / Re: Blockchain.info - Bitcoin Block explorer & Currency Statistics on: October 19, 2013, 09:58:52 PM
Only if you are mixing with other users who want to send different amounts. When mixing with the sharedcoin pool the server doesn't desire any particular output size so can deliberately add input and output combinations which mimic your own. Then when you include randomised fees into the mix it makes it almost impossible to determine the inputs used from the value of one output.

One issue with the current version is that the blockchain.info wallet app only uses uncompressed keys, especially for sharedcoin change addresses, while internally the pool uses compressed keys.

I know iPhone compatibility is a problem, but maybe you could make the pool at least occasionally use uncompressed keys, or add compressed key change as an advanced experts-only option?
412  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains on: October 19, 2013, 01:00:06 AM
Now let's compared these side-chains to "parasitic consensus systems" which were described in Peter Todd's (incomplete) article:

Quote
{Parasitic consensus systems}
A proof-of-work blockchain, such as the Bitcoin blockchain, can be made use of parasitically by a secondary consensus system. Recall the two fundamental proofs that a blockchain provides: consensus ordering/timestamping and proof-of-publication. A Satoshi-style blockchain can be used as an ordered message publication service - it is not possible to completely prevent the publication of data without whitelisting censorship ... Thus for a given block height i we have a set of blocks B={b_0 ... b_i} containing messages M={m_0 ... m_j}. By applying a fixed set of rules to that set of messages multiple parties can independently arrive at the same state of the system.

... (here goes description of "string bling" system used as an example)
The Mastercoin system uses this principle. While not yet well developed, there exists an agreed upon set of rules that, from the contents of the Bitcoin blockchain, can derive a set of "Mastercoin" transactions and a final ledger state derived from data encoded in the Bitcoin blockchain.

Parasitic consensus systems inherently gain the benefits of the security of the underlying consensus system. Though the "string bling" system may have only a handful of users interested in it, an attacker attempting to change the state of the consensus of what strings have what bling would need to attack the Bitcoin blockchain directly - a significantly harder problem. A merge-mined or independently mined string-bling implementation would probably never be secure against an attacker with a budget of even just a few thousand dollars; by parasitically using the Bitcoin blockchain the attacker's required budget swells to tens of millions.

I think it's obvious that side-chains have the same properties as parasitic consensus systems, except that they have a smaller footprint and need external storage.

This means that, for example, Mastercoin wouldn't lose anything if it were re-implemented in the form of a side-chain; it would just need its own block storage and incentive system.

It would "just" need that...

Yeah, I should cover side-chains... I've actually thought a fair bit about them before, and there are all kinds of ugly issues with data hiding and incentives.

Of course, at this rate maybe I should just rename the damn thing "Decentralized Consensus Systems: A survey"
413  Bitcoin / Bitcoin Discussion / Re: Momentum Proof-of-Work on: October 19, 2013, 12:07:00 AM
Yes it is very simple and elegant After the Fact... but I have posted bounties trying to find a better proof of work and spent weeks trying to find a way to make a fast-to-validate memory-hard problem. Then I had to find a way to make it 'scale' the difficulty smoothly. Lots of little things go into making something simple, elegant, and algorithmically secure.

Interesting idea, but for crypto-coins a proof-of-work scheme that isn't a random lottery - that is, where not every attempt at creating a valid PoW has an equal chance of succeeding - can be really problematic, because it means faster miners have an advantage. You give an example of a system where miners wouldn't want to add a transaction to the block they were mining because they'd have to start over. Such a system means whoever has the fastest implementation of the scheme finds the majority of the blocks, which really rewards people with highly-tuned custom hardware implementations - exactly what you are trying to avoid.

I'm also extremely skeptical of the idea that you've actually created an ASIC-resistant scheme. You mention parallelism in your paper as a possible problem, but brush it off, assuming that a hash table would be the optimal implementation and that lock contention and "atomic operations" would prevent a highly parallel implementation; I'm very, very skeptical that you're correct.

Fundamentally your design has two basic primitives: the generator function G(m, k) = H(m + k) producing candidate digests, and the content-addressable memory that stores candidate digests and lets matches be searched for. A solution is defined as successful if any a and b are found such that G(m, a) & (2^d - 1) == G(m, b) & (2^d - 1) for some difficulty d, a small positive integer. (A minor correction from your paper: you forgot to include the masking operation that makes finding a and b possible at all.)
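For reference, the naive implementation of that search - the hash-table one your paper assumes is optimal - fits in a few lines (a Python sketch; the 64-bit truncation and nonce encoding are arbitrary choices of mine):

Code:
import hashlib

def G(m, k):
    # candidate digest H(m + k), truncated to 64 bits for this sketch
    h = hashlib.sha256(m + k.to_bytes(8, 'little')).digest()
    return int.from_bytes(h[:8], 'little')

def find_collision(m, d):
    mask = (1 << d) - 1    # the masking step noted above
    seen = {}              # masked digest -> the nonce that produced it
    k = 0
    while True:
        h = G(m, k) & mask
        if h in seen:
            return seen[h], k  # (a, b) colliding on the low d bits
        seen[h] = k
        k += 1

a, b = find_collision(b'block header', 24)
assert G(b'block header', a) & 0xffffff == G(b'block header', b) & 0xffffff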

As you hint, an ideal implementation will run multiple generators in parallel - the problem is that an optimal implementation of the content-addressable memory is very probably not a simple hash table. Here we have a situation with really weak demands on the CAM: it's OK if it doesn't always find a match, it's OK if there is no global synchronization, it's OK if it sometimes returns a false positive, and worst of all, it doesn't even have to actually store all the data! Dedicated silicon implementations of CAMs are already really common in things like network routers, and they have vastly better performance than lookup tables built from commodity memory and CPUs. They also use a plethora of complex and application-specific tricks to get the performance they need, even going as far as to make use of analog computation and probabilistic retrieval.

For instance, off the top of my head, a very fast design with very good utilization of silicon would be a custom ASIC consisting of a number of generator units feeding their results into a probabilistic CAM. A nice trick we can take advantage of is that for each candidate digest the generator function produces, we only actually need to store its index to recreate it. That is, if G(m, i) = d_i, we only need to store i, and we can cheat further by storing only some of the bits of i, doing a quick brute-force search after the fact to figure out which i was actually the match.

Hardware CAMs are usually implemented as a series of cells, with some number of search lines connected to each cell in parallel. Each cell matches the search lines against its own contents, asserting a series of read-out lines if the contents match. Writing a value to a cell works like regular memory. Matches are done very quickly, in a single cycle, but at the cost of high power consumption as the memory grows larger. In our case we want a match to be done on the value of G(m, i), and we want the CAM cell to return the index i.

Let's suppose the difficulty is such that we're finding 64-bit birthday collisions, i.e. ~2^32 items for a 50% chance of a collision. This means our index values will need to be about 32 bits, as we have to search from 0 to 2^32 for a 50% chance of finding a collision. Naively we'd have to store 64 bits per value for the digest plus 32 bits for the index, or 96 bits * 2^32 = 48GiB. But we can do a lot better. First, suppose we store only 24 bits of digest in each cell: by the time we've got 2^32 items we'll see on average 2^8 false positives per match - pretty manageable with some parallelism to test them all. Next we can split our gigantic CAM array into multiple independent arrays, say 256 of them. Now, for the same 2^8 false positives, we only need to store 16 bits of digest per cell - pretty good! As for the indexes, we can cut them down too: let's drop the lowest 8 bits, and just brute-force the dropped bits when testing a candidate, at a cost of 2^7 operations on average. Sure, that wasn't free, but we're now down to just 48 bits per cell, or 24GiB total.

Now I'm not giving you exact numbers, but that's not the point: I'm showing you how the optimum turns out to be a crazy-looking, hyper-optimized set of custom ASICs. Heck, a quick read of the literature on CAM design suggests you'd probably go even further in this case, doing really crazy stuff like using multi-level, quasi-analog DRAM technology in the CAM cells, making convoluted trade-offs between actually storing indexes or just leaving them implied by which bank the results are stored in, and so on.

I really suspect that rather than creating an ASIC-hard design where commodity hardware is as profitable as anything else, you've actually done the exact opposite, and created a PoW function whose optimal implementation costs a tonne of money, involves custom ASICs, and has performance miles ahead of anything else. Even worse, with the non-lottery-random "momentum" aspect of the scheme, whoever implements this crazy custom hardware first will not only have the highest hashing rate, they'll also solve the PoW problems fastest, and hence get nearly all the blocks.

Finally, note that if G() were made non-parallelizable the idea might work, for instance by defining it as G(m, k) = H(m + k) XOR G(m, k-1), but then you wouldn't be able to do verification cheaply and might as well just use scrypt.

tl;dr: Cool idea, but the end result is probably the exact opposite of what you want.
414  Bitcoin / Development & Technical Discussion / Re: Time for another testnet reset? on: October 18, 2013, 05:45:08 PM
I just wasted 20 minutes of my time reading through this thread without noticing how old the posts are.

Seriously though, be nice with the testnet and don't sell testnet coins  Angry

I think I'm entitled to some shameless self-promotion. Get your free test coins from:

testnet.mojocoin.com

BTW thank you for your service, it's really useful!
415  Bitcoin / Project Development / Re: [ANN] Bitmsg - A Proof-of-Sacrifice distributed messaging layer over Bitcoin on: October 18, 2013, 05:04:49 PM
Yeah, long-term data storage on the blockchain will never be a day-to-day backup solution, but there's a whole class of applications where the cost is worth it.

Sensitive data that should be preserved at all costs?

Yup.

Heck, I uploaded the very first computer program I ever wrote to the blockchain a few months ago.

There's a whole range of applications that could be built on top of a data-neutral messaging layer.  I've wondered about how a peer-to-peer marketplace for selling goods and services could be done.  One guy can (semi-)anonymously broadcast an "I've got a TV for sale for 4 BTC. Send me a message, here's my RSA public key."-style message. An application can take care of sorting through and searching available offers, building responses, reputation, etc.  Reputation can be done with more proof-of-sacrifice sends.  All really fascinating scenarios.

Yup. You'll probably find that at some point the fees per KB simply become too expensive, but that's just a practical consideration - figure out what parts of the system are best done on the blockchain, and what parts on a secondary system. For instance when someone has committed fraud and you want to destroy their reputation, the guaranteed wide audience of the Bitcoin blockchain may be worth the higher cost.

Quote
Nope, I just mean the txid:n pair, also known as an OutPoint. (from the COutPoint class in the reference client) That you don't need a full-weight client for, as you already know.

Anyway, if you're using the txid as an IV I think you've already met my criteria just fine!
Ah, right. I could have used the input_n in the hash, but the tx hash should be enough right?

Re-using IVs is often very bad and leads to plaintext compromise. Having said that, here we're using encryption for censorship resistance and steganography, not privacy, so the odd IV re-use isn't a disaster. Still, you should avoid it, and including the input # helps. Come to think of it, including the nSequence of the first txin and the nLockTime of the whole transaction would be a good idea too, as it's been proposed to always set nLockTime to the current block height to discourage fee-sniping, which would really reduce the chances of accidental IV re-use. Point being, doing that may be totally normal in the future, so a transaction that does it won't stick out, and in the meantime including those fields in the calculation of the IV does no harm.

Well, then you don't end up with any more data space than regular pay-to-address.  Are you suggesting that pay-to-address/scripthash is better at disguising messages than a 1-of-M multisig tx?  If you use the 0x04 prefix on the public keys in the multisig transaction, you wouldn't be able to tell if they're real keys or message data.

Again, the point isn't to be efficient, it's to allow the creation of "stealth" data-encoding transactions whose existence can't be proved unless you have the secret key required to decode them.

It would be interesting to see what this could actually reveal.  Anyone who's really interested in protecting where the message comes from could connect to tor and broadcast through blockchain.info/pushtx. 

I've got Tor support in my dust-b-gone tool that you might find instructive.
416  Bitcoin / Project Development / Re: [ANN] Bitmsg - A Proof-of-Sacrifice distributed messaging layer over Bitcoin on: October 18, 2013, 03:04:40 PM
Another thought: make Bitmsg pay at least slightly over the 0.0001BTC/KB minimum fee by default, so you beat out badly designed wallet software that doesn't let users set fees. You can go a bit further and base the default on the fee/KB such software effectively pays for a standard 1-txin, 2-txout transaction, thus beating out most of it in the fee competition.
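As a sketch (2013-era constants; the size estimate and the extra 1000 satoshi/KB bump are arbitrary choices of mine):

Code:
# A fixed-fee wallet pays 10000 satoshis (0.0001 BTC) no matter what, so
# on a typical ~226-byte 1-txin/2-txout transaction its effective rate
# is ~44000 satoshis/KB. Paying a bit more than that rate outbids it.
MIN_FEE = 10000                   # satoshis, the fixed minimum fee
STD_TX_BYTES = 148 + 2 * 34 + 10  # rough 1-txin, 2-txout transaction

def bitmsg_default_fee(tx_size_bytes):
    rate = MIN_FEE * 1000 // STD_TX_BYTES  # what such wallets really pay/KB
    return tx_size_bytes * (rate + 1000) // 1000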

Less relevant for Bitmsg really, as whether the txs get mined eventually doesn't actually matter too much, but for a small data storage tool this is quite useful, and the marginal cost over the absolute minimum fee isn't much, because you only need to beat a fee that's fixed in stone.
417  Alternate cryptocurrencies / Altcoin Discussion / Re: MasterCoin: New Protocol Layer Starting From “The Exodus Address” on: October 18, 2013, 09:04:06 AM
I'd use the word symbiotic rather than parasitic, since I believe we'll be good for bitcoin in the long run.
I had a similar thought as I read it. "Epiphytic" is more accurate in my view. See: http://en.wikipedia.org/wiki/Epiphyte  Mastercoin doesn't take nutrients from or otherwise 'feed' off the blockchain - it merely hangs thereon and takes nothing away therefrom. A true 'parasite' feeds off its host and takes essence therefrom.

But this cannot be a surprise. Both Peter and Meni jump at every chance to paint Mastercoin in a poor light - even when that means using terms which are not accurate.

I'm writing a paper on digital asset representation - the term "parasitic consensus system" refers to a whole class of systems, not just Mastercoin. Systems that I incidentally happen to think are the only sane and safe way to do decentralized consensus, at least if you're creating something new.

Anyway, if you guys can come up with a neutral term for these systems I'm open to suggestions. The term "embedded consensus system" is a possibility, although I don't know that it gets the meaning across.

Here's a (very) rough draft of that section of my paper:

A proof-of-work blockchain, such as the Bitcoin blockchain, can be made use of
parasitically by a secondary consensus system. Recall the two fundamental
proofs that a blockchain provides: consensus ordering/timestamping and
proof-of-publication. A Satoshi-style blockchain can be used as an ordered
message publication service - it is not possible to completely prevent the
publication of data without whitelisting censorship\footnote{By this we mean
    that a majority of miners use whitelists to determine if a transaction is
allowed to be mined.} although publication can be made expensive.\footnote{By
"expensive" we mean the fees required to get the transaction mined; see FIXME
for a discussion of data embedding techniques.} Thus for a given block height
$i$ we have a set of blocks $B = \{b_0 \ldots b_i\}$ containing messages
$M = \{m_0 \ldots m_j\}$. By applying a fixed set of rules to that set of
messages, multiple parties can independently arrive at the same state of the
system.

A concrete toy example is the "String Bling" system. Here we want to
determine which string $s$ is the most "blinged", where bling is gained by
provably destroying Bitcoins. In addition we want to make it possible for a
given string $s_1$ to have its bling stolen and transferred to another string,
$s_2$, by a hostile act.

FIXME: rewrite without OP_RETURN, making bling be abstract

To do this we make use of provably unspendable
outputs\cite{provably_unspendable}: a scriptPubKey whose first operation is
OP\_RETURN can never evaluate true, and thus is trivially proven to be
unspendable. We say that if a transaction with an output containing a
scriptPubKey of the following form exists in the Bitcoin blockchain, then $v$
BTC worth of bling has been assigned to string $s$:

OP\_RETURN "BLING" s

We also say that the bling associated with $s_1$ is transferred to $s_2$ if a
transaction of the following form exists in the blockchain, provided that the
total bling associated with $s_1$ at the transaction's height is no greater
than $v$:

OP\_RETURN "STEAL" s\_1 s\_2

The algorithm to evaluate the amount of bling associated with all blinged strings is simple:

from collections import defaultdict

S = defaultdict(int)  # string -> total bling assigned so far

# is_bling(), bling_string(), is_steal() and steal_strings() are assumed
# helpers that parse the two OP_RETURN forms given above.
for block in blocks:
    for tx in block:
        if is_bling(tx):
            S[bling_string(tx)] += tx.value

        elif is_steal(tx):
            s_1, s_2 = steal_strings(tx)
            # the steal is valid only if the value destroyed covers
            # all of s_1's bling
            if S[s_1] <= tx.value:
                S[s_2] += S[s_1]
                S[s_1] = 0

Note how there may be transactions in the blockchain that are invalid according
to the rules of the bling system - a transaction may have fields missing, or an
attempt to steal bling may destroy less value than the bling to be stolen. The
presence of such transactions is of no concern, however: it is the rules, not
the blockchain data to which the rules are applied, that determine the final
state of the system.

The Mastercoin system uses this principle. While not yet well developed, there
exists an agreed-upon set of rules that can derive, from data encoded in the
Bitcoin blockchain, a set of "Mastercoin" transactions and a final ledger
state.

Parasitic consensus systems inherently gain the benefits of the security of
the underlying consensus system. Though the "string bling" system may have
only a handful of users interested in it, an attacker attempting to change the
state of the consensus of what strings have what bling would need to attack
the Bitcoin blockchain directly - a significantly harder problem. A
merge-mined or independently mined string-bling implementation would probably
never be secure against an attacker with a budget of even just a few thousand
dollars; by parasitically using the Bitcoin blockchain the attacker's required
budget swells to hundreds of millions of dollars.
418  Bitcoin / Project Development / Re: [ANN] Bitmsg - A Proof-of-Sacrifice distributed messaging layer over Bitcoin on: October 18, 2013, 08:11:55 AM
Well, if messages are encrypted it would be hard for anyone contributing to the network to even know what they were transmitting.  As far as they know, they are only serving to help Bitcoin.  Long-term data storage is something I've thought about - it's really cool you know, you have to *pay* for storage. A 500kB file will cost you 5 BTC. How cool is that?  The space is expensive, and will only get more expensive, so it's unlikely people will bloat the chain with copies of their favorite mp3s.

Yeah, long-term data storage on the blockchain will never be a day-to-day backup solution, but there's a whole class of applications where the cost is worth it.

One interesting one I came up with is "pseudo-HD wallets": you use a master key, like a normal HD wallet, but instead of deriving a series of secondary keys with ECC magic you just encrypt a series of private keys and store them in the blockchain, tagged such that SPV clients can easily find them again with bloom filters. It's a nice way to take a "bag-of-keys" wallet and upgrade it to HD form without having to throw away the original keys, yet a complete backup is still just the master key. It's no less secure either, as compromise of the master key compromises an HD wallet completely anyway.
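A sketch of what one such record could look like (every name and field here is made up for illustration; AES via pycryptodome, and a 32-byte master key is assumed):

Code:
import hashlib
from Crypto.Cipher import AES

def backup_record(master_key, privkey32, n):
    # A fixed tag derived from the master key goes into the SPV client's
    # bloom filter, letting it find every record again later.
    tag = hashlib.sha256(b'keybackup:' + master_key).digest()[:8]
    iv = hashlib.sha256(tag + n.to_bytes(4, 'little')).digest()[:16]
    ct = AES.new(master_key, AES.MODE_CBC, iv).encrypt(privkey32)
    return tag + n.to_bytes(4, 'little') + ct  # embed this in a txout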

Quote from: retep
1) Make sure any data storage/messaging scheme is designed such that to prove a given message has a data payload you need to provide the entire message. That way, if you don't have the entire message, the authorities may actually be using the "data blacklist" as a way to freeze people's legitimate Bitcoin funds, and if you do have the entire message via the "data blacklist", the blacklist itself is serving as the means to distribute the data. I think the scheme I outlined of encryption using the first txin as an IV meets this criteria.

The problem with using the amount of the first input is knowing what the amount is in the message-receiving end.  Unless you're a full node, you can't connect inputs properly, which sucks if all you want to build is a lightweight message retrieval application.  The current implementation does use the first input's txid as the IV, however.

Nope, I just mean the txid:n pair, also known as an OutPoint. (from the COutPoint class in the reference client) That you don't need a full-weight client for, as you already know.

Anyway, if you're using the txid as an IV I think you've already met my criteria just fine!

How does using P2SH help?  Standard P2SH scripts only contain the hash of the script. There's no good place to store data.

As long as P2SH^2 isn't implemented, nothing checks that the hash is actually a hash; you can stuff whatever data you want into it. Here's an example tx doing just that: https://blockchain.info/tx/5143cf232576ae53e8991ca389334563f14ea7a7c507a3e081fbef2538c84f6e
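Concretely (a sketch; the opcode values are standard Bitcoin script):

Code:
# A P2SH scriptPubKey is just OP_HASH160 <20 bytes> OP_EQUAL. Nothing
# checks that the 20 bytes are really the hash of a script, so any 20
# bytes of data fit - at the cost of an unspendable output.
OP_HASH160, OP_EQUAL = 0xa9, 0x87

def fake_p2sh_scriptpubkey(data20):
    assert len(data20) == 20
    return bytes([OP_HASH160, 20]) + data20 + bytes([OP_EQUAL])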
419  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 03:52:48 AM
Of course all this stuff really assumes that blockchain space remains cheap enough that most transactions happen on the blockchain, in which case you have to wonder why anyone would care much about fees. On the other hand, if blockchain space is expensive, Alice's transaction patterns are going to be mainly receiving large chunks of Bitcoins and moving them to off-chain tx services periodically. That's easy to do with single-txin, single-txout transactions, which have no privacy issues at all and will naturally be supported by wallet software to save on fees.
You mean, no privacy issues other than that users are forced to hold their funds with a third party service who can monitor their transactions or lose or steal their bitcoins.

Off-chain Transactions - Bitcoin 2013 Conference - Peter Todd <- it's well understood how to prevent all those things.

Anyway, decentralized consensus theory has advanced to the point where we can probably build crypto-coin systems that allow much better tradeoffs between scalability, transaction volume, and security. But they'll require thousands of lines of code and years of testing before any of it gets implemented. They're also all systems that clearly expose the underlying "There Ain't No Such Thing As A Free Lunch" nature of the world; outsourcing your security to miners comes at a cost, and I mean it when I say these systems have tradeoffs. But so does Bitcoin - it's just not obvious yet, because Bitcoin is still in its infancy.

Anyway, for now we've got the Satoshi Bitcoin v1 system, and we have to live with its limitations.
420  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 12:27:11 AM
Sure, I understand that, but then you're back to wondering - how many keys should I use, and what values should I assign to them?
That's for the sender to figure out, based on whatever balance of privacy vs speed of transaction vs fees matches their own preferences.

The recipient doesn't need to know or care about that at all. If they are expecting a certain amount of funds to arrive they just keep watching their sequence of addresses for it to show up until it does. It shouldn't matter to them whether it arrives in a single transaction or 100 transactions.

Yes it does matter: spending the outputs of 100 transactions costs more in fees than spending the output of one.

At least, that's what it looks like at first glance; in reality it depends on how you spend your coins. Take the example of Alice, who receives a weekly 100BTC paycheck from Bob. She starts off with a single 100BTC txout, and pays her bills, buys lunch, gives her kids their allowance, etc. Each payment she makes leaves her wallet with a (100 - expenditures) txout, so she's averaging one input and two outputs per transaction. If her payments were all 0.5BTC, she would make ~200 transactions before that txout was finally used up. Had she instead received ten 10BTC txouts, her total transaction fees would have increased slightly, because a few more of her transactions would have needed multiple inputs as each txout ran out. All the same, her total tx fees would have gone up for the sake of, maybe, some privacy.

On the other hand Bob, who runs an online store, has to combine the payments of 200 customers to pay Alice. He has to pay transaction fees on the ~142 bytes required per txin, and the last thing he needs is for his customers to start paying him with even more txouts.

Having said that, even Bob still has a privacy problem, as it's likely that his payments to Alice and his other employees will end up linked together; Alice certainly has a problem. Overall they'd both benefit from cut-thru payments, both to reduce total transaction fees and to improve privacy. I think in most cases it'd be enough for Alice's wallet software to just give Bob a list of addresses and amount ranges she's interested in - I don't think Bob's customers' wallet software really needs to support anything beyond single-txout payments, although it wouldn't hurt.

Of course all this stuff really assumes that blockchain space remains cheap enough that most transactions happen on the blockchain, in which case you have to wonder why anyone would care much about fees. On the other hand, if blockchain space is expensive, Alice's transaction patterns are going to be mainly receiving large chunks of Bitcoins and moving them to off-chain tx services periodically. That's easy to do with single-txin, single-txout transactions, which have no privacy issues at all and will naturally be supported by wallet software to save on fees.