Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 [33] 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 »
641  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 08, 2013, 05:20:41 PM
Including a UTXO hash in the checkpoint makes for a much less secure model than not doing so.

With checkpoints the client only skips validating transactions; it still builds the UTXO set. You've still shown that a vast amount of hashing power was spent on the blocks leading up to the checkpoint, and validating that work, as well as the work after the checkpoint, is pretty easy. More importantly, if the checkpoint is invalid and the majority of hashing power isn't following it, you'll soon notice because your client doesn't see as many blocks as it should.

On the other hand, a UTXO hash means trusting the developers to compute the UTXO set properly for you. They could easily leave UTXOs out and you would never know until your node tried to spend one of them and wound up forked from the main network, or, if the majority of hashing power followed the bad checkpoint, until someone found out their coins were now unspendable by developer fiat. The point being that fraud on the part of the developers isn't detected automatically or immediately.

UTXO hashes will be acceptable once UTXO proofs are a requirement for a block, and you'll be able to reason about how much work has been done for any given UTXO proof; but as you know, we're a long way from finishing that.
642  Bitcoin / Development & Technical Discussion / Re: SIGHASH_WITHINPUTVALUE: Super-lightweight HW wallets and offline data on: July 04, 2013, 08:25:18 PM
Also, it is only needed for transfers to cold storage addresses. 

A client could have a checkbox to create the appropriate transaction.

On the other hand, we're probably going to make miners create proofs of transaction fees sometime in the future, either as part of the UTXO-proof work or as something separate, so I'm not sure a hacked-up mechanism like this is really worth it.

Users aren't going to want to have to check some checkbox due to some strange technical reason that they have no understanding of.
643  Bitcoin / Development & Technical Discussion / Re: SIGHASH_WITHINPUTVALUE: Super-lightweight HW wallets and offline data on: July 04, 2013, 07:31:03 PM
Oh, I understood that part, but I forgot about the part where they don't have to provide a large signature in order to spend it to themselves.  And I missed the part that they can mine all of them in a single transaction and produce only one extra output for themselves.  Okay, it's kind of cool.

Basically you're adding 1+1+8=10 bytes for the OP_TRUE txout and 36+4+1=41 bytes to spend it, or 51 extra bytes in total per transaction. That's not great - it's roughly a 25% overhead for typical transactions - but it's not infeasible either.
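For the record, the arithmetic checks out; here's a quick sanity check (the 200-byte "typical transaction" figure is my own rough assumption, not something stated above):

```python
# Sanity-checking the byte counts above. The sizes are the serialized
# field sizes discussed in this thread; the 200-byte "typical tx" figure
# is an illustrative assumption.
OP_TRUE_TXOUT = 8 + 1 + 1    # value + script-length byte + OP_TRUE = 10
SPEND_INPUT   = 36 + 4 + 1   # outpoint + nSequence + scriptSig length = 41
TYPICAL_TX    = 200          # assumed size of a simple 1-in/2-out tx

extra = OP_TRUE_TXOUT + SPEND_INPUT
print(extra)                           # 51
print(100 * extra / TYPICAL_TX)        # 25.5 - roughly the 25% figure
```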
644  Bitcoin / Development & Technical Discussion / Re: Zerocoin when? on: July 04, 2013, 01:58:47 PM
What I meant is that I use Bitcoin as an end user pretty regularly and didn't yet encounter a P2SH address. They certainly have been used, but are they being used regularly? And if someone is using them regularly, would they continue after the payment protocol is available? These things aren't really clear.

We're wandering off-topic, but what does P2SH have to do with the payment protocol?

The payment protocol lets the receiver of funds tell the sender what scriptPubKey they want the payment made to; the protocol doesn't specify what form the scriptPubKey will take, other than saying it would normally be a standard Bitcoin transaction script. A merchant could, for instance, choose to provide clients an OP_CHECKMULTISIG scriptPubKey directly, or even something completely non-standard. However, any sane client implementation is going to filter out anything other than the totally standard transaction formats - pubkey, pubkeyhash and P2SH - because anything else risks getting your funds stuck while the merchant tries to get the transaction mined.

Overall a P2SH txout adds just 24 bytes to a transaction by the time you spend it (21 bytes to encode the hash, 2 bytes of opcodes, and 1 byte to encode the length of the scriptPubKey when you eventually spend it). For the minimum-size tx format, bare compressed pubkeys, it takes 158 bytes round-trip to create and spend a txout (44 bytes for the txout, 114 bytes to spend it), so the overhead of P2SH is just 15%. Compared to the standard pay-to-pubkey-hash, P2SH is actually 1 byte smaller because it doesn't need the OP_DUP, and it's just as secure in terms of keeping pubkeys unknown against an ECDSA break.
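A quick check of those figures:

```python
# Checking the P2SH size figures above (all values in bytes).
hash_push  = 21   # one push opcode + 20-byte HASH160
opcodes    = 2    # OP_HASH160 and OP_EQUAL
redeem_len = 1    # length byte for the serialized script when spending
p2sh_extra = hash_push + opcodes + redeem_len   # 24

roundtrip = 44 + 114   # bare compressed pubkey: create + spend = 158
print(p2sh_extra)                           # 24
print(round(100 * p2sh_extra / roundtrip))  # 15
```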

There's a pretty good chance we'll eventually ban non-P2SH outputs entirely to prevent people from storing data in the UTXO set, possibly with Gregory Maxwell's P2SH^2 proposal. (Obviously prunable OP_RETURN outputs would be exempt, and possibly also some type of anyone-can-spend.) There are some niche use-cases where knowing the scriptPubKey prior to spending is important, mainly auditing, but with clever protocols you can make the scriptPubKey either deterministically generated, or put the required data in an OP_RETURN output, in almost every case. We can also add "tearable" data that is temporarily stored in the UTXO set and guaranteed to be relayed as part of a block, but is removed from the set after some defined time period. Interestingly, one way to implement the latter is with time-locked anyone-can-spend outputs with minimum output amounts: with a reasonably large output amount, after the time is up a miner will spend them and collect the output as a fee. Time-locked outputs can be done as a soft-fork adding a new instruction, and are very useful for Bitcoin sacrifices as well.
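[For the curious: the "soft-fork adding a new instruction" described here was later specified as OP_CHECKLOCKTIMEVERIFY (BIP 65), which post-dates this post. A time-locked anyone-can-spend script along those lines can be assembled by hand; the opcode values are from the final BIP, and the height-encoding helper is a simplified, positive-only sketch of CScriptNum.]

```python
# Sketch of a "tearable" time-locked anyone-can-spend scriptPubKey:
#   <height> OP_CHECKLOCKTIMEVERIFY OP_DROP OP_TRUE
# Opcode values from BIP 65; the encoder handles positive heights only.
OP_CLTV, OP_DROP, OP_TRUE = 0xb1, 0x75, 0x51

def tearable_script(height: int) -> bytes:
    """Anyone can spend this output once the chain passes `height`."""
    # Minimal little-endian CScriptNum: add a byte if the top bit is set.
    n = height.to_bytes((height.bit_length() + 8) // 8, "little")
    return bytes([len(n)]) + n + bytes([OP_CLTV, OP_DROP, OP_TRUE])

print(tearable_script(500_000).hex())   # 0320a107b17551
```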
645  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 11:31:34 AM
One catch is that unfortunately UTXO commitments are in themselves very dangerous in that they allow you to safely mine without actually validating the chain at all.

You still need the UTXO tree though.  Updating the tree requires that you process the entire block.

That's not true, unfortunately. You'll need to do some extra UTXO queries, strategically, to get the right information on the branches next to the transaction being changed, but with that information you have enough to calculate the next UTXO tree state without having any of the data. This is inherent to having a fast UTXO commitment system in the first place: part of making it cheap to provide those commitments is keeping the data changed for every new block at a minimum - exactly the opposite of the need to have miners prove they actually have the UTXO set at all.
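A toy illustration of the point (a simplified single-SHA256 tree, not Bitcoin's actual serialization): with just the sibling hashes along one branch you can compute the post-update root without holding any other leaf data.

```python
# With only the authentication path (sibling hashes) for one leaf, you
# can fold a changed leaf back up to a new root - no other leaves needed.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def root_from_branch(leaf: bytes, index: int, siblings: list) -> bytes:
    """Fold a leaf up to the root using its sibling hashes."""
    node = h(leaf)
    for sib in siblings:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node

# Build a 4-leaf tree the long way to check the short way agrees.
l = [h(x) for x in (b"a", b"b", b"c", b"d")]
n01, n23 = h(l[0] + l[1]), h(l[2] + l[3])
root = h(n01 + n23)

# Recompute the root for leaf 2 knowing only its siblings (l[3], n01):
assert root_from_branch(b"c", 2, [l[3], n01]) == root
```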

Ideally, there would be some way to merge multiple sub-blocks into a higher level block.

For example, someone publishes a merkle root containing only transactions which start with some prefix.  A combiner could take 16 of those and produce a higher-level merkle root.  The prefix for the combined root would be 4 bits shorter.

Effectively, you would have 16 sub-chains, where each sub-chain deals with transactions that start with a particular prefix.  Each of those chains could have sub-chains too.

The only way to make it work with Bitcoin would be to have a defined number of transactions per block.  You can only combine two merkle trees into 1 merkle tree if both children have the same number of transactions.

There would also need to be some kind of trust system for the combiners.  If you have to verify, then it defeats the purpose.

A "trusted-identity" would certify the combination (and get fees).  If proof is given that the claim is false, then that id is eliminated.

You're missing a key point: transactions can touch multiple parts of the UTXO set, so once the UTXO set is split into subsets, participants must validate (at minimum) pairs of those subsets.
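[A toy illustration of that point - the one-hex-digit sharding scheme and the data are invented for illustration: a single transaction routinely reads and writes UTXOs in different subsets, so no shard can be validated in isolation.]

```python
# Shard the UTXO set by the first hex digit of the txid, then list the
# shards one transaction touches. Scheme and txids purely illustrative.
def shards_touched(inputs, outputs, prefix_len=1):
    """Shards a validator must hold to fully check this transaction."""
    return ({txid[:prefix_len] for txid, _ in inputs} |
            {txid[:prefix_len] for txid, _ in outputs})

spends  = [("0a9c", 0)]   # coin being spent lives in shard '0'
creates = [("f31d", 0)]   # new coin's txid lands in shard 'f'
print(sorted(shards_touched(spends, creates)))   # ['0', 'f']
```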
646  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 05:16:22 AM
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Payment protocols.

Anyway, the whole idea of just assuming SPV nodes will be able to get the tx data they need from peers for free is really flawed. Running bloom filters against the blockchain data costs resources, and at some point people aren't going to be willing to do that for free.

I've actually got writing "tit-for-tat" peering on my todo list: i.e. silently fail to relay transactions to peers that are leeching off the network by not relaying enough new and valid transactions to you. Pure SPV clients, like Android clients, could pay for the resources they consume via micropayment channels, or at least proof-of-work. It'd also prevent attackers from DoSing the network by making large numbers of connections to large numbers of nodes to fill their incoming peer slots; right now you need very little in the way of resources to pull off that attack.
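[The policy could look something like this - the thresholds and accounting here are invented for illustration; nothing like this exists in the reference client.]

```python
# Tit-for-tat relay sketch: track how many new, valid transactions each
# peer gives us, and stop relaying to peers whose ratio marks them as
# leeches. Grace period and min_ratio are arbitrary illustrative values.
class PeerLedger:
    def __init__(self, min_ratio=0.1):
        self.min_ratio = min_ratio
        self.received = {}   # peer -> valid txs they gave us
        self.sent = {}       # peer -> txs we relayed to them

    def record_received(self, peer):
        self.received[peer] = self.received.get(peer, 0) + 1

    def should_relay(self, peer):
        sent = self.sent.get(peer, 0)
        if sent < 10:                       # grace period for new peers
            self.sent[peer] = sent + 1
            return True
        if self.received.get(peer, 0) / sent >= self.min_ratio:
            self.sent[peer] = sent + 1
            return True
        return False                        # silently drop: peer leeches

ledger = PeerLedger()
for _ in range(10):
    ledger.should_relay("leech")            # uses up the grace period
print(ledger.should_relay("leech"))         # False: gave us nothing back
ledger.record_received("good")
print(ledger.should_relay("good"))          # True
```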
Ways to reduce the load on full nodes, and to pay for their efforts, seem like good ideas.  But do you expect some SPV nodes not to be able to connect to any full nodes at all at some point in the future?  If there's a market for the service, then surely they will always be able to find someone willing to provide it, especially if their security depends on it.

Among other things, there is the problem that without a way to prove that a txout doesn't exist, the network operator can prevent an SPV node from ever knowing it has been paid, and there is nothing the SPV node can do about it.
That sounds a bit obnoxious, sure, but is it really that big a problem?

Yes: it allows someone to claim they have funds that they have since spent.

Also look at it in the broader sense: if you do have UTXO proofs, an SPV node can pay for a full-node connection and SPV-related services, either with real funds or an anti-DoS proof-of-work, and be sure that the node is being honest and that it is getting accurate data, with nothing more than a source of block-header info. (Relaying block headers between SPV nodes is something I'm also planning to implement.)
647  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 04:30:10 AM
Okay, so the network operator could mislead a node onto his invalid chain by handing it fake fraud challenges.  Can't he do this regardless, by simply refusing to relay valid block headers?

Among other things, there is the problem that without a way to prove that a txout doesn't exist, the network operator can prevent an SPV node from ever knowing it has been paid, and there is nothing the SPV node can do about it.
648  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 04:26:48 AM
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Payment protocols.

Anyway, the whole idea of just assuming SPV nodes will be able to get the tx data they need from peers for free is really flawed. Running bloom filters against the blockchain data costs resources, and at some point people aren't going to be willing to do that for free.

I've actually got writing "tit-for-tat" peering on my todo list: i.e. silently fail to relay transactions to peers that are leeching off the network by not relaying enough new and valid transactions to you. Pure SPV clients, like Android clients, could pay for the resources they consume via micropayment channels, or at least proof-of-work. It'd also prevent attackers from DoSing the network by making large numbers of connections to large numbers of nodes to fill their incoming peer slots; right now you need very little in the way of resources to pull off that attack.

I addressed this as well in that thread: https://bitcointalk.org/index.php?topic=131493.msg1407971#msg1407971.  The Merkle tree of transactions is used to authenticate the maximum fee reward calculation in each block.  It didn't seem to require utxo set commitments.

That's a good point; a changed merkle tree can achieve that too. However, maaku is quite correct that network-operator attacks are trivial without UTXO set commitments, whereas with them the worst a network operator can do is prevent you from getting a fraud-proof message; creating fake confirmations is extremely expensive.
649  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 03:03:10 AM
For fraud proofs, I think d'aniel raised that before, I'm afraid I don't remember which proofs require the commitments. Double spends don't. At any rate, whilst those proofs would indeed be useful they weren't the rationale given for "ultimate blockchain compression" originally. A full design doc for different kinds of fraud proofs would be useful.
Actually, none of the fraud proofs/challenges I mentioned in that thread relied on utxo set commitments.  Transactions with nonexistent txins would benefit from them, since then there could be concise proofs of nonexistent txins, but the way I described it, a peer would issue a challenge to find a valid Merkle branch to an allegedly nonexistent txin from some other peer.  If at least one peer is honest (which Bitcoin generally assumes anyway), then they will find the branch if the peer turned out to be lying, and can ignore that peer going forward.

Yeah, you can go very far without UTXO set commitments, but without them your scenario only works if you assume your peers are full nodes. If you are an SPV node with SPV peers - a completely valid scenario that we will need in the future, and one that is useful with payment protocols - you're stuck and can't do anything with the fraud proofs.

The other issue is that only UTXO set commitments can prove inflation fraud without a copy of the blockchain, i.e. miners deciding to change the subsidy and fee rules and create coins out of thin air.
650  Bitcoin / Development & Technical Discussion / Re: Zerocoin: Anonymous Distributed E-Cash from Bitcoin on: July 04, 2013, 02:14:51 AM
There aren't any compelling reasons to use soft forks, IMHO. It is at best controversial.

Lol.

For the newbies reading this: basically Mike is saying there isn't any compelling reason for backwards compatibility, and that every change should require every single Bitcoin node to upgrade all at once, even when doing so in a backwards-compatible way is trivial. Needless to say, it's not an opinion shared by many... For reference, here's Gavin's "Bitcoin Rule Update Process" guidelines: https://gist.github.com/gavinandresen/2355445 which were quite successfully used to implement BIP 34.
651  Bitcoin / Development & Technical Discussion / Re: Blockchain Compression on: July 04, 2013, 01:51:52 AM
The final issue I have is that you've raised a fair bit of money to work on it, but I didn't see where you got consensus that it should be merged into existing codebases. Perhaps I missed it, but I don't think Gavin said he  agreed with the necessary protocol changes. He may well have accepted your design in some thread I didn't see though.

My conversations with Gavin in the past were that UTXO commitments and UTXO fraud proofs are a necessary precondition to raising the blocksize, because without them you have no way of knowing whether miners are committing inflation fraud and no way of proving it to others. Other core devs, like Gregory Maxwell, share that opinion. If anyone succeeds in creating a robust implementation, doing a PR campaign to educate people about the risks of not making it part of a blocksize increase is something I would be more than happy to be involved with, and I'm sure you realize it'll be an even easier message to get across than the message that limiting the blocksize is a good thing.

Who doesn't want auditing, fraud prevention and fresh home-made all-American apple pie?

One catch is that, unfortunately, UTXO commitments are in themselves very dangerous, in that they allow you to safely mine without actually validating the chain at all. What's worse, there is a strong incentive to build the code to do so, because that capability can be turned into a way to do distributed mining, AKA P2Pool, where every participant has low bandwidth requirements even if the overall blockchain bandwidth requirement exceeds what the participants can keep up with. If miners start doing this, perhaps because mining has become a regulated activity and people want to mine behind Tor, it'll reduce the threshold for a 51% attack by whatever percentage of hashing power is mining without validation; we're going to have to require miners to prove they actually possess the UTXO set as part of the UTXO-commitment system. That said, I'm pretty sure those possession proofs can be done in a way where participants maintain only part of the UTXO set, and thus handle only part of the P2P relay bandwidth, allowing low-bandwidth true mining plus larger blocksizes without decreasing the 51% attack threshold, helping solve the censorship problem and thus keeping Bitcoin decentralized. The devil is in the details, but at worst it can be done as a merge-mined alt-coin.

Of course that still doesn't solve the fundamental economic problem that there needs to be an incentive to mine in the first place, but an alt-coin with sane long-term economic incentives (like a minimum inflation rate) can always emerge from Bitcoin's ashes after Bitcoin is destroyed by 51% attacks, and such an alt-coin can be bootstrapped by having participants prove their possession or sacrifice of Bitcoin UTXOs to create coins in the "Bitcoin 2" alt-coin, providing a reasonable migration path and avoiding arguments about early adopters. Similarly, adding proof-of-stake to the proof-of-work can be done that way. (Note that proof-of-stake may be fundamentally incompatible with SPV nodes; right now that's an open research question.) None of this is an issue in the short term anyway, as the subsidy inflation won't hit 1% until around 2032 at the very earliest - even later than that in real terms, depending on how many coins turn out to be lost. Applications like fidelity bonds and merge-mined alt-coins that subsidize mining will help extend the economic incentives to mine longer as well.

Yes, I'm less worried about the blocksize question these days because I'm fairly confident a decentralized cryptocurrency can exist in the future, but I don't know, and don't particularly care, whether Bitcoin ends up being that currency. I'll also point out that if Bitcoin isn't that currency, the alt-coin that is can be created in a way that destroys Bitcoin itself, for instance by making the PoW not only compatible with Bitcoin, but requiring miners to create invalid - or better yet, valid but useless - Bitcoin blocks, so that increased adoption automatically damages Bitcoin. For instance, a useless block can be empty, or just filled with low- or zero-fee transactions for which the miner can prove they possess the private keys; blocking non-empty blocks would require nodes not only to synchronize their mempools but also to adopt a uniform algorithm for what transactions they must include in a block for the block to be relayed, and any deviation there can be exploited to fork Bitcoin. (Another argument for requiring miners to prove UTXO set possession.) Also note how a 51% attack by Bitcoin miners on the alt-coin becomes an x% attack on Bitcoin. Sure, you can mess with the Bitcoin rules to respond, but it becomes a nasty cat-and-mouse game...
652  Bitcoin / Development & Technical Discussion / Re: Proposal: We should vote on the blocksize limit with proof-of-stake voting on: June 29, 2013, 05:32:00 PM
What? He saved the network? That's the best joke I've heard for days.. Smiley

First of all, the DoS attack was not on the network, but on a buggy official client, which I don't even use myself, so I couldn't care less.
Moreover, yesterday on IRC I even proposed a simple improvement that would solve this kind of problem once and for all (fetching the data length along with the hash), but nobody cared, including the guy who, among others, was extremely impolite to me, so I have simply adapted myself to his standards.

Now you get it?

Ah, that's who you are. Yeah /ignore


jdillon: I'll do a write-up in a week or so if you are interested, but waiting a bit longer isn't a bad idea; the network as a whole is safe right now but there are still a lot of non-upgraded nodes out there.
653  Bitcoin / Development & Technical Discussion / Re: [REQUEST] Developing Bootable Bitcoin-QT on: June 28, 2013, 05:41:25 PM
How about hardware trojan and hardware keylogger?

Use hardware built prior to 2009
654  Bitcoin / Development & Technical Discussion / Re: Proposal: We should vote on the blocksize limit with proof-of-stake voting on: June 28, 2013, 05:18:57 PM
Bitcoin is hopeless if majority of miners are evil.

The design of Bitcoin only requires a majority of miners to be economically rational for Bitcoin to work correctly; good or evil has nothing to do with it. It's easy to see that if a majority of miners think it is economically advantageous to mine large blocks, they have every reason to do so, and every reason to do "evil" things like blocking votes against a blocksize increase. (After all, they're just "protecting" Bitcoin from those who want to "hold it back".)


One more comment, on lifting the max block size, which I myself was bitching about.
Today I am tending to say that I was wrong - it won't matter.
I see now that most mining pools keep the limit at 250KB, and even if you remove the hardcoded 1MB from the client, they will still keep their soft limits at whatever level they want - and it will be rather lower than higher.
Anyone who decides to mine bigger blocks is betting against himself, because bigger blocks need more time to propagate over the net, thus naturally having a bigger chance of getting orphaned. Measure the difference between propagating a 250KB vs a 25MB block and you will not think twice about which one your mining pool prefers.

This is a brilliantly designed "ecosystem" that is regulating itself - therefore, seeing it at work, I am against any additional regulations.

Unfortunately you are quite wrong there. Pools can easily peer with each other over dedicated connections on fast servers to ensure that blocks propagate quickly to other pools, and we can expect more of this in the future; the incentives to produce small blocks due to propagation delays and orphans are greatly diminished by peering, because your block will reliably get to the majority of hashing power fast - the "little guys" with the most decentralized hashing power be damned.

FWIW, the real reason pools have mostly switched to a 250KB limit seems to be just a change in the block-creation code that makes the 250KB default a hard limit, rather than the previous behavior where the hard limit was 500KB and it took transactions with increasingly higher fees to fill up that space. If anything, it just shows how little pool operators care about transaction fees right now, in favor of the still very high inflation subsidy.

Interestingly, the recent DoS attack against Bitcoin - caused by Bitcoin allowing messages up to 32MiB in size with no anti-DoS limits - gave me a unique opportunity to observe the speed at which extremely large messages propagate across the network, not unlike extremely large blocks. While successive waves of the attack hit my Amazon EC2-hosted nodes very quickly - within seconds - due to their high bandwidth, it took multiple minutes for those same messages to even begin to appear at less well-connected nodes, such as the one at my apartment, and especially any node connected via Tor. It would have been nice to actually run some experiments, but unfortunately I was too busy trying to stop the attack, and the patch I wrote to fix the issue had the useful side-effect of "inoculating" even unpatched nodes against the attack. Oh well.
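[The orphan-risk intuition in the quoted post can be roughed out with a Poisson model: a block risks being orphaned if a competitor is found during its propagation delay, and blocks arrive with a 600-second mean. The 1 Mbit/s effective-bandwidth figure below is an illustrative assumption, not a measurement.]

```python
# First-order orphan-risk model: block finding is Poisson with a 600 s
# mean interval, so P(orphan) ~= 1 - exp(-delay / 600), where delay is
# the time the block takes to reach the rest of the hashing power.
import math

def orphan_probability(block_bytes, net_bandwidth_bps=1_000_000):
    delay = block_bytes * 8 / net_bandwidth_bps   # seconds to propagate
    return 1 - math.exp(-delay / 600)

for size in (250_000, 25_000_000):
    print(f"{size // 1000} kB: {orphan_probability(size):.1%}")
# 250 kB:   0.3%
# 25000 kB: 28.3%
```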
655  Bitcoin / Development & Technical Discussion / Re: Proposal: We should vote on the blocksize limit with proof-of-stake voting on: June 28, 2013, 01:52:49 PM
It's almost certain that Satoshi is the richest Bitcoin holder, as well as other early adopters.  By definition these "rich" people are likely to be quite insightful and ideologically pure: they recognized Bitcoin's value and persevered when no one else was interested and the product was going through difficult times.  Personally, I'd trust him and other early adopters to make wiser decisions than any persons (or hashing power) showing up since this year's media explosion.

One of the more interesting aspects of John Dillon's proposal is that it could give many of those large Bitcoin holders a reason to move their coins, so that the txouts are fresher than one year old and can participate fully in the vote; seeing some really early coins vote would be fascinating.
656  Bitcoin / Bitcoin Discussion / Re: WARNING! Bitcoin will soon block small transaction outputs on: June 25, 2013, 06:41:49 PM
Mining of transactions isn't a problem, and EMC continues to mine dust outputs.  However, I keep a reference client to make sure things are working properly, and I can no longer use sendmany to send small outputs without modifying the reference client.  That's my issue/beef with this.  The client is artificially being forced to prevent sending, disregarding whether or not a miner will include it in a block.

The solution is to let the miners decide, not force the client into a specific course of action, thereby taking the decision out of the hands of the miners.

Unfortunately, if you relay a transaction that is unlikely to get mined, you make the whole Bitcoin network vulnerable to DoS attacks by flooding it with transactions; essentially you are using network bandwidth without paying for it. Gavin wants to make the dust limit adapt to what miners actually mine, but doing so won't be easy, and P2P-layer changes are always risky.
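[For reference, the rule being argued about, as implemented in the 0.8.2 reference client - the constants here are reconstructed from memory, so treat them as approximate: an output is "dust" if spending it would eat more than a third of its value in relay fees.]

```python
# Dust rule sketch: an output is dust if its value is less than 3x the
# relay fee to carry the output plus the input that later spends it.
# Sizes and the 10,000 sat/kB relay fee are approximate 0.8.2 values.
def is_dust(output_value_satoshis, output_size=34, input_size=148,
            min_relay_fee_per_kb=10_000):
    spend_cost = (output_size + input_size) * min_relay_fee_per_kb / 1000
    return output_value_satoshis < 3 * spend_cost

print(is_dust(5459))   # True  - just below the 5460-satoshi threshold
print(is_dust(5460))   # False
```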

As always, if you feel strongly that you want your pool to mine such transactions and want to let people create them, you can always advertise a node to submit such tx's to directly. In fact I think it's enough to just connect your pool's nodes to 173.242.112.53, which relays any transaction indiscriminately regardless of fee or even if it passes the IsStandard() test.
657  Bitcoin / Bitcoin Discussion / Re: WARNING! Bitcoin will soon block small transaction outputs on: June 25, 2013, 05:54:22 PM
Yeah, this change is causing problems for me as well on EMC.  I definitely do not support it at this point.

How is this a problem for you? EMC is a mining pool and can mine whatever transactions they want.
658  Bitcoin / Development & Technical Discussion / Re: A short introduction to TPMs on: June 20, 2013, 02:49:41 AM
I've been reviewing mobile phone security.  It appears that we might be able to implement a scheme like this on the Samsung KNOX architecture.  Though I'm not sure if it does general remote attestation, it does appear to have all the other requirements.

Does anyone here know if KNOX supports remote anonymous attestation?

(It must be able to do some sort of attestation to support the features it claims, although it looks like the phone has to be configured by a company's/organization's central IT department first, and then it can attest to their computers (only theirs?) when necessary.)

The problem with remote attestation implementations is that they almost always have no signed root of trust; basically that means you can buy a device, securely set it up, and verify the attestation remotely yourself, but anyone else doing so is trusting that you set it up correctly in the first place.

However this is what Samsung says about KNOX:

Quote
Samsung KNOX technology uses a Secure Boot protocol that requires the device boot loader, kernel, and system software to be cryptographically signed by a key whose root of trust is verified by the hardware. Commercially sold Samsung devices will have Samsung-issued root certificates.

So if the hardware is in fact secure, in principle it will do what we need for off-chain/instant-confirmation transactions. But Samsung is delaying the launch of KNOX until "later this year" and, as far as I know, they haven't released a developer API. It's quite possible that it won't be possible for developers to just download the API and develop KNOX apps without signing NDAs and getting approval from Samsung for every app.
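[The chained root of trust the quote describes is simple to sketch. Toy code: real secure boot uses asymmetric signatures verified against a hardware-held public key; an HMAC with a stand-in key is used here purely to keep the sketch self-contained.]

```python
# Toy secure-boot chain: each stage carries a tag computed from a key
# "burned into hardware"; boot is refused if any stage fails to verify.
import hashlib, hmac

ROOT_KEY = b"stand-in-for-fused-hardware-key"

def sign(stage: bytes) -> bytes:
    return hmac.new(ROOT_KEY, stage, hashlib.sha256).digest()

def verify_boot_chain(stages):
    """Return True only if every (blob, tag) pair checks out."""
    return all(hmac.compare_digest(tag, sign(blob)) for blob, tag in stages)

chain = [(b"bootloader-v1", sign(b"bootloader-v1")),
         (b"kernel-3.4", sign(b"kernel-3.4"))]
print(verify_boot_chain(chain))               # True
chain[1] = (b"kernel-evil", chain[1][1])      # tampered kernel image
print(verify_boot_chain(chain))               # False
```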
659  Bitcoin / Development & Technical Discussion / Re: Please some one fix this https://en.bitcoin.it/wiki/Testnet on: June 19, 2013, 04:52:36 PM
Go right ahead yourself. This isn't Wikipedia; no one will revert your edit for reasonably adding a new service, especially for testnet.
660  Bitcoin / Development & Technical Discussion / Re: Please some one fix this https://en.bitcoin.it/wiki/Testnet on: June 19, 2013, 01:57:43 AM
Nice block explorer, just sent you a tip.

You can edit it yourself; the wiki simply requires a small Bitcoin payment as an anti-spam measure when you sign up.