Bitcoin Forum
  Show Posts
1  Bitcoin / Development & Technical Discussion / Re: Is there any full node implementation 100% compatible with the core client? on: January 18, 2015, 03:27:01 AM
Quote
One question with formal verification is what are you formally verifying?

Right. So maybe we can try to think about how to break bitcoin up into modular parts (eg. secp256k1, vm/script, encoding/decoding, merkle tree, create/verify block header, add block to tree, etc.). Then we can start with verifying the parts, and with verified parts you are much more likely to have a valid whole. Maybe at that point we can all band together and build an asic for it, then all run the asic instead of building software on the myriad OS's (which practically precludes the possibility of formal verification, as you've noted). Then maybe we'll have a safe bitcoin?! It's extreme but I'll entertain it.
2  Bitcoin / Development & Technical Discussion / Re: Is there any full node implementation 100% compatible with the core client? on: January 14, 2015, 09:21:55 PM
Quote
BlueMatt wrote a pretty extensive agreement testing framework and discovered several corner cases that were not previously known. He's generally of the opinion that achieving a correct reimplementation is more or less intractable. His approach made progress, especially in unearthing weird behaviours, but doesn't result in great confidence of soundness; and doesn't show an implementation free of its own bugs not related to non-obvious behavior in Bitcoin Core.  Our strategy in Bitcoin Core as of late has been toward compartmentalizing and simplifying in order to make code reuse more reasonable; and also get things structured to be more agreeable to approaches that would possibly make formal analysis more realistic, but that's a long term effort.

Is there somewhere I can read about the kinds of corner cases that were discovered and how they were dealt with?

Also, what would it take to produce a formally verifiable implementation of bitcoin? Is such a thing even possible? Can we expect a PhD thesis to deliver on the matter in the next decade?
3  Bitcoin / Development & Technical Discussion / Re: Funding network security in the future on: November 03, 2014, 03:20:08 AM
Exactly. I think the proper analysis is to look at hashing vs cost. In the beginning, it was essentially linear. For the last couple of years, and probably the next 10 or so at least, it will be exponential, as better ASICs offer exponential speedups over CPUs. Eventually, as their capacity saturates, the curve will be linear again, based on the number of devices (rather than capacity per device) and, as you say, on electricity.

A linear situation is much more amenable to the volunteer. If bitcoin becomes important enough, for example, Apple and the rest will throw ASIC miners and EC verifiers into their products, and the network security will be funded by its usefulness in maintaining society all together.

A bit too romantic, maybe, but I'm an optimist ;)
4  Bitcoin / Development & Technical Discussion / Re: Funding network security in the future on: November 03, 2014, 01:18:33 AM
Quote
No one has seriously considered it because the concept is a non-starter.  Even if you could devise some way to enforce the rule on a per-device basis, there is no known and accepted way to stop a single entity from controlling multiple devices.  People would figure out a way to control multiple hashing devices, and in such case your idea would introduce only added complexity and possible attack vectors and, I think, nothing positive.

Hm, I think maybe you misunderstood my point (did you read the whole post?). I'm talking about ASICs reaching the limit set by the universe on hashes per second. No enforcement necessary; physics takes care of that. So I have not actually "proposed" anything that adds complexity. I'm simply wondering how the economics change when ASICs hit such a limit (assuming the limit exists - hello, physicists?) and become "cheap as CPUs", motivating EC verification to reach a similar limit and also become "cheap as CPUs", with both finally integrated into modern (being X decades into the future) personal devices. Sure, anyone can buy more devices, but the idea is that the speedup within a given device may saturate, so the advantage is only linear (rather than exponential) in cost. By that point tx fees may be tiny, or even nil, but it will be relatively cheap for an average user to participate, just like in bittorrent. So indeed, as Gavin keeps saying, the market may figure out how to pay for security, in this case by saturating ASIC capacity and making it feasible for the average user to once again play.
5  Bitcoin / Development & Technical Discussion / Re: Funding network security in the future on: November 03, 2014, 12:08:35 AM
Most of the conversation seems to have been about ensuring sufficient transaction fees are paid to miners. But what about the verifiers? Currently, all tx validation is done by volunteers. I think Satoshi initially intended for validators to double as miners, but in a world where the two are largely mutually distinct, how do we support the verifiers? And if we can't, isn't the network doomed anyway?

Related to this is something I have not seen considered: the upper limit on hash speed per device.

A naive calculation would go like this. Take the universe's maximum bit-operations/second (assuming it exists) as the speed of light in nanometers/second, or 3x10^17 (ie. 300 peta(nm/s)). Suppose a SHA256 takes 1000 bit operations. Then ASICs would top out at 0.3 PH/s, ie. 300 TH/s (from what I can tell, they are currently around 1-10 TH/s, so this would be on the order of a 10-100x speedup). I know very little about hardware, but given the incentives and the acceleration of knowledge, this may not be unreasonable within say the next 20 years, to pull a number out of my ass. All this assumes, of course, that there is an upper bound set by the cosmos on computations per second, which may or may not be reasonable, depending on your approach to modern physics. My estimate of SHA256 bit ops may be wildly off, and it obviously depends on the size of the input, but that doesn't matter - what matters is whether ASIC manufacturers reach the limit in the next couple (maybe even five?!) of decades. Suppose it's a graphene-based breakthrough, if you're hung up on transistors and the end of Moore's Law.
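To make the arithmetic concrete, here is the same back-of-envelope calculation in a few lines of Python. The 3x10^17 ceiling and the 1000 bit-ops per hash are the assumptions from above, not measured figures:

```python
# Back-of-envelope ceiling on hash rate, using the assumptions above.
MAX_BIT_OPS_PER_S = 3e17     # speed of light in nm/s, taken as a cosmic bit-op limit
BIT_OPS_PER_SHA256 = 1000    # rough guess; depends on input size

max_hash_rate = MAX_BIT_OPS_PER_S / BIT_OPS_PER_SHA256   # hashes per second
print(max_hash_rate / 1e12)  # 300.0 TH/s

current_th_s = 5.0           # mid-range of the 1-10 TH/s cited above
print(max_hash_rate / 1e12 / current_th_s)  # ~60x headroom under these assumptions
```

Swap in your own guess for the bit-ops per hash; the conclusion only changes by the same constant factor.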

(As an aside, note that it takes ~40ms to get from New York to Hong Kong at the speed of light. So unless we break that barrier, high-performance large-scale distributed systems are kind of screwed anyway. In other words, the speed of light is too slow for our needs ;) ).

Supposing ASICs do reach this limit (they will probably be the first devices in our corner of the cosmos to do so), then at some point afterward we will be seemingly back to the kind of thing Satoshi originally envisioned: a 300 TH/s ASIC as cheap as a modern CPU is today. One 300 TH/s ASIC, 1 vote. Of course there is still the centralization incentive, so let's assume we have solved that by transitioning to something like Hashimoto or another such hashing scheme that requires the entire blockchain to be available to the miner. The next step, driven by market demand from regular people mining again, will then be ASIC EC verification. Give it a couple decades or so to also become "cheap as a cpu".

Now we're in a situation again where full nodes validate and mine, just like the good ole days, supporting the system for the same reasons bittorrent is supported - it's useful, it provides for us, it sticks it to the man, whatever.

And they lived happily ever after?
6  Bitcoin / Development & Technical Discussion / Re: Proof of Storage to make distributed resource consumption costly. on: November 02, 2014, 10:35:23 PM
This seems like a great idea. Has there been any work towards implementing it in core or elsewhere?

One implication though is that honest services which provide monitoring and statistics of the network as a whole will become much more costly. Solutions might be public key authentication of such monitoring services or else for servers to accept "read only" connections (could be http-like or normal tcp socket that is never read from).
7  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 02, 2014, 03:10:26 AM
Quote
What you're describing is not a checkpoint then. A checkpoint forces the identity of the selected chain, regardless of whether it has the most work or not. So that would be the point of departure.

Indeed I see where the misunderstanding has been then. Perhaps check-point was the wrong term but it sure has a "check-point-like" feel to it. Glad we are more on the same page now.

Essentially what I'm trying to figure out is a mechanism for blockchain compression so that we can drop very old txs with minimal to no loss in security. Perhaps what I have proposed is not sufficient, I merely thought of it yesterday and thought we could explore something like it here. You're right, the complexity of such a protocol may simply not be worth it. But consider a new node in 50 years having to go back to genesis and start validating all those txs. Poor soul.

In a certain sense, it boils down to resetting the genesis block to something more recent (of course including the utxo hash) in a manner compatible with the network's consensus.  Do you suppose there is any secure way to do this? Would it even be worth it?

Quote
and no, newly generated coins cannot be spent for 100 blocks

Right. I forgot about this.
8  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 02, 2014, 02:23:21 AM
Quote
the tone you've taken here is irritating and is likely to cause experienced people to ignore your messages in the future if you continue with it.

My apologies. Did not intend to irritate. Just trying to understand this problem better. And many thanks to you for the time you're taking to go through it here - very much appreciated.

I hope you don't mind if I continue:

Quote
once the honest network is observed its like the node never saw the forgery. When you start talking about "check-pointing" based on that chain the situation changes and you get the attack

This is where I'm losing you. Yes there may be a checkpoint, but highest total difficulty still wins out. If the highest difficulty chain conflicts with a check-pointed one, surely the client should go with the higher difficulty one, as you say
Quote
chain will simply be unwound and replaced, so giving it that extra data is harmless, once the honest network is observed its like the node never saw the forgery
. The checkpoint is merely a mechanism to avoid verifying very old txs. But if I see a competing chain with higher difficulty, I ought to go with that one, whether it has a checkpoint or not.

Quote
I can mine 80 blocks in a row at height 100,000 trivially in a few seconds myself

Granted. But if you go back and do that, the chain you create will not have the difficulty of the canonical chain. So even if I see yours first, again, so long as I eventually see the real chain I will ignore yours. This check-pointing mechanism would have to start from the current head if it wants to stay valid. We could checkpoint block 100,000 by submitting today a tx with the hash of that block. If you try to create that checkpoint further back by forking around block 100,100, say, you will not be able to create a chain on par with the current difficulty. So despite your checkpoint, I will still ignore you, even if it means I have to hop on a chain that starts from Satoshi's genesis and has no checkpoints.

Is this not correct?
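The fork-choice rule I keep appealing to can be sketched in a few lines. This is a toy model (a chain is just a list of per-block difficulties; the names are mine, for illustration only):

```python
# Toy fork choice: highest cumulative difficulty wins, checkpoints or not.
def total_difficulty(chain):
    return sum(chain)

def best_chain(*chains):
    return max(chains, key=total_difficulty)

canonical = [100, 110, 120, 130]       # the real chain; difficulty keeps climbing
cheap_fork = [100, 1, 1, 1, 1, 1, 1]   # longer, but mined at trivial difficulty

assert best_chain(canonical, cheap_fork) is canonical
```

So a forger's long-but-cheap chain loses as soon as the node sees the honest one, checkpoint or no checkpoint.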
9  Bitcoin / Development & Technical Discussion / Re: A(nother) downside to Proof-of-Stake? on: November 01, 2014, 11:52:12 PM
Quote
Then you are introducing trust assumptions and new attack vectors. There are no universally trusted parties to provide checkpoints.

Yes, I'm introducing trust of large scale society itself, but not a particular institution. We already trust society implicitly with basically everything we do.

Quote
And if somebody has hacked Facebook or Twitter? Or put pressure on them from some USG agency? Or has compromised your access to them? Or maybe you just don't trust them because they routinely censor data and besides treat their users as data crops?

Exactly. It's not just facebook and twitter. It's them, and hacker news, and slashdot, and the various subreddits, and this forum, and wikipedia, and the google homepage, and the local grocery store's bulletin board, and the lcd display above the central square, and the website or other medium of everyone who cares to participate. You'd have to break all of them - reduce the world to the Truman Show. Good luck!

Granted, it may increase the potential for consensus failure, if the USG posts a different hash than Russia, or w/e. But at least it will be much clearer which agencies are vying for which consensus outcomes.

The idea has obviously not been fully fleshed out. But I think these kinds of things are worth thinking about, to the extent that internet-based consensus systems can be reflected off the real world. There's more to this than simply accelerating the heat death of the universe ;)
10  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 01, 2014, 11:40:40 PM
Quote
since with headers first it knows the amount of work on top of them and can perform the tests only probabilistically past a certain point.

Indeed, so contrary to andytoshi's assertion, PoW is a form of validity. If you haven't verified every single sig yourself, can you really be called a full node?

Quote
Great, now I create a simulated history which sets a bogus 'checkpoint' back early in the chain, but any _new_ nodes that attach to me I give this simulated history to before they know there is a better chain elsewhere, and they start enforcing that rule and are now forked off onto this bogus alternative chain;

This argument applies to any blockchain: if I can get the node to think the chain I give it is the right one before it even sees any other, I win. But here there is still a PoW element, so as soon as the node sees a chain with higher total difficulty, it will know the one I sent was bogus.

Quote
Worse, because the forking off can be arbitrarily far back it becomes exponentially cheaper to do so long as hash-power is becoming exponentially cheaper.

The mechanism I proposed requires a tx that is much more recent than the block it is actually checkpointing. And there is still the normal difficulty calculation. The canonical chain as it stands and the canonical chain with a checkpoint back at block 10,000 will have heads with identical difficulty. So you can start your fork wherever you want, but so long as I haven't been partitioned off the internet completely, this isn't a problem (and if I have been, it's a problem for bitcoin proper too).

Quote
The result is that you give miners a new power: instead of just being able to reorder the history, they could also create arbitrary inflation just by adding new utxo to their updates. (which, of course, would be in all of their short-term interests to do)

They can already do this by arbitrarily augmenting the coinbase reward. But they don't, because they know other nodes will drop the block and their efforts will go to waste. Similarly here. My proposal requires X of Y consecutive blocks to include the same checkpoint for it to be valid; set those to 70 and 80, say. So for a checkpoint to be valid, 70 of 80 blocks in a row must include it. It is very unlikely a single entity will control all that. If one can, bitcoin is already screwed. Since they can't, they have the same incentive to be honest about the utxo set at a checkpoint as they do about following the coinbase reward schedule.

The honest proposal is the schelling point. We can easily increase the X/Y ratio to be more secure. If one pool is mining 100 blocks in a row, we have much bigger problems on our hands...
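To put a number on "very unlikely": under the usual model where each block goes to a miner independently in proportion to hashrate, the chance that a single miner with hashrate share q wins at least 70 of 80 consecutive blocks is a binomial tail, and it is astronomically small even for a large pool:

```python
# Probability that a miner with hashrate share q wins >= k of n consecutive
# blocks, modeling each block as an independent Bernoulli(q) trial.
from math import comb

def p_at_least(k, n, q):
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

print(p_at_least(70, 80, 0.30))  # vanishingly small for a 30% pool
print(p_at_least(70, 80, 0.50))  # still negligible even at 50% hashrate
```

The independence assumption is the same one behind the usual confirmation-count analysis; selfish-mining style strategies would change the exact numbers but not the qualitative picture.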
11  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 01, 2014, 07:32:46 PM
Quote
PoW does not imply validity.

The point is, everyone trusts the genesis block, and all updates from there are made via pow. This is essentially a proposal for a consensus process to update the genesis block forward, and to attach a tree of utxos so that the txs between the new gen block and the old never need to be seen again (we can put them in a museum of bitcoin history, if you like, but spare the new full nodes!). I still have not heard a reasonable argument as to why this can't/won't work.
12  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 01, 2014, 06:46:05 PM
Quote
Time-related words like "old", "new" and "long after" only make sense if you have an existing blockchain with which to tell time. So the circularity is still there.

You do have an existing blockchain. The bitcoin one, up to now. And you can tell time in number of blocks. The genesis block was the first checkpoint. We could have the hashing power vote to checkpoint block 10,000, including the patricia tree hash of the utxo set up to that point. Then anything from before the checkpoint can be ignored, since the checkpoint can be considered part of the PoW consensus mechanism - if you trust the PoW generally to make ledger updates, then (conceivably) you can trust it to checkpoint.


Quote
because any transaction data that you "compress out" is transaction data that can't be validated by new nodes.
Exactly the point. Nodes already trust that blocks are valid because they have PoW on them. The checkpoint will have PoW too, and hence be trusted in the same way, relieving the new client from having to validate anything before the checkpoint. That's the point: it's like a new genesis block, plus a utxo set.

Quote
Quote
From what I understand, headers first doesn't affect the new full node sync time at all. Please correct me if I'm wrong

It does, for two reasons:
- By downloading the headers first, you can quickly (in low bandwidth) eliminate stales, orphans and bad chains.
- Once you have the headers, you can download full blocks out of order from multiple peers (currently blocks are downloaded sequentially from a single peer, which if you get a bad one, can be extremely slow).

You're right that the time taken to validate the correct chain is unaffected.
Good point. Parallel downloading is awesome. But the CPU still has to crunch ALL those EC verifies. ::sigh::
13  Bitcoin / Development & Technical Discussion / Re: A(nother) downside to Proof-of-Stake? on: November 01, 2014, 06:37:43 PM
andytoshi, what do you think about saving PoS by bouncing checkpoints/blockhashes off reality?

You want to know the top of the chain that everyone is using? Check facebook and twitter. Seeing something different in your client? Someone's trolling you ...

14  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: November 01, 2014, 03:40:57 PM
Far better to just get rid of them: Headers first makes most reasons obsolete.  The circus above doesn't really help, since it's using the chain itself, which of course checkpoints distort the selection of, so it's just circular.


Old checkpoints distort the selection of the chain, but there's no reason new checkpoints can't be done with network consensus on-chain, long after a previous checkpoint (thus it's more bootstrapping than circular). It's potentially a powerful new way to compress the history and bring new nodes up to speed fast, especially since you can include a utxo patricia tree hash in there too.

From what I understand, headers first doesn't affect the new full node sync time at all. Please correct me if I'm wrong
15  Bitcoin / Development & Technical Discussion / Re: Web on a sidechain? on: November 01, 2014, 02:14:58 AM
I think you may be looking for the ethereum project
16  Bitcoin / Development & Technical Discussion / Re: btcd: a bitcoind alternative written in Go on: November 01, 2014, 01:58:20 AM
Started playing with this today. Great work conformal team!
17  Bitcoin / Development & Technical Discussion / Re: Proposing new feature in Bitcoin protocol to reduce the number of thefts on: November 01, 2014, 01:43:06 AM
doesn't the bitcoin script already support timelocked outputs?
18  Bitcoin / Development & Technical Discussion / Re: A(nother) downside to Proof-of-Stake? on: November 01, 2014, 01:35:42 AM
As andytoshi points out, all of these analyses are complicated by the specific model assumptions and therefore the different systems are not necessarily directly comparable. However, it would be interesting to work towards a formal proof under a standard bitcoin model that shows PoW is the only way to achieve secure consensus.

I'm still not completely convinced this is true, though. So long as the protocol is entirely self-contained, perhaps, but supposing we can rely on "reflecting" the consensus off reality (through social networks and other media), I think we can actually solve this in the real world.

The main issue with PoS is so-called nothing at stake. Slasher can mitigate this effectively for its temporal range (Vitalik likes 3000 blocks), but is subject to long-range attacks. Long-range attacks can be mitigated by check-pointing, so the problem becomes one of secure check-pointing (say every 3000 blocks). One approach would be a proof-of-work based checkpointing mechanism in an otherwise fully proof-of-stake system. The PoS people probably won't like that, and it could be very dangerous (I literally just thought of it). The other approach is stake based check-pointing on chains of progressively higher security (where security is effectively measured by the size of the security deposits that must be put up to be eligible for signing/checkpointing). So the question can be reduced further to one of secure-checkpointing on the most secure chain (we are assuming here an interweb of chains, where lower security chains checkpoint on higher security chains). The highest security chain then checkpoints against the real world, by literally broadcasting hashes on facebook and twitter and so on.

It's a little ridiculous, but it has an interesting appeal in that it brings the consensus full circle by embedding it back in reality. Of course it already is semi-embedded in reality due to the nature of software development (clients are not developed according to a protocol; they are made by humans who do their best, but are not infallible).

Either way, it will be interesting to see this field play out!

As to your original question, hardware devices that do not export keys but simply allow inputs to be signed and spit those out can mostly mitigate your concern. Stay tuned!
19  Bitcoin / Development & Technical Discussion / Re: Basic Mining Question on: November 01, 2014, 01:19:42 AM
Right. It's also not the case that "51% of miners must confirm the transaction". Only one miner has to include your transaction in a block and "solve" that block (ie. find a nonce such that the hash of the block header, which includes the nonce and a fingerprint of the transactions in the block, is less than some target) to have it officially added to the blockchain. Of course, most people wait for a few blocks before accepting the transaction as truly confirmed, to protect against the possibility of some other blocks being released that replace the one with your transaction. It takes 51% to be able to guarantee control of which new blocks are added to the blockchain.
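A toy version of that "solve" loop, to make the mechanics concrete. Real block headers are 80-byte structures and the target is derived from the nBits field; this just illustrates the nonce search:

```python
import hashlib

def solve(header: bytes, target: int) -> int:
    """Grind nonces until double-SHA256(header || nonce) is below target."""
    nonce = 0
    while True:
        data = header + nonce.to_bytes(4, "little")
        h = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

# An easy target so the demo finds a nonce in a few tries (about 1 in 16
# hashes qualify); Bitcoin's real target is astronomically smaller.
nonce = solve(b"toy-header", 2**252)
```

Lowering the target is exactly what "difficulty" means: fewer of the 2^256 possible hash values qualify, so more grinding is needed on average.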
20  Bitcoin / Development & Technical Discussion / Re: What are checkpoints in bitcoin code? on: October 31, 2014, 08:31:26 PM
It seems pretty clear that checkpointing by the devs introduces an avenue for corruption/compromise of the chain. But checkpoints are certainly a reasonable approach to preventing forms of DoS and potentially even accelerating new nodes catching up with the chain.

What if we worked towards a check-pointing process based on proof-of-work? It could work something like this:

- A checkpoint is proposed on bitcointalk or reddit or wherever.
- Check-pointing becomes an on-chain transaction, where the block-to-be-checkpointed's hash is included in a tx. It could be earmarked by doing something silly like spending a millibit from the coinbase reward.
- A checkpoint is accepted as valid if it is included in some X of Y blocks in a row in this manner. X could be something like 70 and Y 80, say (to be totally arbitrary). Then the checkpointing process requires consensus from the whole network, but is not spoiled if a few miners/pools decide they want to be adversarial. If large pools are staunchly anti-checkpoint, then arguably that's the network's decision to make.

Once a checkpoint is recorded on the blockchain, the devs can add it to the source code, and the client can verify that it is indeed a valid checkpoint by finding the first block it is checkpointed in and verifying that the same hash exists in X of Y consecutive blocks. I believe this can be done without a fork (presuming coinbase rewards can be used as inputs within the same block; otherwise we need another way to earmark the special checkpoint txs, but I imagine this shouldn't be too difficult).
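The client-side check could look something like this sketch. The block representation and the x/y values are placeholders of mine, not a concrete format:

```python
# Sketch: a checkpoint hash is valid if some window of y consecutive blocks
# contains it in at least x of them. Each block is modeled as the set of
# checkpoint hashes earmarked in its transactions.
def checkpoint_valid(blocks, cp_hash, x=70, y=80):
    for start in range(len(blocks) - y + 1):
        window = blocks[start:start + y]
        if sum(cp_hash in b for b in window) >= x:
            return True
    return False

# 9 out of every 10 blocks carry the checkpoint, so 72 of any 80 do.
blocks = [{"cp"} if i % 10 else set() for i in range(100)]
assert checkpoint_valid(blocks, "cp")
assert not checkpoint_valid(blocks, "missing")
```

A real client would scan forward from the block where the earmarked tx first appears, but the windowed counting is the core of the rule.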

Thoughts?