Bitcoin Forum
Show Posts
2721  Alternate cryptocurrencies / Altcoin Discussion / Re: Turing complete language vs non-Turing complete (Ethereum vs Bitcoin) on: May 04, 2014, 12:11:51 AM
It isn't really a question of storage or bandwidth, since nodes are free to compress the representation they use to store (especially) or transmit to other compatible nodes. What was interesting to me is how easily/cleanly all of the ethereum examples I've seen unrolled. I'm not quite sure what I'd call it a question of... "size of the simplest serialization"?

I think the more important bit of cognition here is what Script in a consensus system is actually doing: 99.999% of it is not computation, it's _verification_ of a computation someone else performed. And verification is computationally distinct from actually computing; it's fundamentally easier.

Beyond the obvious but less practical connections to non-interactive zero-knowledge proofs for NP statements, there are some pretty important down-to-earth ramifications of this.

For example, take the P2SH idea where the PubKey is a script hash and apply it recursively— so a script contains opcodes and interior hash nodes (you could then think of the script as a merkelized abstract syntax tree). From an implementation perspective, you can think of this as having an OP_choice which takes one serialized script and one scripthash. What you reveal to the network then is not the whole program, but just the parts covered by the taken branches; for the rest you give only their hashes. Other nodes don't care about what happened behind untaken branches, only that the branches which were taken ultimately resulted in acceptance.
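
To make the mechanics concrete, here's a minimal sketch of the commit/reveal step for a two-branch merkelized script. The branch contents, the h() helper, and verify_reveal() are hypothetical stand-ins, not real Bitcoin serialization:
Code:
import hashlib

def h(data):
    # Stand-in for Bitcoin's hash commitments.
    return hashlib.sha256(data).digest()

# Committing: the scriptPubKey commits only to the root of the tree.
branch_a = b"OP_CHECKSIG <key1>"   # hypothetical serialized branches
branch_b = b"OP_CHECKSIG <key2>"
root = h(h(branch_a) + h(branch_b))

def verify_reveal(root, taken_script, sibling_hash, taken_is_left):
    # The spender reveals only the taken branch plus the sibling's hash;
    # the untaken branch stays hidden behind that hash.
    left, right = ((h(taken_script), sibling_hash) if taken_is_left
                   else (sibling_hash, h(taken_script)))
    return h(left + right) == root

# Reveal branch_a; branch_b is disclosed only as its hash.
assert verify_reveal(root, branch_a, h(branch_b), True)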

Verification time then, is, as before, linear in the size of the signature.

But you've improved privacy— assuming there was enough entropy in the untaken branch, the public doesn't learn anything about it.

And you've removed much of the blowup in the size of the circuit from the visibility of the verifying nodes.

This kind of tool has been on my must-have-todo-list for any major revisions to Bitcoin Script since something like 2011. While looping might be a reasonable construct (though the complexity of the 'gas' stuff above is not inspiring me with confidence on that front— and complexity is very important here since it directly impacts correctness: alternative implementations of Bitcoin already get Script wrong over and over again, and no one is yet trying anything fancy like JIT execution of scripts) that may be worth including simply because it makes some scripts have a much more succinct serialization (e.g. consider a script implementing a Lamport signature that iterates over all the bits in a hash), I think ideas like the MAST are much more interesting and more likely to be useful...

Looping isn't mutually exclusive with techniques like MAST, but there is a question of finite cognitive bandwidth. Smiley  In my opinion the only thing looping has been really solidly demonstrated to be useful for is enabling the not-quite-accurate and not-at-all-relevant claim of Turing completeness for marketing purposes... which itself is not even all that useful, except that it enables all kinds of thoroughly confused ideas like thinking the system in question is a competitor to EC2. A misunderstanding which some promoters of some alternative systems decline to correct, in what seems to be a rather convenient move in terms of encouraging participation in their public investment scheme(s).
2722  Bitcoin / Mining support / Re: How do pools/miners secure their Bitcoins? on: May 03, 2014, 10:33:11 PM
How do pools secure their bitcoins?  Well mostly pools are holding other people's bitcoins. Why bother securing them?   (The answer: pretty poorly.)
2723  Bitcoin / Development & Technical Discussion / Re: Few advertised node responding to me : Am I blacklisted ? on: May 03, 2014, 06:15:33 PM
I prefer to make sure my implementation does not work only in the lab.
That's good and fine, but it should be a test reserved for software that first works locally. Running it locally you get far more visibility into what's happening, and you don't waste other people's resources.

I'm glad it's working better for you now.
2724  Bitcoin / Hardware wallets / Re: RNG vs PRNG for bip39 it's relation to Trezor on: May 02, 2014, 09:39:41 PM
but why doesn't a potential bias there reduce the overall assurance of the system?
Because it is mixed into unknown state using a cryptographic function. Even if an attacker completely controls the additional inputs, without knowledge of the interior state and the ability to compromise a cryptographic assumption (e.g. the one-way-ness of SHA1) the system remains secure.
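
As a toy illustration of why attacker-controlled inputs can't hurt (a sketch only, not any real RNG's design; the class and method names are made up):
Code:
import hashlib

class ToyPool:
    def __init__(self, seed):
        self.state = hashlib.sha256(seed).digest()  # interior state, unknown to the attacker
        self.n = 0

    def mix(self, data):
        # Even fully attacker-chosen data can't cancel the hidden state
        # without inverting the hash.
        self.state = hashlib.sha256(self.state + data).digest()

    def out(self, nbytes):
        buf = b""
        while len(buf) < nbytes:
            self.n += 1
            buf += hashlib.sha256(self.state + self.n.to_bytes(8, "big")).digest()
        return buf[:nbytes]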

Hardware RNGs are mostly tricky because they can fail in difficult-to-detect ways. Good designs have several layers of checking and still whiten the output using a CSPRNG just in case the entropy is lower than expected. E.g. the no-longer-available Entropy Key used two HW RNGs and checked their individual entropy with an estimator, along with the entropy after debiasing, the entropy of their XOR, and the XOR of the debiased values... and then followed it with the FIPS RNG test (though I thought their particular placement of the FIPS test was a bit ill-advised).

rather than a CSPRNG seeded periodically by a timestamp of 'randomly' occurring events such as IO events.
Many apparently 'random' IO events are not really random, or are less random than you might assume, so great care is required. A good CSPRNG design will substantially preserve any entropy which is provided, and so long as there is enough in total, no matter how many non-random bits dilute it, the output will be strong.
2725  Bitcoin / Development & Technical Discussion / Re: Standardising block versions on: May 02, 2014, 09:32:52 PM
We have already used block numbers for upgrades, but nodes need new code for the upgrade, and that new code implements the switch rule. Different changes have different reasonable switch criteria. Defining it in advance would constrain behavior without any advantage that I can see.
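
For a sense of what a switch rule looks like, here is a sketch in the spirit of the supermajority rollover Bitcoin has used (the thresholds and function name here are illustrative, not the exact node code):
Code:
def is_super_majority(min_version, recent_versions, required=750, window=1000):
    # True once `required` of the last `window` blocks advertise at least
    # `min_version`; a stricter threshold (e.g. 950) can gate enforcement.
    return sum(v >= min_version for v in recent_versions[-window:]) >= required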

Your suggestion also presupposes a single sequential line of changes. Version X comes after version Y strictly and monotonically. This seems unlikely to be possible to ensure without a heavy amount of centralization.
2726  Bitcoin / Development & Technical Discussion / Re: Few advertised node responding to me : Am I blacklisted ? on: May 02, 2014, 09:19:54 PM
what are the rules so I can avoid that in the future?
Run your own nodes locally to test against. Don't do development against other people's systems; that's a waste of their resources.
2727  Bitcoin / Hardware wallets / Re: RNG vs PRNG for bip39 it's relation to Trezor on: May 02, 2014, 01:28:14 AM
Private keys must _always_ be derived from attacker unknowable data. That data can pass through a CSPRNG on its way, but from the attacker's perspective it must be 'random'.

Just sticking in a PRNG would instantly cause users to be robbed, since an attacker would just run the same program and get the same results.  On a desktop the OS provides a cryptographically strong PRNG which operates over an entropy pool and is constantly fed sources of true randomness. This is the primary source of key entropy in Bitcoin Core (the actual implementation is under the hood of the library, which is why you did not see the /dev/urandom calls in the Bitcoin codebase itself).
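
The difference, sketched (the 32-byte key material here is purely illustrative):
Code:
import os
import random

# BROKEN: a deterministic PRNG with a guessable seed; anyone running the
# same program derives the same "private key" and takes the coins.
weak_key = random.Random(1234).getrandbits(256).to_bytes(32, "big")

# Sane: ask the OS CSPRNG, which is continuously fed real entropy
# (backed by /dev/urandom on Linux).
strong_key = os.urandom(32)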

Perhaps you should reevaluate if you're prepared to be creating cryptographic software for third parties to use at this time? If you make a mistake here people will lose money but the mistake could be undiscovered for many months. Security failures are usually completely invisible until they aren't.
2728  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 30, 2014, 06:59:15 AM
OK I think I am finally getting your drift.  A full client can basically *already* (sorta?) make a utxo list and use that rather than hold the whole chain?  Incidentally if the set is 300mbytes my estimate was way off.  For 1 million addresses (I think we are at more than that now), at 160 bits each, that leaves 15 bits to each address for the amount of unspent.   Am I calculating something wrong there?  And if this is true, why aren't we doing it already?         
Bitcoin doesn't track "addresses", it tracks txouts. There are currently 10,623,198 txouts and the data is 369,816,656 bytes serialized. And we already are storing it this way (which is why it's so easy to produce these numbers; the gettxoutsetinfo RPC provides them).  We just do not have a facility to delete the historical blocks, for two reasons: one, we need to add discovery mechanisms to synchronize nodes in a network where every full node doesn't just have all the blocks, and two, the basic software engineering of finding all the stats/etc. features that use the historic data and disabling them in this state instead of allowing them to crash.
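
If you want those numbers from your own node, gettxoutsetinfo is queryable over the JSON-RPC interface. A minimal sketch, assuming the default port, made-up credentials, and the field names reported at the time of writing:
Code:
import base64
import json
import urllib.request

auth = base64.b64encode(b"rpcuser:rpcpass").decode()  # from bitcoin.conf
req = urllib.request.Request(
    "http://127.0.0.1:8332/",
    data=json.dumps({"method": "gettxoutsetinfo", "params": [], "id": 0}).encode(),
    headers={"Authorization": "Basic " + auth},
)
info = json.load(urllib.request.urlopen(req))["result"]
print(info["txouts"], info["bytes_serialized"])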

Quote
you could be sending 300Mb seems suboptimal.
Oh, you absolutely cannot just be sent the 300MB: your peers could simply lie to you, and it would be greatly in their short-term financial interest to do so (because they could make you accept inflation or spends of non-existent coins). But you can produce this trusted state for yourself, and then you don't need to store anything else.  ECDSA is cheap: with libsecp256k1 my i7 desktop here validates at almost 6000x the maximum realtime rate for the Bitcoin network.
2729  Bitcoin / Development & Technical Discussion / Re: enormous outgoing bandwidth with Satoshi client v0.9.1.0-g026a939-beta (64-bit) on: April 29, 2014, 10:16:10 PM
Looking at how bittorrent handles peer selection and load distribution is probably a good starting point.
Fairly poorly in the absence of a centralized tracker.

There is a complete implementation of a better fetch mechanism for Bitcoin since last year: https://github.com/sipa/bitcoin/tree/headersfirst  It works quite well and avoids slamming just a single peer or being slow because a peer is slow. It couldn't make it into 0.9 due to a lack of testing, as it's a rather big change. Instead it's slowly being broken into staged commits. Hopefully, if this interests you, you'll be able to help with testing when more is needed. Smiley
2730  Bitcoin / Hardware / Re: [ANN] Spondoolies-Tech launches a new line of ASIC miners - Best W/GH/s ratio on: April 29, 2014, 07:32:56 AM
Please do try to keep posts on-topic. While a moderate amount of comparisons with competition makes sense, extensive sidebar discussions about all the other gear you're running or the ship dates of other vendors is really getting pretty far afield. Smiley
2731  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 29, 2014, 07:31:05 AM
This one kind of blew my mind Smiley  You're not talking about SPV nodes either here I think.  I guess the idea is that when a transaction appears on the network and I want to validate it, the TX already has the UTXO data and I just need to hash it, if necessary up a tree until I can compare to a merkle root that I do have stored.  But if I really have nothing stored, then the high level security of validating everything myself can't possibly be there?  Anyway, I am looking forward to seeing more of this. 

What I am proposing is a restructuring of the database.  The block chain is a log structured database due to the way it is built, but this is in general not an efficient way to store this kind of data.  A restructuring could be performed and validated by the network.  However if I understand what you are saying it may never be necessary.
The data you use to validate doesn't have to have any relationship to the data sent over the wire, beyond the fact that you get the same results when verifying.  No hashing is required to verify a new transaction that shows up (other than the hashing in the signatures, of course).  A transaction shows up, it names which txid:vouts it is spending and provides the required scriptSigs. You look up those txid:vouts in your utxo set and accept the transaction if the signatures pass.  The utxo set is currently about 300mbytes. You don't store the history there, you just store the much smaller set of spendable coins. The resulting behavior is indistinguishable from a node with the history, except that a node without it can't serve old blocks to new bootstrapping nodes... and this all works today: Bitcoin Core is already structured this way, but doesn't yet support actually removing the old data.
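
A toy sketch of that lookup-and-accept flow (the tx object shape and check_script() are hypothetical stand-ins for the real script interpreter):
Code:
utxo_set = {}  # (txid, vout) -> {"value": ..., "script_pubkey": ...}

def accept_transaction(tx):
    # Validate purely against the utxo set; no historical blocks consulted.
    in_value = 0
    for txin in tx.inputs:
        coin = utxo_set.get((txin.txid, txin.vout))
        if coin is None:
            return False  # unknown or already-spent input
        if not check_script(coin["script_pubkey"], txin.script_sig, tx):
            return False  # script/signature failure
        in_value += coin["value"]
    if sum(out.value for out in tx.outputs) > in_value:
        return False      # would create coins from nothing
    for txin in tx.inputs:                # spent coins leave the set,
        del utxo_set[(txin.txid, txin.vout)]
    for i, out in enumerate(tx.outputs):  # new coins enter it
        utxo_set[(tx.txid, i)] = {"value": out.value,
                                  "script_pubkey": out.script_pubkey}
    return True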

With some p2p protocol additions, validation actually can be made completely storageless, but at a non-trivial bandwidth cost (by instead storing only the root of a hash tree over the utxo set and asking peers to give the connecting fragments whenever they give you a transaction)... the bandwidth cost of doing that is high enough that it's probably not worthwhile.

How is this different than a miner today issuing a block with a coinbase transaction greater than 25 coins?  Anybody can verify at any time that the block is fraudulent and ignore it and all blocks above it.
Can is very different than _does_. Today everyone constantly validates these things and instantly rejects invalid data. If you instead make it so that validation has a high marginal cost and by default people don't validate, then you'll find many never validate and a few do. If miners start rewarding themselves too much coin ("A bit of inflation is good for the economy!") the network will partition into validating and non-validating nodes.  You might then say that the non-validating nodes now need to go fix themselves to validate, but what if they have spends in the non-validating fork that they've already acted on which are conflicted in the other fork? They're certainly not going to want to do that. Once the network has a non-trivial partition there is an extreme risk that it is fundamentally impossible to resolve it without causing severe monetary loss for someone.  So the "don't check, and if something goes wrong hopefully people will figure something out" mode doesn't really work— the only time you can really deal with an invalid block is the moment it happens, not later, after people have taken irreversible actions because they thought it was okay.
2732  Bitcoin / Development & Technical Discussion / Re: Faster recovery; just in case on: April 28, 2014, 05:30:25 AM
This has been discussed many many many times. Please use the search.

Any _automated_ method to lower the difficulty means that an attacker who has isolated a node can cheaply simulate a functioning network in order to rob it.  This is not a good security tradeoff against a speculative risk about an already-broken situation which could be addressed when it arose instead of taking the risk in advance.
2733  Bitcoin / Development & Technical Discussion / Re: How does pool mining and mining work under the hood? on: April 27, 2014, 08:24:40 PM
That isn't right. Mining is memoryless; there is no progress made— it's analogous to throwing fair dice: if you go 100 rolls without rolling a 1, your next roll is still no more or less likely to come up 1 than your first roll was.
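
You can see the memorylessness numerically with a quick simulation (a sketch; the 1-in-6 die stands in for the hash-target probability):
Code:
import random

p = 1.0 / 6    # per-roll chance of "success"
N = 200_000

def rolls_until_success():
    n = 1
    while random.random() >= p:
        n += 1
    return n

samples = [rolls_until_success() for _ in range(N)]
base = sum(1 for s in samples if s == 1) / N            # P(success on a fresh roll)
streak = [s for s in samples if s > 10]                 # already failed 10 times...
cond = sum(1 for s in streak if s == 11) / len(streak)  # ...chance the 11th succeeds
print(base, cond)  # both hover around 1/6: past failures don't help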

With respect to the OP's questions on fees and selecting transactions: the important thing is that, as far as the protocol is concerned, someone who is just working on hashes isn't a miner any more than AMD is— someone who is just working on hashes is just selling CPU time to the real miner elsewhere (the pool).  Only P2Pool users, solo miners, and mining pools themselves are actual miners from the perspective of the protocol.
2734  Bitcoin / Development & Technical Discussion / Re: List of good seed nodes to put in bitcoin.conf? on: April 27, 2014, 05:04:11 PM
Please don't just stuff addnodes into your configuration for random public nodes— unless that kind of usage has been solicited. If you do that you'll cause unequal load on nodes that people have listed online.
2735  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 26, 2014, 11:39:18 PM
I guess one could argue that we trust the network to verify transactions and provide the block chain, why not trust it to
We do not trust it to verify transactions, we trust it only to order transactions; verifying we do for ourselves.  By verifying for ourselves we eliminate the possible benefits of including invalid transactions and the profit that would go along with doing so. This is important because we depend only on economic incentives to get honest behavior for ordering at all— there is no exponential gap between attacking and defending in POW consensus. If by being dishonest you can steal coins (or even reclaim lost ones), it's an entirely different trade-off question than a case where you can only rearrange transactions (and potentially replace your own).  This isn't to say that a system couldn't be constructed where only miners verified, but it's a different and weaker set of security/economic incentives— and not just some minor optimization.

Quote
but that needs to be weighed against the integrity and security lost by the number of full nodes dropping slowly off due to the storage and computation requirements.
The computation is a one-time requirement at initialization, not ongoing (and because of bandwidth requirements I don't expect computation to ever be limiting), and could be performed in the background on new nodes.  There is _no_ storage requirement in general for the past history. Full nodes do not need to store it; they've already validated it and can forget it.  This isn't implemented in full nodes today— there's been little need because the historical storage is not very great currently— though Bitcoin Core's storage is already structured to enable it: you can delete old blocks and your node will continue to work normally, validating blocks and processing transactions, until you try to request an older block via RPC or p2p (and then it will crash). The protocol is specifically designed to avoid nodes having to store old transaction data, as described in section 7 of bitcoin.pdf, and can do so without any security tradeoff.

Quote
A state commitment which redistributes unspent outputs, lost or not, to different addresses, would be easily spotted and rejected
Validation is the process which accomplishes this spotting and rejection. Smiley If you seek to not validate, then in cases where there is no validation those errors can be passed in. If some parties validate and some do not, then you risk partitioning the network— e.g. old nodes 'ignore' your "superblock" and stay on the honest network, while new nodes use it and are willing to follow a dishonest network (potentially to the exclusion of the honest one)... and the inconsistency is probably worse than the fraud. So you're still stuck with this: if someone mines an invalid state that some nodes will not catch because they do not verify, then all must accept it without verifying it.

Couldn't a block still be created and after consensus has been established on the block, and after some time has passed, it could be used instead of the entire chain? How would that violate security assumptions?
In addition to the incentives point above: the participants are not constant and set out at the front... anonymous parties come and go, so what does a "consensus" really mean if you're not a part of it and all those who are are anonymous self-selecting parties and perhaps sybils? Suppose I spin up 100 fake nodes and create a "consensus" that I have a trillion bitcoins and you join later— it matters greatly that the rules were followed before you showed up, e.g. that the creator of the system hadn't magically assigned himself a trillion coins using a fake consensus before you showed up. Smiley Of course you don't need the data any more once you've validated it— you can just remember that it was valid... but if you haven't, how are you to know except either by processing a proof of it (e.g. checking it yourself) or by trusting a third party?  Bitcoin was created to eliminate the need for trust in currency systems, at least to the extent that's possible.
2736  Bitcoin / Development & Technical Discussion / Re: Orphaned blocks on: April 26, 2014, 11:25:51 PM
Looking on blockchain.info, I see there's been orphaned blocks in the last month or so ,and never any before that.  Is this something they just started tracking , or is there a sudden emergence ...and if so, why?
As usual, BC.i has given out misleading data. They're obviously just forgetting old ones. There have always been orphan blocks (on the order of 1%) and will always be some, the finite speed of light ensures it.
2737  Bitcoin / Development & Technical Discussion / Re: OP_CHECKMULTISIG question on: April 26, 2014, 11:23:33 PM
Yes, the signatures have to be in the same relative order as the public keys in order to pass.
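
A sketch of the matching rule (verify() stands in for the real ECDSA check; this mirrors the single-pass behavior, not the exact implementation):
Code:
def checkmultisig(sigs, pubkeys, verify):
    # One left-to-right pass: keys may be skipped, but a signature can never
    # match a key earlier than the previous signature's key, so the sigs
    # must appear in the same relative order as the keys.
    k = 0
    for sig in sigs:
        while k < len(pubkeys) and not verify(sig, pubkeys[k]):
            k += 1
        if k == len(pubkeys):
            return False
        k += 1
    return True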
2738  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 26, 2014, 05:47:35 PM
This kind of stuff has been discussed many times before.  What you're suggesting here violates the Bitcoin security assumptions— that the integrity of the data is solid because all full nodes have verified it themselves and not trusted third parties to do it— and in the simple form described it opens up some pretty nasty attacks: make your state commitment always be to one where you own all the "lost" coins; sure, you're unlikely to find the next commitment yourself, but if you do— ca-ching!  Of course, if you're verifying the state commitments, then there is no reason to use a 'high apparent difficulty' to select which blocks provide them (see the proposals about merkleized utxo for how a reduced security model can be implemented this way).
2739  Bitcoin / Development & Technical Discussion / Re: OP_VERIFY question on: April 26, 2014, 10:01:51 AM
It doesn't matter, someone was being overly detailed while documenting there.
2740  Bitcoin / Development & Technical Discussion / Re: Performance of Account structures in bitcoind on: April 26, 2014, 09:33:29 AM
I was able to severely corrupt the wallet file by terminating bitcoind process. I did not lose any keys, but the account balance information was corrupted. In essence I was able to lose track of what the correct balance is in each account without any effort at all.
Can you provide some more information here?  Were you running the release binaries? What version? What operating system? How did you kill the process? What state was it in when you brought it back up? What errors did you receive?  Would it be possible for you to provide the corrupted wallet and database/ directory to me?

I ask because last year I ran a loop killing the process under load for more than a month, killing it thousands and thousands of times trying to tease out some rare issues, and was not able to generate a single instance of corruption that way. Before I start trying to reproduce your experience I want to have a comparable setup.

Generally, use of the 'account' functionality is not recommended; it wasn't designed for what most people who try to use it expect to use it for, and other methods (which support durability across hardware failure) should be used instead.  Wrt large amounts of transactions, there I must disagree— for better or worse, some of the largest Bitcoin-using sites collect their transactions in a bitcoind wallet. Unfortunately, none of the people interested in those high-transaction-load applications are contributing to the code base, but they tell me that they don't need to because it currently works for them with reasonable considerations.  If you've automated your tests enough that they could be run against a testnet/regtest wallet out of a script, it might be useful to get them imported into the integration testing used for Bitcoin Core— it's quite shy on wallet-related tests.

Quote
The really bad news is that transfers end up taking several seconds each, on average
I assume you were spending unconfirmed coins in these transactions?  Taking several seconds per spend is a known artifact of the current software behavior— the code that traverses unspent coins has factorial-ish complexity. While it could be improved (there are patches available, and simply disabling the spending of unconfirmed outputs avoids it), since the overall network capacity is not very great I've mostly considered this bug helpful at discouraging inept denial-of-service attacks, so I haven't personally considered it a priority. (And most of the people who've noticed it and mentioned it to me appear to have just been conducting tests or attempting denial-of-service attacks…)