2241
|
Bitcoin / Development & Technical Discussion / Re: Bitcoin protocol standarization
|
on: November 30, 2014, 12:59:59 AM
|
The issue in creating a standard protocol is documenting all the bugs in the current reference implementation. Until this happens most other implementations will fork the chain.
No, they'll continue to fork the chain regardless; many (most?) of the previously observed implementation discrepancies have been in clearly documented behaviour, or even in behaviour exercised by the conformance testing harness. It turns out that getting radically different software to behave in a manner which is absolutely identical under all conditions is quite hard.
|
|
|
2243
|
Bitcoin / Development & Technical Discussion / Re: Pseudonimity compromised?
|
on: November 28, 2014, 09:01:03 AM
|
The headline is kind of misleading. I'm not sure that anyone who'd considered the subject thought they were at all private if they used the system without tor. Bitcoin.org surely suggests no such thing.
Fair enough. The interesting part for me was Tor being easy to block; I had never heard that before. I guess I also found it surprising that it was worthy of a study at a cryptography/security department. We specifically added direct support for Tor hidden services as one tool to deal with Tor exit banning: HS inbound peers are not banned persistently. There is a lot left to improve here, but at least Tor support means you can't be screwed over without your own help (e.g. turning Tor off if you get DoS attacked); for advanced users this is at least a basic level of capability.
|
|
|
2246
|
Alternate cryptocurrencies / Altcoin Discussion / Re: Anonymity in the Mini-Blockchain scheme
|
on: November 27, 2014, 09:29:32 AM
|
You're right, any additively homomorphic encryption would do. I chose Paillier because it was reviewed extensively and has a ton of libraries written for it. Unfortunately ElGamal is multiplicatively homomorphic so it can't be used.
I don't see why you're saying that. ElGamal can do addition fine: as long as you don't need decryption (or can brute-force the decryption), you can just tell the recipients the values (e.g. separately in the transaction or out of band). ElGamal is much older and better studied, and you're already relying on the same security assumption for the commitments. So Paillier is just a _ton_ more code, new cryptographic assumptions, and a lot of overhead.

Having looked a bit more at it, I don't see how you're proving the values don't wrap around, e.g. that I don't give someone a negative amount (a huge amount) of coins which still adds up because of the wrap. I thought there was something in there before, but I'd stopped before getting that far, confused by the use of Paillier... because it seemed very strange to invoke a new, very inefficient cryptosystem (and I wasn't sure the rest was going to be worth reading).
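To illustrate the additive trick: put the value in the exponent ("lifted" ElGamal), so multiplying ciphertexts adds plaintexts, with decryption by brute-forcing the small exponent. A toy sketch with illustrative parameters, nowhere near secure sizes and not tied to any particular commitment group:

```python
import random

# Toy parameters -- NOT secure, illustration only.
p = 2**64 - 59          # a 64-bit prime modulus
g = 2                   # generator (toy choice)

x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def enc(m):
    """Encrypt small integer m as (g^r, g^m * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def add(ct1, ct2):
    """Componentwise product of ciphertexts encrypts the sum of plaintexts."""
    return ((ct1[0] * ct2[0]) % p, (ct1[1] * ct2[1]) % p)

def dec_small(ct, bound=10000):
    """Decrypt by brute-forcing the exponent -- only feasible for small values."""
    c1, c2 = ct
    gm = (c2 * pow(c1, p - 1 - x, p)) % p   # g^m = c2 / c1^x (Fermat inverse)
    acc = 1
    for m in range(bound):
        if acc == gm:
            return m
        acc = (acc * g) % p
    raise ValueError("plaintext outside brute-force bound")

a, b = enc(1234), enc(4321)
print(dec_small(add(a, b)))   # 5555
```

If the recipient is simply told the values out of band, the brute-force step is unnecessary; it's only needed when the sum must be recovered from the ciphertext alone.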
|
|
|
2247
|
Alternate cryptocurrencies / Altcoin Discussion / Re: Anonymity in the Mini-Blockchain scheme
|
on: November 27, 2014, 06:02:05 AM
|
It's not clear to me why it's using the paillier encryption. The commitment agreement proof looks like it would work fine with any additively homomorphic encryption, e.g. ElGamal over the same prime group used for the commitments, which would make things much simpler and more efficient. What am I missing here? The paper appears to be missing citations on the agreement proof approach and only cites the basic underlying cryptosystem; which might answer my question there.
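For context, the Paillier property the scheme leans on is that multiplying ciphertexts mod n^2 adds the plaintexts. A minimal toy sketch (primes far too small to be secure, for illustration only):

```python
import math
import random

# Toy primes -- NOT secure, illustration only.
p, q = 1000003, 1000033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                          # standard simple choice of generator

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def enc(m):
    """Encrypt m as g^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiply ciphertexts mod n^2.
c = (enc(123456) * enc(654321)) % n2
print(dec(c))                      # 777777
```

The point of contention above is that this same additive property is available from ElGamal in the exponent over the commitment group, without importing a second cryptosystem.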
|
|
|
2249
|
Alternate cryptocurrencies / Altcoin Discussion / Re: Something at stake - proof of stake alternative
|
on: November 25, 2014, 07:05:14 PM
|
This is not addressing the fundamental limitation, which is that the 'bet' is entirely internal to the system. That means nothing prevents the people with the keys from going back and replaying the history, even years after they've sold their coins and exited the system... and the resulting forged and legitimate chains are indistinguishable to a new participant.
I feel like you read the snazzy "nothing at stake" words and then stopped thinking before finding out in detail what they actually meant.
If you're going to invoke POW alongside POS you can try, but it's very difficult to end up with a result where the security doesn't simply reduce to one or the other (or worse: since POS signers can prevent new entrants from joining into mining in most designs, the admission-freeness of POW is potentially lost).
|
|
|
2251
|
Bitcoin / Development & Technical Discussion / Re: As a developer, what's the best way to accept BTC without using third-parties
|
on: November 24, 2014, 03:03:36 AM
|
Scalability is not ultimately limited by the bitcoin network, but also by connections/users. Bitcoin-core I think is becoming less and less used by companies just because it is so heavy and not scalable. It really tops out at only a few 1000 connections, even raising the file limit from 1024 (default for most linux systems) to higher and higher numbers doesn't help anymore.
It intentionally doesn't support more concurrent connections at this time. While it would be easy to make it do so, I am not aware of a single use case for which supporting more would make sense. Nothing of relevance to this thread should result in running many concurrent connections to the daemon. If you'd care to state an application, I'd be interested in hearing it. (Considering that the entire network can only support on the order of 10tx per second, I am at a complete loss as to what aspect of customer invoicing would require you to run a thousand concurrent connections to a single Bitcoin daemon.)
This causes another problem which we have seen time and time again, that if you do a withdraw and don't say it could take 12-24 hours to process, and it doesn't show up on the network in a few seconds. The idea that company is now a scammer, is what consumes the mind. So now you are not able to use the the programical instant withdraw that only the bitcoin network as a bootstrap startup without risking your reputation.
Can you retry saying this? I'm unable to parse your English.
I believe we are failing as a community in many ways about this, I recommend 3rd party apis because that is the only solution currently. There are no other full nodes that can handle this unless you enter the enterprise stage.
You've failed to state an actual reason here. Can you try to clearly specify particular problems with concrete details? Simply saying "it doesn't scale" is not helpful.
|
|
|
2252
|
Bitcoin / Development & Technical Discussion / Re: As a developer, what's the best way to accept BTC without using third-parties
|
on: November 23, 2014, 11:12:34 PM
|
No, but you are concerned about scalability while I don't know what you are doing, that is the key word that I would use a 3rd party api. Lowest overhead and easiest to be scaling with.
"Scalability" is ultimately limited by the Bitcoin network, which-- by definition-- a Bitcoin full node can handle. Interposing a 3rd party does not necessarily improve scalability-- it likely reduces it, especially since you're talking about unpaid services, so there is nothing to fund your utilization-- and using a third party service necessarily reduces security, more so than SPV, and necessarily adds additional points of failure: the reliability of third party blockchain services has been very low. I'm sad to see that no one seems to have noticed the Baron link I gave above. It's a full order processing solution based on a local full node. It deserves more attention.
|
|
|
2254
|
Alternate cryptocurrencies / Altcoin Discussion / Re: Idea for ASiC resistance
|
on: November 20, 2014, 05:21:17 PM
|
Suppose a coin switches its hashing algorithm each time a new block is found. Also suppose that new algorithm itself is randomly generated, and the previous block contains the instructions on how to perform it.
Bitcoin is a cryptosystem. Like other cryptosystems, the details matter _greatly_. You say "random"... well, what the heck does that mean? Does the randomness fairy bless you with a magical number by striking your brain with a cosmic ray? Everyone needs to agree on what the state is, so presumably not.

Perhaps we should have a tradition here where anything unspecified in a proposal can be filled in whatever way the person responding wants, instead of giving them the responsibility of reading the tea leaves and trying to extract (or prove the non-existence of) a single secure proposal out of the infinite class of proposals your underspecified message invoked. With that kind of tradition I could just analyze your proposal assuming "random" means that you, Muis, "randomly" pick the hash functions and broadcast signed messages... allowing you to make it easy for yourself to mine, and also allowing you to partition the network by announcing conflicting hash functions. Perhaps not?

I'd guess your post really means not-at-all-randomly but based on the prior block hash, since that's the thing people most commonly mistake as "random" in these sorts of systems. If so, this would mean that an attacker could grind his current block to make sure to come up with an algorithm which has weaknesses he knows how to exploit or is especially fast on his hardware. This could be a pretty extreme vulnerability.

And, of course, you can't block hardware, or else a regular computer couldn't verify it either; computers are hardware too, after all. All you could hope to do is limit the domain for hardware optimization, but you haven't suggested anything specific about your parameterization which makes it clear that it would actually achieve that. E.g. people would just make hardware specialized to the space of functions the 'random' generation can produce (or a subset, and grind blocks to get the chain into a state they can support).
You could perhaps try to structure the circuit so that the 'randomness' can't introduce strong optimizations, though that would seem to be at odds with making hardware optimization of the base design hard. Then after that, how do you propose to deal with the incomparability of the computational complexity of different functions? You're given two chains and need to decide which has the most work... one has more hash-function runs, but maybe it's worth less because the functions were easier to execute.

In any case, as others have pointed out... it's far from obvious that a meaningful improvement here is possible, that ASICs are harmful, or that it's possible to resist improved hardware. Keep in mind that hardware which is only a few times more power efficient will eventually push everyone else out, since mining is in near perfect competition... and the increased startup cost for increasingly sophisticated hardware creates its own centralization risks. I suggest meditating on https://download.wpsoftware.net/bitcoin/asic-faq.pdf some more.
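The grinding concern above can be sketched concretely. If the next block's hash function is derived from the current block hash, a miner can keep trying extra-nonce values until the derived choice lands in a family his hardware handles well. All names and parameters here are hypothetical illustration, not any real coin's design:

```python
import hashlib

NUM_ALGORITHMS = 256          # size of the "random" function family
FAVORABLE = set(range(32))    # algorithms the attacker's hardware runs fast

def derived_algorithm(block_bytes):
    """'Randomly' pick the next block's algorithm from this block's hash."""
    h = hashlib.sha256(block_bytes).digest()
    return h[0] % NUM_ALGORITHMS

def grind(block_prefix, max_tries=100000):
    """Search extra-nonce values until the derived algorithm is favorable."""
    for nonce in range(max_tries):
        candidate = block_prefix + nonce.to_bytes(8, "big")
        if derived_algorithm(candidate) in FAVORABLE:
            return nonce
    return None

nonce = grind(b"example-block-header")
# With 32/256 favorable choices, the expected number of tries is only 8,
# so grinding is essentially free relative to the advantage gained.
print(nonce is not None)   # True
```

Making the favorable subset smaller only raises the attacker's grinding cost linearly, while block production itself already involves astronomically more hashing than that.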
|
|
|
2255
|
Bitcoin / Development & Technical Discussion / Re: chaum, offline coins vs BGP & bitcoin
|
on: November 20, 2014, 12:23:05 AM
|
It's kind of curious that, according to the selfish-mining paper (if that remains the conclusion), hashrate BGP is also assuming 1/3 honest hashrate (the same ratio as previous BGP solutions, but with "vote per hashrate" rather than "vote per participant"). If the selfish mining paper is correct that a miner with 1/3 of the network can pull off a successful attack, does that not imply that 2/3 of the hashrate must be honest to solve BGP?
That's a misunderstanding about what the selfish mining paper is talking about. The Bitcoin whitepaper talks about 51% being honest. In the selfish mining paper, we instead assume that miners are greedy ("rational" in the BAR model, which calls honest parties that follow the rules no matter what "altruistic") and will do whatever makes them the absolute most money. This is interesting because Bitcoin also has security in this more adversarial model, and you can handwave about those incentives as a reason why miners might be aligned with the honest bunch. I say handwave partly because the BAR model really falls down when talking about Bitcoin: e.g. consider some move that gives a miner a bunch of bitcoins but immediately makes Bitcoin worthless by undermining trust in it faster than the participant could exit the system; is this a rational attack or a byzantine attack? In the selfish mining paper they show that, given certain assumptions about informational advantages (they need to be able to partition much of the network in order to outrace blocks), a sufficiently large but minority dishonest cartel can behave in a way such that rational parties would earn more mining income by joining the cartel than by following the rules. The paper proposes an 'improvement' which isn't really an improvement: it eliminates the cartel advantage below 25%, but at the cost of eliminating the need for an information advantage and guaranteeing success for cartels larger than 1/3rd.
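To make the 1/3 figure concrete, here is my transcription of the cartel's relative-revenue expression from the selfish mining paper, where alpha is the cartel's hashrate share and gamma is the fraction of honest hashrate it wins block races against; treat the formula as a sketch to check against the paper itself:

```python
def selfish_revenue(alpha, gamma):
    """Relative revenue of a selfish-mining cartel (Eyal-Sirer closed form)."""
    num = (alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha))
           - alpha ** 3)
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# With gamma = 0 (no ability to win races), revenue equals the honest
# share exactly at alpha = 1/3...
print(round(selfish_revenue(1/3, 0.0), 6))    # 0.333333
# ...and just above that threshold, selfish mining pays more than honest mining.
print(selfish_revenue(0.35, 0.0) > 0.35)      # True
```

Raising gamma lowers the profitability threshold, which is why the paper's informational-advantage assumptions matter so much to the headline result.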
|
|
|
2258
|
Bitcoin / Development & Technical Discussion / Re: Treshold Signature Implementation ?
|
on: November 13, 2014, 12:28:31 AM
|
The scheme in the paper doesn't really work. It turned out to be unimplementable as described, and to require additional keys (beyond the threshold) to reliably sign. It also required multiple communication rounds, which is pretty burdensome on implementations (e.g. having to get your offline wallet out multiple times).
|
|
|
2259
|
Bitcoin / Development & Technical Discussion / Re: Merkle tree of the block hashes
|
on: November 12, 2014, 02:14:01 AM
|
A miner could be lucky to get the lowest hash. The odds of getting N low hashes should be much lower.
And honest miners can be unlucky. The point I was making is that the work you compute that way is not the same as the work computed the normal way. There is always some random error in the number, so you may select the wrong chain, even assuming no attacker. That's all.
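The random error is easy to see in a simulation. Model each hash as a uniform draw and compare two chains by their lowest hash: since every individual hash is equally likely to be the overall minimum, a chain with half the work still "wins" about 100/300 = 1/3 of the time. The parameters below are arbitrary illustration:

```python
import random

random.seed(7)

def min_hash(n_hashes):
    """Lowest of n_hashes uniform draws, standing in for a chain's lowest hash."""
    return min(random.random() for _ in range(n_hashes))

trials = 5000
# Chain A did 100 hashes of work, chain B did 200; count how often the
# lowest-hash rule picks the lower-work chain A anyway.
wrong = sum(min_hash(100) < min_hash(200) for _ in range(trials))
print(abs(wrong / trials - 1/3) < 0.04)   # True: wrong-chain rate is near 1/3
```

Counting blocks against a known target, by contrast, gives the same answer every time; the lowest-hash estimator trades that determinism for a heavy-tailed estimate.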
|
|
|
2260
|
Bitcoin / Development & Technical Discussion / Re: Merkle tree of the block hashes
|
on: November 11, 2014, 11:30:06 PM
|
Using the lowest value hashes, however, lets you pick the correct longest chain among multiple contenders...
Which IMO is unfortunate, since that means it's only really useful in a hard-coding model. A much more powerful scheme (which basically uses the same commitments you're talking about) is proposed in the pegged sidechains paper as appendix B: http://www.blockstream.com/sidechains.pdf. It gives _exactly_ the same work computation as traversing the chain one step at a time, and takes (in expectation) exactly that amount of work to forge.
|
|
|
|