4321
|
Bitcoin / Bitcoin Discussion / Re: [ANN] Bitcoin blockchain data torrent
|
on: July 10, 2013, 10:30:35 PM
|
- Post a new torrent on a weekly basis (or some other viable interval) automatically, based on a bootstrap.dat maintained by a node running on the server.
What do you think, will there be enough users of a service like this? Or will a solution like this thread, in combination with a 6-month interval, suffice?
If you keep updating the file it will break people seeding it, and so you'll never have a big distribution network.
|
|
|
4322
|
Bitcoin / Development & Technical Discussion / Re: proposal: delete blocks older than 1000
|
on: July 10, 2013, 10:21:55 PM
|
Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
We better make the speed of light higher so that optic fibers can allow much faster data transfers
I think we can achieve both of these by first making space-time Riemannian instead of pseudo-Riemannian. With Euclidean space-time there should be no need for pesky limits like a constant speed of light, and the extra payload mass should be offset-able by simply moving some of the fuel you didn't need into the past.

ObOntopic: While not all nodes need to constantly store the complete history, it is not so simple as waving some hands and saying "just keep X blocks": access to historical data is important to Bitcoin's security model. Otherwise miners could invent coins out of thin air or steal coins, and later-attaching nodes would know nothing about it and couldn't prevent it. There is a careful balancing of motivations here: part of the reason someone doesn't amass a bunch of computing power to attack the system is because of how little they can get away with if they try.
|
|
|
4323
|
Bitcoin / Development & Technical Discussion / Building current Bitcoind on Fedora 19
|
on: July 10, 2013, 05:35:22 PM
|
Just some quick notes to help anyone trying to build from git on Fedora 19:

First you need an OpenSSL without ECC removed. I've put up RPMs: https://people.xiph.org/~greg/openssl/fedora19/ (The i686 ones are untested, but should work.) These have the patch Warren did for F18, and are set as epoch 2 so the system shouldn't "upgrade" it out from under you to a new version without ECC.

If you did not install as a development workstation you may need to: yum groupinstall 'Development Tools'

Once you've installed that: yum install libdb4-cxx-devel libdb4-cxx libdb4 libdb-utils.x86_64 boost-devel gcc-c++

Then: BDB_LIB_PATH='/usr/lib64/libdb4/' BDB_INCLUDE_PATH='/usr/include/libdb4' BOOST_LIB_SUFFIX='-mt' make -j4 -f makefile.unix bitcoind USE_UPNP=

Tada.
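The notes above, collected into one script. This is a sketch for x86_64, and it assumes you have already installed the patched OpenSSL RPMs linked above; it is not runnable outside a Fedora 19 box, so treat it as a recipe rather than a tested installer.

```shell
#!/bin/sh
# Build bitcoind from git on Fedora 19, per the notes above.
# Assumes the ECC-enabled OpenSSL RPMs are already installed.
set -e

# Toolchain, if the box was not installed as a development workstation:
sudo yum groupinstall -y 'Development Tools'

# Berkeley DB 4.x compatibility packages and Boost:
sudo yum install -y libdb4-cxx-devel libdb4-cxx libdb4 libdb-utils.x86_64 \
    boost-devel gcc-c++

# Point the build at the libdb4 compatibility locations and the -mt Boost libs:
BDB_LIB_PATH='/usr/lib64/libdb4/' \
BDB_INCLUDE_PATH='/usr/include/libdb4' \
BOOST_LIB_SUFFIX='-mt' \
make -j4 -f makefile.unix bitcoind USE_UPNP=
```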
|
|
|
4326
|
Bitcoin / Development & Technical Discussion / Re: Zerocoin when?
|
on: July 09, 2013, 01:27:02 AM
|
Indeed, as I said— it only gets better.

My most recent thinking is that the best way to drive this kind of technology forward would be to implement a joint transaction system that helps people anonymously make group payments to improve fungibility and decrease transaction costs (using the transaction pattern described here), using libzerocoin for the parties to agree on who gets paid what.

E.g. N parties show up in a communications group and want to make a joint transaction. They each name an input they want to spend and signmessage for a zerocoin creation, showing that they have the authority to spend that coin. They then return anonymously and provide zerocoin spends that specify the outputs they're interested in. Everyone then knows what the final transaction should look like and they all sign.

In this case the zerocoin part is used to prevent parties from jamming up the mix, e.g. by joining and providing inputs but refusing to sign. If someone refuses to sign, it can only be because either zerocoin has been exploited (and their preferred output isn't in the mix) or because they're trying to jam it. In either case, you just blacklist their input and restart the process.

Because zerocoin is only used for anti-DoS in that context, it also means that you could use a faster, reduced-security instance of it, also allowing some experimentation with the security boundaries. The fact that the data is slow and big is harmless when it's only among a small number of participating parties.

This avoids a bunch of scalability concerns; it avoids the reorg risk of an altchain, the network risk of a (soft)fork, and the complexity of a global-scale decentralized consensus algorithm, and it allows rapid software evolution because only the participating users need compatible software (vs a blockchain, which largely sets the software in stone), etc. But it also puts zerocoin in production use.

The downside is the smaller anonymity sets from small near-realtime mixes, but that's also a consequence of rate-limiting a chain-based zerocoin.
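The blacklist-and-restart loop described above can be sketched in a few lines. This is a toy model, not libzerocoin's API: the names (`Participant`, `run_round`, `join_with_blacklisting`) and the use of a plain hash as a stand-in transaction id are all illustrative assumptions.

```python
import hashlib

class Participant:
    """Toy participant in the joint-transaction protocol sketched above."""
    def __init__(self, name, tx_input, tx_output, will_sign=True):
        self.name = name
        self.tx_input = tx_input    # input they prove authority over (zerocoin mint step)
        self.tx_output = tx_output  # output they later claim anonymously (zerocoin spend step)
        self.will_sign = will_sign  # a jammer joins but then refuses to sign

def run_round(pool):
    """One round: gather inputs and (unlinkably submitted) outputs, build the
    final transaction everyone expects, then collect signatures.
    Returns (txid, None) on success, or (None, jammer) on a refusal."""
    inputs = sorted(p.tx_input for p in pool)
    outputs = sorted(p.tx_output for p in pool)
    txid = hashlib.sha256(repr((inputs, outputs)).encode()).hexdigest()
    for p in pool:
        if not p.will_sign:  # refusal => zerocoin exploited, or deliberate jamming
            return None, p
    return txid, None

def join_with_blacklisting(participants):
    """Blacklist the input of whoever refuses to sign and restart the round,
    until a round completes with the remaining parties."""
    pool = list(participants)
    while pool:
        txid, jammer = run_round(pool)
        if txid is not None:
            return txid, [p.name for p in pool]
        pool = [p for p in pool if p is not jammer]
    return None, []
```

For example, with honest parties a and b and a jammer m, the join completes after one restart with only a and b in the final transaction.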
|
|
|
4328
|
Bitcoin / Development & Technical Discussion / Re: Blockchain corruption during power loss?
|
on: July 07, 2013, 04:13:32 PM
|
Can you please disclose what OS, OS version, and Bitcoin version you're running?
I've tried to reproduce unclean shutdown corruption and in hundreds of shutdowns in Linux been unable to do so.
Contrary to what KJJ claims— it is actually not supposed to do this, and at least on some systems it does not appear to (or at least does so with only negligible probability). I suspect that leveldb has some bugs on some systems/environments which degrade its durability, but with basically nothing to go on it's hard to determine why.
We absolutely _must_ get this fixed— or at least reduced to negligible probability for all users— before we can support pruning.
|
|
|
4330
|
Bitcoin / Mining speculation / Re: Is there a Fork!!!
|
on: July 07, 2013, 08:23:58 AM
|
Blocks are sometimes found in quick succession; no "fork" required. Some pool payment schemes, when they don't have anyone else to pay, just become proportional and pay the recent miners.
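As a back-of-the-envelope check: block arrivals are approximately Poisson with a 10-minute mean, so the gap between consecutive blocks is exponentially distributed, and short gaps are routine. A quick calculation (the 600-second mean is the protocol's nominal target, not a measured value):

```python
import math

MEAN_GAP = 600.0  # seconds; one block per 10 minutes on average

def prob_gap_under(seconds):
    """P(next block arrives within `seconds`), for exponential inter-arrival times."""
    return 1.0 - math.exp(-seconds / MEAN_GAP)

# Roughly 9.5% of block gaps are under one minute, and about 1% are
# under six seconds, so rapid back-to-back blocks imply no fork at all.
p_minute = prob_gap_under(60)
```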
|
|
|
4331
|
Bitcoin / Hardware / Re: Process-invariant hardware metric: hash-meters per second (η-factor)
|
on: July 06, 2013, 05:31:03 AM
|
And how would a consumer buying ASIC based product use this metric for choosing which product to buy?
You wouldn't— perhaps you've confused this for a thread in a marketplace section? Products are part of the subject of this subforum, but they're not the only part. This is a technology thread, not a what-to-buy thread. You'll note that there is no mention of prices in the original post: any what-to-buy thread would be useless without them.

There are only two metrics that are useful: 1) Cost in $/GH for the manufacturer to make - only a few know what it is exactly, and it can vary by 200%-500% 2) Cost in $/GH for the consumer to buy

Ha. On the contrary, I think both of those metrics are irrelevant. What matters— when it comes to buying mining products at this time— is what is available when. ... but all three of these are offtopic for _this thread_. Please keep further discussion here to the subject set out in the first post.
|
|
|
4332
|
Bitcoin / Development & Technical Discussion / Re: if transactions were moved "off-chain", would the mainchain's blocks get the fee
|
on: July 06, 2013, 04:40:46 AM
|
First you should answer "what does the bitcoin blockchain need to exist for?"— if there really are no transactions of any kind at all, then there is no reason for it to exist.
Most (all?) discussion about off-chain transactions still has transactions happening on the chain— just not all of them. In those models network security is funded by the fees on transactions that happen on the chain and not (directly, at least) by ones that happen off of it.
(There are other motivations for transaction fees possible too— things like fidelity bonds... but in any case— I think we need more information to discuss your hypothetical future in any detail)
|
|
|
4333
|
Bitcoin / Hardware / Re: Process-invariant hardware metric: hash-meters per second (η-factor)
|
on: July 06, 2013, 03:48:00 AM
|
So ... who gave a moderator a blowjob to remove this particular post of mine? It is indeed directly on topic and valid unless the moderator was a moron? The Avalon chip hashes slower than the chip used in the old BFL FPGA, uses at least 1.5 times the power of a BFL SC (per MH/s), requires ~15 times the number of chips compared to a BFL SC (per MH/s), and a box somewhere between 5 and 10 times that of a BFL SC Single ... rates: [...] The Avalon above the BFL SC Not only that, but the BFL SC is pure custom ASIC, whereas the Avalon seems more and more each day to be a quick and dirty hack implementation. Again these numbers are irrelevant to anyone but someone who wants to name a new number and pretend it's important.
Yeah good on removing all the other crap, but no reason at all to remove this one.

It wasn't deleted, it was moved to offtopic: https://bitcointalk.org/index.php?topic=250364.0 And it _is_ offtopic: this whole thread is about a _process invariant_ metric. It's an approximation of the performance if fabricated on a similar process with a similar die size. The absolute performance of the devices as fabricated is available all over (and even in the OP, at least in per-chip form). You can yabber on about how BFL is "pure custom" and Avalon is a "dirty hack"— but that's irrelevant to the thread. The thread is about the proposed process-invariant number and your message was not, so it got moved with all the other off-topic dicksizing which it had inspired.
|
|
|
4334
|
Bitcoin / Development & Technical Discussion / Re: Proof of Work Question
|
on: July 06, 2013, 01:19:43 AM
|
Is it possible to make a proof of work scheme where you can prove how many iterations you have done without (the verifier) redoing the whole calculation?

Yes, with a number of conditions. First, we need "iterations" which can be done in parallel, e.g.

for i in 0..10: output[i] = pow(i)

instead of:

output[i] = pow(output[i-1])

Given that, say we're going to do 65536 iterations:

for i in 0..65535: output[i] = pow(i)

Now arrange the output[i] into the leaves of a fully populated binary tree. It will have a depth of 16 levels for 65536 iterations. At each node in the tree hash its children... just like a transaction hash tree in Bitcoin. Take the first ~128 bits of the root hash and use them as indexes to pick eight unique iteration outputs. Collect those outputs along with the tree fragments that connect them up to the root. Your proof of work is the eight solutions and the connecting tree fragments.

This demonstrates with high probability that all the work was actually done: an attacker who only did eight of the iterations and then searched for a random value that specified his eight would have to do 2^128 work. You can twiddle the numbers for a bandwidth/security tradeoff. This is called non-interactive cut and choose, and it's often used in various kinds of zero-knowledge proof.

If you were specifically wanting a _serial_ proof of work— then no, one can't be constructed. Unless you make everyone work on the same problem (and then it's a race, which is _bad_), anyone can use the randomization to convert an arbitrarily serial PoW into a fully parallel one.
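The construction above can be sketched end to end. This is a toy: the per-iteration `pow` is just a hash of the index, and 256 leaves stand in for 65536 so it runs instantly; the function names are illustrative.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def pick_indices(root, n_leaves, n_samples):
    """Derive n_samples distinct leaf indices from the root hash (the challenge)."""
    out, counter = [], 0
    while len(out) < n_samples:
        stream = H(root + counter.to_bytes(4, "big"))
        for k in range(0, len(stream), 2):
            i = int.from_bytes(stream[k:k + 2], "big") % n_leaves
            if i not in out:
                out.append(i)
            if len(out) == n_samples:
                break
        counter += 1
    return out

def branch(levels, index):
    """A leaf value plus the sibling hashes connecting it up to the root."""
    path = [levels[0][index]]
    for lvl in levels[:-1]:
        path.append(lvl[index ^ 1])
        index //= 2
    return path

def prove(n_leaves=256, n_samples=8):
    # 1. All "iterations" are parallel: leaf i = pow(i), here just a hash.
    leaves = [H(i.to_bytes(4, "big")) for i in range(n_leaves)]
    # 2. Build the binary hash tree over the outputs.
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[j] + prev[j + 1]) for j in range(0, len(prev), 2)])
    root = levels[-1][0]
    # 3. The root hash picks which iterations must be opened (cut and choose).
    return root, {i: branch(levels, i) for i in pick_indices(root, n_leaves, n_samples)}

def verify(root, proof, n_leaves, n_samples=8):
    """Check the opened leaves match the root-derived challenge and the root."""
    if sorted(proof) != sorted(pick_indices(root, n_leaves, n_samples)):
        return False
    for index, path in proof.items():
        h, i = path[0], index
        for sib in path[1:]:
            h = H(h + sib) if i % 2 == 0 else H(sib + h)
            i //= 2
        if h != root:
            return False
    return True
```

The verifier redoes only 8 branches of ~log2(n) hashes each instead of all n iterations, which is the whole point of the scheme.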
|
|
|
4336
|
Bitcoin / Development & Technical Discussion / Re: Bitcoin addon: Distributed block chain storage
|
on: July 01, 2013, 05:33:25 AM
|
I think things are going to move to an architecture where a limited set of nodes in the network manage the currency, and most account owners are light clients of some kind.

If you want this then the Bitcoin blockchain and protocol are the wrong design to achieve it. Services like Visa and PayPal are far better designed for serving many transactions from small clusters. More secure, too: once you must trust a limited set of nodes not to cheat, protocols which cannot be compromised unless those trusted nodes do so offer a better security model.
|
|
|
4337
|
Bitcoin / Development & Technical Discussion / Re: Unspent outputs
|
on: June 02, 2013, 11:29:43 PM
|
I think it happened in the code, when they switched to LevelDB. It doesn't matter - whoever had those coins didn't bother to spend them on time, or more likely, nobody had them anyway. And besides, he can always go to Gavin for a refund

You are thoroughly confused. We would never change the software in a way that stole coins from someone, and no one would adopt the software if we did. There were two instances of broken miners which created the same coins twice. Because of the way the software was written, with an implicit assumption that txids were unique, the second coins overwrote the first. The creator of those coins destroyed them, not anyone else.
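The failure mode described here can be modeled in a few lines. This is a toy model with a made-up txid, not the actual database code:

```python
# Toy model of the implicit assumption that txids are unique:
# the coin database is keyed by txid, so a second transaction with
# an identical txid silently replaces the first one's outputs.
coins = {}

def connect_transaction(txid, outputs):
    """Add a transaction's outputs to the coin set, keyed by txid."""
    coins[txid] = outputs  # duplicate txid => the earlier coins are overwritten

# A broken miner creates an identical coinbase twice (hypothetical txid):
connect_transaction("feedface", [("coinbase", 50_0000_0000)])
connect_transaction("feedface", [("coinbase", 50_0000_0000)])

# Only one entry remains: the miner who created the duplicate destroyed
# the first instance's coins; nobody else's coins were touched.
assert len(coins) == 1
```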
|
|
|
4339
|
Bitcoin / Development & Technical Discussion / Zerocoin when?
|
on: May 26, 2013, 10:58:04 PM
|
Posting here in the hopes Gavin sees it... Are there any plans for a zerocoin hard fork implementation in the future? Not even the near future, but any future. Or is that just not at all in the cards?
Plenty of people other than Gavin are perfectly competent to answer this question. I've split your post off of the 0.8.2rc3 thread. Please don't make off-topic posts just to try to reach specific parties.

As zerocoin is currently designed it is not viable as a production component in Bitcoin:

* >40kbyte signatures (the authors of the paper give some hand wave at a DHT, but this doesn't meaningfully solve the problems created by enormous transactions: the parties interested in them and all full nodes must transfer them to validate them)
* Requires a trusted party to initialize the accumulator (there may be some multiparty computation trick to avoid this, but it's not clear how to apply one in the context of an anonymous system with users that come and go)
* Accumulator grows forever (unprunable, though you could rotate accumulators at the cost of the anonymity set size)
* Validation that runs on the order of 1-2 transactions per second.

Of course, computers get faster and techniques get better: if some combination of improved technology and improved techniques made it 1000x less intensive relative to now, it would be pretty interesting. Also, if threats to fungibility aren't addressed through less expensive mechanisms, it may become more interesting even if the cost is still somewhat prohibitive.

FWIW, inclusion of something like zerocoin would not require a hardfork. I believe it could be happily accomplished as a soft-forking change. Bitcoin is designed to be extensible and able to incorporate new transaction rules without breaking compatibility.
|
|
|
4340
|
Bitcoin / Development & Technical Discussion / Re: Bitcoind on Debian (SUN SPARC)
|
on: May 26, 2013, 05:29:42 AM
|
Significant rewrite would be required to port Bitcoin to any big-endian architecture.
That's a bit of an exaggeration. Luke has a branch which is almost but not quite there. Just about everything gets marshaled through serialization, but some work is required to get all the details right. No one competent and productive working on the codebase considers it a major priority.
|
|
|
|