Cute, but interest (if you even have that anymore) is much less than inflation.
|
|
|
I think it's good for alternative systems like Freicoin to exist because now, for the first time ever, people will have a free choice between currency theories without central planning or coercion.
...
I do not think the monetary theories underlying Bitcoin will lose in a free marketplace of ideas, but if it turns out I'm wrong, at least I'll be wrong because something even better than Bitcoin showed up. From that standpoint it's a win-win outcome.
Wise words.
|
|
|
Impaler can get excited sometimes. I am preparing a blog post explaining in detail what I propose to accomplish. Here is a listing of the major points of order:
* The bitcoin Satoshi client (bitcoind and Bitcoin-Qt) will be updated to support two separate Merkle-tree indices: txid:n -> TxOut, and script -> TxOut.
* JSON-RPC commands will be added to query these indices, retrieving (1) the root hashes of the indices for a block; (2) proof of (non-)existence of an unspent transaction output; or (3) the unspent outputs associated with a script, with proof.
* New P2P messages will be created for making queries (2) and (3) against nodes exposing suitable service bits.
* Implement a merged-mined “meta-chain” which contains, instead of transactions, checkpoints of the root hashes of the above indices.
* Modify the Satoshi client to start in security-level 2 (see above), and work its way to security-level 0 as a fully-validating node, allowing for quick startup times regardless of blockchain size.
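As a rough illustration of the index commitments described in the first bullet, here is a minimal Python sketch of committing to a set of index entries with a Bitcoin-style double-SHA256 Merkle root. The entry serialization is hypothetical, and the actual proposal would use an authenticated tree keyed by txid:n or script rather than a flat list, but the hashing idea is the same:

```python
from hashlib import sha256

def dsha256(b):
    """Bitcoin's standard double-SHA256 hash."""
    return sha256(sha256(b).digest()).digest()

def merkle_root(leaves):
    """Bitcoin-style Merkle root: hash each leaf, then repeatedly pair
    and hash, duplicating the last hash when a level has an odd count."""
    level = [dsha256(leaf) for leaf in leaves]
    if not level:
        return dsha256(b'')
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical serialized txid:n -> TxOut index entries:
entries = [b'txid0:0|txout0', b'txid1:1|txout1', b'txid2:0|txout2']
root = merkle_root(entries)
```

A node would then be able to serve any one entry along with its Merkle branch, letting a lite client verify membership against the committed root.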
There are other related things which may be valuable to accomplish, such as pruned operation, and many, many small details have been left out. However, any work on alternative currencies such as Freicoin is out of scope and will not be paid for by this stipend.
|
|
|
In practice that probably wouldn't bloat the UTXO set at all.
|
|
|
I was responding to: "the blockchain size will increase linearly".
I don't think there is sufficient consensus as to what will happen once the blocksize limit is removed. However the odds are very slim that it will remain a straight line.
|
|
|
Difficulty adjusts every 9 blocks.
|
|
|
There was a recent article in Bitcoin Magazine that analysed the blockchain size, but most of their articles are not available online. The main point is that the blockchain size will increase linearly, while hard drive capacity and internet speed increase exponentially (although right now the blockchain appears to increase faster because the block size limit hasn't been hit yet). The article looked at some worst-case scenarios, and even using really conservative estimates, 20 years from now people will easily be able to store the whole blockchain on their phone and download the whole thing in a few hours.
That does not hold true once the blocksize limit is removed, which will happen eventually.
|
|
|
@maaku, you said yesterday we left an open conversation more or less here:
maaku: Metachain miners could get lazy and not maintain the indexes properly.
jtimon: If the indexes are in the main chain there wouldn't be such a problem.
maaku: In the main chain the thing gets worse.
jtimon: How is that possible? Blocks with wrong indexes will be orphaned.
Actually we should let gmaxwell respond to this, as I was really just relaying his concerns expressed to me at the conference. I think perhaps he is considering the case where the information is included in the bitcoin coinbase, but not actually enforced as a protocol rule? Also, about socrates's proposal: I think it's also useful for custom assets that can be issued, used, and destroyed. New miners (or users) don't really care about these destroyed assets.
True except for the miner part. You can fork the blockchain just as easily with invalid asset transactions.
|
|
|
I would also suggest getting in touch with the Bitcoin Foundation. Even if they don't host the event as they did in San Jose, they might be willing to provide some sort of logistical/planning support.
|
|
|
I'm not doubting that it's possible, just wondering if you'd actually be gaining anything. I suppose the application would be for devices with suitable bandwidth but very tight memory constraints, like hardware wallets?
|
|
|
Due to constructive feedback received at the conference, I have created a ReDonate campaign as an optional method for adding to the "Ultimate blockchain compression w/ trust-free lite nodes" campaign. ReDonate is a service for contributing a one-time donation, while receiving periodic reminders in the future to re-donate if you are happy with the progress being made. http://redonate.net/dashboard/bitcoin-developer-maaku

As noted in the OP, the 3-month funding goal has been achieved, but it is expected that it will take more than 3 months to implement this feature. Any further contributions now will go toward extending the amount of time I have to work on it.
|
|
|
0.8 is ultraprune. Synchronization is a lot faster with 0.8 than before, but there are still a lot of other fixes that need to be implemented to really solve the block chain bloat 'problem'.
|
|
|
Right at the top of my list of things to do. Thanks for the reminder ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif)
|
|
|
SD has always had transaction fees. When Mike Hearn and others are cavalier about block chain bloat, it's because there are existing solutions being worked on. The process started with the 'ultraprune' branch that became 0.8, and will continue to improve. If all of the proposed optimizations are implemented, bitcoin can and will scale to VISA-level transaction processing, while being runnable on a commodity PC. It doesn't make sense then to devote valuable conference time to rehashing issues where we already know the work we need to do.
|
|
|
At the developer round-table it was asked if the payment protocol would support alt-chains, and Gavin noted that it has a UTF-8 encoded string identifying the network ("main" or "test"). As someone with two proposals in the works which also require chain/coin identification (one for merged mining, one for colored coins), I am opinionated on this. I believe that we need a standard mechanism for identifying chains, and one which avoids the trap of maintaining a standard registry of string-to-chain mappings.

Any chain can be uniquely identified by its genesis block, 122 random bits is more than sufficient for uniquely tagging chains/colored assets, and the low-order 16 bytes of the block's hash are effectively random. With these facts in mind, I propose that we identify chains by UUID. So as to remain reasonably compliant with RFC 4122, I recommend that we use Version 4 (random) UUIDs, with the random bits extracted from the double-SHA256 hash of the genesis block of the chain. (For colored coins, the colored coin definition transaction would be used instead, but I will address that in a separate proposal and will say just one thing about it: adopting this method for identifying chains/coins will greatly assist in adapting the payment protocol to colored coins.)

The following Python code illustrates how to construct the chain identifier from the serialized genesis block:

```python
from hashlib import sha256
from uuid import UUID

def chain_uuid(serialized_genesis_block):
    h = sha256(serialized_genesis_block).digest()
    h = sha256(h).digest()
    h = h[:16]
    h = ''.join([
        h[:6],
        chr(0x40 | ord(h[6]) & 0x0f),  # version 4
        h[7],
        chr(0x80 | ord(h[8]) & 0x3f),  # variant: RFC 4122
        h[9:],
    ])
    return UUID(bytes=h)
```
And some example chain identifiers:

mainnet: UUID('6fe28c0a-b6f1-4372-81a6-a246ae63f74f')
testnet3: UUID('43497fd7-f826-4571-88f4-a30fd9cec3ae')
namecoin: UUID('70c7a9f0-a2fb-4d48-a635-a70d5b157c80')
As for encoding the chain identifier, the simplest method is to give "network" the "bytes" type, but defining a "UUID" message type is also possible. In either case bitcoin mainnet would be the default, so the extra 12 bytes (vs: "main" or "test") would only be an issue for alt-chains or colored coins.
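For what it's worth, the same construction ports directly to Python 3, where bytes are mutable via bytearray rather than joined as strings; this is just a sketch of the identical version/variant bit-twiddling, not part of the proposal itself:

```python
from hashlib import sha256
from uuid import UUID

def chain_uuid(serialized_genesis_block):
    # Low-order 16 bytes of the double-SHA256 hash of the genesis block.
    h = bytearray(sha256(sha256(serialized_genesis_block).digest()).digest()[:16])
    h[6] = 0x40 | (h[6] & 0x0f)  # set version: 4 (random)
    h[8] = 0x80 | (h[8] & 0x3f)  # set variant: RFC 4122
    return UUID(bytes=bytes(h))
```

The result is deterministic for a given serialized block, and always reports itself as a version-4, RFC 4122-variant UUID.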
|
|
|
This is a great idea. We need something in Europe like the conference that just wrapped up in San Jose - focused on tech, business, and regulation.
|
|
|
Start hashin'
Seriously, blocks are coming in that far apart. That's why we're changing the difficulty adjustment algorithm. Current block is #28316.
|
|
|
Most if not all of the real world use cases for OP_EVAL are handled by P2SH.
|
|
|
|