Given that the exchange rate is free-floating (as it should be), changing the block reward from halving every 210,000 blocks to a constant 50 would probably not cause much difference at all.
But the speculation is moot... nobody's changing that part of bitcoin.
|
|
|
Just pushed chain reorg code to github. It seems to survive a static test... let's see how it fares on the live network.
|
|
|
I don't see any reason why we can't have the best of both worlds.
We can! Get a github account and start submitting pull requests. It's open source: if you have a need, we welcome those changes being submitted. The Qt part of the client certainly needs more Qt hackers! If you can't program, hire someone who can write the feature for you.
|
|
|
By keeping the bitcoin network stable and working smoothly they ARE indirectly helping to create lightweight clients and web wallets etc. Everyone else relies on them. With the limited manpower all open source projects have I presume they concentrate their firepower on "keeping the engine running smoothly" and leave the eye candy to other devs.
That's pretty much it. The reference client needs to be correct first and foremost, and that is primarily where the "core dev team" attention goes. But isn't "core dev team" a meaningless term? There are many other developers working on web wallets and alternative clients that are far more usable. At a minimum I would trust the clients built upon BitcoinJ. Using MtGox or another web wallet means zero blockchain download time, too.
|
|
|
Because pynode startup may take a while, here is a set of pre-indexed database files you may download:

http://gtf.org/garzik/bitcoin/chaindb.tar.bz2 (2.3 gigabytes)
SHA1: a8e093ce7eedb3434a3a97570be8c3b855fa7529

This was generated by pynode git HEAD (commit b93b91b1bb659b0477eeff8b0a6771bdb595f0f4). If you wish to treat this as untrusted data, you may verify hashes and TX connectivity with the dbck.py script. Runtime should be under 1 hour in most cases. If you wish to verify scripts, testscript.py will verify the scripts in the database; this will take much longer, possibly 10-12 hours.
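If you want to check the download before unpacking it, the published SHA1 can be compared against the file with a few lines of Python. This is only an illustrative helper (the function name is mine, not part of pynode):

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-1 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published above before trusting the data:
expected = "a8e093ce7eedb3434a3a97570be8c3b855fa7529"
# sha1_of_file("chaindb.tar.bz2") == expected
```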
|
|
|
I think this has zero chance of ever happening to Bitcoin, but if a country declares something to be its official currency,[...]
Think for a second. Who cares if bitcoin is a country's official currency? Bitcoin use need only be protected by law in a certain country to gain added legitimacy... An affirmative law saying "yes, bitcoin is legal" would be wonderful, would it not?
|
|
|
My guesses:
1) Gavin is working for the CIA 2) Gavin is working for Goldman Sachs 3) Satoshi's real name will be revealed on the 11 o'clock news.
|
|
|
Can you fix the SatoshiDICE software to stop generating a ton of tiny outputs, in the event of a loss? One of the long term drains on resources is tracking unspent coins -- and SD creates 0.00000001 outputs for losing bets. We have to individually track each one of those outputs until it is spent. Of course, it is difficult for users to find ways to spend 0.00000001 BTC, so the obvious database bloat for each bitcoin user results...
It is a little bit off-topic, but may I suggest that if that becomes a concern, the bitcoin client just merges the outputs into a regular transaction, purely for defragmenting reasons? As a coin selection algorithm it could always pick the smallest outputs first, for example. Obviously there could be multiple (configurable) coin selectors with different focuses: privacy, output-count minimisation, transaction size/fee minimisation...

gmaxwell already made a basic patch, but it wasn't good for privacy, so it's not ready yet. Regardless, that does not fix all the implementations in the field now, nor does it fix non-Satoshi clients such as MtGox etc. Because the network must always track every unspent coin, creating tons of difficult-to-spend coins is network-unfriendly.
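The smallest-first idea above can be sketched in a few lines of Python. This is an illustrative helper only; the function name and tuple layout are assumptions, not the actual client's coin-selection code:

```python
def select_coins_smallest_first(utxos, target):
    """Greedy smallest-first coin selection: spends dust outputs
    preferentially, shrinking the set of unspent outputs the
    network must track.

    utxos  -- list of (txid, vout, value) tuples, values in satoshis
    target -- amount to cover, in satoshis
    Returns (selected, total) or raises ValueError on insufficient funds.
    """
    selected, total = [], 0
    # Sort by value ascending, so the tiniest outputs get consumed first.
    for utxo in sorted(utxos, key=lambda u: u[2]):
        selected.append(utxo)
        total += utxo[2]
        if total >= target:
            return selected, total
    raise ValueError("insufficient funds")
```

A real selector would weigh fees and privacy too, which is exactly why multiple configurable strategies are suggested above.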
|
|
|
URL: https://en.bitcoin.it/wiki/BIP_0035

Abstract

Make a network node's transaction memory pool accessible via a new "mempool" message. Extend the existing "getdata" message behavior to permit accessing the transaction memory pool.

Motivation

Several use cases make it desirable to expose a network node's transaction memory pool:
- SPV clients, wishing to obtain zero-confirmation transactions sent or received.
- Miners, to avoid missing lucrative fees, downloading existing network transactions after a restart.
- Remote network diagnostics.

Specification

- The "mempool" message is defined as an empty message where pchCommand == "mempool".
- Upon receipt of a "mempool" message, the node will respond with an "inv" message containing the MSG_TX hashes of all the transactions in the node's transaction memory pool. An "inv" message is always returned, even if empty.
- The typical node behavior in response to an "inv" is "getdata". However, the reference Satoshi implementation ignores requests for transaction hashes outside those recently relayed. To support "mempool", an implementation must extend its "getdata" message support to query the memory pool.
- Feature discovery is enabled by checking two "version" message attributes:
  - Protocol version >= 60002
  - NODE_NETWORK bit set in nServices

Note that existing implementations drop "inv" messages with a vector size > 50000.

Backward compatibility

Older clients remain 100% compatible and interoperable after this change.

Implementation

URL: https://github.com/bitcoin/bitcoin/pull/1641

Discussion

See the bitcoin-development list on SourceForge.
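As a rough illustration of the wire format involved, here is how an empty "mempool" message could be framed in Python, using the standard Bitcoin P2P envelope (network magic, 12-byte zero-padded command, payload length, and the first 4 bytes of a double-SHA256 checksum). The helper name is mine; this is a sketch of the message framing, not the pull request's implementation:

```python
import hashlib
import struct

MAINNET_MAGIC = 0xD9B4BEF9  # appears on the wire as f9 be b4 d9

def make_message(command, payload=b"", magic=MAINNET_MAGIC):
    """Frame a Bitcoin P2P message: magic, 12-byte padded command,
    payload length, then first 4 bytes of double-SHA256(payload)."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    header = struct.pack("<I12sI4s", magic,
                         command.encode().ljust(12, b"\x00"),
                         len(payload), checksum)
    return header + payload

# "mempool" carries no payload; the 24-byte header is the whole message.
msg = make_message("mempool")
```

Per the feature-discovery rules above, a client would only send this after confirming the peer's "version" message advertises protocol version >= 60002 with the NODE_NETWORK bit set.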
|
|
|
Perhaps we should request that Nefario lock this thread, and create a new one. There are already many Bitcoinica threads... please use those. No matter your position on this issue, you are spamming this thread with off-topic content. Yes, even you, genjix
|
|
|
Nice! I'm only testing against the Satoshi client for now. I'm also making an alternative client in Python. So far I can download and verify blocks and transactions (mostly... I have implemented a hacky replacement for the scripting, just ripping out the sig and public key and checking whether it verifies...). I have no working wallet for now, but I can send hand-crafted transactions.

If you are using Python, please consider using the 'bitcoin' module found in https://github.com/jgarzik/pynode. It already does full script and block verification. The only major to-do is chain reorg, and that does not impact the 'bitcoin' module at all.
|
|
|
Please move this off-topic discussion to the dozen other threads covering the situation.
|
|
|
The project was on git from the get-go; maybe the git history was forked/rebuilt?
Incorrect. The project started on subversion on SourceForge.
|
|
|
That's automated from git, and my guess would be that satoshi's work actually predates the git repository.
|
|
|
pynode's script engine now successfully verifies all scripts in the mainnet and testnet3 chains.
|
|
|
Updated OP to indicate the "python-bitcoin" nature of the project. Programmers may use some or all of pynode's library, without using the node itself.
|
|
|
Updated status: pynode now verifies all known scripts on mainnet and testnet3, except for the OP_IF opcodes. OP_IF is not present on mainnet right now, so this should not be a problem for testing. pynode now includes a JSON-RPC server like bitcoind. Only a few RPC API methods are implemented right now. Check back in a few days to see more RPC API calls implemented, such as parts of the raw transaction API. pynode is not officially stable yet, as script verification is disabled (run testscript.py instead) and chain reorg is not implemented yet.

Definitely interested in this and running it now. Is it normal to see a bunch of blocks labeled as orphan at the first startup?
Possibly yes. Chain reorganization is not yet implemented. Small 1-2 block forks happen on mainnet every few days. If the chain forks, pynode will not understand this and will get stuck. This is a known problem and will be fixed. The workaround is to delete your chain database and re-download it. If you know Python, you could save download time by writing a simple script that deletes the last 100 blocks. You may ignore orphans, as long as you see new network blocks being stored in the database, with 'height' increasing.

Also, the example lists /tmp/chaindb for the database location. Perhaps that should be .pynode/ if it will be helpful later to keep the database?
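For illustration only, the "delete the last 100 blocks" workaround mentioned above might look something like the following. The key layout assumed here (a "height" tip key plus "height:&lt;n&gt;" index entries in a dict-like store) is entirely hypothetical, not pynode's actual schema; check the pynode source before attempting this against a real database:

```python
def trim_chain_tip(db, depth=100):
    """Roll a chain database back `depth` blocks from the tip.

    `db` is assumed to behave like a dict mapping the keys
    "height" (current tip height, as a string) and "height:<n>"
    (block index entry at height n). This layout is a stand-in;
    adapt it to the real on-disk schema.
    """
    tip = int(db["height"])
    new_tip = max(tip - depth, 0)
    # Drop every index entry above the new tip.
    for h in range(new_tip + 1, tip + 1):
        db.pop("height:%d" % h, None)
    db["height"] = str(new_tip)
    return new_tip
```

On restart, the node would then re-download only the trimmed blocks instead of the whole chain.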
I trust people can figure this one out on their own. They will also need to avoid using the RPC username "XXXX" and password "YYYY".
|
|
|
pynode now successfully verifies all transaction scripts on mainnet. testnet3 verification still a WIP.
|
|
|
|