I agree that it doesn't have to be SD specific; it could simply filter out spammy dustcoins with no tx fee.
I believe SD sends dustcoins WITH proper tx fees. SD is paying the vast majority of fees now included in blocks, and has been for a while. That is entirely irrelevant when those fees are minuscule compared to the block reward subsidy. The cost of all the unspent outputs created by SD, for lost bets, impacts all bitcoin users.
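The filter suggested in the quoted post could look something like the following sketch. This is purely illustrative relay-policy pseudologic, not the reference client's actual code; the threshold and function names are assumptions (5460 satoshis is an often-cited dust figure from that era, used here only as an example).

```python
# Hypothetical relay-policy sketch: refuse to relay transactions that
# create "dust" outputs while paying no fee. Names and threshold are
# illustrative, not taken from bitcoind.

DUST_THRESHOLD = 5460  # satoshis; illustrative figure only

def is_spammy_dust(output_values, tx_fee):
    """Return True if the transaction creates at least one dust output
    without paying any fee -- traffic a node might decline to relay."""
    creates_dust = any(value < DUST_THRESHOLD for value in output_values)
    return creates_dust and tx_fee == 0
```

Under this sketch, SD's bets would pass, since they carry proper fees; only zero-fee dust would be dropped.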
|
|
|
Second point: start to use testnet extensively before you go on prodnet. Create a public testnet environment with massive transaction volume to get a feeling for what can happen on prodnet. This is very important. At the moment you use prodnet as a test, and you are surprised when something goes wrong. This is not acceptable, and it is not professional.
You are welcome and encouraged to fund a large, professional testing effort. We need all the help we can get!
|
|
|
BDB shouldn't even be part of the spec. The spec should abstract away the actual database implementation details, and instead talk about whether the store must support atomic transactions, whether we should be able to roll those back, under what circumstances to roll them back, and so on.
Enshrining some bug in some particular implementation of "database back end" should be anathema. Our spec should basically be telling us "BDB isn't up to spec; don't use versions of it that aren't able to perform as we need them to."
This. So much this. Dublin Core and RDF/RDA container formats are excellent examples of standards that, in all of their extensive documentation, provide an example of moving the database details out of the software implementation. But this idea is from an alternate world where you don't have to worry about breaking existing users' software and data to the point that they cannot spend their own money. Bug-for-bug compatibility is not a choice made joyfully and willingly. Engineers always prefer a clean slate, a clean interface. That would work here only with a little help from science fiction's time machines.
|
|
|
It is always entertaining to watch non-contributors opine about completely obvious solutions that the devs are silly to have overlooked.
The interesting thing about bitcoin is its organic nature. The bitcoin codebase, warts and all, was dumped into the Internet's collective lap. Reality does not give anyone a chance to pause, wait for a specification to be polished, to wait for every single edge case to be tested (if that were even possible), etc.
Almost half a billion dollars in market cap, and the dev team is still largely unpaid volunteers, trailing behind events, cleaning up the messes reality leaves behind.
|
|
|
Please explain how the timestamp of block 225449 (2013-03-12 05:30:02) is before block 225448 (2013-03-12 05:33:45)?
It is permitted within the spec for one block's 'nTime' field to be set earlier than the previous block's... but only within a certain range of time.
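The commonly described sanity rules behind this answer can be sketched as follows. This is an illustrative simplification, not bitcoind's validation code: a block's timestamp must exceed the median of the prior 11 block times, and must not be more than 2 hours ahead of network-adjusted time. Within that window, consecutive blocks may legitimately carry out-of-order timestamps.

```python
# Sketch of bitcoin's block-timestamp sanity rules, as commonly
# described. Illustrative only -- not the reference client's code.

MAX_FUTURE_DRIFT = 2 * 60 * 60  # seconds a block may run "ahead"

def median_time_past(prev_block_times):
    """Median of the last (up to) 11 block timestamps."""
    recent = sorted(prev_block_times[-11:])
    return recent[len(recent) // 2]

def timestamp_acceptable(n_time, prev_block_times, network_time):
    """n_time must beat the median-time-past, but stay within
    MAX_FUTURE_DRIFT of network-adjusted time."""
    return (n_time > median_time_past(prev_block_times)
            and n_time <= network_time + MAX_FUTURE_DRIFT)
```

Note that the rule never compares a block directly against its immediate predecessor, which is why block 225449 can carry an earlier wall-clock time than 225448.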
|
|
|
"no amount of testing would have found this" is a bit strong.
It is a longstanding bug that has existed throughout bitcoin's history, but it is certainly not impossible to test for this condition. testnet has many test cases embedded in the chain, and this bug's trigger certainly could have been one such test case.
Water under the bridge... Satoshi had zero test cases in the original software. Gavin and gmaxwell led the charge to start adding unit tests and testnet blockchain tests. BlueMatt has been working on a block tester as well.
|
|
|
It is my understanding that it's still possible for the problem to occur, if someone keeps mining on 0.8, right? As long as 0.8 is in the field...
The defaults for 0.8 mining are just fine. However, if you increase the default block size limit, you can reintroduce the problem, yes. Don't worry though. Now that people are aware of the issue, it is easy to handle. The fork will "heal", and all coins are safe.
|
|
|
It is not a solution. Satoshi Dick can start using unique generated addresses for each bet and it will be even worse then.
Open question. Unique generated addresses might imply some per-bet or per-session communication with the users, which would open the door to a much more efficient method of sending the "bet lost" message.
|
|
|
Let's make sure Bitcoin is transaction neutral!
Bitcoin was never transaction-neutral. Transactions with the highest fees per kilobyte will always "win", while transactions with no fees and new coins will always "lose".
The rules exist to protect the network
Much more than that -- miners have always had the power to choose which transactions to include in blocks, and which to exclude. The "censorship" is an intentional part of the system. Miners should be free to ignore transactions they do not feel are worth mining.
|
|
|
GitHub URL: https://github.com/jgarzik/python-bitcoinlib
Repository: git://github.com/jgarzik/python-bitcoinlib.git
The python library for pynode has matured sufficiently to have a home of its own. The python-bitcoinlib project attempts to present a lightweight, modular, a la carte interface to bitcoin data structures and network protocols. Features:
- Easy object interface to all bitcoin core data structures: block, transaction, addresses, ...
- Full transaction script engine
- Fully verifies main and testnet block chains (via pynode)
- ECDSA verification (OpenSSL wrapper)
- Object interface to all known network messages
- Binary encoding/decoding (serialization) for full bitcoin protocol interoperability
- Passes many of the tests shipped with the bitcoin reference client (bitcoind/Bitcoin-Qt)
Like pynode, this library is currently a developer-only release, not recommended for highly secure production sites. Pull requests, comments, questions and donations always welcome.
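To give a flavor of the serialization work such a library entails, here is a stdlib-only sketch of one of the protocol's encoding primitives, the CompactSize ("varint") length prefix. This is illustrative code written for this post, not code taken from python-bitcoinlib itself.

```python
import struct

# Bitcoin's CompactSize varint: lengths below 0xfd fit in one byte;
# larger values get a 1-byte marker followed by a little-endian
# 2-, 4-, or 8-byte integer.

def encode_compact_size(n):
    if n < 0xfd:
        return struct.pack('<B', n)
    elif n <= 0xffff:
        return b'\xfd' + struct.pack('<H', n)
    elif n <= 0xffffffff:
        return b'\xfe' + struct.pack('<I', n)
    return b'\xff' + struct.pack('<Q', n)

def decode_compact_size(data):
    """Return (value, bytes_consumed)."""
    first = data[0]
    if first < 0xfd:
        return first, 1
    elif first == 0xfd:
        return struct.unpack('<H', data[1:3])[0], 3
    elif first == 0xfe:
        return struct.unpack('<I', data[1:5])[0], 5
    return struct.unpack('<Q', data[1:9])[0], 9
```

Every variable-length field in the wire protocol (tx input lists, script bytes, etc.) is prefixed this way, so a round-trippable implementation of this one primitive goes a long way.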
|
|
|
Readers are urged to consider that the following two positions of Satoshi are potentially mutually exclusive:
- Fees will support the system, long term
- Block size may be increased in the future
Right now, the block reward subsidy supports the entire system, so it is difficult to draw any conclusions about a fee-supported future -- and yet that is what we are being asked to do.
At the two extremes... If the block size strategy is too loose (too big), there is no incentive to curb spam, and there are not enough fees. If the block size strategy is too tight (too small), fees are very high, in-blockchain traffic is discouraged, and users may be driven away from bitcoin.
The open question is... how to pick the best number? Or, how to enable a market to pick the best number? Or, how to pick an algorithm that picks the best number?
The fee/block-size balance is a crucial balance that must be maintained for the health of the system. There is little evidence that Satoshi put much thought into this, probably supposing that the market would figure out the answer.
|
|
|
pynode is very much alive and well. It is becoming popular enough that we can branch off the internal "bitcoin" sub-directory into a proper python library, python-bitcoinlib.
Stay tuned for details.
|
|
|
The "soft" limit is set by each miner.
When I mined using vanilla, unmodified bitcoind + p2pool, it was a simple configuration setting to change the limit to 900k.
My first block was over 400k.
Soft limit "maxing out" is a non-event.
Oh, it's as simple as changing some configs? So pool operators are voluntarily limiting their blocks for no particular reason? And some people seem to fear they would go the other way around...
For pools with older bitcoind software, changing the soft limit is simply a matter of changing a constant and recompiling. For pools with recent bitcoind software, it is simply a matter of changing configuration settings:
-blockminsize=<n>  Set minimum block size in bytes (default: 0)
-blockmaxsize=<n>  Set maximum block size in bytes (default: 250000)
-blockprioritysize=<n>  Set maximum size of high-priority/low-fee transactions in bytes (default: 7000)
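Concretely, the p2pool change described above could be expressed as a bitcoin.conf fragment like this (the 900k figure mirrors the earlier example; the exact value is a miner's choice):

```
# bitcoin.conf -- raise the mining soft limit from the 250k default
blockmaxsize=900000
blockminsize=0
```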
|
|
|
Does your service include using actual bitcoin escrow transactions (ex. 2-of-3 multisig)?
|
|
|
Clients should re-broadcast transactions or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]
The current behavior of clients is fine: rebroadcast continually while the transaction is not in a block. Optionally, in the future, clients may elect not to rebroadcast. That is fine too, and works within the current or future system.
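The rebroadcast behavior described above amounts to a simple loop. A minimal sketch, with hypothetical function names (the real client's logic is tied to its networking and wallet code):

```python
# Illustrative sketch of "rebroadcast until confirmed" wallet behavior.
# is_in_block and broadcast are injected callables, standing in for
# blockchain lookup and network send.

def rebroadcast_until_confirmed(txid, is_in_block, broadcast, max_rounds):
    """Rebroadcast txid once per round until it confirms or rounds
    run out. Returns the number of broadcasts performed."""
    sent = 0
    for _ in range(max_rounds):
        if is_in_block(txid):
            break
        broadcast(txid)
        sent += 1
    return sent
```

A client that "elects not to rebroadcast" simply runs zero rounds; nothing else in the protocol changes.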
|
|
|
...you can see how any long-running node will eventually accumulate a lot of dead weight.
Wow... tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool, and I guess eventually there will need to be a patch.
Correct. It's not needed right now, thus we are able to avoid the techno-political question of what to delete from the mempool when it becomes necessary to cull. The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.
Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).
As a matter of fact, that is my current proposal on the table, which has met with general agreement: purge transactions from the memory pool if they do not make it into a block within X [blocks | seconds]. Once this logic is deployed widely, it has several benefits:
- TX behavior is a bit more deterministic.
- Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover, if it fails to make it into a block.
- The mempool is capped by a politically-neutral technological limit.
Patches welcome. I haven't had time to implement the proposal, and nobody else has stepped up.
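The proposal's core is small. A minimal sketch of the age-based cull, with illustrative names and a block-height variant of the X [blocks | seconds] parameter:

```python
# Sketch of the purge proposal: drop any mempool transaction that has
# not confirmed within max_age blocks of entering the pool.
# Data layout is illustrative, not bitcoind's actual mempool structure.

def purge_stale(mempool, current_height, max_age):
    """mempool maps txid -> block height at which the tx first
    entered the pool. Returns (kept, purged) dicts."""
    kept, purged = {}, {}
    for txid, entry_height in mempool.items():
        if current_height - entry_height > max_age:
            purged[txid] = entry_height
        else:
            kept[txid] = entry_height
    return kept, purged
```

The politically-neutral part is that the cull criterion is pure age: no judgment about fees, senders, or transaction content is involved.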
|
|
|
Is there a place that describes how the reference client deals with the memory pool? Like, what happens when it fills up (which transactions get purged, if any, and after how long)?
The only way transactions are purged is by appearing in a block. At present it cannot "fill up" except by using all available memory, and getting OOM-killed. Therefore, you can see how any long-running node will eventually accumulate a lot of dead weight. The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.
|
|
|
|