1/10th?
It's important to realize that if the interval at which blocks are found is not a large multiple of how long it takes for a block to be propagated to and accepted by ~all of the network, the network will start suffering from convergence failures and see enormous reorgs. Even before the point of outright convergence failure, a block time which is too fast relative to latencies would confer an advantage on large hashpower consolidations (e.g. attackers).
Reducing the block size has some effect on propagation time, but it doesn't eliminate the effect of network latency.
More frequent blocks would also multiplicatively increase the header costs for SPV nodes. E.g. 10 years of headers would be 400MBytes with 1-minute blocks rather than 40MBytes.
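The arithmetic behind those header-storage figures is easy to check (assuming the standard 80-byte Bitcoin block header; the function name is mine):

```python
HEADER_BYTES = 80  # size of a Bitcoin block header

def header_storage(block_interval_minutes, years=10):
    """Approximate bytes needed to store `years` of block headers."""
    minutes = years * 365.25 * 24 * 60
    blocks = minutes / block_interval_minutes
    return blocks * HEADER_BYTES

# 10-minute blocks: roughly 42 MB of headers per decade
print(header_storage(10) / 1e6)
# 1-minute blocks: ten times as much, roughly 420 MB
print(header_storage(1) / 1e6)
```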
|
|
|
... What do you think doing that will accomplish? Having some ~no-difficulty odd block won't contribute to security, and it could be found basically instantly. Worse, because such blocks have no reward except that they gate finding another block with a reward, fast mining consolidations would be incentivized to find them quickly and keep them secret until they find the next reward-bearing block. This really feels like development via oracle, where you fling poop at me until you find something I can't deflect. If you want to try out random speculative ideas, why not create an altcoin? The market is a better oracle than I am.
|
|
|
Add txindex=1 to your bitcoin.conf, and restart bitcoin with -reindex. It will rebuild your database with an index of all transactions. Adds about 1.4Gbytes of required storage.
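Concretely, the change looks like this (the datadir path is the usual Linux default and is an assumption, not from the post):

```ini
# ~/.bitcoin/bitcoin.conf
txindex=1
```

Then restart with `bitcoind -reindex` so the database is rebuilt with the transaction index.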
|
|
|
Using the Benaloh homomorphic encryption system you can check that txins and txouts are equal without knowing anything about their real values. If you encrypt two bunches of numbers that add up to equal sums using Benaloh, the modular products of those two bunches of ciphertexts decrypt to the same value.
But only if they have the same key. This is offtopic for this thread.
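As an illustration of the property being described, here is a toy sketch of Benaloh encryption. The parameters (r=5, p=11, q=3) are deliberately tiny and insecure choices of my own, not anything from the thread; the point is only that the product of ciphertexts decrypts to the sum of the plaintexts mod r — and, as noted, only under the same key.

```python
import math
import random

# Toy Benaloh parameters -- tiny and insecure, for illustration only.
r = 5              # plaintext space is Z_r
p, q = 11, 3       # r | p-1, gcd(r, (p-1)//r) = 1, gcd(r, q-1) = 1
n = p * q          # public modulus
phi = (p - 1) * (q - 1)
y = 2              # public base; valid because pow(y, phi // r, n) != 1

def encrypt(m):
    """E(m) = y^m * u^r mod n for a random u coprime to n."""
    while True:
        u = random.randrange(2, n)
        if math.gcd(u, n) == 1:
            break
    return (pow(y, m, n) * pow(u, r, n)) % n

def decrypt(c):
    """Recover m by brute force: c^(phi/r) = (y^(phi/r))^m mod n."""
    a = pow(c, phi // r, n)
    base = pow(y, phi // r, n)
    for m in range(r):
        if pow(base, m, n) == a:
            return m
    raise ValueError("decryption failed")

# Two "sides" of a transaction with equal sums (mod r): 2 + 1 == 3
ins = [encrypt(2), encrypt(1)]
outs = [encrypt(3)]

prod = lambda cs: math.prod(cs) % n
assert decrypt(prod(ins)) == decrypt(prod(outs))  # both decrypt to 3
```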
|
|
|
Sorry, you haven't stated a coherent proposal— e.g. how these "smaller blocks" fit into the consensus or any other specific details, so I can't really comment on it. All I could do is guess at what you actually meant, and I expect you'd then reply that I'd gotten my guesses wrong.
Generally ideas like this are unworkable because even if a block is very small it takes time to propagate around the network due to latency (e.g. the speed of light is finite). If the time between blocks is not a substantial multiple of the time it takes for a block to reach ~all the nodes, then blocks will frequently be found before the network has heard about the prior ones; the consensus will fragment and there will be frequent large reorganizations. Such a state makes transactions unsafe until they have many confirmations, and it provides an advantage to large consolidations of hash-power (e.g. an attacker) because they're less diluted by forks.
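A rough feel for the effect comes from a simple back-of-envelope model (my own illustration, not from the post): with Poisson block arrivals and a fixed propagation delay, the chance that a competing block is found while a new block is still propagating is about 1 - e^(-t_prop / t_interval):

```python
import math

def stale_rate(propagation_s, interval_s):
    """Approximate fraction of blocks that race with a competitor,
    assuming Poisson block arrivals and a fixed propagation delay."""
    return 1 - math.exp(-propagation_s / interval_s)

# 10 s propagation, 10-minute blocks: about 1.7% of blocks race
print(round(stale_rate(10, 600), 4))
# Same propagation, 30-second blocks: about 28% race
print(round(stale_rate(10, 30), 4))
```

The model ignores many real-world details, but it shows why shrinking the interval without shrinking propagation delay rapidly increases forks.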
When talking about sharding you have to figure out how to handle transactions that cross shards. E.g. if you have small inputs in one consensus and large ones in another, what happens when a transaction is written that spends from both? If you mean to split by the transaction's actual value, then you don't get any scaling gain from splitting, because any transaction's input could come from either of the prior consensuses.
Beyond that, as others noted— what you're describing sounds like too much of a departure from the rules of bitcoin to get adoption... and also discriminating against transactions by value is arguably undesirable: That's behavior characteristic of the bad rent-seeking behavior of classic payment networks. If you charge lower fees per resource used based on txn value, then someone who wanted to make trouble could split their coins into lots of tiny pieces and use disproportionate resources.
|
|
|
Why don't you go do some testing and find out where it falls down?
Otherwise it's all just buzzwords: REDIS HAZ WEBSCALE!!!!
You might also try looking at the implementations which have already used postgresql...
|
|
|
I have been getting errors such as these:
Those aren't relevant. Look for "Invalid".
|
|
|
I've often wondered about variations along this line myself ... and also if some kind of homomorphic encryption might be used to conceal tx details completely and yet still have verifiable blocks?
Adam had a whole thread on encrypted transactions. Though without compact zero knowledge proofs (like the stuff I discussed in the coinwitness thread) it's hard to make them not super brittle and inefficient, because anyone you pay has to receive and validate the whole decrypted history of a coin, since the coin can't be validated without its history. If you had a zero knowledge proof that the transaction was valid (e.g. that all the outputs and inputs added up) which the network checked, then you could accept the coin without re-verifying the history and, importantly, without revealing the history to the recipient. The trick is finding a system for zero knowledge proofs which is powerful enough to prove the right things, but fast and low-bandwidth enough to actually use, and which doesn't have annoying limitations like requiring a trusted initialization.
|
|
|
My understanding was that the size of the proofs was the primary hurdle to implementation. Is that true?
There were several other limitations:
* Very slow to validate (e.g. on the order of 1-2 tx per second)
* Required a trusted party to initialize the accumulator, and if they violate that trust they could steal coins
* Uses cryptography which is less well studied
* Only handled anonymized coins with one value, reducing the anonymity set size substantially
* Didn't conceal values
* Spent coins list is needed for validation and grows forever (e.g. no pruning of the critical validation state)

Of these, only the first two and the last are probably real barriers; the others are more "doesn't work as well as some hypothetical future system might". There was no way within their prior system to achieve size reductions to the currently mentioned sizes. I'd speculated in some other threads on some technology that could make the proofs smaller and faster, but if they've gone that route there may be other consequences. It's hard to say much of anything useful without more information being made public. I would note that the prior ZC implementation has been available for some time now, and no altcoin has picked it up.
|
|
|
Yea, haven't heard back from him yet. They also asked me for a shipping address, but I haven't heard anything else since. In any case, I'm very hopeful. I'm very much of the power-efficiency-matters crowd, and look forward to seeing more aggressively power-efficient hardware— fastest to market with no heed of power usage is not a good strategy for anyone mining for the long term.
|
|
|
hm. They're already being auctioned for Bitcoin without any third party validation? Kinda sad.
|
|
|
Add some more transactions to the block that have come in since the last header creation
Do the step needed now that the new transactions are in the block
Zero the nonce
Continue
Miners already do this. Some pools will even trigger longpolls to trigger miners to get work early if enough new txn come in (or at least eligius did in the past, I'm not sure if they still bother).
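The loop the quoted steps describe can be sketched roughly as follows. This is a deliberately toy illustration: the header layout, difficulty target, and the `collect_new_txids` callback standing in for the pool/node interface are all simplifications of mine, not real Bitcoin structures.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Toy merkle root: hash pairs level by level, duplicating the last if odd."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def mine(txids, collect_new_txids, refresh_every=100_000):
    """Search nonces; periodically fold in newly arrived transactions."""
    root, nonce = merkle_root(txids), 0
    while True:
        header = root + nonce.to_bytes(4, "little")  # toy header: root || nonce
        if dsha256(header)[:2] == b"\x00\x00":       # toy difficulty target
            return header
        nonce += 1
        if nonce % refresh_every == 0:
            new = collect_new_txids()                # 1. add txn that came in
            if new:
                txids = txids + new
                root = merkle_root(txids)            # 2. redo the merkle root
                nonce = 0                            # 3. zero the nonce, continue
```

The point being made above is that refreshing work like this is already standard practice; pools push new work (longpolls) for the same reason.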
|
|
|
Can we get a couple of useful bits of data for someone to work on this:
* Earliest confirmed version of 10.8 with the problem
* A sample of a corrupted DB
* Console logs from *during the time of corruption*, including dmesg and system.log
* Information on how bitcoin was built/installed: clang? gcc42? macports/brew for deps?
* Whether the people experiencing the problem have FileVault (FDE) turned on or not, whether it was turned on during the install or after, and if it's ever been cycled on/off
* Also whether people who have hit this are using stock fs settings or have case-sensitivity/etc turned on
|
|
|
Apparently not many people have figured out that any 0.01 BTC minimum is rather easily circumvented. [...] Surely I'm not the first one to think of this.
The 0.01 thing is finally gone in GIT IIRC, as the thing it was trying to do was largely addressed by the anti-dust-output changes. And indeed, you can avoid the anti-dust-output rule exactly as you describe, and there is nothing wrong with that. The point of the anti-dust-output rule is to discourage people from creating UTXO which cost more to spend (in terms of marginal fees) than they yield in coin, and your little protocol achieves that fine.
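The "costs more to spend than it yields" condition is easy to put numbers on. In this sketch the 148-byte P2PKH input size and the fee rate are illustrative assumptions of mine, not values from the post:

```python
P2PKH_INPUT_BYTES = 148  # typical size of a pay-to-pubkey-hash input (assumption)

def is_uneconomic(output_value_sat, feerate_sat_per_byte):
    """True if spending the output costs more in marginal fees than it's worth."""
    cost_to_spend = P2PKH_INPUT_BYTES * feerate_sat_per_byte
    return output_value_sat < cost_to_spend

print(is_uneconomic(1000, 10))  # True: 1480 sat to spend a 1000 sat output
print(is_uneconomic(5000, 10))  # False: output is worth more than it costs to spend
```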
|
|
|
0.1001 mBTC is ABOVE the min threshold of 0.1 mBTC. I am going to shut up now as I am quickly going to cut my own throat.
The threshold is 1 mBTC:
main.cpp: int64_t CTransaction::nMinTxFee = 10000; // Override with -mintxfee
|
|
|
Prioritization by fee, in the stock code at least, is made robust against that kind of gamesmanship by treating any fee below a minimum threshold as zero for the purpose of prioritizing the transactions.
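That clamping behaviour can be sketched like so. This is a simplified illustration: the threshold constant and the (txid, fee) representation are assumptions of mine, not the stock code:

```python
MIN_FEE_SAT = 10_000  # illustrative threshold, not the actual constant

def effective_fee(fee_sat):
    """Fees below the minimum count as zero for prioritization,
    so padding a sub-threshold fee buys no priority."""
    return fee_sat if fee_sat >= MIN_FEE_SAT else 0

def prioritize(txs):
    """Sort (txid, fee) pairs by effective fee, highest first."""
    return sorted(txs, key=lambda tx: effective_fee(tx[1]), reverse=True)

txs = [("a", 9_999), ("b", 10_000), ("c", 1)]
print(prioritize(txs))  # "b" first; "a" and "c" tie at effective fee 0
```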
|
|
|
And when I say low numbers of participants, it also seems there is an option to choose to CoinJoin entirely with inputs and outputs of addresses that only belong to you. How could someone analysing the blockchain prove they were not trying to disentangle the identities of what would in effect be a spoofed CoinJoin? I'm not sure that they could.
Normal transactions do this all the time (ignoring creating faux-anonymous output values), but today people assume common use means common ownership. Part of the advantage of CoinJoin is that if widely used it breaks the ability to assume that, recovering some privacy for everyone.
|
|
|
the post shows ~60% are happy with it, ...
|
|
|
Yes, but as far as I can see the coinjoin is still a separate transaction with separate fees.
Joe can pay for his sandwich directly— this is one option, but not a requirement. Please review the initial post, this is covered in detail.
|
|
|
Touching on gmaxwell's point about whether or not features should be forced, I think we can divide the features into two main categories:
- Individual protections: Anything where the strength of the protection is not dependent, or is only "linearly" dependent, on how many other people also use it. For instance local encryption of wallets, encryption of p2p communication channels, etc.
- Communal protections: Anything where the protection is made significantly stronger as more people use it. Examples include CoinJoin, CoinSwap and mix networks.
See also Eben's brilliant comments regarding privacy problems as an ecological disaster.
|
|
|