P2Pool used to store its sharechain commitments (which is how it achieves trustless, serverless pooled mining) in a zero-value anyone-can-spend output. To clean up the UTXO set, some people spent all those outputs at some point (that might have even been me making that transaction, I don't recall).
It now uses an OP_RETURN output; now that Bitcoin Core knows it doesn't need to track those, there is no useless UTXO to clean up.
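A rough sketch of the difference (this is not P2Pool's actual serialization, just the two output-script shapes being contrasted):

```python
# Contrast the two ways a 32-byte sharechain commitment can be embedded
# in a transaction output. The commitment value here is a placeholder.
commitment = bytes(32)

# Old style: zero-value "anyone can spend" output. A script of just
# OP_TRUE (0x51) is spendable, so every node must carry the output in
# the UTXO set until someone finally spends it.
anyone_can_spend_script = bytes([0x51])

# New style: OP_RETURN (0x6a) followed by a push of the commitment.
# Provably unspendable, so Bitcoin Core never adds it to the UTXO set.
op_return_script = bytes([0x6a, len(commitment)]) + commitment

print(len(op_return_script))  # 34 bytes: opcode + push length + data
```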
|
|
|
One of the people seemingly involved (fire000), who was negative-rating everyone, is now flooding my mailbox with demands that I remove the negative rating I left on him (after encountering him crap-flooding random unrelated threads just to harass people who raised an alarm on this stuff).
(apparently I'm supposed to be scared that he and his sketchy cronies will negative rate me on the forum, oh noes)
|
|
|
SPV could work, but you'd want to be connected to a few nodes at a time to be confident. Just randomly hop around servers in the Electrum server list. You can subscribe to callbacks for events from Stratum servers, if I recall correctly.
Be careful with assumptions like these. All (reachable) peers are malicious peers if an attacker controls your network connection (e.g. they have compromised your ISP or router, or they are your ISP).
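The "connect to several servers" idea can be sketched as follows; the server responses are stubbed out here (a real client would fetch tip headers from actual Electrum-style servers over the network):

```python
# An SPV client that compares the chain tip reported by multiple servers
# and refuses to proceed if they disagree. An attacker who controls the
# network can still stall you, but a single lying server stands out.
def tips_agree(tip_hashes):
    """Return True only if every queried server reports the same tip."""
    return len(set(tip_hashes)) == 1

# Stubbed responses from three servers; one is lying about the tip.
# (Hash strings are shortened placeholders for illustration only.)
honest = "000000000000000000a1"
responses = [honest, honest, "000000000000000000ff"]
print(tips_agree(responses))  # False: disagreement, investigate further
```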
|
|
|
Phase 1 & 3 are on CPU. Phase 2 is on GPU. 0.067733 ms per verification. I have a GTX 560 Ti.
So that's ~14,763 per second? That's not favourable compared to running on the CPU, but there may be many optimizations still available for you.
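The throughput figure is just the reciprocal of the quoted per-item time:

```python
# 0.067733 ms per verification -> verifications per second.
ms_per_verification = 0.067733
verifications_per_second = 1000.0 / ms_per_verification
print(int(verifications_per_second))  # 14763
```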
|
|
|
cyclically xor-ed with the pass key.
This sounds like dangerously crackable snake-oil cryptography. If there is plaintext with a known XOR relationship, then reuse of the pass key bits allows their recovery. Please don't make up novel cryptosystems and encourage other people to use them unless it's strictly necessary. There are many existing, mature, well-reviewed systems for symmetric encryption. This also appears to be off-topic.
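The attack described above takes only a few lines to demonstrate (the key and plaintext here are made-up examples):

```python
# With a repeating XOR key, any known plaintext leaks the key bytes it
# was XOR-ed against, which then decrypt everything else encrypted at
# those key positions.
from itertools import cycle

def xor_with_key(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

key = b"secret!"                 # hypothetical short pass key
known = b"BEGIN HEADER"          # attacker knows this prefix
ciphertext = xor_with_key(known + b" ...private data...", key)

# XOR the known plaintext against the ciphertext to recover key bytes:
recovered = xor_with_key(ciphertext[:len(known)], known)
print(recovered[:len(key)])  # b'secret!' -- the key falls right out
```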
|
|
|
Are they truly 100% random (generated by hardware) or pseudo-random (generated by software)?
They aren't random. They're a pure function of the data in the block. They are not (believed to be) predictable in advance, but miners can influence them by changing the data in the blocks to get different results. This is how mining works.
Can I rely on it or any other blockchain data to make a 100% trustable raffle?
The security of doing that is limited. If your entire raffle process is known in advance, miners can cheat your raffle at some cost by throwing away otherwise valid blocks that would go a direction they don't like. There are ways to avoid this (e.g. first publishing a commitment to a secret which is combined in the selection), but really, to be frank, if you're asking very simple questions like this you likely have a long way to go before you're ready to design a cryptographic protocol for other people to use. You probably need to spend some more time reading.
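The commit-reveal mitigation mentioned above can be sketched like this (the names and structure are mine, not any real raffle protocol):

```python
# The operator commits to a secret before the drawing block is mined:
# miners can't fully steer the outcome without knowing the secret, and
# the operator can't choose the secret after seeing the block hash.
import hashlib

def commit(secret: bytes) -> str:
    """Publish this hash before the target block height is reached."""
    return hashlib.sha256(secret).hexdigest()

def draw(secret: bytes, block_hash: bytes, n_tickets: int) -> int:
    """After the block appears, reveal the secret and combine both."""
    mix = hashlib.sha256(secret + block_hash).digest()
    return int.from_bytes(mix, "big") % n_tickets

secret = b"operator chosen random secret"
print(commit(secret))                     # published in advance
winner = draw(secret, b"\x00" * 32, 100)  # placeholder block hash
print(0 <= winner < 100)                  # True
```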
|
|
|
We have intentionally not provided one because it is such a huge footgun and is almost never useful. Use an external tool to edit the wallet (which also helpfully makes clear the level of danger in such an operation).
|
|
|
The security argument for Bitcoin works even if you set the price of mining hardware to zero -- Bitcoin is proof of work, and the input to work is _energy_. The security argument is generally insensitive to price except at the extreme limit of $0 (and there... there are more things to worry about than reorgs).
You have a mutually exclusive choice for what to do with the units of energy available to you. You can attack, devaluing the Bitcoin you gain from your attacks, or you can mine honestly and protect your existing Bitcoins and the new transaction fees you receive.
The market price of Bitcoin has changed more than tenfold in the past, and there were no security incidents as a result.
|
|
|
Could you rephrase that slightly? I am not sure if you mean it will work, or
there is no speed-up over what is already implemented, or Bitcoin Core already sneaks some code onto the GPU and is already fully implemented, or the GPU would be of no advantage because the most parallel it can become can be run on a 2+ core CPU as fast as possible (high step complexity overall)?
It is already parallel; it does not use the GPU. Using the GPU effectively will be difficult because there is not that much parallelism available unless you are verifying multiple blocks at once, and communicating with the GPU has huge latencies. Competing with state-of-the-art CPU code will take state-of-the-art GPU code (my own vanitygen code on the CPU is roughly as fast as the published GPU vanitygen). I think you should try it; a port of libsecp256k1 to OpenCL would be pretty interesting and useful. If it were faster we'd probably be willing to do the architectural changes (making it possible to verify multiple blocks at once) to enable using it.
|
|
|
But one moment in future it will rapidly fall to zero.
Similar to how one moment in the future people will stop posting half-informed hand-waving to this forum? You can keep asserting things like this-- but you've given no justification. There is no mechanism by which the security is expected to "rapidly fall to zero" known to me.
|
|
|
Bitcoin Core's verification is already parallel. But there isn't as much parallelism to extract as you might guess, not without extensive architectural changes (e.g. pipelining multiple blocks and accepting partially verified ones). You should not, however, have been seeing it use only one CPU... in fact, Windows users have frequently complained that it must be broken because it was using all of their cores for minutes at a time.
|
|
|
(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency-critical data. Keeping up with the blockchain still requires transferring and verifying all the data.
Not a big deal? Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold to be viable. Meaning that this event would mean that the majority of nodes would shut down, likely permanently.
Right. There is a decentralization trade-off at the margin. But this isn't scaleless -- there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin. As a soft stewardship goal (not a system rule, since it can't be), the commitment should be that the system should be run so that it fits into an acceptable portion of common residential broadband, so that the system does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems.
How to meet that goal best is debatable in the specifics. At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at a point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.
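The "factor of two, at most" point can be sketched with back-of-envelope numbers (all sizes below are illustrative assumptions, not measured values):

```python
# Without compact relay each transaction crosses the wire twice: once
# during mempool relay, then again inside the full block. Announcing the
# block with short transaction ids saves the second copy -- but no more.
txs_per_block = 2000
avg_tx_size = 500    # bytes, assumed average transaction size
short_id_size = 6    # bytes per transaction in a compact announcement

block_bytes = txs_per_block * avg_tx_size
ids_bytes = txs_per_block * short_id_size

naive_total = 2 * block_bytes        # tx relay + full block again
compact_total = block_bytes + ids_bytes  # tx relay + short-id announcement

print(naive_total / compact_total)   # ~1.98: under the 2x ceiling
print(ids_bytes / block_bytes)       # 0.012: latency-critical part is tiny
```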
|
|
|
That isn't a hash function. It's a signature system (sadly one based on the author team's own BLAKE and ChaCha for performance reasons, instead of more standardized functions).
Its focus is on stateless reusable signatures. The cost is that the signatures are huge by our standards... 41,000 bytes (plus a kilobyte public key). In Bitcoin we shouldn't generally have long-lived keys, and so a 'few times signature' scheme or a small tree of one-time signatures (plus state, which the blockchain can provide) are often better and can be done with dramatically smaller sizes.
Certainly that's something I'd use for software releases, however!
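For a feel for the one-time signatures mentioned above, here is a minimal Lamport scheme sketch (educational only; this is not the quoted system, and production code would use a Merkle tree of such keys):

```python
# Lamport one-time signature: reveal one of two hash preimages per
# message bit. Each key pair must be used exactly once -- signing two
# messages leaks enough preimages to allow forgery.
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(pk, msg: bytes, sig):
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(sk, b"release-1.0.tar.gz")
print(verify(pk, b"release-1.0.tar.gz", sig))  # True
print(len(sig) * 32)  # 8192 bytes -- far smaller than 41,000
```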
|
|
|
The requirement is that if there is fraud it must be detectable by some user under some path, and that they have the ability to create a transferable proof of their detection. You can't achieve stronger than that (e.g. that if there is fraud all users can detect it) under this approach. The criterion is met if you show the unsummed values (as listed on iwilcox's page) or just show the one-step-deep off-path preimage.
|
|
|
No, merely certificational weaknesses (around 2^256 work) on reduced-round versions.
|
|
|
Handling the non-1:1 case when the 1:1 case is solved is trivial; more challenging is figuring out actual applications where users would opt into it. There are silly ones, e.g. "ponzi chain" -- I guess one metric for an approach being general and supportive of freedom is that it can accommodate things that sound really ill-advised too... but I've not really found many ideas in the non-1:1 space that I'd realistically expect many to play along with. One argument might be that it's potentially desirable to have a system where lost coins eventually go to support the system somehow... but that very easily runs into issues with bad incentives, as miners can censor transactions to make coins seem lost when they're not.
but can merge-mining be used in the reverse way
Depending on what validation rules you choose you can get whatever security "umbrella" you desire. E.g. the sidechain can have a consensus rule that its work must also commit to the best valid Bitcoin chain (effectively importing Bitcoin validity into the secondary system). This degrades the tidy isolation that you normally get with no linkage, but it gives full security, complete with incentives (because if you misbehave in the Bitcoin blocks you also lose your blocks in the other system). This is good news, but potentially not a cure-all -- you may still be failing to meet your decentralization objectives even though the hashrate security (ignoring centralization degree) is good (e.g. because the scaling requirements and/or goals of the more profitable secondary system are less decentralization-compatible).
|
|
|
The next major version will be v0.10.
|
|
|
We don't generally use BIPs for random UI things.
As far as the idea goes -- perhaps. Though I'd recommend adjusting your business plans so that it's the average of the fees in total that you worry about, rather than the fees on any particular transaction. Otherwise you're just going to get jammed up and unable to transact when someone has sent you a number of dust inputs and the only way to eliminate them is to pay a somewhat high fee... all your low-value transactions will end up failing until you raise that limit.
|
|
|
|