That might be the case with SHA-1 and RSA/DSA-1024 (used by default in old versions of PGP), but the SHA-256 and ECDSA-256 algorithms used in Bitcoin can't be cracked in any reasonable time with current technology. NIST believes that these algorithms and key lengths will remain strong past 2030.
|
|
|
The Bitcoin genesis block contains a newspaper headline, which is supposed to anchor the start of the chain in real time. The choice of headline gives some insight into the project's purpose, I think: "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
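For the curious, the headline sits in the input script of the genesis block's coinbase transaction, and you can read it straight out of the raw bytes. A quick sketch (the hex below is the genesis coinbase scriptSig as recorded on-chain; the byte-layout comments are my reading of it):

```python
# Decode the headline embedded in Bitcoin's genesis block coinbase.
SCRIPT_SIG_HEX = (
    "04ffff001d0104455468652054696d65732030332f4a616e2f32303039"
    "204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e"
    "64206261696c6f757420666f722062616e6b73"
)

raw = bytes.fromhex(SCRIPT_SIG_HEX)
# Layout: a 4-byte push (the difficulty bits), a 1-byte push, then a
# 0x45-byte (69-byte) push containing the ASCII headline.
headline = raw[8:].decode("ascii")
print(headline)
# The Times 03/Jan/2009 Chancellor on brink of second bailout for banks
```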
|
|
|
Neither can you "owe" a Bitcoin. Sending someone a bitcoin does not oblige the payee to do anything in return. Reciprocity is based purely on reputation. There is no enforcement of reciprocity.
If we have a contract where I provide the service of creating a valid Bitcoin transaction of a certain value in return for a good, and you fail to deliver the good, then you have breached the contract and must return the current market value of the service (bitcoins) I provided.
|
|
|
Is there a technical limit beyond this artificial one? Would it just be the bandwidth available to the weakest (in terms of bandwidth) nodes, or are there any inherent limitations of the p2p network itself?
If there are any technical limits (from data types or whatever), they can be eliminated in the same way that MAX_BLOCK_SIZE can be increased. Storage is the main issue, I think: an attacker could create huge blocks that every full network node has to store forever. With just 1 BTC you can create a block of 15+ GB by splitting it into 100,000,000 "nanocoins". In the future most people will run Bitcoin in a "simple" mode that doesn't require downloading full blocks or transactions. At that point MAX_BLOCK_SIZE can be increased a lot.
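A back-of-envelope check of that "nanocoin" figure (the per-transaction size here is my assumption; tiny transactions are somewhere in the 150-250 byte range):

```python
# Rough cost of splitting 1 BTC into its smallest possible pieces.
SATOSHIS_PER_BTC = 100_000_000   # 1 BTC divides into 10^8 units
TX_SIZE_BYTES = 160              # assumed size of a minimal transaction

total_bytes = SATOSHIS_PER_BTC * TX_SIZE_BYTES
print(total_bytes / 10**9, "GB")  # ~16 GB, matching the "15+ GB" figure
```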
|
|
|
Block 83018 (00000000002bba570c3) cleared out a bunch of them. Last I heard nanotube still had one that's unconfirmed, though.
Edit: Nanotube's transaction cleared recently. I don't know why it was delayed, since it wasn't relying on a sub-0.01 transaction.
|
|
|
Dwdollar moved all Bitcoin Market deposits to a new wallet. That could be it.
|
|
|
Applying this patch will make you incompatible with other Bitcoin clients.
|
|
|
This could be used in a modified BitTorrent client where the initial seed charges 0.1 BTC per piece or whatever. Every peer just has to load up their client with 50 BTC and start downloading from the seed and those peers that will share. Maybe the peers could charge a small amount, too.
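A minimal sketch of how that client might track the payments (all names and the data model here are invented; real payments would of course be actual Bitcoin transactions, and balances are kept in integer satoshis to avoid float rounding):

```python
SATOSHI = 100_000_000  # work in integer satoshis, not floats

class Seed:
    def serve(self, index):
        return f"piece-{index}"   # stand-in for real torrent data

class Downloader:
    def __init__(self, balance):
        self.balance = balance    # satoshis
        self.pieces = []

    def buy_piece(self, seed, index, price=SATOSHI // 10):  # 0.1 BTC/piece
        if self.balance < price:
            raise RuntimeError("out of funds")
        self.balance -= price     # in reality: broadcast a Bitcoin transaction
        self.pieces.append(seed.serve(index))

peer = Downloader(50 * SATOSHI)   # load up the client with 50 BTC
seed = Seed()
for i in range(10):
    peer.buy_piece(seed, i)
print(peer.balance // SATOSHI)    # 49 BTC left after ten 0.1 BTC pieces
```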
|
|
|
Where does the 1Mb limit come from? Is it a technical limit, or simply hardcoded to save bandwidth?
It's just an arbitrary limit, probably to limit the effects of DoS attacks. Nodes won't accept blocks over 1 decimal megabyte (1,000,000 bytes).
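The check itself is trivial, something like this (a sketch; the constant matches the client's 1,000,000-byte limit, but the function name is mine):

```python
# Reject any block whose serialized size exceeds one decimal megabyte.
MAX_BLOCK_SIZE = 1_000_000  # bytes

def accept_block(serialized_block: bytes) -> bool:
    # Oversized blocks are dropped outright, capping DoS bloat per block.
    return len(serialized_block) <= MAX_BLOCK_SIZE

print(accept_block(b"\x00" * 1_000_000))   # True
print(accept_block(b"\x00" * 1_000_001))   # False
```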
|
|
|
ArtForz is already running with no fees, and he has 20-30% of the network's CPU power. The person who originally sent the broken transactions deleted his wallet, though, and the network has forgotten these historical transactions, so any transactions based on this won't confirm.
|
|
|
If they didn't confirm why would he clear them to go into the account? Is he counting blocks instead of confirmations? That seems odd.
He sent a transaction that took coins from a transaction that will never confirm, so this transaction will also never confirm and is therefore lost (along with any change). If the person he sent it to isn't using 0.3.13, they'll also send unconfirmable transactions. It's like a virus. People need to move to 0.3.13 ASAP. It has nothing to do with confirmations.
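The "virus" behavior falls out of transaction chaining: anything that spends an output of a bad transaction is itself bad, recursively. A sketch with an invented data model:

```python
# Mark every transaction that (directly or indirectly) spends an
# unconfirmable transaction as unconfirmable itself.
def unconfirmable_set(spends, bad_roots):
    """spends maps a tx id -> list of tx ids that spend its outputs."""
    bad = set(bad_roots)
    frontier = list(bad_roots)
    while frontier:
        tx = frontier.pop()
        for child in spends.get(tx, []):
            if child not in bad:
                bad.add(child)        # child spends a bad output
                frontier.append(child)
    return bad

spends = {"broken": ["spend1"], "spend1": ["spend2"], "ok": ["spend3"]}
print(sorted(unconfirmable_set(spends, ["broken"])))
# ['broken', 'spend1', 'spend2'] -- 'ok' and 'spend3' are unaffected
```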
|
|
|
Dwdollar lost some BTC with Bitcoin Market because someone either maliciously or accidentally sent him "unconfirmable" transactions, and he hadn't upgraded. Maybe now would be a good time to test the alert feature.
|
|
|
Can you tell me more about this: "they have to do weird things with extraNonce, which increases the size of the block header"?
When you generate, you repeatedly calculate hashes of the block header. Hashing more data is slower than hashing less data, so it's critical that the block header stays small, and it's the same fixed size for everyone, with one exception. After every hash attempt, you increment the Nonce header field, but since this field is only 32 bits long, it overflows a lot. Whenever it overflows, you increment the variable-size extraNonce field. The larger extraNonce gets, the slower generating gets, though it doesn't get significantly large with normal incrementing.

If you have a lot of computers all working on the same block with the same public key, then they're all very likely to be hashing the same data at the same time, which is pointless. To fix this, each computer is given a unique extraNonce modifier value. This value might need to be very large to prevent collisions, which slows down hashing. Undoubtedly you could design a pooling system without this flaw, but it'd be more difficult. I see that m0mchil's getwork is doing something with extraNonce. I don't know how bad that implementation is, but in theory it must be slower than a client without it, all things being equal (clearly adding GPU support will improve performance).
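A toy model of that nonce/extraNonce interaction (the layout is deliberately simplified and the field placement is illustrative only):

```python
# Toy miner: the nonce is a fixed-width counter; each time it wraps
# around, the variable-length extraNonce is incremented instead.
import hashlib

def mine(prefix: bytes, attempts: int, nonce_bits: int = 32):
    nonce, extra_nonce = 0, 0
    for _ in range(attempts):
        header = (prefix
                  + extra_nonce.to_bytes(8, "little")
                  + nonce.to_bytes(4, "little"))
        hashlib.sha256(hashlib.sha256(header).digest()).digest()  # one attempt
        nonce += 1
        if nonce == 2 ** nonce_bits:   # fixed-width counter overflows...
            nonce = 0
            extra_nonce += 1           # ...so bump extraNonce instead
    return nonce, extra_nonce

# With a deliberately tiny 4-bit nonce, 100 attempts wrap the counter 6 times:
print(mine(b"header", 100, nonce_bits=4))  # (4, 6)
```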
|
|
|
Pools offer the advantage that nodes can coordinate their hashing so that they aren't generating the same hashes as each other. It's not about "total hash/s", it's about "total unique hash/s". If everyone in the pool is assigned a subset of all hashes to work on (with sizes based on each node's average hash/s), then we guarantee that no hashes are repeated.
This is already guaranteed because everyone has a unique public key in their block. You reminded me of another way that pools are bad, though: since everyone uses the same public key, they have to do weird things with extraNonce, which increases the size of the block header and makes generating more difficult for them.
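To illustrate the point: because each generator's block pays a different public key, the same nonce produces a different hash for each of them, so their searches never overlap. (Details simplified; in reality the pubkey is committed via the coinbase transaction and merkle root, for which plain concatenation stands in here.)

```python
import hashlib

def block_hash(pubkey: bytes, nonce: int) -> str:
    # Simplified: concatenation stands in for the merkle-root commitment.
    data = pubkey + nonce.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

a = block_hash(b"pubkey-of-node-A", 12345)
b = block_hash(b"pubkey-of-node-B", 12345)
print(a != b)  # True: same nonce, disjoint hash attempts
```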
|
|
|
Might work for major transactions, but can you see yourself doing this in the checkout line at Kroger?
People already use PIN entry screens for debit cards, which is pretty much the same.
|
|
|
You could print out the public/private keys, but send your coins to that key with a special transaction that also requires a password to claim (done securely with a hash function and a salt). Then tell the recipient the password orally. An attacker needs to both hear the password component and scan the key component.
You wouldn't want to use just a password because your transaction would then be vulnerable to MITM attacks.
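The hash-and-salt part could work like this (a sketch of the commitment scheme only; this is my illustration, not an actual Bitcoin script, and the function names are invented):

```python
# Sender commits to H(salt || password); claiming requires presenting a
# password that hashes to the committed digest.
import hashlib, os

def make_commitment(password: str):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest          # salt and digest go into the transaction

def can_claim(salt: bytes, digest: bytes, password: str) -> bool:
    return hashlib.sha256(salt + password.encode()).digest() == digest

salt, digest = make_commitment("spoken-aloud-password")
print(can_claim(salt, digest, "spoken-aloud-password"))  # True
print(can_claim(salt, digest, "wrong-guess"))            # False
```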
|
|
|
Generation will be less and less interesting in that way, as the coins per block will halve repeatedly until no coins are generated at all. Then the system will need to be run by "volunteers", who aren't really volunteers, because if no blocks are generated then no coins can be transferred, removing all value from all coins...
The companies can raise fees if generation alone isn't profitable enough. Every calculation can be made more efficient in hardware; trying to prevent that is pointless. It'd be a lot like trying to make effective DRM.
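The halving schedule mentioned above also caps the total supply: 50 BTC per block, halving every 210,000 blocks, sums to just under 21 million BTC. A quick check in integer satoshis:

```python
# Total supply under the halving schedule: a geometric series with
# integer (floor) division at each halving, as the client computes it.
SATOSHI = 100_000_000
reward = 50 * SATOSHI            # initial block subsidy, in satoshis
blocks_per_halving = 210_000

total = 0
while reward > 0:
    total += blocks_per_halving * reward
    reward //= 2                 # integer halving eventually reaches zero

print(total / SATOSHI)           # ~20,999,999.9769 BTC
```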
|
|
|
Pools won't eliminate the "problem" because pools are not more profitable than normal generation; they just pay out more often. They can't beat companies that have invested in specialized hardware. They also delegate all of the important network decisions to the pool maintainer, so there's no security benefit.
|
|
|
I'll be glad to stop posting code, buy some serious hw and just do the generation myself. As difficulty goes up and people stop generating, this gets more and more statistically interesting... you say I should, right?
The network will eventually be run by "oligarchs". Once software is optimized as far as it can be, it will come down to hardware, bandwidth, and, in the long term, electricity generation. Most people won't be able to keep up. Posting GPU code now will just prolong the period when generation is feasible for normal people. This will attract a few users, and it might increase the network's total power in the short term, but in the long term it'll have little value. If I were you, I'd keep the code private. Publishing it wouldn't be bad for the network, though.
|
|
|
You could build an anonymous trust system by combining some aspects of BitCoin with a web of trust. In this system, anyone would be able to send as many "trust coins" as they want to other identities, but how many of these transactions you view as valid would depend on who you trust in the network. You might say that certain identities can send unlimited coins, while others can send up to 50. No identity would have an objective balance -- the balance would be determined entirely by how you process the public list of transactions.
Example:
- You know Identity A personally, so you allow him to send unlimited trust coins.
- He buys a CD from Identity B. Since it went OK, A sends B 100 trust coins.
- Randomly and over a long period of time, B sends these coins to addresses he controls. It is impossible for an observer to know whether any of these transactions were to real people or not, so B has plausible deniability. (This is clearly more secure if there are more real people between B and you, though.)
- B wants to sell you heroin online. To prove his legitimacy, he tells you one of his anonymized trust addresses. When you enter it into your software, you see that he has a number of trust coins, somehow gotten from your trusted peers (possibly in a very indirect way, but directly in this case).
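A minimal sketch of how an observer could compute such a subjective balance (the data model and rules here are my own invention: each identity can forward coins it validly received, plus "mint" fresh coins up to the observer's trust limit for it):

```python
# Subjective trust-coin balance: validity of each send depends entirely
# on the observer's per-identity limits, not on any global ledger.
def perceived_balance(transactions, limits, identity):
    """transactions: ordered list of (sender, receiver, amount).
    limits: observer's cap on fresh coins each sender may create."""
    balance = {}   # coins each identity validly holds, in this observer's view
    minted = {}    # fresh coins each identity has created so far
    for sender, receiver, amount in transactions:
        have = balance.get(sender, 0)
        if amount > have:
            # Shortfall is covered by minting, up to the trust limit.
            cap = limits.get(sender, 0)
            mint = min(amount - have, cap - minted.get(sender, 0))
            minted[sender] = minted.get(sender, 0) + mint
            have += mint
        moved = min(amount, have)  # sends beyond valid holdings are ignored
        balance[sender] = have - moved
        balance[receiver] = balance.get(receiver, 0) + moved
    return balance.get(identity, 0)

limits = {"A": float("inf")}                 # you trust A without limit
txs = [("A", "B", 100),                      # A pays B for the CD
       ("B", "B2", 100)]                     # B anonymizes to address B2
print(perceived_balance(txs, limits, "B2"))  # 100: B2's coins trace to A
```

An untrusted stranger's sends count for nothing under the same rules, which is the whole point of the scheme.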
|
|
|
|