Bitcoin Forum
  Show Posts
961  Bitcoin / Bitcoin Discussion / Re: The fork on: February 20, 2013, 03:45:14 PM
Isn't part of this whole cool story the tale of the brave defenders of the original one true bitcoin who will do just that? Smiley

The cool story isn't when the miners bravely reject the large blocks, it's when the users of Bitcoin bravely reject large blocks, by doing nothing more than not installing any version of Bitcoin that removes the current 1MiB block limit.

Bravery by doing nothing; kinda anti-climactic really. Smiley
962  Bitcoin / Bitcoin Discussion / Re: The fork on: February 20, 2013, 03:42:16 PM
Increasing the block size, and especially allowing miners themselves to determine by how much, increases the barriers to entry...

Why wouldn't miners reject interactions with miners who set the block size too high, for instance?

Read my post on the subject.
963  Bitcoin / Bitcoin Discussion / Re: The fork on: February 20, 2013, 02:47:20 PM
An oligopoly is a situation where there are very high barriers to entry.

Exactly. For mining itself, 1MiB blocks ensure that the barriers to entry are very low, and thus an oligopoly of miners won't form.

Increasing the block size, and especially allowing miners themselves to determine by how much, increases the barriers to entry, allowing for an oligopoly to form.

Bitcoin itself may be an oligopoly, but we really, really do not want Bitcoin mining to become an oligopoly.
964  Economy / Service Discussion / Re: Coinbase discourages anonymity! on: February 20, 2013, 02:19:08 PM
That's true and there are trade-offs. However, since Coinbase is mostly frequented by newbies

I'll suspect Coinbase works the way it does so that newbies don't lose their Satoshidice bets...
965  Bitcoin / Development & Technical Discussion / Re: [SUCCESS] Double Spend against a satoshidice loss on: February 20, 2013, 02:16:20 PM
You meant SIGHASH_ALL, right? SIGHASH_NONE is not generated by any client today and makes a wildcard transaction that can have any outputs.

Good catch, fixed.
966  Bitcoin / Development & Technical Discussion / Re: artificial 250kB limit? on: February 20, 2013, 02:10:34 PM

No, the hard limit has been 1 megabyte forever.


Why is there such a limit?

Because if we don't have a limit, your ability to mine isn't a function of how much hashing power you have (the thing that protects us against 51% attackers); it's a function of how much network bandwidth you have, something a 51% attacker needs none of. Bigger blocks mean more money uselessly spent on network bandwidth rather than on what actually keeps Bitcoin secure.

See my post here, as well as other viewpoints on the issue.
967  Bitcoin / Bitcoin Discussion / Re: The fork on: February 20, 2013, 02:06:59 PM
Suffice it to say that such large, amazingly outperforming oligopolies are extremely difficult to form on completely unregulated markets.

Bitcoin itself is an oligopoly. What are Bitcoins made of anyway? They're just bits, information, and by themselves information is incredibly, ridiculously cheap. Of course the incredibly low price of information is made possible by the free market itself, specifically the amazingly successful computer industry.

Bitcoin is a system by which every participant creates a shared oligopoly on a particular set of information, the blockchain. From day #1 Bitcoin was about taking information that, if subject to free market forces, would be so incredibly cheap it'd be basically free, and artificially making it expensive. This shared oligopoly, achieved through the rules set out by Satoshi, makes that information incredibly expensive, so much so that 32 bytes of information, a private key, can now be worth millions of dollars.

Basically the decision about how big our shared oligopoly should allow blocks to be is just a decision about what rules we'll follow to make our little bits of otherwise worthless information as valuable as possible. Myself, gmaxwell, and many others happen to think that if we limit blocks to 1MiB each, keeping the regulations as they are, our little oligopoly will maximize the value of that information. Gavin, Mike Hearn, and many others happen to think that if blocks are allowed to be bigger than 1MiB, thus changing the regulations, our little oligopoly will maximize the value of that information.

Don't for a second think any of this discussion is about free market forces. Bitcoin is about artificially subverting free market forces through regulation, for the benefit of everyone participating in the oligopoly that is Bitcoin. It just happens that the way to become part of this oligopoly isn't by, say, living in a certain part of the world that's mostly desert; it's by either buying entrance (buying some Bitcoins) or by doing a completely made-up activity that has no purpose outside the oligopoly (mining).
968  Bitcoin / Development & Technical Discussion / Re: The Long Wait for Block Chain Download... on: February 20, 2013, 01:17:20 PM
Startbitcoin.com is now offering the blockchain on DVDs that can be shipped for those who don't want to hassle with downloading it or those with data caps/bandwidth issues. It's a great way to get going if you lose your blockchain and would rather use your bandwidth for other things.

http://startbitcoin.com/blockchain-on-dvd/
 

Dammit, http://blockchainbymail.com was going to be my April Fools prank... Tongue

Anyway, if the startbitcoin.com guys want the domain, I'll happily give it to them for the 0.5BTC it cost me to register.
969  Bitcoin / Development & Technical Discussion / Re: [ANN] bitcoinj 0.7 released on: February 20, 2013, 01:00:18 PM
You mention on your site that the new "full node" operation is very likely to have hard-fork bugs.  Do you think that is a permanent situation?

Apparently, the official rule is that a chain is correct if the reference client says it is correct.

I wonder if the creation of some block-chain serialization format would be appropriate.  This could be combined with a verifier.

This would be a much shorter program than an entire client that needs to deal with networking.

Maybe that could be vetted into some kind of semi-official spec.

Probably all the blocks, one after another, in the same format as the network protocol, is sufficient, so maybe I am over-thinking it.


Well, you gotta look at the process by which the reference client determines a block is valid. First of all it's received from a peer with ProcessMessage(). That function almost immediately calls ProcessBlock(), which first calls CheckBlock() to do context-independent validation of the block: basic rules like "Does it have a coinbase?" which must be true for any block. The real heavy lifting is the next step, AcceptBlock(), which does the context-dependent validation. This is where transactions in the block are validated, and that requires the blockchain as well as full knowledge of the unspent transaction outputs (the UTXO set). Getting those rules right is very difficult - the scripting system is complex and depends on a huge amount of code. Like it or not, there is no way to turn it into a "short verifier program"; the reference implementation itself is your short verifier program.
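Schematically, and just as a sketch of the order of checks described above (not the actual C++ in the reference client; the types and stubs here are made up), the flow looks like this:

Code:
# Rough sketch of the block acceptance flow described above. The real logic
# lives in the reference client's C++ (ProcessMessage -> ProcessBlock ->
# CheckBlock -> AcceptBlock); these stubs only illustrate the structure.
from dataclasses import dataclass, field

@dataclass
class Tx:
    inputs: list              # list of (txid, vout) outpoints this tx spends
    is_coinbase: bool = False

@dataclass
class Block:
    transactions: list = field(default_factory=list)

utxo_set = set()              # stand-in for the unspent transaction output set

def check_block(block):
    # Context-independent rules: things that must hold for *any* block,
    # e.g. "does it have exactly one coinbase?"
    coinbases = [tx for tx in block.transactions if tx.is_coinbase]
    return len(coinbases) == 1

def accept_block(block):
    # Context-dependent rules: every input must refer to an unspent output,
    # which requires the blockchain history (the UTXO set). The script checks,
    # omitted here, are the genuinely hard part.
    for tx in block.transactions:
        if tx.is_coinbase:
            continue
        if not all(outpoint in utxo_set for outpoint in tx.inputs):
            return False
    return True

def process_block(block):
    # Mirrors ProcessBlock(): cheap context-free checks first, then the
    # expensive context-dependent validation against the UTXO set.
    return check_block(block) and accept_block(block)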

Thus right now we are far safer if all miners use the reference implementation to generate blocks and nothing else. However, we are also a lot safer if the vast majority of relay nodes also continue to use the reference implementation, at least right now. The problem is that even if a block is valid by the validation rules, if for some reason it doesn't get relayed to the majority of hash power, you've caused a fork anyway. With the reference implementation this is really unlikely - as I explained above, the relay rules are the validation rules - but alternate implementations of relay nodes might not have that property.

An interesting example of relay failure is how, without the block-size limit, any sequence of blocks large enough that some minority of the hashing power can't download and validate them fast enough creates a fork. Specifically, the blocks need to be large enough that the hash power on the "smaller-block" fork still creates blocks at a faster rate than the large blocks can be downloaded. Technically this can happen with the block-size limit too, but the limit is so low that even most dial-up modems can keep up. Block "discouragement" rules can also have the same effect, for much the same reasons.


For merchants a hard-fork bug leaves them vulnerable to double-spends by anyone with a lot of hashpower, but it'll cost the attacker one block reward per confirmation required (though the attacker can amortize the attack across multiple merchants). Merchants should be running code that looks for unusually long block creation times and automatically shuts down their service if it looks like the hash rate has dropped significantly. Just doing this is probably good enough for the vast majority of merchants that take at least 12 hours to process and ship an order.
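As a minimal sketch of that kind of failsafe, assuming you have some feed of when your node last saw a new block (the thresholds and hooks here are illustrative, not recommendations):

Code:
# Watchdog sketch: stop taking orders if blocks stop arriving at a plausible
# rate. get_latest_block_time and halt_sales are whatever hooks your shop has.
import time

EXPECTED_INTERVAL = 10 * 60   # Bitcoin targets roughly one block per 10 minutes
ALARM_FACTOR = 6              # alarm after ~6x the target with no new block

def watchdog(get_latest_block_time, halt_sales):
    while True:
        silence = time.time() - get_latest_block_time()
        if silence > ALARM_FACTOR * EXPECTED_INTERVAL:
            # Either hashing power has dropped off or we're being fed a stale
            # or forked chain; either way, stop shipping until a human looks.
            halt_sales()
        time.sleep(60)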

Some merchants are more vulnerable - a really bad example would be a chaum token issuing bank. Once you accept a deposit and give the customer the chaum token, you have absolutely no way of invalidating the token because redemption is anonymous. Merchants like that should be running their own reference implementation nodes, double-checking those blockchains against other sources, and keeping their clocks accurate so they'll know when hashing power has dropped off mysteriously.

For instance you could run a service that would (ab)use DNS to publish each block header as a DNS record. Headers are just 80 bytes long, so they'd still fit in single-UDP-packet DNS responses I think. Caching at the ISP level would reduce load on the server (although ISPs that don't respect TTLs are a problem). The proof-of-work inherently authenticates the data, and parallel services should be run by multiple people with different versions of the reference client. I wouldn't want to only trust such a service, but it'd make for a good "WTF is going on, shut it all down" failsafe mechanism for detecting forks.
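To give a feel for the sizes involved (an illustration only; the record naming scheme is made up):

Code:
# An 80-byte header hex-encodes to 160 ASCII characters, which fits in a single
# TXT string (255-byte limit) and comfortably inside a classic 512-byte UDP DNS
# response. The zone name below is hypothetical.
import binascii

def header_to_txt_record(height, raw_header):
    assert len(raw_header) == 80, "Bitcoin block headers are 80 bytes"
    payload = binascii.hexlify(raw_header).decode()   # 160 hex characters
    name = "h%d.headers.example.com" % height
    return name, payload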
970  Bitcoin / Development & Technical Discussion / Re: [SUCCESS] Double Spend against a satoshidice loss on: February 20, 2013, 11:24:52 AM
A simple way to fix this issue would be to first only accept bet transactions whose inputs are confirmed, and secondly change the lucky number algorithm from hmac_sha512(secret, txid:out_idx) to hmac_sha512(secret, txin_1:out_idx | txin_2:out_idx ... | txin_n:out_idx)

That is an excellent proposal, one which we will be implementing shortly Smiley

Good to hear!

One last thing: you also need to mandate that at least one txin signature uses SIGHASH_ALL so the txin list can't be changed after the fact. Once you've taken that step you'll only be vulnerable to regular, miner-supported double-spends.

EDIT: fixed SIGHASH_NONE brainfart.
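For what it's worth, the proposed derivation might look roughly like this (a sketch only; the serialization of the outpoints and the width of the lucky number are illustrative, not SatoshiDice's actual code):

Code:
# Commit the lucky number to the bet's *inputs* (fixed once a SIGHASH_ALL
# signature covers them) rather than to the txid, which a double-spender can
# change by re-signing the same inputs into a different transaction.
import hashlib
import hmac

def lucky_number(secret, txins):
    """secret: bytes; txins: iterable of (txid_hex, out_idx) outpoints."""
    message = "|".join("%s:%d" % (txid, idx) for txid, idx in txins)
    digest = hmac.new(secret, message.encode(), hashlib.sha512).digest()
    # Use the first 8 bytes as the lucky number; any fixed convention works.
    return int.from_bytes(digest[:8], "big")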
971  Economy / Games and rounds / Re: Contest: New name for BFGMiner! (0.33 - 1 BTC prize) on: February 20, 2013, 12:53:38 AM
BFG9000
972  Bitcoin / Development & Technical Discussion / Re: artificial 250kB limit? on: February 20, 2013, 12:40:19 AM
Wow. So let's see if I have this right. Only people who use custom hacks ever made blocks over half a meg; right now only people who don't keep up with new versions and people who use custom hacks build blocks larger than a quarter meg; a vast proportion of transactions are gamblers yelling at the world about their lack of luck; free transactions reliably get into the blockchain in large numbers every day; even paid transactions are laughably dirt-cheap; we have yet to see how much impact Ripple will have on keeping transactions off the blockchain; but we "need" to raise the size limit?

Nobody is saying we need to raise the size limit now. I brought the issue up because we need off-chain alternatives, and those alternatives take time to create. Ripple is one alternative, and I'm glad to see people working on it, but competition is a good thing and multiple alternatives should be pursued.
973  Bitcoin / Bitcoin Discussion / Re: The fork on: February 19, 2013, 09:21:53 PM
I can understand how needing greater bandwidth can cut off a minority of miners.. but how can it concentrate it into the hands of just the few? If you look at bandwidth usage statistics, aren't a majority of the people that mine bitcoin currently considered high bandwidth already? Therefore this "centralization" simply means into the hands of what already is a majority, which should theoretically get even more dispersed the more high bandwidth connections are available, right? Or is some extremely powerful bandwidth connection able to eliminate "normal" high bandwidth users?

What makes you think miners are "high bandwidth"? Pools tend to have reasonable amounts of bandwidth, if only to resist DDoS attacks, but the pools aren't the issue, validation is. P2Pool is currently the best example, because every miner participating in P2Pool runs their own fully validating node that ensures the blocks produced follow the Bitcoin rules. With the getblocktemplate and stratum mining protocols, again miners know what blocks they are actually mining and can fully validate them to ensure the rules of Bitcoin are followed.

Relay nodes matter too. Because running a fully validating node is very cheap, there are lots and lots of relays out there. A core principle behind Bitcoin's security is that information is easy to copy and hard to censor, which means that the large number of relays protect you: it's very likely you'll connect to an honest relay and get an honest, uncensored view of what is happening on the network.

So your biggest fear is that alternative solutions to the block size limit won't be made? In other words, what the other guy mentioned, bitcoin clearing houses? How does that help decentralization?

You don't need to trust clearing houses and other payment services built on top of Bitcoin if you can run a fully validating node. The protocols by which those payment services operate can be written in such a way that everything they do, every single transaction, is auditable, and critically, if they commit any fraud, you'll be able to prove that fraud. You publish your proof of fraud on a P2P network, it gets broadcast to everyone on the network in seconds, and the payment service's business collapses immediately. I've written about these concepts multiple times; I'll post a big summary of the options later tonight.

On the other hand, if you can't run a fully validating node, you can't monitor the on-chain activities of those clearing houses to make sure they really are still holding the funds they claim they do. You have to take their word for it. At the same time, if you can't fully validate blocks, what's to stop miners and the few remaining validating nodes from getting together and creating blocks that collect fees from transactions that don't exist, thus inflating the money supply?
974  Bitcoin / Bitcoin Discussion / Re: The fork on: February 19, 2013, 08:44:38 PM
Otherwise the basic plan seem to be to pull a bait-and-switch, selling people on a purportedly person to person grassroots currency then pulling the rug out from under them by migrating it to business-to-business then to megacorp-to-megacorp...

If the blocksize limit is lifted, and blocks continue to grow without bound, to me the plan seems to be a bait-and-switch: selling people a purportedly decentralized currency that anyone in the world can validate without having to rely on third parties, then pulling the rug out from under them by migrating it to a system where only big businesses, able to invest the thousands of dollars required to purchase high-speed network connections and lots of hard-drive space, can validate blocks.

For it to be a p2p network, I think we need to do something like look at the median, mode or mean home computer on the median, mode or mean home internet connection and ensure our limits keep it reasonable for folks to run full nodes on such systems without sacrificing their ability to run their accounting software and their word processor and their browser at the same time...

...and a 1MiB block size limit does this. That's roughly 55GB/year, low enough that anyone will be able to afford the hard-drive space to store a full copy of the block chain for years to come. Anyone will also be able to afford an internet connection, nearly anywhere in the world, with the capacity needed to participate as a full, validating node.
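For what it's worth, the arithmetic behind that figure, assuming every block is completely full:

Code:
# Worst-case chain growth at the current limit: one full 1 MiB block every
# ten minutes, all year long.
blocks_per_year = 6 * 24 * 365              # ~52,560 blocks
mib_per_year = blocks_per_year * 1          # 1 MiB each
print(mib_per_year / 1024.0, "GiB/year")    # ~51 GiB, i.e. about 55 GB in decimal units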

Like it or not we can't have every transaction using Bitcoin on the block chain. We need to develop alternate solutions anyway for small-value transactions, and since we're doing that, why not use those solutions for day-to-day spending and keep the blocksize low enough to keep Bitcoin itself truly decentralized?

My biggest fear is these small-value transaction solutions won't be developed, and instead we'll see pressure to just keep raising the blocksize, losing decentralization each time until Bitcoin is just another PayPal.
975  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 19, 2013, 04:04:33 PM
If you think that the block size should stay at 1 megabyte forever, then you're saying the network will never support more than 7 transactions per second, and each transaction will need to be for a fairly large number of bitcoins (otherwise transaction fees will eat up the value of the transaction).

If transactions are all pretty big, why the heck do we have 8 decimal places for the transaction amount?

Why not? They're 64-bit numbers, might as well give plenty of room for whatever the price turns out to be.

More to the point, if I'm correct and in the future we're paying miners billions of dollars a year, that implies Bitcoin is probably transferring trillions of dollars a year in value, on and off chain. In that scenario the market cap is probably tens of trillions of dollars, so 1BTC could easily be worth something like $10,000USD. Thus the $20 fee is 0.002BTC. That's pretty close to current fees in terms of BTC - you might as well ask why we have 8 decimal places now.

One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.

Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.

"But miners can just include a never broadcast, fee-only transactions to jack up the fees in the block!"

Yes... but if their block gets orphaned then they'll lose those "fake fees" to another miner. I would guess that the incentive to try to push low-bandwidth/CPU miners out of the network would be overwhelmed by the disincentive of losing lots of BTC if you got orphaned.

You know, it's not a crazy idea, but figuring out the right BTC value is a tough, tough problem, and changing it later, if it turns out your estimates of the BTC/USD exchange rate, mining hardware costs and the orphan rate were off, is just as tough. You'll also create a lot of incentives for miners to build systems that crawl the whole Bitcoin network, discovering every single node, and then use that knowledge to connect to every node at once. With such a system, the second you find your block you broadcast it to every node immediately. Of course, the biggest miner who actually succeeds in doing this will have the lowest orphan rate, and thus has much more room to increase block sizes, because faking tx fees costs them a lot less than it costs miners who haven't spent so much effort. (Effort that, like all this stuff, diverts resources from what really keeps the network secure: hashing power.)

This same system can also now be used to launch sybil attacks on the network. You could use it to launch double-spend attacks, or to monitor exactly where every transaction is coming from. Obviously this can be done already - blockchain.info already has such a network - but the last thing we need is to give people incentives to build these systems. As it is, we should be doing more to ensure that each peer a node connects to comes from a unique source run by an independent entity, for instance by using P2Pool-style PoWs linked to IP addresses.
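For reference, the quoted 50-BTC-per-megabyte idea amounts to a check along these lines (a sketch using the constants from the quote, in satoshis; not a vetted consensus rule):

Code:
# Reject blocks over 1 MB whose total reward (subsidy + fees) is less than
# 50 BTC per megabyte of block size. All amounts in satoshis.
COIN = 100 * 1000 * 1000
ONE_MB = 1000 * 1000
MIN_REWARD_PER_MB = 50 * COIN

def block_reward_rule_ok(block_size_bytes, subsidy, total_fees):
    if block_size_bytes <= ONE_MB:
        return True                          # small blocks are unaffected
    required = MIN_REWARD_PER_MB * block_size_bytes // ONE_MB
    return subsidy + total_fees >= required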
976  Bitcoin / Development & Technical Discussion / Re: BIP: Increasing the Network Hashing Power by reducing block propagation time on: February 19, 2013, 02:24:19 PM

You have to be careful with transmitting transaction hash lists rather than the transactions themselves. While it definitely makes propagation faster in the average case, it also means that the worst case, a block entirely composed of transactions that have not been previously broadcast on the network, is significantly worse.

I don't think so. Since only the missing transactions are transmitted, the worst case is just like it is today. Maybe a little worse in the limit case where the decision to request individual txs instead of the full block was not a good one (many requests are made instead of a single request).


I don't mean worse than today, I mean worse than the average case.

Just implementing your idea is fine and probably should be done; the issue is purely in making assumptions, particularly security assumptions, about how fast blocks will propagate based on it.
977  Bitcoin / Bitcoin Discussion / Re: The fork on: February 19, 2013, 01:42:57 PM
retep,

I saw in that other thread that you gave quite a bit of thought to how increasing blocksize could lead to increased centralization of mining.

I'm curious if you've given much thought to the ways that increased transaction fees might also lead to increased centralization?  Perhaps I haven't given it enough thought yet, but my basic thinking is along the lines of:

On the bitcoin-dev email list I responded to exactly that argument, actually; my responses below are based on that reply:

Increased fees create an incentive for a few large well funded mining operations to get involved

The cost of mining is in two parts: mining itself, and overhead. The mining equipment costs basically the same regardless of how many hashes/s you want to mine with; if anything I suspect small mining operations are cheaper than larger operations, because cooling costs are non-existent for a small miner with a few rigs, and at a small scale power can often either be re-used for heating (cold climates) or is available at a flat rate. (I don't pay for power at my apartment.) We're fortunate that the primary cost of ASICs is the mining chip itself, and they are cheapest when you make thousands of inexpensive chips. You'll always be able to buy relatively inexpensive rigs that only contain a few chips, just as BFL sells everything from $150 rigs with one chip to $35,000 rigs with lots of chips.

The overhead, running a validating node to verify the blocks you are mining, is a fixed cost.

Thus I see no reason why large fees have anything to do with the size required to profitably run a mining operation.

The larger the total hashing power of the network, the more necessary it becomes for smaller operations to participate in a pool to receive a reasonable chance of being paid for their efforts in a timely manner.

Sure, but that's already true. Even the ASIC operations have been mining in pools to keep variance low. The important thing is that small blocks make it cheap for anyone to validate the blocks you are mining for the pool, keeping the pool honest. Equally they allow you to mine on P2Pool, which is totally distributed and not controlled by anyone.

Anyway, the argument you're really making is that we can't spend a lot of money ensuring that the network remains secure against a well funded 51% attacker, not that small blocks themselves are an issue. I dunno about you, but I think huge mining rewards are a good thing and keep us all safe from 51% attackers.

Quote from: DannyHamilton
  • High fees make spending/using that small acquired share of a mining reward cost prohibitive (fees use up the entire balance of each added input leaving nothing left for actual spending).
  • The inability to spend/use any of the earned bitcoins discourages participation in mining by small operations, leading to an increase in centralization as the smaller operations are forced out of the market.

These are real issues, but they can be solved with the same micropayment systems people will use for small transactions. For instance, P2Pool makes payments directly from the coinbase, taking block space away from transactions that could earn money instead.

What would happen as blocks approach the limit is that first P2Pool would take into account the cost of payout transactions in terms of lost fees. This would give miners an incentive to only get payouts when the payout amount was sufficiently high. The amount of hashing power required would be pretty large, so you'd naturally see sub-pools develop, combining a whole bunch of hashing power together. (P2Pool already supports sub-pools BTW.) Those sub-pools would publish contracts specifying how they would pay out, what micropayment system they'd use and so on, and miners using those sub-pools would either be paid correctly, or, if the pool defrauded them, they'd be able to prove the pool defrauded them and make the pool lose the fidelity bonds it had to purchase to be trusted in the first place.

Note how that's basically how most pools already work anyway - you trust the pool to pay what you are owed. The only change is in the mechanism by which you get paid.
978  Bitcoin / Development & Technical Discussion / Re: BIP: Increasing the Network Hashing Power by reducing block propagation time on: February 19, 2013, 01:20:58 PM
There needs to be some way for people to confirm that transactions are known.  Maybe the protocol rule could be changed to not propagate the message if any txs are unknown. This creates an incentive to only mine against well known transactions.

You need to be really careful with anything that discourages blocks. Any time a given rule for discouraging blocks is adopted by a minority, rather than a majority, the majority of hashing power that is not following that particular rule has an incentive to deliberately produce blocks that will be discouraged by the minority.

What you are proposing is a relay rule, so the issue is really what % of hashing power doesn't get the block because of the rule; but regardless, if only 25% of the hashing power ignores the violating block, the other 75% should include an unknown tx in every block they mine. They'll have 25% less competition - an obvious advantage.
979  Bitcoin / Bitcoin Discussion / Re: The fork on: February 19, 2013, 01:03:44 PM
Yawn.

If the majority of developers feel it is important to change the protocol to keep it functioning properly I will change my miners or switch to a pool that supports it.  Changes have happened before and they will happen again.  Bitcoin is not a frozen protocol.  If you don't like the change, don't change though you may no longer be a part of the majority network.  These things resolve themselves quickly based on the TECHNICAL MERITS of the change.

Bitcoin has changed before, but the last time a hard-fork change happened was way back in the summer of 2010, to fix the overflow bug. Back then Bitcoins were nearly worthless, there wasn't really an economy built on them, and the userbase was tiny. There have been soft-forks like P2SH, where only mining power requires upgrading, but even those took months of planning and advocacy.

Changing the block size is a really big deal. Technically speaking the change is no different from changing the inflation schedule, and as we've seen, it's not a change without a lot of controversy. Everyone needs to change their software to accept the change; if Gavin pushed a blocksize change to the reference client tomorrow, I know I myself would stick with the existing system, as would many others.
980  Bitcoin / Development & Technical Discussion / Re: BIP: Increasing the Network Hashing Power by reducing block propagation time on: February 19, 2013, 09:34:35 AM
Two pull requests I would like to see, ones that make prototyping this stuff easier, would be "getrawblock/sendrawblock" RPC commands, and a "notifytransaction" mechanism.

The latter lets you find new incoming transactions that have been accepted to the mempool so you can inject them into your alternate distribution mechanism, to be added later to a client's mempool with sendrawtransaction (a flag that disables re-broadcasting would be nice). Something along these lines has already been attempted; I seem to remember there were some issues with locking that needed to be solved, but they looked tractable.

The former lets you do the same thing for entire blocks. (submitblock isn't quite what you want)

For instance a cool and potentially useful thing to do would be to operate a service that multicasts the blockchain via Amazon's Simple Messaging Service. You'd sign up and get a blockchain feed for your EC2 node without having to actually connect to anyone else. EC2 bandwidth is fairly expensive, so costs would be significantly less, and it could scale to absolutely enormous numbers of clients. Of course, you are trusting the service, but for some applications it's not a big deal.

Another cool thing to do would be to implement multicast broadcasting of the blockchain and transactions. While not commonly available, multicast is available on a few networks, so there will be some people who can make use of it. Equally you could start up a satellite downlink service or radio service.

The point is, actually implementing those ideas will be much easier with those two pull requests.
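As a rough illustration of the kind of glue those two pull requests would make easier (approximated here by polling getrawmempool, since a notifytransaction mechanism doesn't exist yet; publish() stands in for SNS, multicast or whatever transport you like, and the RPC credentials are made up):

Code:
# Push every new mempool transaction out over an alternate channel, then feed
# them back into a node on the receiving end with sendrawtransaction.
import time
import requests

RPC_URL = "http://127.0.0.1:8332/"
RPC_AUTH = ("rpcuser", "rpcpassword")        # illustrative credentials

def rpc(method, *params):
    payload = {"method": method, "params": list(params), "id": 0}
    return requests.post(RPC_URL, json=payload, auth=RPC_AUTH).json()["result"]

def relay_loop(publish):
    seen = set()
    while True:
        for txid in rpc("getrawmempool"):
            if txid not in seen:
                seen.add(txid)
                publish(txid, rpc("getrawtransaction", txid))   # raw hex tx
        time.sleep(5)

# Receiving side: rpc("sendrawtransaction", raw_hex) re-injects each transaction.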