Bitcoin Forum
  Show Posts
61  Bitcoin / Development & Technical Discussion / Re: Increasing the block size is a good idea; 50%/year is probably too aggressive on: October 14, 2014, 03:11:15 PM
No comment on this?
Quote
One example of a better way would be to use a sliding window of some number of blocks (100+ deep) and base the max allowed size on some percentage over the average, dropping anomalous outliers from that calculation.  Using some method that is sensitive to the reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

That does not address the core of people's fears, which is that big, centralized mining concerns will collaborate to push smaller competitors off the network by driving up the median block size.
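(For concreteness, the quoted scheme would look something like the sketch below, where the window depth, the outlier trim, and the headroom factor are all illustrative assumptions, not part of the proposal.)

Code:
// Sketch of the sliding-window idea: cap the next block at some
// percentage over a trimmed average of recent block sizes.
#include <algorithm>
#include <stdint.h>
#include <vector>

static const double TRIM = 0.10;     // drop top/bottom 10% as "anomalous outliers"
static const double HEADROOM = 1.5;  // allow 50% over the trimmed average

// sizes: sizes of the last 100+ blocks
uint64_t NextMaxBlockSize(std::vector<uint64_t> sizes)
{
    std::sort(sizes.begin(), sizes.end());
    size_t drop = (size_t)(sizes.size() * TRIM);
    uint64_t total = 0;
    size_t n = 0;
    for (size_t i = drop; i + drop < sizes.size(); i++) {
        total += sizes[i];   // sum only the non-outlier middle of the window
        n++;
    }
    return n ? (uint64_t)(HEADROOM * (total / n)) : 0;
}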

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.

Yes, that is a good point, made by other people in the other thread about this. A more conservative rule would be fine with me, e.g.

Fact: the average "good" home Internet connection has 250GB/month of bandwidth.
Fact: Internet bandwidth has been growing at 50% per year for the last 20 years.
  (if you can find better data than mine on these, please post links).

So I propose the maximum block size be increased to 20MB as soon as we can be sure the reference implementation code can handle blocks that large (that works out to about 40% of 250GB per month).
Increase the maximum by 40% every year (really, double every two years-- thanks to whoever pointed out that 40% per year compounds to 96% over two years).
Since nothing can grow forever, stop doubling after 20 years.
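Worked out, that schedule is max_size(t) = 20MB * 2^(t/2) for t years after the start: 20MB at year 0, 40MB at year 2, 80MB at year 4, and so on up to 20MB * 2^10 = ~20GB at year 20, after which the cap stays constant.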

62  Bitcoin / Development & Technical Discussion / Re: Increasing the block size is a good idea; 50%/year is probably too aggressive on: October 13, 2014, 07:05:00 PM
It also may be contrary to the eventual goal of usage-driven mining, where transaction fees ultimately overtake the block reward in value.  This proposal may drive TX fees to zero forever.  Block chain space is a somewhat scarce resource, just like the total # of coins.  Adding an arbitrary 50% yearly inflation changes things detrimentally.

I'm sending a follow-up blog post to a couple of economists to review, to make sure my economic reasoning is correct, but I don't believe that even an infinite blocksize would drive fees to zero forever.

Commodity prices never drop to zero, no matter how abundant they are (assuming a reasonably free market-- governments can, of course, supply "free" goods, but the results are never pretty). The suppliers of the commodities have to make a profit, or they'll find something else to do.

That has very little to do with whether or not transaction fees will be enough to secure the network in the future. I think both the "DON'T RAISE BLOCKSIZE OR THE WORLD WILL END!" and "MUST RAISE THE BLOCKSIZE OR THE WORLD WILL END!" factions confuse those two issues. I don't think adjusting the block size up or down or keeping it the same will have any effect on whether or not transaction fees will be enough to secure the network as the block subsidy goes to zero (and, as I said, I'll ask professional economists what they think).

If this forks as currently proposed, I'll be selling all my BTC on Gavin's fork and mining on the other.  I suspect I will not be the only one.

Okey dokey. You can join the people still mining on the we-prefer-50-BTC-per-block fork (if you can find them... I think they gave up really quickly after the 50 to 25 BTC subsidy decrease).
63  Bitcoin / Development & Technical Discussion / Re: A Scalability Roadmap on: October 09, 2014, 10:14:22 PM
Yeah, 40% of a 250 GB connection works out to about 23 MB, depending on how you define a month.  May I ask what would happen regarding Tor?

Thanks for checking my math!  I used 31-day months, since I assume that is how ISPs do the bandwidth cap.

RE: what happens with Tor:

Run a full node (or better, several full nodes) connected to the network directly-- not via Tor.

But to keep your transactions private, you broadcast them through a Tor-connected SPV (not full) node. If you are mining, broadcast new blocks the same way.

That gives you fully-validating-node security plus transaction/block privacy. You could run both the full node and the SPV-Tor-connected node on a machine at home; to the rest of the network your home IP address would look like a relay node that never generated any transactions or blocks.

If you live in a country where even just connecting to the Bitcoin network is illegal (or would draw unwelcome attention to yourself), then you'd need to pay for a server somewhere else and administer it via Tor.
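For the common case, the two configurations are tiny (a sketch; 9050 is Tor's default SOCKS port, an assumption about your local Tor setup):

Code:
# bitcoin.conf for the public full node: ordinary clearnet connections,
# never used to originate your own transactions or blocks.
listen=1

# bitcoin.conf for the private broadcast-only node, if it is bitcoind-based
# (a true SPV wallet has its own Tor settings): route everything through
# Tor and accept no incoming connections.
proxy=127.0.0.1:9050
listen=0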
64  Bitcoin / Development & Technical Discussion / Re: A Scalability Roadmap on: October 09, 2014, 07:51:30 PM
An extremely large block size would mess up the economics of mining eventually.

I'm working on a follow-up blog post that talks about economics of the block size, but want to get it reviewed by some real economists to make sure my thinking is reasonably correct. But I'm curious: why do you think an extremely large block size will mess up the economics of mining?  What do you think would happen?

RE: geometric growth cannot go on forever:  true, but Moore's law has been going steady for 40 years now. The most pessimistic prediction I could find said it would last at least another 10-20 years; the most optimistic, 600 years.

I'd be happy with "increase block size 40% per year (double every two years) for 20 years, then stop."

Because if Bitcoin is going gangbusters 15 years from now, and CPU and bandwidth growth are still going strong, then either the "X%" or the "then stop" date can be changed to continue growing.

I did some research, and the average "good" broadband Internet connection in the US is 10Mbps speed. But ISPs are putting caps on home users' total bandwidth usage per month, typically 250 or 300GB/month. If I recall correctly, 300GB per month was the limit for my ISP in Australia, too.

Do the math, and 40% of a 250GB connection works out to 21MB dedicated to Bitcoin every ten minutes. Leave a generous megabyte for overhead, and that works out to a starting point of maximum-size-20MB blocks.

(somebody check my math, I'm really good at dropping zeroes)
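Checking it: 40% of 250GB is 100GB/month for Bitcoin. A 31-day month has 31 * 24 * 6 = 4,464 ten-minute blocks, so 100e9 bytes / 4,464 = ~22.4 million bytes per block -- about 21 MB if you count megabytes as 2^20 bytes, or about 23 MB with 30-day months, which reconciles the two numbers above. Dropping a generous megabyte for overhead lands on the 20MB starting point either way.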

65  Bitcoin / Development & Technical Discussion / Re: A Scalability Roadmap on: October 08, 2014, 05:36:07 PM
Lowering the limit afterward wouldn't be a soft-forking change if the majority of mining power was creating too-large blocks, which seems possible.

When I say "soft-fork" I mean "a majority of miners upgrade and force all the rest of the miners to go along (but merchants and other fully-validating, non-mining nodes do not have to upgrade)."

Note that individual miners (or sub-majority cartels) can unilaterally create smaller blocks containing just higher-fee transactions, if they think it is in their long-term interest to put upward pressure on transaction fees.

I think that a really conservative automatic increase would be OK, but 50% yearly sounds too high to me. If this happens to exceed some residential ISP's actual bandwidth growth, then eventually that ISP's customers will be unable to be full nodes unless they pay for a much more expensive Internet connection. The idea of this sort of situation really concerns me, especially since the loss of full nodes would likely be gradual and easy to ignore until after it becomes very difficult to correct.

As I mentioned on Reddit, I'm also not 100% sure that I agree with your proposed starting point of 50% of a hobbyist-level Internet connection. This seems somewhat burdensome for individuals. It's entirely possible that Bitcoin can be secure without a lot of individuals running full nodes, but I'm not sure about this.

Would 40% initial size and growth make you support the proposal?


Determining the best/safest way to choose the max block size isn't really a technical problem; it has more to do with economics and game theory. I'd really like to see some research/opinions on this issue from economists and other people who specialize in this sort of problem.

Anybody know economists who specialize in this sort of problem? Judging by what I know about economics and economists, I suspect if we ask eleven of them we'll get seven different opinions for the best thing to do. Five of which will miss the point of Bitcoin entirely. ("...elect a Board of Blocksize Governors that decides on an Optimal Size based on market supply and demand conditions as measured by an independent Bureau of Blocksize Research....")
66  Bitcoin / Development & Technical Discussion / Re: A Scalability Roadmap on: October 06, 2014, 06:25:02 PM
Is Gavin saying this should grow at 50% per year because bandwidth has been increasing at this rate in the past?  Might it not be safer to choose a rate lower than historic bandwidth growth?  Also how do we know this high growth in bandwidth will continue?

Yes, that is what I am saying.

"Safer" : there are two competing threats here: raise the block size too slowly and you discourage transactions and increase their price. The danger is Bitcoin becomes irrelevant for anything besides huge transactions, and is used only by big corporations and is too expensive for individuals. Hurray, we just reinvented the SWIFT or ACH systems.

Raise it too quickly and it gets too expensive for ordinary people to run full nodes.

So I'm saying: the future is uncertain, but there is a clear trend. Let's follow that trend, because it is the best predictor we have of what will happen.

If the experts are wrong, and bandwidth growth (or CPU growth or memory growth or whatever) slows or stops in ten years, then fine: change the largest-block-I'll-accept formula. Lowering the maximum is easier than raising it (lowering is a soft-forking change that would only affect stubborn miners who insisted on creating larger-than-what-the-majority-wants blocks).


RE: a quick fix like doubling the size:

Why doubling? Please don't be lazy; at least do some back-of-the-envelope calculations to justify your numbers (to save you some work: the average Bitcoin transaction is about 250 bytes). The typical broadband home Internet connection can support much larger blocks today.
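Back-of-the-envelope with that number: a 1MB block holds about 1,000,000 / 250 = 4,000 transactions, or roughly 7 transactions per second network-wide. And a 10Mbps connection downloads even a 20MB block in about 16 seconds (20 * 8 = 160 Mbits / 10 Mbps).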
67  Bitcoin / Development & Technical Discussion / Re: Adding a feature to decline dust transactions on: October 04, 2014, 05:00:07 PM
That could be a feature of the wallet: do not display any unconfirmed (or even confirmed) transaction less than x.

That is a feature of Bitcoin-Qt. Unconfirmed dust transactions don't enter the memory pool, so they are not relayed, not included in blocks being mined, and not displayed by the wallet.

If I recall correctly, if they DO get mined into a block by somebody then they are displayed. Ignoring them and not adding them to the wallet in that case might be a nice feature, although today's dust might be tomorrow's treasure if prices rise another couple orders of magnitude.
68  Bitcoin / Bitcoin Discussion / Re: It's about time to turn off PoW mining on: September 17, 2014, 03:56:49 PM
I am not the best person to discuss the technical details here, but how do you explain that PoW altcoins are easily 51%-attacked to death, while PoS altcoins have all avoided this fate, and most of them (the non-scammy ones) work and work well? Clearly, when put into equal competition (altcoins), the PoS system came out on top in an equally competitive environment (without the early-start advantage, etc...).

I think we'll see non-clone coins being broken after two things happen:

1. They become valuable enough for attackers to bother, and there is some way for them to cash out.
2. The attackers have some time to do what they need to do to mount an attack-- write code, deploy botnets, hack into some big exchange(s), get their hands on some early-adopter's wallet backups, or whatever.

Once the tools and techniques are developed, then I think we'll see what we see in PoW 51% attacks: attacks against even mostly-worthless clonecoins, because if they've already got the tools then they might just attack for the lulz.

I'm surprised you count Peercoin as a PoS success-- they're still running with centralized checkpoints, aren't they?
69  Bitcoin / Bitcoin Discussion / Re: It's about time to turn off PoW mining on: September 17, 2014, 03:23:42 PM
Is there a rebuttal from the PoS crowd to this:
  https://download.wpsoftware.net/bitcoin/pos.pdf

... other than "sure, the original PoS ideas were flawed, but the latest MegaUberPoS system gets it right and nobody has figured out exactly how to break it!"
70  Bitcoin / Bitcoin Discussion / Re: DDoS attack on Bitcoin.org on: September 17, 2014, 03:11:26 PM
Probably just some anti-Foundation skiddie who saw this:
  https://bitcoinfoundation.org/2014/09/bitcoin-org-walk-down-memory-lane/
71  Bitcoin / Development & Technical Discussion / Re: Instructing a node to disconnect from a specific peer via RPC on: September 15, 2014, 12:29:24 PM
I needed that, so hacked together a disconnectpeer RPC call:
  https://github.com/gavinandresen/bitcoin-git/commit/499ae0b3d77e1c41d79f34329d555980676d1f3a

Needs more thorough testing-- I'm not sure if calling CloseSocketDisconnect directly from the RPC thread is the cleanest way of disconnecting a peer.
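The heart of it is only a few lines -- a sketch from memory of the 0.9-era net/RPC code, so names here may not match the commit exactly:

Code:
Value disconnectpeer(const Array& params, bool fHelp)
{
    if (fHelp || params.size() != 1)
        throw runtime_error("disconnectpeer <ip[:port]>\n"
                            "Disconnects the given peer, if connected.");

    std::string strNode = params[0].get_str();
    LOCK(cs_vNodes);
    BOOST_FOREACH(CNode* pnode, vNodes) {
        if (pnode->addrName == strNode) {
            // The questionable part: calling this from the RPC thread.
            // Setting pnode->fDisconnect = true and letting the socket
            // thread clean up might be the safer route.
            pnode->CloseSocketDisconnect();
            return Value::null;
        }
    }
    throw JSONRPCError(RPC_MISC_ERROR, "Peer not connected");
}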
72  Bitcoin / Development & Technical Discussion / Re: Energy efficiency. on: September 10, 2014, 04:51:42 PM
Thaddeus Dryja's "proof of idle" idea hasn't been getting enough attention. See https://www.youtube.com/watch?v=QN2TPeQ9mnA

The idea is to get paid NOT to mine, because it is economically rational for everybody to keep the difficulty lower rather than higher (everybody saves money on electricity if everybody can somehow agree to keep their equipment idle). Thaddeus figured out a way of solving the coordination problem so nobody can lie about how much mining power they have or profit from cheating and running miners that they promised to keep idle.

Having lots of idle mining capacity is appealing for at least two reasons: it is more energy efficient, and if an attack of some kind is detected, that idle capacity could be brought online to help fight it.



However... I suspect that taking that idle mining power, pointing it to a big mining pool, and then performing a winning-share-withholding-attack (if you find a share that satisfies full network difficulty, don't submit it to the pool -- just throw it away and pretend it never happened) could be a way of doubling your profits, because you drive down difficulty, get paid for "idle" capacity, AND get a share of the profits from the mining pool you're attacking.
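In miner-side pseudocode the attack is a single extra comparison (a sketch; SubmitShareToPool is a hypothetical helper, and remember lower hash = harder, so the pool's share target is numerically larger than the network target):

Code:
void HandleShare(const uint256& hash,
                 const uint256& shareTarget,    // pool's easy target
                 const uint256& networkTarget)  // full-block target
{
    if (hash > shareTarget)
        return;                   // not even a share
    if (hash <= networkTarget)
        return;                   // winning block: silently withhold it
    SubmitShareToPool(hash);      // hypothetical helper -- still get paid
}                                 // for every ordinary share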
73  Economy / Service Announcements / Re: [ANN] Tip4Commit.com - support opensource projects, contribute and earn bitcoins on: September 09, 2014, 04:41:40 PM
I think we're seeing some people submitting lots of tiny, trivial commits -- perhaps because they get more of the tipping pie.

I'm not sure how to combat that bad incentive....
74  Bitcoin / Development & Technical Discussion / Re: Share your ideas on what to replace the 1 MB block size limit with on: August 21, 2014, 02:27:35 PM
Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however, when combined with the difficulty-increase requirement the picture changes. As for the specifics of the Monero blockchain, there are also other factors, including dust from mining pools, that led to the early bloating, and we must also keep in mind that CryptoNote and Monero have built-in privacy, which adds to the blocksize by its very nature.
Yes, it's complicated-- but it's also concerning. The only altcoins I'm aware of that have diverged from the current Bitcoin behavior on this front, and did so with the benefit of all the discussions we've had available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin, it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?

75  Bitcoin / Press / Re: [2014-08-10] Bitcoin Suspected to Be NSA or CIA Project on: August 13, 2014, 12:16:22 AM
It is absolutely true! I heard the main guy even visited the NSA's headquarters in Fort Langley, Maryland!
76  Bitcoin / Development & Technical Discussion / Re: O(1) block propagation on: August 12, 2014, 02:21:03 PM
Block re-orgs need some thought.

If I have chain A-B-C, and get IBLTs for an alternative chain A-B'-C'-D'...

... then the current memory pool alone won't be enough to reconstruct B', C', and D'.

Using B and C to reconstruct B' and C' should work pretty well. Then the remaining memory pool transactions can be used to reconstruct D'.

If any of the reconstructions fail, just fall back to fetching all of B' C' D'.

Then again, re-orgs are rare enough that always falling back to fetching full blocks would be OK.
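In outline, the recovery strategy is (a sketch only; DecodeIBLT, AllTransactions, and the IBLT type are assumed names, not an existing API):

Code:
// Rebuild a competing chain B'-C'-D' from its IBLTs by seeding the
// candidate transaction set with the mempool plus the blocks being replaced.
bool ReconstructFork(const std::vector<IBLT>& iblts,      // for B', C', D'
                     const std::vector<CBlock>& replaced, // B, C
                     const CTxMemPool& mempool,
                     std::vector<CBlock>& blocksOut)
{
    std::vector<CTransaction> candidates = mempool.AllTransactions();
    BOOST_FOREACH(const CBlock& b, replaced)
        candidates.insert(candidates.end(), b.vtx.begin(), b.vtx.end());

    BOOST_FOREACH(const IBLT& iblt, iblts) {
        CBlock block;
        if (!DecodeIBLT(iblt, candidates, block))
            return false;   // caller falls back to fetching full blocks
        blocksOut.push_back(block);
    }
    return true;
}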
77  Bitcoin / Development & Technical Discussion / Re: Possible “reasons” for the transaction fee and possible price implications on: July 09, 2014, 05:57:42 PM
Can't somebody build a web service that detects when transactions enter its memory pool, looks at the transaction fees, then watches when those transactions enter the block chain, and analyses all of this data to provide a more accurate estimate of required fees based on real market data? Maybe have some sort of API that wallets could use? Or does this exist?

That is exactly what the 'smartfee' code in the reference implementation does.

RE: where does the market information come from:

Like any market, it comes from the collective action of lots of individual decisions. Different wallet software has different fee policies, and there is already a little bit of "I sent a transaction using wallet XYZ and it took FOREVER to confirm, WTF?!?" (or "why does exchange/wallet/service ABC charge me such high transaction fees").

As wallets mature, I expect them to specialize ("Save on Fees! Use UberWallet!") and/or compete for best cost/speed/reliability/predictability.

The default for the reference implementation will be "follow the herd" -- but the price will be set by the minority of people 'at the margins' who REALLY want their transactions to confirm quickly or REALLY want to spend as little as possible on transaction fees. They will set -paytxfee and/or -txconfirmtarget to override the default behavior.

And "they" are likely to be high-volume-transaction-creators-- like exchanges (probably want their transactions to confirm quickly; fewer customer support calls) or watch-a-video-get-a-few-bits services (probably want to cut costs any way they can, don't care if their customers have to wait a while for a withdrawal to get confirmed...).

RE: sybil/isolation attack:

Again, not a likely attack. You would have to:
1) Find some high-transaction-volume service and identify all of their bitcoin-network-connected nodes
2) Control ALL of those nodes' connections (expensive to do reliably with the 'connect out to 8 random peers' rule) FROM THE BEGINNING OF TIME (well, beginning of when it started running the smartfee code).
3) Let that node see only extremely-high-fee transactions (there aren't very many of those, so you'll need to manage to control the node's connections for a while).
4) Expect the node's operator to send a lot of transactions and not notice that they were paying abnormally high transaction fees.

If you are running a high-transaction-volume service you probably already have several connections into the bitcoin p2p network because you have probably already been the target of a distributed denial of service attack....

Definitely not an issue for Bitcoin-Qt, because you're shown the total amount + fees you'll pay before every transaction.


78  Bitcoin / Development & Technical Discussion / Re: Possible “reasons” for the transaction fee and possible price implications on: July 04, 2014, 05:09:02 PM
It's fairly easy to manipulate actually; sybil attacks are not that hard to do. (there's evidence they're being used against people accepting zeroconf) I demonstrated a 1BTC fee tx due to the estimator failing. Right now there is a cap of 100mBTC, but obviously that can drain your wallet pretty quickly.

... demonstrated in a completely artificial -regtest environment...

If you can Sybil somebody and control their view of the network, then it seems to me there are more potentially profitable attacks than "make them pay more in fees than they should."

But please feel free to demonstrate an actual, effective Sybil on the fee estimation code. bitcoincore.org is running a wallet-less bitcoind on port 8333 that generates the graphs at bitcoincore.org/smartfee/

(Hacking into the web server to make it LOOK like the fee estimation code failed doesn't count; you have to manage to control its p2p network connections and then manipulate the memory pool.)

79  Bitcoin / Development & Technical Discussion / Re: Possible “reasons” for the transaction fee and possible price implications on: July 04, 2014, 12:07:43 AM
Smart (dynamic, floating) fees for the reference implementation wallet were merged today:
  https://github.com/bitcoin/bitcoin/pull/4250

... and should appear in version 0.10.

The estimation code only considers transactions that are broadcast on the network, enter the memory pool (so are available to any miner to mine), and then are included in a block. So it is immune to miners putting pay-to-self transactions with artificially high fees in their blocks.
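In outline, the rule is simply this (a sketch, not the actual smartfee code; the names are illustrative):

Code:
// mapSeenHeight records the height at which each network-relayed tx
// entered our mempool; wallet- or miner-private transactions never get
// an entry, so they can't skew the estimates.
void OnBlockConnected(const CBlock& block, int nHeight)
{
    BOOST_FOREACH(const CTransaction& tx, block.vtx) {
        std::map<uint256, int>::iterator it = mapSeenHeight.find(tx.GetHash());
        if (it == mapSeenHeight.end())
            continue;  // never relayed to us: ignore it
        RecordFeeDataPoint(GetFeeRate(tx), nHeight - it->second);
        mapSeenHeight.erase(it);
    }
}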

Right now if you use the default fee rules your transactions will take 2-6 blocks to confirm:
  http://bitcoincore.org/smartfee/fee_graph.html

The hard-coded priority rules are even more broken; the reference implementation wallet will send a 56-million-priority transaction with no fee, which is nowhere near enough priority to get confirmed quickly:
  http://bitcoincore.org/smartfee/priority_graph.html

(the smart fee code estimates priority, too).

Release notes from doc/release-notes.md in the source tree:

Transaction fee changes
=======================

This release automatically estimates how high a transaction fee (or how
high a priority) transactions require to be confirmed quickly. The default
settings will create transactions that confirm quickly; see the new
'txconfirmtarget' setting to control the tradeoff between fees and
confirmation times.

Prior releases used hard-coded fees (and priorities), and would
sometimes create transactions that took a very long time to confirm.


New Command Line Options
========================

-txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to confirm within n blocks (default: 1). This setting
is over-ridden by the -paytxfee option.

New RPC methods
===============

Fee/Priority estimation
-----------------------

estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to be confirmed within nblocks. Returns -1 if not enough
transactions have been observed to compute a good estimate.

estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to confirm within nblocks. Returns -1 if not
enough free transactions have been observed to compute a good
estimate.

Statistics used to estimate fees and priorities are saved in the
data directory in the 'fee_estimates.dat' file just before
program shutdown, and are read in at startup.
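Example use (the returned numbers here are illustrative; both calls return -1 until enough transactions have been observed):

Code:
$ bitcoin-cli estimatefee 2
0.00026000
$ bitcoin-cli estimatepriority 6
718158904.0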
80  Bitcoin / Development & Technical Discussion / Re: Faster blocks vs bigger blocks on: July 03, 2014, 02:56:20 PM
It seems to me having miners share 'near-miss' blocks with each other (and the rest of the world) does several good things.

As Greg says, that tells you how much hashing power is including your not-yet-confirmed transaction, which should let merchants reason better about the risk of their transactions being double-spent.

If the protocol is well-designed, sharing near-miss blocks should also make propagation of complete blocks almost instantaneous most of the time. All of the data in the block (except the nonce and the coinbase) is likely to have already been validated/propagated. See Greg's thoughts on efficient encoding of blocks:  https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

So there would almost always be no advantage to working on a smaller block rather than a larger one (it would be very rare to find a full-difficulty block before finding-- say-- a 1/100th-difficulty block).

Near-instant block propagation if you 'show your work' should give un-selfish miners an advantage over miners who try any kind of block withholding attack. And it should make network convergence quicker in the case of block races; miners could estimate how much hashing power is working on each fork when there are two competing forks on the network, and rational miners will abandon what looks like a losing fork as soon as it looks statistically likely (based on the previous-block pointers for near-miss blocks they see) that they're on the losing fork.

We can do all of this without a hard fork. It could even be prototyped as an ultra-efficient "miner backbone network" separate from the existing p2p network-- in fact, I'm thinking it SHOULD be done first as a separate network...
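The convergence logic could be as simple as this sketch (the 1/100th share difficulty, the decision threshold, and all names are assumptions):

Code:
// During a block race, tally near-miss ("weak") blocks by the tip they
// extend and estimate our fork's share of the hash power.
double OurForkShare(const std::map<uint256, int>& weakBlockCounts,
                    const uint256& ourTip)
{
    int ours = 0, total = 0;
    for (std::map<uint256, int>::const_iterator it = weakBlockCounts.begin();
         it != weakBlockCounts.end(); ++it) {
        total += it->second;
        if (it->first == ourTip)
            ours = it->second;
    }
    return total ? (double)ours / total : 1.0;  // no data yet: stay put
}
// With 1/100th-difficulty weak blocks, a few dozen observations are enough
// to tell a 30/70 split from a coin flip; a rational miner would then jump
// to the heavier fork.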