281  Bitcoin / Development & Technical Discussion / Re: Blocking the creation of outputs that don't make economic sense to spend on: March 10, 2013, 05:29:11 PM
Forcing SD to pay for the externalities it causes is a great idea.
I agree.

That is why I ask: "what is the external cost, with reasonable assumptions?"

So I'll ask again: what is the cost-per-(pick-your-favorite-time-unit) to the network of an extra unspent transaction output?
282  Bitcoin / Development & Technical Discussion / Re: Blocking the creation of outputs that don't make economic sense to spend on: March 10, 2013, 03:10:05 AM
The point is, a transaction output that will never be spent costs the network more than one that will be spent, because the former must stay in expensive, high-speed storage (currently RAM, and maybe SSDs in the future)

How much does it cost, if you assume reasonable trends for storage/electricity cost?
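That question invites a back-of-the-envelope estimate. Here is a minimal sketch in Python; every input (UTXO size, node count, storage price) is an assumption picked purely for illustration, not a measured figure.

Code:
# Back-of-the-envelope: yearly network-wide cost of one extra unspent output.
# All inputs below are illustrative assumptions, not measured values.

UTXO_BYTES = 40                 # assumed size of one unspent output in the index
FULL_NODES = 20_000             # assumed number of full nodes storing the UTXO set
USD_PER_GB_YEAR = 1.00          # assumed cost of 1 GB of fast storage for one year

bytes_total = UTXO_BYTES * FULL_NODES
cost_per_year = bytes_total / 1e9 * USD_PER_GB_YEAR

print(f"~${cost_per_year:.6f} per year across the whole network")
# With these numbers: 40 * 20,000 = 800,000 bytes = 0.0008 GB, so about $0.0008/year.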
283  Bitcoin / Development & Technical Discussion / Re: Thoughts on raising the Hard 1Mb block size limit on: March 08, 2013, 08:44:09 PM
Honestly, I'm sick of people ignoring all the optimizations that have already been identified and are just waiting to be coded, as if we're going to scale to 2000 tps without anyone bothering to implement any of them.
+1
284  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 07, 2013, 05:10:27 PM
You may have heard me say "Bitcoin is an experiment" before...  well, we're finding out right now what happens as the experiment scales up.

First: I sent a message to the big mining pools, reminding them of the run-time options they can set to control the size of the blocks they create. I did not tell them what they should or shouldn't do; I think we need to move beyond centralized decision-making.

I did send them a pointer to this very rough back-of-the-envelope estimate on the current marginal cost of transactions:
  https://gist.github.com/gavinandresen/5044482

(if anybody wants to do a better analysis, I'd love to read it).

Second: block size is half of the equation. The other half is transaction fees and competition for getting included into blocks. All of the bitcoin clients need to do a better job of figuring out the 'right' transaction fee, and services that generate transactions will have to adjust the fees they pay (and are, already).

Finally: in my opinion, there is rough consensus that the 1MB block size limit WILL be raised. It is just a question of when and how much / how quickly.
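On the "figuring out the 'right' transaction fee" point above: one obvious approach (a minimal sketch in Python, not what any client actually shipped) is to look at the fee-per-kilobyte of recently confirmed transactions and pick a percentile. The fallback fee and the input format here are assumptions.

Code:
def suggest_fee_per_kb(recent_confirmed, percentile=0.5):
    """Suggest a fee from recently confirmed transactions.

    recent_confirmed: list of (fee_in_btc, size_in_bytes) tuples taken from
    transactions in the last few blocks.  Purely illustrative.
    """
    if not recent_confirmed:
        return 0.0005  # assumed fallback fee per KB when there is no data
    rates = sorted(fee / (size / 1000.0) for fee, size in recent_confirmed)
    return rates[int(percentile * (len(rates) - 1))]

# Example with three observed transactions:
print(suggest_fee_per_kb([(0.0005, 250), (0.001, 500), (0.0002, 300)]))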
285  Bitcoin / Bitcoin Discussion / Re: News in multisignature support? on: March 05, 2013, 02:30:48 PM
I wouldn't say NO progress is being made, but there has been a long detour because we need a secure way of telling you WHO you are paying to make multisignature work securely. Otherwise we could have the most nifty, secure multisig system in the world that fails because you THINK you're paying 1kqHKEYYC8CQPxyV53nCju4Lk2ufpQqA2 but a crafty attacker makes you pay them at 1kqHLFyZDBDoPDYwSEtjv5CWka42uGqA2 instead.

So I've been spending most of my time implementing "the payment protocol." I'll write more in a Foundation blog post on Friday.

Payment protocol messages will be part of the information sent between devices or people to make multisig transactions work.
286  Bitcoin / Development & Technical Discussion / Re: Can anybody stall Bitcoin for 72BTC per hour? ANSWER: PARTIALLY? on: March 04, 2013, 04:32:27 PM
Yes, I definitely meant priority. Highest priority transactions (transferring lots of old coins) get included in blocks first under the default block-filling rules.

And also notice that I said "most miners are..." There are at least a few big mining pools that have their own idiosyncratic ways of deciding which transactions get into blocks, including private deals with big exchanges/merchants/etc.

Also note that because finding blocks is a random process the Bitcoin network "stalls" for an hour every three weeks or so, with no blocks found.

My guess is that if an attacker tried to monopolize block space most of us wouldn't even notice. If you're really worried about it, then encourage some big mining pool(s) to have a completely different block-filling strategy ("randomly select from the memory pool" would be easy to implement).
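And "randomly select from the memory pool" really would be only a few lines. A minimal sketch in Python; the transaction objects and the size field are placeholders:

Code:
import random

def fill_block_randomly(mempool, max_block_bytes=250_000):
    """Pick transactions from the memory pool in random order until the block is full."""
    block, used = [], 0
    for tx in random.sample(mempool, len(mempool)):   # random order, no repeats
        if used + tx["size"] <= max_block_bytes:
            block.append(tx)
            used += tx["size"]
    return block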

287  Bitcoin / Development & Technical Discussion / Re: Can anybody stall Bitcoin for 72BTC per hour? on: March 02, 2013, 04:05:41 PM
The default block-filling algorithm that most miners are running is:

+ Fill up part of the block with the highest-priority transactions, regardless of fees
+ Then fill up the rest of the block with as many fee-paying transactions as possible, highest fee-per-kilobyte first.

... so flooding the network with high-fee transactions won't "stall Bitcoin."  Well, except for people playing SatoshiDice or doing something else that results in lots of low-priority fee-paying transactions (and even there, they could always opt to pay a little more in transaction fees).
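For concreteness, here is a minimal sketch in Python of that two-phase selection. The priority formula (sum of input value times input age, divided by transaction size) follows the well-known Bitcoin priority heuristic; the size of the priority area, the overall size cap, and the transaction fields are illustrative defaults, not exact values from any particular release.

Code:
def tx_priority(tx):
    """Classic priority heuristic: old, high-value inputs score highest."""
    return sum(value * age for value, age in tx["inputs"]) / tx["size"]

def fill_block(mempool, priority_bytes=27_000, max_bytes=250_000):
    block, used = [], 0

    # Phase 1: highest-priority transactions first, regardless of fees.
    for tx in sorted(mempool, key=tx_priority, reverse=True):
        if used >= priority_bytes:
            break
        if used + tx["size"] <= max_bytes:
            block.append(tx)
            used += tx["size"]

    # Phase 2: fill the rest with fee-paying transactions,
    # highest fee-per-kilobyte first.
    remaining = [tx for tx in mempool if tx not in block and tx["fee"] > 0]
    for tx in sorted(remaining, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used + tx["size"] <= max_bytes:
            block.append(tx)
            used += tx["size"]

    return block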
288  Bitcoin / Bitcoin Discussion / Re: How merchant will behave when there is hard fork & they are not sure who win? on: February 20, 2013, 10:09:02 PM
A hard fork won't happen unless the vast super-majority of miners support it.

E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Quote
Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)

Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

+ New software creates blocks with a new block.version
+ Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks. (51% of the last 100 blocks if on testnet)
100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
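A minimal sketch in Python of the supermajority test the gist describes. The window and threshold are the straw-man numbers from the quote above, not a final rule, and the block representation is just a list of version numbers:

Code:
def allow_big_blocks(recent_block_versions, new_version, window=1000, threshold=1.0):
    """Return True once enough of the last `window` blocks carry the new version.

    recent_block_versions: block version numbers, newest last.  The defaults are
    the straw-man numbers from the quoted gist; a real deployment would differ.
    """
    tail = recent_block_versions[-window:]
    if len(tail) < window:
        return False
    new_count = sum(1 for v in tail if v >= new_version)
    return new_count / window >= threshold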
289  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 20, 2013, 07:38:16 PM
RE: particular ideas if "we" decide the blocksize has to be increased:

I think the first step is to come to rough consensus that, at some point, we WILL need a hardfork to increase the block size limit.

If we can come to rough consensus on that, then we can figure out the safest way to accomplish that.

I don't think we'll get consensus; retep and others will argue "we need to run into the hard limit to FORCE alternatives to be created first."

I keep saying we should see what happens as we run into the soft blocksize limits.  To people on both sides of this debate:  what do you predict will happen?

If what you predict will happen doesn't actually happen, will that make you re-evaluate your position?

(I haven't spent enough time thinking about this problem to answer those questions, but that is how I'm going to think about it).
290  Bitcoin / Bitcoin Discussion / Re: The fork on: February 20, 2013, 03:46:19 PM
Why wouldn't miners reject interactions with miners who set the block size too high, for instance?

Yes, I believe they would. So far, most miners and pools are VERY conservative; I think the idea that they will create huge blocks that have a significant risk of being rejected, just so they MIGHT get an advantage over marginal miners that can't process them fast enough, is loony.

But I might be wrong.

So I'd like to wait a little while, think deeply some more, and see how miners and merchants and users react with the system we've got as transaction volume increases.
291  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 released on: February 20, 2013, 03:37:56 PM
A couple of people asked if they need to upgrade if they are running rc1:  no, I wouldn't bother.

The only significant code change is better handling of the rare case of one of the LevelDB database files being corrupted. If you're really curious, github will show you the differences between any two versions; here are the rc1 to 0.8.0 final release differences:  https://github.com/bitcoin/bitcoin/compare/v0.8.0rc1...v0.8.0
292  Alternate cryptocurrencies / Altcoin Discussion / Re: [ANN] [PPC] PPCoin 0.3.0 Release - Upgrade Required on: February 20, 2013, 12:24:33 AM
If you ask Gavin about his position on this matter he likely would have to tell you the same thing.

... or not.  There's a difference between "unfixed vulnerabilities" and "half-baked design."

I think big decisions that affect the fundamentals of the design should be discussed in the open (see the current Bitcoin debate over raising the block size limit).
293  Bitcoin / Development & Technical Discussion / Re: artificial 250kB limit? on: February 20, 2013, 12:13:00 AM
So the current one megabyte hard limit is already double the limit Satoshi originally put in place?
No, the hard limit has been 1 megabyte forever. But 500K was the largest block Satoshi's code could possibly build (and I believe even that wasn't possible in practice, because you'd have to spend all 21 million bitcoins in fees to fill a block to 500K).
294  Bitcoin / Development & Technical Discussion / Re: artificial 250kB limit? on: February 19, 2013, 09:01:05 PM
A couple of minor clarifications:

There has always been an artificial block size limit; Satoshi's code exponentially increased the transaction fees required to get into a block as the block filled up from 250K to an absolute maximum of 500K. There are almost certainly still miners running with that algorithm; their effective maximum block size is a little more than 250K.

Also, solo/p2p miners and pool operators running a recent version of bitcoind can very easily change the maximum block size; it is a command-line / bitcoin.conf setting. They don't need to use different software.
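To illustrate the escalating-fee idea described above, here is a minimal sketch in Python. It is not the exact code from the original client; the base fee is an assumed value, and the shape of the curve is only meant to show fees blowing up as the block approaches the hard cap.

Code:
BASE_FEE = 0.0005          # assumed base fee per transaction, for illustration
SOFT_START = 250_000       # fees start ramping once the block passes this size
HARD_CAP = 500_000         # largest block the old code would ever build

def required_fee(block_bytes_so_far):
    """Illustrative escalating-fee rule (not the exact original code): below the
    soft start the base fee applies; above it the required fee grows without
    bound as the block nears the hard cap."""
    if block_bytes_so_far < SOFT_START:
        return BASE_FEE
    if block_bytes_so_far >= HARD_CAP:
        return float("inf")   # effectively impossible to fill the block completely
    return BASE_FEE * HARD_CAP / (HARD_CAP - block_bytes_so_far)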
295  Bitcoin / Bitcoin Discussion / Bitcoin-Qt / bitcoind version 0.8.0 released on: February 19, 2013, 06:41:31 PM
Bitcoin-Qt version 0.8.0 is now available from:
  http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.8.0/

This is a major release designed to improve performance and handle the
increasing volume of transactions on the network.

Please report bugs using the issue tracker at github:
  https://github.com/bitcoin/bitcoin/issues

How to Upgrade
--------------

If you are running an older version, shut it down. Wait
until it has completely shut down (which might take a few minutes for older
versions), then run the installer (on Windows) or just copy over
/Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux).

The first time you run after the upgrade a re-indexing process will be
started that will take anywhere from 30 minutes to several hours,
depending on the speed of your machine.

Incompatible Changes
--------------------

This release no longer maintains a full index of historical transaction ids
by default, so looking up an arbitrary transaction using the getrawtransaction
RPC call will not work. If you need that functionality, you must run once
with -txindex=1 -reindex=1 to rebuild block-chain indices (see below for more
details).

Improvements
------------

Mac and Windows binaries are signed with certificates owned by the Bitcoin
Foundation, to be compatible with the new security features in OSX 10.8 and
Windows 8.

LevelDB, a fast, open-source, non-relational database from Google, is
now used to store transaction and block indices.  LevelDB works much better
on machines with slow I/O and is faster in general. Berkeley DB is now only
used for the wallet.dat file (public and private wallet keys and transactions
relevant to you).

Pieter Wuille implemented many optimizations to the way transactions are
verified, so a running, synchronized node uses less working memory and does
much less I/O. He also implemented parallel signature checking, so if you
have a multi-CPU machine all CPUs will be used to verify transactions.

New Features
------------

"Bloom filter" support in the network protocol for sending only relevant transactions to
lightweight clients.

contrib/verifysfbinaries is a shell-script to verify that the binary downloads
at sourceforge have not been tampered with. If you are able, you can help make
everybody's downloads more secure by running this occasionally to check PGP
signatures against download file checksums.

contrib/spendfrom is a python-language command-line utility that demonstrates
how to use the "raw transactions" JSON-RPC api to send coins received from particular
addresses (also known as "coin control").
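A minimal sketch, in Python using only the standard library, of the raw-transactions flow that spendfrom demonstrates. The RPC credentials, addresses, and amounts are placeholders to replace with your own, and there is no error handling; the RPC methods used (listunspent, createrawtransaction, signrawtransaction, sendrawtransaction) are the standard raw-transactions API.

Code:
import base64, json, urllib.request

RPC_URL = "http://127.0.0.1:8332"          # default bitcoind RPC port
RPC_USER, RPC_PASS = "rpcuser", "rpcpass"  # placeholders: match your bitcoin.conf

def rpc(method, *params):
    """Tiny JSON-RPC helper for a local bitcoind (no error handling)."""
    req = urllib.request.Request(
        RPC_URL,
        json.dumps({"method": method, "params": list(params), "id": 1}).encode(),
        {"Content-Type": "application/json",
         "Authorization": "Basic " + base64.b64encode(
             f"{RPC_USER}:{RPC_PASS}".encode()).decode()})
    return json.loads(urllib.request.urlopen(req).read())["result"]

# Coin control: spend only outputs received at one particular address.
FROM_ADDR = "1ExampleSourceAddress"        # placeholder
TO_ADDR = "1ExampleDestinationAddress"     # placeholder

unspent = [u for u in rpc("listunspent") if u["address"] == FROM_ADDR]
inputs = [{"txid": u["txid"], "vout": u["vout"]} for u in unspent]
amount = sum(u["amount"] for u in unspent) - 0.0005   # leave something for the fee

raw = rpc("createrawtransaction", inputs, {TO_ADDR: round(amount, 8)})
signed = rpc("signrawtransaction", raw)
print(rpc("sendrawtransaction", signed["hex"]))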

New/changed settings (command-line or bitcoin.conf file)
--------------------------------------------------------

dbcache : controls LevelDB memory usage.

par : controls how many threads to use to validate transactions. Defaults to the number
of CPUs on your machine, use -par=1 to limit to a single CPU.

txindex : maintains an extra index of old, spent transaction ids so they will be found
by the getrawtransaction JSON-RPC method.

reindex : rebuild block and transaction indices from the downloaded block data.

New JSON-RPC API Features
-------------------------

lockunspent / listlockunspent allow locking transaction outputs for a period of time so
they will not be spent by other processes that might be accessing the same wallet.

addnode / getaddednodeinfo methods, to connect to specific peers without restarting.

importprivkey now takes an optional boolean parameter (default true) to control whether
or not to rescan the blockchain for transactions after importing a new private key.

Important Bug Fixes
-------------------

Privacy leak: the position of the "change" output in most transactions was not being
properly randomized, making it easier to analyze the transaction graph and identify
users' wallets.
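For reference, a minimal sketch in Python of the fix being described (placing the change output at a random position among a transaction's outputs); the output representation is illustrative.

Code:
import random

def insert_change_randomly(payment_outputs, change_output):
    """Place the change output at a random position so its location
    no longer gives away which output is the change."""
    outputs = list(payment_outputs)
    outputs.insert(random.randint(0, len(outputs)), change_output)
    return outputs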

Zero-confirmation transaction vulnerability: accepting zero-confirmation transactions
(transactions that have not yet been included in a block) from somebody you do not
trust is still not recommended, because there will always be ways for attackers to
double-spend zero-confirmation transactions. However, this release includes a bug
fix that makes it a little bit more difficult for attackers to double-spend a
certain type ("lockTime in the future") of zero-confirmation transaction.

Dependency Changes
------------------

Qt 4.8.3 (compiling against older versions of Qt 4 should continue to work)


Thanks to everybody who contributed to this release:
----------------------------------------------------

Alexander Kjeldaas
Andrey Alekseenko
Arnav Singh
Christian von Roques
Eric Lombrozo
Forrest Voight
Gavin Andresen
Gregory Maxwell
Jeff Garzik
Luke Dashjr
Matt Corallo
Mike Cassano
Mike Hearn
Peter Todd
Philip Kaufmann
Pieter Wuille
Richard Schwab
Robert Backhaus
Rune K. Svendsen
Sergio Demian Lerner
Wladimir J. van der Laan
burger2
default
fanquake
grimd34th
justmoon
redshark1802
tucenaber
xanatos
296  Bitcoin / Bitcoin Discussion / Re: Should casual users avoid the Satoshi client? on: February 19, 2013, 04:05:15 PM
I think casual users should avoid the Satoshi client. Gigabytes of blockchain data is not user-friendly, and we've done a lousy job of making it hard for users to lose their keys.

I think something like the blockchain.info web wallet or Electrum is a good choice for long-term storage; you keep control over your private keys, and are exposed to possible theft risk only when you make a transaction (because a hacked blockchain.info could feed you evil Javascript, or a hacked Electrum download server could feed you an evil executable).  The chances that you will be one of the first customers who make a transaction after they were hacked, before they took their site offline to recover from the hack, are pretty small if you are only making a couple of transactions per month.

I'm also assuming that a casual user isn't storing thousands of bitcoins. I don't think we have great solutions for casual users with thousands of bitcoins yet (I consider paper wallets a fair solution, not a great one).
297  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 19, 2013, 03:17:17 PM
The changes in the last year were "soft forks" -- forks that require all miners to upgrade (if they don't, their blocks are ignored) but that do not require merchants/users to upgrade.

-------

A couple of random, half-baked thoughts I had this morning:

If you think that the block size should stay at 1 megabyte forever, then you're saying the network will never support more than 7 transactions per second, and each transaction will need to be for a fairly large number of bitcoins (otherwise transaction fees will eat up the value of the transaction).

If transactions are all pretty big, why the heck do we have 8 decimal places for the transaction amount?

Don't get me wrong, I still think the bitcoin network is the wrong solution for sub-US-penny payments. But I see no reason why it can't continue to work well for small-amount (between US $0.01 and $1) payments.

If there are a very limited number of transactions per day and billions of dollars worth of BTC being transacted (that's what we all want, yes?) then obviously each transaction must be large. So, again, why bother having 8 digits after the decimal point if each transaction is hundreds of bitcoins big?
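Where the "7 transactions per second" ceiling comes from, as quick arithmetic in Python (the average transaction size is an assumption; real transactions vary):

Code:
MAX_BLOCK_BYTES = 1_000_000    # the hard 1 MB limit
AVG_TX_BYTES = 250             # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600   # ten-minute target block interval

tx_per_block = MAX_BLOCK_BYTES / AVG_TX_BYTES              # 4000
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS      # about 6.7
print(f"{tx_per_second:.1f} transactions per second")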

------

Second half-baked thought:

One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.

Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.

"But miners can just include a never broadcast, fee-only transactions to jack up the fees in the block!"

Yes... but if their block gets orphaned then they'll lose those "fake fees" to another miner. I would guess that the incentive to try to push low-bandwidth/CPU miners out of the network would be overwhelmed by the disincentive of losing lots of BTC if you got orphaned.
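A minimal sketch in Python of that rule as a block-validity check. The 50 BTC per megabyte figure is the number proposed above, and the block fields are illustrative:

Code:
MIN_REWARD_PER_MB = 50.0       # proposed minimum (subsidy + fees) per megabyte
ONE_MEGABYTE = 1_000_000

def block_reward_ok(block_size_bytes, subsidy_btc, fees_btc):
    """Accept any block up to 1 MB; larger blocks must carry enough total reward."""
    if block_size_bytes <= ONE_MEGABYTE:
        return True
    total_reward = subsidy_btc + fees_btc
    required = MIN_REWARD_PER_MB * block_size_bytes / ONE_MEGABYTE
    return total_reward >= required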
298  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 07:18:36 PM
RE: lots of code to write if you can't keep up with transaction volume:  sure.  So?

Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

All in the name of vague worries about "too much centralization."
299  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 06:29:47 PM
Half-baked thoughts on the O(N) problem:

So, we've got O(T) transactions that have to get verified.

And, right now, we've got O(P) full nodes on the network that verify every single transaction.

So, we get N verifications, where N = T*P.

The observation is that if both T and P increase at the same rate, that total grows quadratically (double both and the work quadruples) -- the O(N^2) everybody worries about.

... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."

Really?

If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?

I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.
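A minimal sketch in Python of that random-subset idea, showing only how a limited node might choose which transactions to fully validate. The sampling rate is an assumption, and a real design would still have to decide what happens when a sampled transaction turns out to be invalid:

Code:
import hashlib

def should_validate(txid, node_secret, sample_rate=0.05):
    """Deterministically decide whether this node fully validates a transaction.

    node_secret keeps each node's sample unpredictable to outsiders; sample_rate
    is an assumed fraction tuned to the node's CPU/bandwidth budget.  With many
    independent nodes each sampling a different subset, every transaction is
    still expected to be checked many times across the network.
    """
    digest = hashlib.sha256((node_secret + txid).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < sample_rate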
300  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 05:14:32 PM
So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

I agree with Stephen Pair-- THAT would be a highly centralized system.

Oh, sure, mining might be decentralized.  But who cares if you either have to be a gazillionaire to participate directly on the network as an ordinary transaction-creating customer, or have to have your transactions processed via some centralized, trusted, off-the-chain transaction processing service?

So, as I've said before: we're running up against the artificial 250K block size limit now, and I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen. Maybe miners will collectively decide to keep the block size low so they get more fees. Maybe they will max it out to force out miners on slow networks. Maybe they will keep blocks small so they relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind Tor, but blasting out new blocks not via Tor).


I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.