Show Posts
1341  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 27, 2013, 01:52:01 AM
  I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data on propagation time?  And then how do we use that data in a way that results in a uniform computation for the entire network?
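A minimal sketch of the per-node half of this idea (Python, with a hypothetical 10-second propagation target): a node can estimate throughput from the sizes and arrival delays of blocks it has recently received, and scale a suggested ceiling from that.  Turning these local measurements into a single uniform number for the whole network is exactly the open question above.

Code:
from statistics import median

TARGET_PROPAGATION_SECS = 10     # hypothetical design target
BASE_MAX_SIZE = 1_000_000        # current 1 MB consensus limit

def suggested_max_size(samples):
    """samples: locally gathered (block_size_bytes, observed_delay_secs) pairs."""
    throughputs = [size / delay for size, delay in samples if delay > 0]
    if not throughputs:
        return BASE_MAX_SIZE
    # bytes this node could receive within the target window
    return int(median(throughputs) * TARGET_PROPAGATION_SECS)

# e.g. ~500 KB blocks arriving in ~4 s suggest a ~1.25 MB ceiling
print(suggested_max_size([(500_000, 4.0), (480_000, 3.5), (520_000, 4.2)]))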
1342  Bitcoin / Bitcoin Discussion / Re: Bitcoin showing up on e-commerce sites. What have you seen? on: February 26, 2013, 08:44:06 PM
That's awesome.  Perhaps someone should point this out to Gunbroker.com itself, and encourage them to make Bitcoin a standard payment method?
1343  Bitcoin / Bitcoin Discussion / Re: Off-chain Transactions on: February 26, 2013, 07:44:09 PM

While I agree with the perspective of the OP, the greater gain would be to develop some kind of standard overlay network, across which many smaller BCHs, wallet services, exchanges, etc. could interact off of the bitcoin network, and periodically settle up upon the main blockchain.  ...

I like this direction too, although I'm not sure how to implement it.  It sounds like decentralized accounting, which I think would be great, but it seems to me more practical to set up private controlled servers.


No, no.  Bitcoin is already decentralized accounting.  What I'm proposing would basically be a VPN for BCHs of many types, some decentralized, others dedicated servers that function like banks.  In reality, this is going to happen eventually, should transaction fees ever reach a level that these services can consistently undercut.

One way to do it now would be for the owners of a couple of major wallet services, say MtGox and SilkRoad, to get together and form a direct relationship, wherein a member of one institution could send funds to any member of the other, and rather than it creating a blockchain transaction, both servers recognize that the funds are headed for the other institution, and each credits and debits the appropriate accounts against the two institutions' mutual credit.  This could be done simply with a set of code on each server that can identify the addresses of the other institution, or simply by clicking the 'use green address' button.  These two institutions would have to be willing to hold a balance with each other, up to some point which triggers a settling up; say, 100 BTC.

If buyers on SilkRoad were transferring funds from their accounts at MtGox, buying things from SilkRoad, and a portion of those vendors were transferring funds back towards MtGox on a continuing basis, a large enough mutual credit limit would let many transfers between those two institutions balance out, thus completely avoiding blockchain transactions while still using Bitcoin.  Users might not even be aware of the cost-saving agreements that their institutions employ.  However, in order to do this, these institutions must both be large (in both revenue and membership) and have access control over members' funds.  I can't see a way that multi-sig works here.

My proposal is for a standard way of setting up these mutual credit agreements, as well as extending those agreements in a similar way to how Ripple works between individuals.  Ripple is actually more powerful between institutions than between individuals, IMHO.
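As a toy sketch of how such a bilateral agreement might be coded (all names hypothetical; a real system would also need signed acknowledgements and dispute handling):

Code:
class MutualCreditChannel:
    """Bilateral mutual-credit ledger between institutions A and B."""

    def __init__(self, limit_btc=100.0):
        self.limit = limit_btc
        self.balance = 0.0   # >0 means A owes B; <0 means B owes A

    def transfer(self, amount, direction):
        """direction is 'A->B' or 'B->A'; credits/debits internal
        accounts instead of creating a blockchain transaction."""
        self.balance += amount if direction == 'A->B' else -amount
        if abs(self.balance) >= self.limit:
            self.settle_on_chain()

    def settle_on_chain(self):
        debtor = 'A' if self.balance > 0 else 'B'
        print(f"one blockchain tx: {debtor} pays {abs(self.balance):.8f} BTC")
        self.balance = 0.0

chan = MutualCreditChannel()
chan.transfer(60, 'A->B')   # internal ledger update only
chan.transfer(30, 'B->A')   # nets against the first transfer
chan.transfer(75, 'A->B')   # net hits 105 BTC, so settlement triggers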
1344  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 26, 2013, 06:51:37 PM
Apple isn't a big company with which no one could possibly compete because no one can build what they build; Apple is such a big company because they send GOVERNMENT THUGS after anyone who would dare use their ideas and copy their products.

This is so true. People think Apple is a technology company but what they are really good at is using the government's monopoly on the use of violence to eliminate competitors.

They also have some creative people working there; it's not like Apple's business model is based on copying the best features of competitors.  They are not an ideal example of an upstanding corporation, but nor are they Micro$oft.  From 1994 until at least 2007, I literally did not see anything new out of M$ that wasn't available first, in some form, on GNU/Linux or some other open source project.  I'm not convinced that M$ ever did anything particularly novel outside the scope of copyright legal theory.
1345  Other / Beginners & Help / Re: Shoulda used bitcoin... on: February 26, 2013, 03:53:40 AM
I still don't understand why this is even illegal.  Did the other parents not know that there would be an adult theme?  It was held in a private rented room.
1346  Bitcoin / Bitcoin Discussion / Re: Off-chain Transactions on: February 26, 2013, 01:55:36 AM
While I agree with the perspective of the OP, the greater gain would be to develop some kind of standard overlay network, across which many smaller BCHs, wallet services, exchanges, etc. could interact off of the bitcoin network, and periodically settle up upon the main blockchain.

This sounds like an interesting idea.

I started to wonder if daughter alt chains (specifically made for the task) could provide the off-network transactions, with the main bitcoin network remaining only to periodically combine the last n txs from a daughter chain into blocks.  This way a client would only need to keep a copy of the chain to which they were subscribed.  Then I realised that each daughter chain would be less secure than the main network, and since this is not something of which I have an in-depth understanding, there are probably other reasons this can't be done.

So maybe alt chains (if only the term "alt chain" didn't have the connotations it unfortunately has). If the block size problem becomes an issue for miners or users, alt chains like litecoin may see greater use. Or maybe geographically local alt chains that are otherwise identical to bitcoin?

I hope not.  Part of the benefit of bitcoin is the network effect, which is damaged, not improved, by the growth of alt-coins.  The idea that I just described still uses bitcoin as its underlying asset.
1347  Bitcoin / Bitcoin Discussion / Re: Off-chain Transactions on: February 26, 2013, 01:39:30 AM
While I agree with the perspective of the OP, the greater gain would be to develop some kind of standard overlay network, across which many smaller BCHs, wallet services, exchanges, etc. could interact off of the bitcoin network, and periodically settle up upon the main blockchain.  I don't really know how to do this, but I imagine it would work something like how Ripple is intended to work, so that for any given BCH, the ownership only needs to prove identity and credit-worthiness to however many of its peers as may be required, and not to the entire p2p network at large.  This, unto itself, would limit centralization of the greater bitcoin economy by providing a standardized means by which any one person, or group of people, with the right kinds of resources could set up such a BCH for their own membership.

For example, vendors on SilkRoad could buy something by providing their BCH crypto-ID, which could be something as simple as a copy of a specially chosen bitcoin address (probably one attached to their internal account at SilkRoad) which has also been digitally signed by SilkRoad's BCH crypto-ID.  The seller could scan such a QR code, submit that data to their own BCH server, and that server could then determine 1) whether both IDs are real, 2) whether the issuing BCH (SilkRoad) has the ripple-credit to back up this transaction, and 3) whether there are any unresolved disputes between this address and any others on the BCH network.

If we can do this, then the rest will follow.
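A sketch of those three checks, with HMAC standing in for real digital signatures (Python's standard library has no ECDSA) and every name hypothetical:

Code:
import hmac, hashlib

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

SILKROAD_KEY = b'silkroad-bch-secret'   # stands in for an institution privkey

def issue_credential(member_address: str):
    """The BCH endorses one of its members' deposit addresses."""
    return member_address, sign(SILKROAD_KEY, member_address.encode())

def verify_credential(addr, sig, issuer_key, issuer_has_credit, open_disputes):
    if not hmac.compare_digest(sig, sign(issuer_key, addr.encode())):
        return False          # 1) the IDs are not real
    if not issuer_has_credit:
        return False          # 2) the issuing BCH lacks the ripple-credit
    if open_disputes:
        return False          # 3) unresolved disputes on the BCH network
    return True

addr, sig = issue_credential('1ExampleMemberAddr')
print(verify_credential(addr, sig, SILKROAD_KEY, True, []))   # True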
1348  Economy / Lending / Re: Looking for a Loan on: February 26, 2013, 01:19:03 AM
Go talk to your family, because there is next to zero chance that a largely unknown person is going to be able to get 80 BTC in order to go on a vacation.
1349  Bitcoin / Bitcoin Discussion / Re: transactions even with fees take forever. on: February 26, 2013, 01:15:03 AM
it's good below that 2k

If that is true, then it is quite surprising seeing as there were plenty of transactions that were included in the blocks that only had a fee of 0.0005 BTC.  Any miner that wants to increase their profits should be choosing your 0.001 BTC fee transaction over a 0.0005 BTC transaction.

Logically, yes.  But bear in mind that modification of the local selection rules requires both the technical ability to do so and the belief that it's worth the effort.  It's probably not yet worth the effort.
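For reference, the selection rule a profit-maximizing miner would eventually adopt is short: sort the pool by fee per byte and fill the block greedily.  A sketch with made-up numbers:

Code:
def select_transactions(mempool, max_block_bytes=250_000):
    """mempool: list of (txid, size_bytes, fee_btc) tuples."""
    block, used = [], 0
    for txid, size, fee in sorted(mempool, key=lambda t: t[2] / t[1], reverse=True):
        if used + size <= max_block_bytes:
            block.append(txid)
            used += size
    return block

mempool = [('a', 500, 0.001), ('b', 500, 0.0005), ('c', 250, 0.0005)]
# 'c' pays the same fee as 'b' in half the bytes, so it outranks 'b'
print(select_transactions(mempool, max_block_bytes=1000))   # ['a', 'c']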
1350  Bitcoin / Bitcoin Discussion / Re: transactions even with fees take forever. on: February 26, 2013, 01:04:25 AM
I thought the minimum fee to be regarded as a fee-paying transaction was 0.005 BTC.  That's what my client defaults to, anyway.
1351  Bitcoin / Development & Technical Discussion / Re: review of proposals for adaptive maximum block size on: February 25, 2013, 09:27:57 PM
(Is a hard fork necessary for removing the block size limit altogether?) </noob>


Yes, but only because the max blocksize is coded into the rules that define the validity of a block, and anyone who refuses to upgrade to the new set of rules (on purpose, or simply out of ignorance of the issues) would force the blockchain to split into two competing versions.  Thus, for a time, there would literally be two different versions of the truth, and that cannot persist.  Bitcoin is designed to manage relatively short splits that result from temporary conditions, such as bandwidth issues for entire sections of the Internet.  However, it's not really designed to be able to recover from a split that lasts more than a day.
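To illustrate the rule divergence (a toy sketch, not actual client code):

Code:
def is_valid(block_size, max_block_size=None):
    """max_block_size=None models a client with the limit removed."""
    return max_block_size is None or block_size <= max_block_size

big_block = 4_000_000
print(is_valid(big_block, 1_000_000))   # False: un-upgraded nodes reject it
print(is_valid(big_block))              # True: upgraded nodes accept it
# Both camps keep extending their own version of the chain, hence the fork.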

Quote
Quote from: Mashuri
We should have seen the limit maxed a long time ago, yes?
I also would like to know why people aren't just pumping out 1MB blocks non-stop which is apparently what will happen if we remove the limit.

No one has tried to game the system, in part, because the presence of the max_blocksize rule would make any such attempt futile.  If we do remove that limit altogether, that kind of criminal calculation changes, and attempts might be made.
1352  Bitcoin / Development & Technical Discussion / Re: review of proposals for adaptive maximum block size on: February 25, 2013, 08:18:06 PM
An alternative that I also offered is to have a special case wherein a miner could produce a particularly large block (probably not good for it to be unlimited) if all the transactions included were free, as evidence that the miner doing the processing is either doing so altruistically or is being compensated by an out-of-network agreement.

However, I'd modify this proposal a bit so that it wouldn't require that all transactions be free, but instead that all of them had been in the transaction pool for at least a week.
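The modified rule, as a sketch (field names hypothetical):

Code:
import time

WEEK_SECS = 7 * 24 * 3600

def oversize_block_allowed(txs, now=None):
    """txs: dicts with 'fee' (BTC) and 'first_seen' (unix time)."""
    now = now if now is not None else time.time()
    all_free = all(tx['fee'] == 0 for tx in txs)
    all_old = all(now - tx['first_seen'] >= WEEK_SECS for tx in txs)
    return all_free or all_old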
1353  Bitcoin / Development & Technical Discussion / Re: review of proposals for adaptive maximum block size on: February 25, 2013, 07:53:31 PM
Been very interested in this debate, and I'm interrupting mughat and markm's conversation, and I apologize, but I'm sure they'll deal.

There was one proposal, which I can't recall now (somewhere deep in that giant thread) about having one unlimited/very large block every two weeks or so, to clear the backlog of transactions.

I think it warrants more discussion since most people seem to have passed it by. But perhaps I'm just missing some obvious flaw.

Thoughts?

That sounds like my proposal to permit each re-target block to be unlimited.  The re-target block comes once every 2016 blocks, which works out to roughly every two weeks.  This would make the deliberate padding of blocks to force out small players ineffective, reward honest miners with an especially profitable block if they are able to handle it, and preserve the market for rapidly confirming transactions for the remainder of the two-week period.  Any small players who were overwhelmed by a huge block would simply have to write off the next couple of blocks while they caught back up with the rest of the network.  It'd also provide an outlet for free transactions, and for fee-paying transactions that simply don't pay enough to get included in a normal block, so the backlog of unconfirmable transactions (and thus the transaction queue) won't grow to infinity.

However, there could be, and probably are, some unintended consequences if this were the only change made.  The first one that I can think of is that there still wouldn't be any way to compel miners to include old transactions, free or not, so such free transactions might never clear out regardless.
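As a consensus-rule sketch, the proposal amounts to making the size limit a function of block height:

Code:
RETARGET_INTERVAL = 2016
NORMAL_LIMIT = 1_000_000   # bytes

def max_block_size(height):
    if height % RETARGET_INTERVAL == 0:
        return float('inf')     # the 'unlimited' backlog-clearing block
    return NORMAL_LIMIT

print(max_block_size(2016))   # inf
print(max_block_size(2017))   # 1000000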
1354  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 11:26:39 PM

As transactions become more expensive per byte people are going to use all sorts of techniques to make transaction size smaller. For instance you can combine transactions together with other parties; each transaction includes a 10 byte header. If you get together with 20 other people, you've saved 200 bytes and you improve your privacy because you've mixed your funds with those 20 other people.

That's quite literally the intent of the send-to-many transaction type, although it's much more likely that it will be used to pay many different vendors from one single payer than to route multiple payers to multiple payees.  The best example is weekly payroll: anyone earning wages in bitcoin and working for the same company or entity can be paid their weekly wages in the same transaction as everyone else.  Regular users could do the same thing using bitcoin-aware bill payment programs that can gather up all the recurring and one-time bills a person has received, and pay the water bill, electric bill, cable bill, etc. in a single action, so long as they have the total value in inputs that would be required.

So while a direct-deposit payroll event for any significantly sized company would involve hundreds to thousands of electronic transactions per week, these same companies could do the entire event in a single send-to-many transaction that weighs in at a couple of kilobytes, and currently should cost less than a quarter.  Even if transaction costs rise to the point that such a large transaction costs $10 a time, that's chump change compared to the cost of simply printing cheques, much less mailing them.  In the case where customers send a vendor sets of low-value keypairs (as opposed to transactions), the vendor would have a vested interest in flushing those keypairs in a timely manner, so as to limit the risk of double-spending fraud against them.  In this way, collecting those many inputs and pumping them back out to employees in the weekly payroll send-to-many transaction does double duty.
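A back-of-the-envelope comparison, using rough standard sizes of ~148 bytes per input, ~34 bytes per output, and ~10 bytes of overhead (illustrative figures only):

Code:
def tx_size(n_inputs, n_outputs):
    return n_inputs * 148 + n_outputs * 34 + 10

employees = 500
batch = tx_size(n_inputs=5, n_outputs=employees + 1)        # +1 change output
separate = employees * tx_size(n_inputs=1, n_outputs=2)

print(batch)      # 17784 bytes: one ~18 KB transaction pays everyone
print(separate)   # 113000 bytes spread across 500 separate transactions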
1355  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 12:43:16 PM
That is truly awful news.

Why?  What is the proportionality of exchange rates to transaction rates?

The exchange rate is currently approx. $30 per coin at the current number of transactions per second.  What value of the exchange rate will it take to drive the transaction rate up to 4 per second?

-MarkM-


The BTC-to-fiat exchange rates have no direct influence on the transaction rate.  Only the size of the economy has a real influence on the transaction rate, though severe limitations on the transaction rate might limit the size of the economy.
1356  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 12:37:11 PM
I'm pretty sure that the 250KB limit has never been broken to date.

block 187901 499,245 bytes
block 191652 499,254 bytes
block 182862 499,261 bytes
block 192961 499,262 bytes
block 194270 499,273 bytes

These are the biggest 5 blocks up to the checkpoint at block 216,116.  Interesting that they're all 499,2xx bytes long, as if whoever is mining them doesn't want to get too close to 500,000 bytes long.

I understand that at least one miner has their own soft-limit, probably Eligius and probably at 500kb.

I take that back because these blocks are version 1 and Eligius is supposedly producing all version 2 (http://blockorigin.pfoe.be/top.php)

However, I have extracted the transaction counts and they average 1,190 each.  Looking at a bunch of blocks maxing out at 250 KB, they are in the region of 600 transactions each, which is to be expected.  Obviously, there is a block header overhead in all of them.  But this does mean that 1 MB blocks will be saturated when they carry about 2,400 transactions.  This ignores the fact that some blocks are virtually empty, as a few miners seem not to care about including many transactions.

So 2400 transactions per block * 144 blocks per day = 345,600 transactions per day or Bitcoin's maximum sustained throughput is just 4 transactions per second.
This is even more anemic than the oft-quoted 7 tps!



That is truly awful news.
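The arithmetic in the quoted analysis checks out:

Code:
txs_per_block = 2400      # observed saturation point of a 1 MB block
blocks_per_day = 144      # one block per ~10 minutes

txs_per_day = txs_per_block * blocks_per_day
print(txs_per_day)                   # 345600
print(txs_per_day / (24 * 3600))     # 4.0 transactions per second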
1357  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 05:31:31 AM
The verification delays grow exponentially with the number of transactions, simply because each node must perform its own series of checks before forwarding said block.

Wrong, because the number of nodes involved in verifying a given block grows exponentially as well, so the relationship between the number of transactions and the propagation time is linear.

As an analogy, think about how bacteria multiply: at every step the size of the colony increases by a factor of 2 until it reaches, let's say, 1024.  If it suddenly requires twice as much time to multiply, then it takes only twice as long to reach a size of 1024, because the number of multiplication steps is still the same.

If block verification were a distributable process, you'd be correct, but it isn't.  At least it's not now, and I don't know how it could be done.  One thing that could be altered to speed up propagation is for some nodes to have trusted peers, wherein if they receive a block from such a peer, they re-broadcast that block first and then do their checks.  But if this were the default behavior, the network could be DDoS'ed with false blocks.

Let's say you have 1025 full nodes on the network.  To keep the example simple, we will have those nodes connected in the form of a binary tree, where a block starts propagating from its root.  Let's say it takes 1 minute for a node to verify a block and relay it to its children; then it will take 10 minutes for 1024 nodes (I exclude the root from verification time) to verify the block.  In other words, it takes 10 minutes for a block to be propagated and verified by the network.

Now let's have a bigger block that takes twice as much time to verify; then it will take 20 minutes for 1024 nodes to verify the block.  In other words, it will take twice as long for the block to be propagated and verified by the network.  Therefore the relationship between the verification time of a block and the propagation delay it causes is linear: e.g., if it takes three times as much time to verify a block, then, in this simplified example, it will take three times as much time to propagate it through the network.

Your example does not relate to the network.  It's not a binary tree, and doubling the size of the block does not simply double the verification times.  While that might actually be close enough to ignore in practice, the increase in the actual number of transactions in the Merkle tree makes the tree itself more complex, with more binary branches and thus more layers to the Merkle tree.  This would imply a greater complexity to the verification process on its own.  For example, a simple block with only four transactions will have the TxID hashes for those four transactions, plus two hashes for their paired Merkle-tree branches, and a final hash that pairs those two together; that final hash is the Merkle root, which is then included in the block header.  Moving beyond four transactions, however, creates another layer in the binary tree; and then another after 8 transactions, and another after 16 transactions.  Once you're into a block with several thousand transactions, your Merkle tree is going to have several layers, and only the bottom of the tree holds actual transaction IDs; all the rest are artifacts of the Merkle tree, which every verification of a block must replicate.  The binary Merkle tree within a block is a very efficient way to verifiably store the data so that it can be trimmed later, but it's most certainly not a linear progression.

More complex transaction types, such as contracts or send-to-many, have a similar effect, as the process for verifying transactions is not as simple as a single hash.  Portions of the entire transaction must be excluded from the signing hash that each input address must add to the end of each transaction, and then be re-included as the verification process marches down the list of inputs.  And that is just an example of what I, personally, know about the process.  Logically, the increase in clock cycles for larger, and more complex, blocks and transactions must increase the propagation times at least linearly, and it's very likely to be greater than linear.  Which is, by definition, exponential.  It may or may not matter in practice, as that exponential growth may be so small as to not really affect the outcomes, but it is very likely present; and if so, the larger and more complex that blocks are permitted to grow, the more likely said growth will metastasize to a noticeable level.
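For concreteness, here is a quick way to count the Merkle-tree layers and interior hashes for a block of n transactions (Bitcoin pairs the last hash with itself at odd-width levels):

Code:
import math

def merkle_stats(n_txs):
    depth, interior_hashes, width = 0, 0, n_txs
    while width > 1:
        width = math.ceil(width / 2)   # odd widths duplicate the last entry
        interior_hashes += width
        depth += 1
    return depth, interior_hashes

for n in (4, 8, 2400):
    print(n, merkle_stats(n))
# 4 -> (2, 3): two branch hashes plus the root, as described above
# 2400 -> (12, 2403): twelve layers, roughly one interior hash per tx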
1358  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 23, 2013, 03:26:23 PM
Still, the number of hops (nodes) a block has to be checked by in order for the network to be flooded / saturated with that block is linear to the average radius of the net, in hops, from the node that solves the block, isn't it?

-MarkM-


Hmm, more or less.  But my point is that, as the blocksize increases, the propagation delays increase due to more than one delay metric increasing.  We have the check times, we have the individual p2p transmission times, and we have the number of hops.

One thing that I didn't make clear is that I also suspect, but cannot prove, that the average number of hops in the network will also tend to increase.  This is partly because of an increase in active nodes, which is an artifact of a growing economy, but also somewhat due to the increase in blocksize.  I suspect that as network resource demand increases, some full nodes will deliberately choose to limit their peers and their dedicated bandwidth, functionally moving themselves towards the edge of the network.
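A toy model of those three metrics together (all parameters made up for illustration):

Code:
def propagation_delay(block_bytes, hops,
                      verify_secs_per_mb=0.5,
                      bandwidth_bytes_per_sec=1_000_000):
    per_hop = (block_bytes / 1e6) * verify_secs_per_mb \
              + block_bytes / bandwidth_bytes_per_sec
    return hops * per_hop

# doubling the size doubles the per-hop cost; if hop count also creeps
# up with size, total delay grows faster than linearly
print(propagation_delay(1_000_000, hops=6))   # 9.0 seconds
print(propagation_delay(2_000_000, hops=7))   # 21.0 seconds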
1359  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 23, 2013, 03:14:40 PM
The verification delays grow exponentially with the number of transactions, simply because each node must perform its own series of checks before forwarding said block.

Wrong, because the number of nodes involved in verifying a given block grows exponentially as well, so the relationship between the number of transactions and the propagation time is linear.

As an analogy, think about how bacteria multiply: at every step the size of the colony increases by a factor of 2 until it reaches, let's say, 1024.  If it suddenly requires twice as much time to multiply, then it takes only twice as long to reach a size of 1024, because the number of multiplication steps is still the same.

If block verification were a distributable process, you'd be correct, but it isn't.  At least it's not now, and I don't know how it could be done.  One thing that could be altered to speed up propagation is for some nodes to have trusted peers, wherein if they receive a block from such a peer, they re-broadcast that block first and then do their checks.  But if this were the default behavior, the network could be DDoS'ed with false blocks.
1360  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 23, 2013, 01:34:00 PM
One, is the fixed cost really directly linear to the max block size? Or is it really more like exponential?
It's definitely exponential, even if you just consider network traffic.
Let me paint a strawman to burn...
The network topology, as it exists, is largely random.  A full client is expected to have at least 8 active connections, although three would suffice for the health of the network, and only one is needed for the node itself to function.  Many can, and do, have more connections than 8; many of the 'fallback' nodes have hundreds, perhaps thousands, of connections.  So here is the problem: when a block is found, it's in every honest node's own interest that said block propagates to the edges of the network as quickly as possible.  As recently as last year, it wouldn't normally take more than 10 seconds to saturate the network, but Satoshi chose a 10-minute interval, in part, to reduce the number of network splits and orphaned blocks due to propagation times, because he presumed that the process each node goes through (verify every transaction, verify the block, broadcast a copy to each peer that does not have it) would grow as blocks became larger.  The verification delays grow exponentially with the number of transactions, simply because each node must perform its own series of checks before forwarding said block.

Now we get to the traffic problem...

Once the block has been found by a miner, that miner very much wants to send it to every peer he has as quickly as he can.  So he has to upload that block, whatever its size, at least 8 times, because it's impossible that any of his peers already has it.  Then, after each of those peers has performed the checks, each proceeds to upload that block to all of its peers save one (the one it got it from), because at this point it is very unlikely that any of its other connected peers already has the block.  These nodes also have a vested interest in getting the block out quickly once they have confirmed it's valid, if they are miners themselves, because they don't want to end up mining on a minority block.  Thus, as you can see, the largest miners will always form the center of the network, because they all have a vested interest in being very well connected to each other.  They have a vested interest in peer connections with other miners of their same caliber, but not so much with the rest of the network, since sending to lesser miners or non-mining full clients will slow down their ability to transmit said block to the peers that increase the odds that said block will be the winner in a blockchain split.  This effect just gets worse as the size of the block increases, no matter how fast a connection the mining nodes may have.

This process continues across the p2p network until half of the network already has the block, at which point each node, on average, finds that only half of its peers still need it.  The problem is that, for small miners, propagating a block that is not their own offers somewhat less advantage than it does for the big miners at the center of the network, because those big miners have already been mining against that new block for many milliseconds to several seconds before the marginal miner even received it.  To make matters worse, the marginal miner tends to have a slower connection than the majors.  So even though it's increasingly likely that his peers don't need the block at all, as the size of the block increases, the incentives for the marginal miner to fail to forward that block, or to otherwise pretend he doesn't have it, increase at a rate that is greater than linear.  This is actually true for all of the mining nodes, regardless of their position in the network relative to the center, but those closest to the center are always those with the greatest incentives to propagate, so the disincentives increase at a rate closer to linear the closer one is to the center of the network.

Now there is a catch: most of the nodes don't know where they are in the network.  It's not that they can't determine this; most owners simply don't bother to figure it out, nor do they deliberately establish peer connections with the centermost nodes.  However, those centermost nodes almost certainly do have favored connections directly to each other, established deliberately by their owners for these very reasons.
Is it possible to test this theory in the testnet?

I would doubt it, since there is no economic dynamic on testnet.
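Short of the testnet, the flooding step itself can at least be modelled.  A rough sketch over a random peer graph (the 8-connection figure is from above; everything else is illustrative):

Code:
import random
random.seed(1)

N, PEERS = 1000, 8
graph = {i: random.sample([j for j in range(N) if j != i], PEERS)
         for i in range(N)}

have = {0}                 # the miner who found the block
frontier, rounds = {0}, 0
while len(have) < N and frontier:
    nxt = set()
    for node in frontier:
        for peer in graph[node]:
            if peer not in have:    # upload only to peers lacking the block
                nxt.add(peer)
    have |= nxt
    frontier = nxt
    rounds += 1

print(rounds, len(have))   # hop depth needed to reach (nearly) every node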