Bitcoin Forum
Author Topic: Micropayments?  (Read 12878 times)
Cdecker (Hero Member; Activity: 489, Merit: 505)
December 29, 2010, 08:08:22 PM  #41

I actually think we're going about it the wrong way. The problem is not so much that micropayments are prohibited by the current implementation; that was just a reaction to people voicing concern about a DDoS, which then actually was carried out (I don't remember the thread right now).

The thing is that the current network topology and message routing are not scalable, and a large number of transactions will actually bring the network to its knees. It just so happens that using a nanobitcoin makes it really easy to put heavy load on the network, hence the code avoiding small transactions. But I do see a huge increase in the number of transactions in the near future, and we will see the same limits being hit again.

One of the most frequent questions I have to answer is "how scalable is the Bitcoin network?", and my usual answer is that it isn't scalable at all. The simple fact that every transaction is broadcast in a random fashion is incredibly wasteful. Were we to adopt a nicer system (pulling transactions only when a block is found, upstream aggregation, DHT-style responsibility sharing, ...) it would work much better and would scale further.

The current limitation on the size of transactions just treats the symptoms; the root cause can only be cured by a better-structured topology, improved routing mechanisms and a reduction in message complexity!

Want to see what developers are chatting about? http://bitcoinstats.com/irc/bitcoin-dev/logs/
Bitcoin-OTC Rating
jgarzik (Legendary; Activity: 1596, Merit: 1100)
December 29, 2010, 08:30:04 PM  #42

> One of the most frequent questions I have to answer is "how scalable is the Bitcoin network?", and my usual answer is that it isn't scalable at all. The simple fact that every transaction is broadcast in a random fashion is incredibly wasteful. Were we to adopt a nicer system (pulling transactions only when a block is found, upstream aggregation, DHT-style responsibility sharing, ...) it would work much better and would scale further.

Flood-fill scales in practice, because interested network parties keep upgrading to handle the load. A DHT is pointless, as you could lose information to network partitions, or simply if an unlucky set of participants disconnects.

Overall the current system is designed for Big Players With Capacity to survive on the P2P network long term.  It is simply not meant for small players.

Unfortunately we continue to await the development of the much anticipated "lightweight client" that will not need to receive and echo every transaction, etc. satoshi recently added the "getheaders" network message in order to better support lightweight clients; it permits casual connectivity, pulling only the block headers needed since the last connection.

I would rather we just remove the "fee for < 0.01 transactions" rule.  If people want to spam, they can spam with 0.01 BTC transactions, and fill up the free-TX area in each block.

Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
FreeMoney (Legendary; Activity: 1246, Merit: 1016; "Strength in numbers")
December 29, 2010, 09:47:36 PM  #43

> > > It may be a non-issue to the minority of us who can shift decimal points about with ease, but for the majority of humans, currency units need to be scaled to practical values.
> > You misunderstood: nanoCoins would be integers displayed to the user, handled under the hood as 1e-9 BTC.
> > Transferring 1 BTC to this client would display a balance of 1,000,000,000.
> > This really shouldn't be worried about.
>
> Technically it won't need to be worried about; logistically it will.
>
> You want Amazon to eventually support BTC, right? Does their current system support 0.000001 precision?



If they can change the text next to the field to say BTC, then they can make it say NanoBTC too. This is a non-problem.
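For what it's worth, the conversion FreeMoney describes really is trivial; a minimal sketch (the helper names here are my own, not from any client):

```python
NANOS_PER_BTC = 10**9  # 1 BTC = 1e9 nanocoins, so all user-visible amounts are integers

def btc_to_nanos(btc: float) -> int:
    """Convert a BTC amount to integer nanocoins, rounding to the nearest unit."""
    return round(btc * NANOS_PER_BTC)

def format_nanos(nanos: int) -> str:
    """Render an integer nanocoin balance with thousands separators for display."""
    return f"{nanos:,} NanoBTC"

# Transferring 1 BTC would display as a balance of 1,000,000,000 NanoBTC.
balance = format_nanos(btc_to_nanos(1.0))
```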

Play Bitcoin Poker at sealswithclubs.eu. We're active and open to everyone.
Cdecker (Hero Member; Activity: 489, Merit: 505)
December 29, 2010, 10:02:47 PM  #44

> Flood-fill scales in practice, because interested network parties keep upgrading to handle the load. DHT is pointless, as you could lose information to network partitions or simply if an unlucky set of participants disconnects.
>
> Overall the current system is designed for Big Players With Capacity to survive on the P2P network long term. It is simply not meant for small players.
That's not really a good argument: why impose the need to keep up if simple changes can lower the barrier? Remember that if we can think of it, others will too, and if we don't implement it, others will. And if they do, you can be sure they'll start a new block chain, eventually surpassing our community in size because of the added flexibility, and thus making all our hard-earned Bitcoins worth less.

> Unfortunately we continue to await the development of the much anticipated "lightweight client" that will not need to receive and echo every transaction, etc. satoshi recently added the "getheaders" network message in order to better support lightweight clients; it permits casual connectivity, pulling only the block headers needed since the last connection.
Which is basically the implementation of a two-level hierarchical system, as proposed earlier. It is by no means the definitive solution to everything, since even the inner network will eventually become too busy to handle the message complexity.

> I would rather we just remove the "fee for < 0.01 transactions" rule. If people want to spam, they can spam with 0.01 BTC transactions, and fill up the free-TX area in each block.
Making everyone else pay? To be really resilient we have to make the network DoS-proof, and while there is no definitive solution to this, we can take steps in the right direction.

MoonShadow (Legendary; Activity: 1708, Merit: 1010)
December 29, 2010, 10:12:10 PM  #45

What we really need is a way to write a digital "check" and digitally sign it. Perhaps this can be done with the regular client, but the receiver simply can't know whether the sender has modified his client to permit double-spending this way, so we get back into the trusted-third-party issue. Imagine a Mybitcoin.com variant that issues each user a unique ID number and a matching keypair for the purpose of signing said checks: any online vendor or individual receiving these checks could simply collect all of the checks he has received over a period of time from the same bitcoin 'bank website', so that he can be paid in one bitcoin transaction.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
jgarzik (Legendary; Activity: 1596, Merit: 1100)
December 29, 2010, 10:30:02 PM  #46

> > Flood-fill scales in practice, because interested network parties keep upgrading to handle the load. DHT is pointless, as you could lose information to network partitions or simply if an unlucky set of participants disconnects.
> >
> > Overall the current system is designed for Big Players With Capacity to survive on the P2P network long term. It is simply not meant for small players.
>
> That's not really a good argument: why impose the need to keep up if simple changes can lower the barrier? Remember that if we can think of it, others will too, and if we don't implement it, others will. And if they do, you can be sure they'll start a new block chain, eventually surpassing our community in size because of the added flexibility, and thus making all our hard-earned Bitcoins worth less.

If you want to design an alternate bitcoin, hey, go for it.  But what you're talking about isn't really bitcoin at that point.

The current system is designed to devolve into (a) Large Mining Conglomerates, and (b) lightweight clients that simply "sip" the necessary parts of the block chain.


> > Unfortunately we continue to await the development of the much anticipated "lightweight client" that will not need to receive and echo every transaction, etc. satoshi recently added the "getheaders" network message in order to better support lightweight clients; it permits casual connectivity, pulling only the block headers needed since the last connection.
>
> Which is basically the implementation of a two-level hierarchical system, as proposed earlier. It is by no means the definitive solution to everything, since even the inner network will eventually become too busy to handle the message complexity.

Do you have any hard data that even remotely backs up these "too busy" claims? One block header every 10 minutes remains fixed at ~80 bytes, regardless of whether there are 10 or 10,000,000 transactions per block. Lightweight clients only need block headers and the transaction data directly relevant to their wallets, as the system is currently designed. Lightweight clients do not need to listen to, nor relay, all-network transactions and all-network completed blocks as full nodes do.
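To put the fixed-header-size point in numbers (a quick sketch of my own, not from the thread): at 80 bytes per header and one block every ten minutes, a lightweight client's header chain grows by only a few megabytes per year, independent of transaction volume.

```python
HEADER_BYTES = 80        # block header size, fixed regardless of tx count
BLOCKS_PER_DAY = 24 * 6  # one block every ten minutes

# Header storage a lightweight client accumulates per year:
headers_per_year_mb = HEADER_BYTES * BLOCKS_PER_DAY * 365 / 1e6  # ~4.2 MB
```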


> > I would rather we just remove the "fee for < 0.01 transactions" rule. If people want to spam, they can spam with 0.01 BTC transactions, and fill up the free-TX area in each block.
>
> Making everyone else pay? To be really resilient we have to make the network DoS-proof, and while there is no definitive solution to this, we can take steps in the right direction.

Define "DoS-proof", and then review how the current system works. :)

People can DoS the tiny free-TX area, and then they must start paying transaction fees. That's the system working as designed. TX fees make using the system more expensive as transaction rates increase, which makes DoS and spam more expensive the more data is sent.

Anonymous (Guest)
December 29, 2010, 10:30:28 PM  #47

The answer is to have a billion banks... :D

What if opening a bank was as simple as installing a client? ;)

davout (Legendary; Activity: 1372, Merit: 1008; "1davout")
December 29, 2010, 10:38:14 PM  #48

> The answer is to have a billion banks... :D
>
> What if opening a bank was as simple as installing a client? ;)

It might become very easy once the bounty for an open-source trading platform reaches a respectable amount ;)

Cdecker (Hero Member; Activity: 489, Merit: 505)
December 29, 2010, 10:51:38 PM (last edit: December 29, 2010, 11:35:36 PM)  #49

> The answer is to have a billion banks... :D
>
> What if opening a bank was as simple as installing a client? ;)
Yeah, but then you're back at square one, since you'd have to convince users that they can trust you.

Mike Hearn (Legendary; Activity: 1526, Merit: 1134)
December 30, 2010, 08:50:47 PM  #50

I don't think it's worth worrying about core network load for a long time yet. BitCoin can easily scale up in the core. Let's do some back-of-the-envelope maths to prove it.

VISA handles on average around 2000 transactions/sec, so call it a peak rate of 4000/sec. Let's take that as a starting goal. Obviously if we want BitCoin to scale to all economic transactions worldwide, including cash, it'd be a lot higher than that, perhaps more in the region of a few hundred thousand transactions/sec.

The protocol has two parts. Nodes send "inv" messages to other nodes telling them they have a new transaction. If the receiving node doesn't have that transaction, it requests it with a "getdata".

Receiving a transaction isn't that expensive, as software operations go. Basically the big cost is the crypto and the block chain lookups involved in verifying the transaction. It'd be best to time Satoshi's actual implementation, but as I can't be bothered doing that right now, here's a best guess as to how expensive it is:

http://www.cryptopp.com/benchmarks.html

This claims that on a 1.8 GHz Core 2 Duo, an ECDSA verification with the library BitCoin uses takes around 8 msec. RIPEMD-160 runs at 106 megabytes/sec (call it 100 for simplicity) and SHA-256 is about the same. So hashing 1 megabyte should take around 10 milliseconds, and hashing 1 kilobyte would take 0.01 milliseconds; clearly it's dwarfed by the cost of the ECDSA.
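Mike's arithmetic checks out; redone as a quick sketch using the benchmark figures he cites:

```python
ECDSA_VERIFY_MS = 8.0    # one signature verification, 1.8 GHz Core 2 Duo
HASH_MB_PER_SEC = 100.0  # RIPEMD-160 / SHA-256 throughput, rounded down

def hash_time_ms(size_bytes: int) -> float:
    """Time to hash `size_bytes` at the benchmark throughput, in milliseconds."""
    return size_bytes / (HASH_MB_PER_SEC * 1e6) * 1000

one_mb_ms = hash_time_ms(1_000_000)  # ~10 ms for a megabyte
one_kb_ms = hash_time_ms(1_000)      # ~0.01 ms for a kilobyte: dwarfed by the ECDSA cost
```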

The bulk of the rest of the time in verifying a transaction today probably goes on disk IO, but in our hypothetical future the entire block chain would surely be stored in RAM (or flash). Reading from RAM or even flash is cheap.

Even if you assume a gigantic block chain, it's quite feasible to hold the whole thing in RAM. Consider that Google holds large parts of the web in RAM today if you don't believe me.

So the slowest part of verifying a transaction is verifying its inputs, which is probably 8-10 msec per input on today's hardware. It seems like in the current block chain most transactions have only one input, and a few have more like 5-6 inputs. Let's call it an average of 2 inputs overall.

So this means a single core today can probably, with tuning and the block chain held in RAM but no special hardware beyond that, verify and accept about 50 transactions/sec. Writing data out over the network to peers is cheap and can be done largely by the NIC itself so that's not a concern.

If I'm in the right ballpark, it means a network node capable of keeping up with VISA would need roughly 80 cores, plus whatever is used for mining (done by separate machines/GPUs). Whilst building a single machine with 80 cores would be kind of a pain, load balancing inbound "tx" messages over multiple machines would be very easy. Certainly a single machine could easily load balance all of VISA's transactions to a small group of verification machines, which would then send the verified tx hash to the miners for incorporation into the merkle tree.
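The per-core estimate can be reproduced directly (my own restatement of the numbers above):

```python
VERIFY_MS_PER_INPUT = 8.0  # ECDSA verification cost per tx input
AVG_INPUTS = 2             # assumed average inputs per transaction
VISA_PEAK_TPS = 4000       # the peak rate taken as the goal earlier

raw_tps_per_core = 1000 / (VERIFY_MS_PER_INPUT * AVG_INPUTS)  # 62.5 tx/sec, crypto only
tuned_tps_per_core = 50    # headroom for hashing, lookups and other overhead
cores_needed = VISA_PEAK_TPS / tuned_tps_per_core             # 80 cores
```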

For receiving and handling all the "tx" messages, you could build a rack of 10 8-core machines that would keep up easily. And if hardware acceleration for ECDSA signature verification can be found/built, it's probably possible for only one (beefy) machine to do it!

That leaves the inbound inv messages. The cost of handling an inv is basically reading a small message from the network and then doing a RAM lookup to see if we already have the transaction. This is really, really fast. A single core could easily handle several thousand inv messages per second before breaking a sweat, even assuming it needs to read from a sharded in-memory block chain index.

The tl;dr summary is that with some adapted software, you could build a distributed BitCoin network node that could keep up with VISA using probably 2 or 3 racks of machines, assuming the block chain and associated indexes are either kept in regular RAM or (more likely) flash storage. So it's not something a hobbyist is going to do, but it's quite feasible for a small company or organization today, not even taking into account the falling cost of computing over time. If BitCoin is ever as large as VISA there'll be plenty of people willing to run such rigs.
MoonShadow (Legendary; Activity: 1708, Merit: 1010)
December 30, 2010, 10:39:12 PM  #51

Well done. But I don't think the overhead of verifying transactions is what he was concerned about. The bigger issue is the bandwidth required to echo every new block to every bitcoin client when transactions average 2000 per second.

Mike Hearn (Legendary; Activity: 1526, Merit: 1134)
December 31, 2010, 03:32:17 PM  #52

I don't think that's a big concern either.

Let's assume an average rate of 2000 tps, so just VISA. Transactions vary in size from about 0.2 kilobytes to over 1 kilobyte, but from looking at the block explorer it's probably averaging half a kilobyte today. So let's assume the way people use BitCoin gets more complicated and call it 1 kB per transaction.

A solved block will then be around (1kb * 2000tps * 60 * 10) / 1024 / 1024 = 1.14 gigabytes per block.

But you only have to transmit a solved block to your connected peers. If we assume these big futuristic supernodes have something like 40 or 50 peered connections, that means in the worst-case scenario, where you solve a block OR you receive a block that none of your peers have yet (unlikely), you have to send ~57 gigabytes of data (call it 60).

Shifting 60 gigabytes of data in, say, 60 seconds means an average rate of 1 gigabyte per second, or 8 gigabits per second.
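The block-size and bandwidth figures, reproduced step by step under the same assumptions:

```python
TX_KB = 1                # assumed average transaction size
TPS = 2000               # sustained VISA-level rate
BLOCK_SECONDS = 10 * 60  # one block every ten minutes
PEERS = 50               # peered connections of a big supernode

block_gb = TX_KB * TPS * BLOCK_SECONDS / 1024 / 1024  # ~1.14 GB per block
worst_case_gb = block_gb * PEERS                      # ~57 GB, call it 60
gbits_per_sec = 60 * 8 / 60                           # 8 Gbit/s to shift 60 GB in a minute
```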

The real question you want to know is: how much does that sort of bandwidth cost? Well, bandwidth prices are a very tricky thing, as some of the largest consumers pay the least due to how peering agreements work. The Googles and Akamais of this world will pay way less for a 10G wave than a small operator would. And you wouldn't be hitting 8 Gbps very frequently; really only when you solve a block, as when relaying a block the peers you connect to will likely have already received it from some other peer anyway, so only a subset would need to receive it from you.

But you can take a look at this random example of a colo provider to get a feel for it:

   http://icecolo.com/colocation-packages

If you wanted to run a distributed supernode that held the block chain in flash/RAM etc., you'd probably buy an 11U quarter rack from them at 250 pounds sterling a month, which also gets you 3 TB of data transfer per month beyond just power and cooling. So you could potentially do a full block broadcast 50 times per month. Certainly you could negotiate that up if you needed to.

It's very unlikely that a hypothetical VISA-scale BitCoin network would have nodes which win a block every single day; at least, a healthy network would hopefully not have hash power concentrated so tightly into single miner nodes.

So basically, with the technology of today, running a BitCoin node capable of keeping up with VISA will not cost you an arm and a leg by any means, and if one day BitCoin is actually that big, all the prices and numbers we've been discussing will have improved dramatically anyway.
Cdecker (Hero Member; Activity: 489, Merit: 505)
December 31, 2010, 07:43:00 PM  #53

> Well done. But I don't think the overhead of verifying transactions is what he was concerned about. The bigger issue is the bandwidth required to echo every new block to every bitcoin client when transactions average 2000 per second.
Wow, nice analysis; it really makes me confident that we do have considerable room to grow. Being a distributed-systems guy, however, I'm a bit concerned about the waste of messages, as each transaction results in potentially O(n^2) messages (random broadcast) being sent around the network. Even if each message is small, that might add up to quite a lot of traffic.

My point is that we could easily create a hypercube structure with forced node membership (hash the IP and join the hypercube vertex that uses the same prefix), where each transaction is sent to the nodes sharing the prefix of the transaction hash. Should the number of nodes in a vertex grow too high, just make the prefix longer (add a dimension) and split the vertex. It scales well, transactions are sent to specific storage points, and the winner of a hash can collect all the transactions he needs when reestablishing network consistency.

Am I shooting completely over the target or is this a reasonable idea?
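To make the routing rule above concrete, here is a toy sketch of the prefix scheme (entirely my own illustration, not a protocol proposal): a node joins the vertex whose label equals the first `dim` bits of its hashed IP, and a transaction lives at the vertex matching its own hash prefix.

```python
import hashlib

def bit_prefix(data: bytes, dim: int) -> str:
    """First `dim` bits of SHA-256(data), as a bit string labelling a hypercube vertex."""
    digest = hashlib.sha256(data).digest()
    bits = "".join(f"{b:08b}" for b in digest)
    return bits[:dim]

def node_vertex(ip: str, dim: int) -> str:
    """Vertex a node joins, determined by hashing its IP address."""
    return bit_prefix(ip.encode(), dim)

def tx_vertex(tx_hash: bytes, dim: int) -> str:
    """Vertex responsible for storing a given transaction."""
    return bit_prefix(tx_hash, dim)

# dim = 0 is the current network: a single vertex with the empty prefix.
# Adding a dimension splits every vertex in two, since each label grows by one bit.
v3 = node_vertex("10.0.0.1", 3)
v4 = node_vertex("10.0.0.1", 4)
```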

jgarzik (Legendary; Activity: 1596, Merit: 1100)
December 31, 2010, 10:17:55 PM  #54

Note that the only thing broadcast for transactions and blocks is their hash.

Whenever a P2P node receives a new TX / block, it tells its peers "I have $HASH".

Only nodes without that $HASH will then request "give me the tx or block with $HASH".

Thus, each node should ideally only receive the data once.
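That exchange is easy to model; a toy sketch (class and method names are mine) showing that a duplicate inv triggers no second download:

```python
class Node:
    """Minimal model of a peer's inv/getdata behaviour."""

    def __init__(self):
        self.store = {}  # hash -> raw tx/block payload

    def on_inv(self, item_hash, peer):
        """Handle 'I have $HASH': request the data only if we lack it."""
        if item_hash in self.store:
            return False  # duplicate announcement, nothing fetched
        self.store[item_hash] = peer.get_data(item_hash)
        return True       # one getdata issued

    def get_data(self, item_hash):
        """Serve a 'give me $HASH' request from our store."""
        return self.store[item_hash]

a, b = Node(), Node()
a.store["ab12"] = b"raw transaction bytes"
fetched = b.on_inv("ab12", a)  # b downloads the payload once
repeat = b.on_inv("ab12", a)   # second inv for the same hash is ignored
```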

Mike Hearn (Legendary; Activity: 1526, Merit: 1134)
January 01, 2011, 11:31:25 AM  #55

Not sure. A hypercube-based network sounds interesting, but it's potentially a lot of complexity to solve a problem that might be more easily solved with brute force. In particular, it sounds like it could make the network more fragile and vulnerable to particular nodes falling offline, but I don't have a good understanding of the proposal, so maybe I'm wrong.
Cdecker (Hero Member; Activity: 489, Merit: 505)
January 01, 2011, 04:41:20 PM  #56

> Not sure. A hypercube-based network sounds interesting, but it's potentially a lot of complexity to solve a problem that might be more easily solved with brute force. In particular, it sounds like it could make the network more fragile and vulnerable to particular nodes falling offline, but I don't have a good understanding of the proposal, so maybe I'm wrong.
Yep, that's the main downside: the complexity increases. We'd have to introduce a voting mechanism to collectively decide whether a split should be done, then add selective forwarding of a broadcast to its destination, and then a mechanism for the hash winner to collect the transactions he's going to sign.

Basically I think we can already modify the client without touching the protocol, just switching a few inner workings. As for the fragility of the network: if we have about 50 nodes at each vertex, which split the vertex should they reach 100 nodes, or merge when going below 25 nodes, we'd have a really stable network.

Basically we can look at the current network as such a hypercube of dimension 0.

As for the size of the messages: yes, I know they are small, since only hashes are broadcast, but when comparing a 32-byte hash against a transaction of 100 bytes, that's not really much of an improvement. Its pull-based design is really nice though, and might help keep transfers low; above all, it helps keep message sizes deterministic.

m0mchil (Full Member; Activity: 171, Merit: 127)
January 02, 2011, 07:50:38 PM  #57

> A solved block will then be around (1kb * 2000tps * 60 * 10) / 1024 / 1024 = 1.14 gigabytes per block.

Would someone please explain why sending the solved block with raw TXs is needed? Isn't it possible to broadcast a lighter version of the solved block with only TX hashes (32 bytes each), bringing the above number down to ~32 megabytes (instead of a gigabyte)?
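Redoing the arithmetic with Mike's assumptions (1 kB transactions, 2000 tps), a hash-only block comes out around 37 MB, the same order of magnitude as m0mchil's ~32 MB estimate, versus over a gigabyte for raw transactions:

```python
TPS = 2000               # sustained transaction rate assumed above
BLOCK_SECONDS = 10 * 60  # one block every ten minutes
TX_BYTES = 1024          # assumed raw transaction size
HASH_BYTES = 32          # size of one TX hash

tx_count = TPS * BLOCK_SECONDS                  # 1,200,000 transactions per block
full_block_gb = tx_count * TX_BYTES / 1024**3   # ~1.14 GB with raw TXs
hash_only_mb = tx_count * HASH_BYTES / 1024**2  # ~37 MB with hashes only
```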

Cdecker (Hero Member; Activity: 489, Merit: 505)
January 03, 2011, 12:03:28 AM  #58

It's not so much the block, but the collecting and verifying of the transactions. AFAIK only the merkle root will be in the block (source: http://www.bitcoin.org/wiki/doku.php?id=bitcoins_draft_spec_0_0_1#block).

I could of course be wrong since I'm only starting to implement the block verification now, and haven't dedicated much time to it yet. Please correct me if I'm wrong.

theymos (Administrator, Legendary; Activity: 5376, Merit: 13407)
January 03, 2011, 12:19:56 AM  #59

> It's not so much the block, but the collecting and verifying of the transactions. AFAIK only the merkle root will be in the block (source: http://www.bitcoin.org/wiki/doku.php?id=bitcoins_draft_spec_0_0_1#block).

A lot of info on that page is wrong. The transactions are also sent in block messages.

1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
Mike Hearn (Legendary; Activity: 1526, Merit: 1134)
January 03, 2011, 08:49:06 AM  #60

I think you are probably right, m0mchil. In theory that could be done, and then nodes would request the transactions they did not see. It's not worth doing today, but it's a nice protocol improvement for the future.