Bitcoin Forum
Author Topic: [RFC]: A distributed mining pool proposal  (Read 2983 times)
nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 08, 2011, 11:46:47 PM
 #1

I have written up a high-level overview of a distributed mining pool proposal. Please follow the link below, read, and give your comments.
http://privwiki.dreamhosters.com/wiki/Distributed_mining_pool_proposal

Specifically, it would be important to try to poke holes in the framework, to make sure that it can be made cheater- and spammer-resistant. (Just like bitcoin itself! Smiley )

Your comments appreciated.


Join #bitcoin-market on freenode for real-time market updates.
Join #bitcoin-otc - an over-the-counter trading market. http://bitcoin-otc.com
OTC web of trust: http://bitcoin-otc.com/trust.php
My trust rating: http://bitcoin-otc.com/viewratingdetail.php?nick=nanotube
Luke-Jr
Legendary
*
expert
Offline Offline

Activity: 2576
Merit: 1186



View Profile
July 09, 2011, 12:07:54 AM
 #2

This does Proportional payouts, which are predictably unfair. To work, you'd have to also implement auto-hopping: that is, when total shares reach 43%, throw them all away and start over. Alternatively, there is also a real-world PPLNS implementation in the Pools forum. Also, even with auto-hopping, each share would be valued at down to 0.00006806 at today's difficulty. Payouts of this size would result in huge transfer fees (which is why we added a minimum payout for Eligius) and huge coinbase transactions. Therefore, this kind of system would need to cluster miners into groups of similar hashpower, so they all maintain at least 1% or so when the block is found. This also means that any number of CPU miners will still never find a block as a group.
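For scale, the per-share figure can be sanity-checked with the proportional-payout formula (a rough Python sketch; the difficulty and 50 BTC reward are assumed mid-2011 values, so it reproduces the order of magnitude of the figure above rather than the exact number):

```python
# Expected value of one difficulty-1 share under proportional payouts:
# reward * (share difficulty / network difficulty). The difficulty below
# is an assumed mid-2011 value, not necessarily the one used in the post.
BLOCK_REWARD = 50.0              # BTC per block in 2011
NETWORK_DIFFICULTY = 1_564_057   # assumed network difficulty, roughly July 2011
SHARE_DIFFICULTY = 1.0           # a standard pool share

share_value = BLOCK_REWARD * SHARE_DIFFICULTY / NETWORK_DIFFICULTY
print(f"{share_value:.8f} BTC per share")  # on the order of 1e-5 BTC
```

Either way, individual shares are worth small fractions of a bitcent, which is what drives the fee concern.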

vector76
Member
**
Offline Offline

Activity: 70
Merit: 18


View Profile
July 09, 2011, 12:37:48 PM
 #3

I don't think this will scale.  If each node rebroadcasts every share to every other node, then the shares will have to be of very high difficulty to avoid overwhelming the network.  I also think that reaching 'agreement' as to what the proper payout is will prove problematic, and this in turn makes it hard to tell whether the shares are valid - a catch-22.  And splitting each block among all network participants will definitely be a problem.

What if the pooling was more 'local' within the network?

Conceptually, if I produce a share and the coinbase would have paid you, I would like to expect (but have no guarantee) that you will generate a share with a coinbase that would have paid me.

There is a sort of prisoner's dilemma problem because there is no way to know that the other person will return a share.  But every share could be considered an iteration, so a tit-for-tat strategy could make 'cheating' less profitable than playing 'honestly'.  And it would make sense if people could observe the shares that other people are generating for each other and identify who is returning share-for-share and who is not.

Each miner might be exchanging shares with dozens of other miners, and they wouldn't necessarily be organized into cliques.  Pools can overlap in an ad-hoc manner, and each person would in a sense be the center of their own pool.  This can scale indefinitely and each coinbase payout stays a limited size.  But staying within the technical limits could mean the variance will only decrease by so much.  Small miners won't be able to get fractions of pennies every hour.  Maybe that's a feature.
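The tit-for-tat bookkeeping described above can be sketched in a few lines (a toy illustration only; the class, names, and deficit threshold are hypothetical, and real shares would of course be verified proofs of work):

```python
# Toy ledger for pairwise share exchange: keep generating shares whose
# coinbase pays a peer only while that peer keeps returning the favor.
from collections import defaultdict

class ShareLedger:
    def __init__(self, max_deficit=2):
        self.sent = defaultdict(int)      # our shares whose coinbase paid the peer
        self.received = defaultdict(int)  # peer's shares whose coinbase paid us
        self.max_deficit = max_deficit    # hypothetical tolerance before cutting off

    def record_sent(self, peer):
        self.sent[peer] += 1

    def record_received(self, peer):
        self.received[peer] += 1

    def should_cooperate(self, peer):
        # Tit-for-tat: cooperate while the peer's deficit stays small.
        return self.sent[peer] - self.received[peer] <= self.max_deficit

ledger = ShareLedger()
for _ in range(3):
    ledger.record_sent("alice")           # alice never returns a share
print(ledger.should_cooperate("alice"))   # False: deficit of 3 exceeds 2
```

Since shares are public, each miner can run this accounting against everyone, which is what makes observable share-for-share behavior enforceable.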
JoelKatz
Legendary
*
Offline Offline

Activity: 1596
Merit: 1012


Democracy is vulnerable to a 51% attack.


View Profile WWW
July 09, 2011, 10:10:14 PM
 #4

Quote
All pool-difficulty share blocks can be trimmed of the included transactions, leaving only previous block header, coinbase, and the tip of the merkle tree.
Are you suggesting that the coinbase transaction go last? Otherwise, the coinbase transaction will be at the tip of the merkle tree. I think you need the full header, the coinbase transaction, and the hash of every other transaction in the block.
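For reference, a minimal merkle inclusion proof looks like this (a simplified sketch: real Bitcoin double-SHA256s the concatenation of little-endian hashes, plain SHA-256 is used here for brevity). Wherever the coinbase sits, proving it requires one sibling hash per tree level:

```python
# Sketch: a merkle branch proves one transaction's inclusion using one
# sibling hash per level of the tree. Simplified hashing, not consensus code.
from hashlib import sha256

def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:               # Bitcoin duplicates an odd last hash
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256(f"tx{i}".encode()).digest() for i in range(4)]
coinbase = txs[0]

# Branch for the first leaf: its sibling, then the sibling subtree hash.
branch = [txs[1], h(txs[2], txs[3])]
acc = coinbase
for sibling in branch:
    acc = h(acc, sibling)
print(acc == merkle_root(txs))           # True: branch proves inclusion
```

The point of contention above is exactly how much of this branch data a trimmed share block must carry.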

I am an employee of Ripple. Follow me on Twitter @JoelKatz
1Joe1Katzci1rFcsr9HH7SLuHVnDy2aihZ BM-NBM3FRExVJSJJamV9ccgyWvQfratUHgN
nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 01:33:35 AM
 #5

This does Proportional payouts, which are predictably unfair. To work, you'd have to also implement auto-hopping: that is, when total shares reach 43%, throw them all away and start over. Alternatively, there is also a real-world PPLNS implementation in the Pools forum. Also, even with auto-hopping, each share would be valued at down to 0.00006806 at today's difficulty. Payouts of this size would result in huge transfer fees (which is why we added a minimum payout for Eligius) and huge coinbase transactions. Therefore, this kind of system would need to cluster miners into groups of similar hashpower, so they all maintain at least 1% or so when the block is found. This also means that any number of CPU miners will still never find a block as a group.

a) generation transaction does not require a fee, regardless of size of its outputs
b) we'll let the miners take care of the pool hopping if they so choose
c) difficulty: you have failed to read the proposal carefully - suggested difficulty is 1e-4 * currentdifficulty, and miners of different power can choose to create/join pools of different difficulties.
d) the system proposes exactly this 'clustering together' of miners by hash power - the miners will self-select into pools of appropriate difficulty for them.

nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 01:35:58 AM
 #6

I don't think this will scale.  If each node rebroadcasts every share to every other node, then the shares will have to be of very high difficulty to avoid overwhelming the network.  I also think that reaching 'agreement' as to what the proper payout is will prove problematic, and this in turn makes it hard to tell whether the shares are valid - a catch-22.  And splitting each block among all network participants will definitely be a problem.

What if the pooling was more 'local' within the network?

Conceptually, if I produce a share and the coinbase would have paid you, I would like to expect (but have no guarantee) that you will generate a share with a coinbase that would have paid me.

There is a sort of prisoner's dilemma problem because there is no way to know that the other person will return a share.  But every share could be considered an iteration, so a tit-for-tat strategy could make 'cheating' less profitable than playing 'honestly'.  And it would make sense if people could observe the shares that other people are generating for each other and identify who is returning share-for-share and who is not.

Each miner might be exchanging shares with dozens of other miners, and they wouldn't necessarily be organized into cliques.  Pools can overlap in an ad-hoc manner, and each person would in a sense be the center of their own pool.  This can scale indefinitely and each coinbase payout stays a limited size.  But staying within the technical limits could mean the variance will only decrease by so much.  Small miners won't be able to get fractions of pennies every hour.  Maybe that's a feature.

re: difficulty: you, too, have failed to read the proposal. see post above about difficulties, different pools, and miner self-selection.

re: prisoner's dilemma: there is no prisoner's dilemma. shares are valid simply by the majority of the miners recognizing them as such and including them in their payout calculations. someone deliberately excluding valid shares in his payout calculations will produce invalid shares, to his own detriment.


nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 01:36:35 AM
 #7

Quote
All pool-difficulty share blocks can be trimmed of the included transactions, leaving only previous block header, coinbase, and the tip of the merkle tree.
Are you suggesting that the coinbase transaction go last? Otherwise, the coinbase transaction will be at the tip of the merkle tree. I think you need the full header, the coinbase transaction, and the hash of every other transaction in the block.

the coinbase transaction already goes last - it is at the second-to-top level of the merkle tree.

Luke-Jr
Legendary
*
expert
Offline Offline

Activity: 2576
Merit: 1186



View Profile
July 10, 2011, 01:47:46 AM
 #8

This does Proportional payouts, which are predictably unfair. To work, you'd have to also implement auto-hopping-- that is, when total shares reach 43%, throw them all away and start over. Alternatively, there is also a real-world PPLNS implementation in the Pools forum. Also, even with auto-hopping, each share would be valued at down to 0.00006806 at today's difficulty. Payouts of this size would result in huge transfer fees (which is why we added a minimum payout for Eligius) and coinbase transactions. Therefore, this kind of system would need to clutter together miners in different groups by similar hashpower so they all maintain at least 1% or so when the block is found. This also means that any number of CPU miners will still never find a block as a group.

a) generation transaction does not require a fee, regardless of size of its outputs
I wasn't talking about a transaction fee by the pool, but rather for the miners later when they want to spend it.
b) we'll let the miners take care of the pool hopping if they so choose
That's a cop-out. Wink
c) difficulty: you have failed to read the proposal carefully - suggested difficulty is 1e-4 * currentdifficulty, and miners of different power can choose to create/join pools of different difficulties.
d) the system proposes exactly this 'clustering together' of miners by hash power - the miners will self-select into pools of appropriate difficulty for them.
That doesn't really fix the problem. CPU miners won't be able to meet that difficulty, and a pool of just CPU miners will never get a block. They are left out in the cold.

hashcoin
Full Member
***
Offline Offline

Activity: 372
Merit: 101


View Profile
July 10, 2011, 03:43:53 AM
Last edit: July 10, 2011, 04:56:24 AM by hashcoin
 #9

Quote
Due to network lag it is possible that not everyone will have received the most recent shares in the round. As mentioned above, there will be built-in tolerance for the payout proportion distribution, allowing a share as valid even if it is missing a few of the most recent shares in its proportion calculations.

I think this needs to be described very carefully and it is not as trivial as it sounds.  I don't mean this as a put-down, I do not see an obvious solution either and this is the point I struggled with myself when trying to design a distributed pool scheme.  If you can come up with a way to handle this it would be great.

Otherwise, I think centralized pools are not too bad if you can make them transparent (i.e., have pool publish tx contents that led to merkle root in some easy-to-access location, and even publish all shares submitted too).  Then there's not really much concern other than pool operator not including transactions.   But having a single funnel serialize the TXs in the network makes consistency much much simpler.

In other words, it seems fine to me to let a single node be responsible for serializing transactions as long as things are done in such a way that 1) the node can't "cheat" other than by denying service. 2) in the event that node denies service (or someone else denies service to that node), the TX serializer can be re-elected.

Secure leader election in the presence of malicious parties is fairly well studied.  The classic reference is Feige's randomized voting scheme, where all nodes pick a random node and vote for that node to be the leader; the node with the fewest votes is then elected leader (the protocol is repeated to break ties).  A Google search for "leader election" should yield references to more recent work.

[1] http://portal.acm.org/citation.cfm?id=796481

If you can't find a non-paywalled copy, here's a blog post on it: http://jsaia.wordpress.com/2009/09/21/consensus-ii-fieges-leader-election-protocol/
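The voting idea as described in the post can be sketched in a few lines (a toy illustration of that description only, not Feige's actual protocol, which has more careful guarantees against adversarial voters):

```python
# Toy leader election: every node votes for a uniformly random node;
# the node with the FEWEST votes wins, repeating among ties.
import random
from collections import Counter

def elect_leader(nodes, rng):
    while True:
        votes = Counter({n: 0 for n in nodes})
        for _ in nodes:                        # each node casts one vote
            votes[rng.choice(nodes)] += 1
        fewest = min(votes.values())
        candidates = [n for n, v in votes.items() if v == fewest]
        if len(candidates) == 1:               # unique minimum -> leader
            return candidates[0]
        nodes = candidates                     # tie: re-run among the tied

rng = random.Random(1)
print(elect_leader([f"node{i}" for i in range(7)], rng))
```

The intuition is that an attacker cannot force votes onto (or away from) a particular node without controlling a large fraction of the voters.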
nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 05:05:50 AM
 #10

This does Proportional payouts, which are predictably unfair. To work, you'd have to also implement auto-hopping-- that is, when total shares reach 43%, throw them all away and start over. Alternatively, there is also a real-world PPLNS implementation in the Pools forum. Also, even with auto-hopping, each share would be valued at down to 0.00006806 at today's difficulty. Payouts of this size would result in huge transfer fees (which is why we added a minimum payout for Eligius) and coinbase transactions. Therefore, this kind of system would need to clutter together miners in different groups by similar hashpower so they all maintain at least 1% or so when the block is found. This also means that any number of CPU miners will still never find a block as a group.

a) generation transaction does not require a fee, regardless of size of its outputs
I wasn't talking about a transaction fee by the pool, but rather for the miners later when they want to spend it.
fees do not depend on input sizes, only on output sizes.

Quote
b) we'll let the miners take care of the pool hopping if they so choose
That's a cop-out. Wink

one i'm willing to take.

Quote
c) difficulty: you have failed to read the proposal carefully - suggested difficulty is 1e-4 * currentdifficulty, and miners of different power can choose to create/join pools of different difficulties.
d) the system proposes exactly the 'cluster together of miners by hash power precisely - the miners will self-select into pools of appropriate difficulty for them.
That doesn't really fix the problem. CPU miners won't be able to meet that difficulty, and a pool of just CPU miners will never get a block. They are left out in the cold.

cpu miners can stick to centralized pools. or they can just stop mining - which they mostly already have, except for the ones that don't pay for their electricity.

vector76
Member
**
Offline Offline

Activity: 70
Merit: 18


View Profile
July 10, 2011, 05:08:33 AM
 #11

You invite suggestions and then you ignore them.  That's your prerogative i guess.
nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 05:12:07 AM
 #12

Quote
Due to network lag it is possible that not everyone will have received the most recent shares in the round. As mentioned above, there will be built-in tolerance for the payout proportion distribution, allowing a share as valid even if it is missing a few of the most recent shares in its proportion calculations.

I think this needs to be described very carefully and it is not as trivial as it sounds.  I don't mean this as a put-down, I do not see an obvious solution either and this is the point I struggled with myself when trying to design a distributed pool scheme.  If you can come up with a way to handle this it would be great.

indeed, it needs a bit more detail in the description. for now this was a really rough proposal just to get some thoughts flowing. my thinking on this was that we can simply rely on the order in which the node received the shares, and their local timing, to determine whether a share is allowed to be omitted from the payout calculation. (e.g., a submitted share is still valid if it excludes the 2 most recent shares, or shares received in the last 5 seconds, whichever is greater.)
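That tolerance rule might look something like this (a sketch only: the 2-share / 5-second numbers come from the post, but the data layout and function names are hypothetical):

```python
# Sketch of the omission-tolerance rule: a submitted share may omit the
# N most recent shares, or those received in the last T seconds,
# whichever set is larger. Structure is hypothetical.
import time

TOLERANCE_SHARES = 2
TOLERANCE_SECONDS = 5.0

def allowed_omissions(local_shares, now):
    recent_by_count = local_shares[-TOLERANCE_SHARES:]
    recent_by_time = [s for s in local_shares
                      if now - s["received_at"] <= TOLERANCE_SECONDS]
    return recent_by_time if len(recent_by_time) > len(recent_by_count) else recent_by_count

def share_is_valid(claimed_share_ids, local_shares, now):
    omittable = {s["id"] for s in allowed_omissions(local_shares, now)}
    required = {s["id"] for s in local_shares} - omittable
    return required <= set(claimed_share_ids)

now = time.time()
shares = [{"id": i, "received_at": now - (10 - i)} for i in range(10)]
print(share_is_valid([s["id"] for s in shares[:-2]], shares, now))  # True: only newest 2 missing
print(share_is_valid([s["id"] for s in shares[1:]], shares, now))   # False: an old share missing
```

Of course each node's view of "most recent" differs, which is precisely the consistency problem hashcoin is pointing at.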

Quote
Otherwise, I think centralized pools are not too bad if you can make them transparent (i.e., have pool publish tx contents that led to merkle root in some easy-to-access location, and even publish all shares submitted too).  Then there's not really much concern other than pool operator not including transactions.   But having a single funnel serialize the TXs in the network makes consistency much much simpler.
well, besides the pool fee (someone's gotta pay for the beefy server with the thick network pipe, to hand out the work units), and the ddos problem (mmm, who do we ddos - aha, that nice pool server looks good), yes, centralized pools are not too bad.

Quote
In other words, it seems fine to me to let a single node be responsible for serializing transactions as long as things are done in such a way that 1) the node can't "cheat" other than by denying service. 2) in the event that node denies service (or someone else denies service to that node), the TX serializer can be re-elected.

Secure leader-election in the presence of malicious parties is fairly well studied.  The classic reference is Feige's randomized voting scheme, where all nodes pick a random node and vote for that node to be the leader;  The node with the least votes is then elected leader (protocol repeated to break ties).  A google search of "leader election" should yield refs to more recent work.

[1] http://portal.acm.org/citation.cfm?id=796481

If you can't find a non-paywalled copy, here's a blog post on it: http://jsaia.wordpress.com/2009/09/21/consensus-ii-fieges-leader-election-protocol/

interesting... guess that's something to look into as well, rotating leader-selection. one of the issues with this that i can see though, is that the selected 'leader' will have to do a lot more cpu and network work, since it will have to hand out all the work units to the pool. this means anyone not running a beefy server on good net connection will be unwilling, and unable, to be the leader. and we get a nice adverse-selection problem. there's no benefit to being the leader. so the only people willing to be the leader will be those who want to try something sneaky.

nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 05:13:18 AM
 #13

You invite suggestions and then you ignore them.  That's your prerogative i guess.

i have responded to your post above. did you miss it, or did you just not think it was a valid response?

hashcoin
Full Member
***
Offline Offline

Activity: 372
Merit: 101


View Profile
July 10, 2011, 05:32:39 AM
 #14

interesting... guess that's something to look into as well, rotating leader-selection. one of the issues with this that i can see though, is that the selected 'leader' will have to do a lot more cpu and network work, since it will have to hand out all the work units to the pool. this means anyone not running a beefy server on good net connection will be unwilling, and unable, to be the leader. and we get a nice adverse-selection problem. there's no benefit to being the leader. so the only people willing to be the leader will be those who want to try something sneaky.

There would be a benefit -- the TX serializer, in return for doing so much extra work, would be entitled to the fees (entitled meaning, honest clients would imagine the serializer submitted some X number of valid shares when doing payouts).  Indeed not all nodes could handle this -- presumably some nodes could flag themselves as being interested in being "leaders".  If a leader is elected that can't handle the resource requirements, it's the same as the DOS case: clients elect a new leader.

These schemes are quite complex though, and quite different from what bitcoin has now.  Perhaps there is a way to do it more simply along the lines of your original suggestion.  One thought is that with a structured overlay network, it would be much easier to reason about what kinds of differences are considered acceptable, since the entire network structure, and thus the order in which messages are propagated, is known by all participants.
Luke-Jr
Legendary
*
expert
Offline Offline

Activity: 2576
Merit: 1186



View Profile
July 10, 2011, 06:27:38 AM
 #15

a) generation transaction does not require a fee, regardless of size of its outputs
I wasn't talking about a transaction fee by the pool, but rather for the miners later when they want to spend it.
fees do not depend on input sizes, only on output sizes.
This is not correct, under the hard-coded rules in bitcoind. The data size of a transaction is the primary factor in fees, and it will be huge if all the coins are tiny fractions of what people actually spend.

JoelKatz
Legendary
*
Offline Offline

Activity: 1596
Merit: 1012


Democracy is vulnerable to a 51% attack.


View Profile WWW
July 10, 2011, 06:39:14 AM
 #16

This is not correct, under the hard-coded rules in bitcoind. The data size of a transaction is the primary factor in fees, and it will be huge if all the coins are tiny fractions of what people actually spend.
With this scheme, you will have one expensive transaction to 'recombine' all your mining revenues. Whether or not this will actually be significant in comparison to the amount of the revenues is an interesting question -- someone will have to do the math.

Currently, transaction fees work out to around .0005 per input for large transactions. This would mean mining pools would have to stay at around 1,000 members or fewer to keep transaction fees under 1% of mining revenue. (Assuming a dedicated 'recombine' transaction to gather mining revenues.) A better question though is how nasty will the chain get if we see more coinbase transactions with 1,000 outputs?
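The 1,000-member figure follows directly from that per-input rate (a quick arithmetic sketch, assuming one coinbase output per member per block and one recombine input per output):

```python
# Fee to later spend one member's slice of a block, as a fraction of
# that slice: with N members each gets 50/N BTC, and recombining one
# such output costs roughly .0005 BTC.
FEE_PER_INPUT = 0.0005   # BTC per input, the rough figure from the post
BLOCK_REWARD = 50.0      # BTC in 2011

def fee_fraction(members):
    return FEE_PER_INPUT / (BLOCK_REWARD / members)

print(f"{fee_fraction(1000):.2%}")   # 1000 members -> 1.00% of revenue
print(f"{fee_fraction(100):.2%}")    # 100 members  -> 0.10% of revenue
```

The fraction grows linearly with pool size, so splitting blocks across the whole network would be dominated by recombination fees.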

nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 06:41:43 AM
 #17

a) generation transaction does not require a fee, regardless of size of its outputs
I wasn't talking about a transaction fee by the pool, but rather for the miners later when they want to spend it.
fees do not depend on input sizes, only on output sizes.
This is not correct, under the hard-coded rules in bitcoind. The data size of a transaction is the primary factor in fees, and it will be huge if all the coins are tiny fractions of what people actually spend.

i meant, sizes in bitcoins, not sizes in bytes. yes, size in bytes also does matter, sure.
i would venture to say that this would simply self-select smaller miners out of such pools, and the free market will once more solve the issue without any assistance from us.

nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 10, 2011, 06:44:03 AM
 #18

This is not correct, under the hard-coded rules in bitcoind. The data size of a transaction is the primary factor in fees, and it will be huge if all the coins are tiny fractions of what people actually spend.
With this scheme, you will have one expensive transaction to 'recombine' all your mining revenues. Whether or not this will actually be significant in comparison to the amount of the revenues is an interesting question -- someone will have to do the math.

Currently, transaction fees work out to around .0005 per input for large transactions. This would mean mining pools would have to stay at around 1,000 members or fewer to keep transaction fees under 1% of mining revenue. (Assuming a dedicated 'recombine' transaction to gather mining revenues.) A better question though is how nasty will the chain get if we see more coinbase transactions with 1,000 outputs?


tx fees are not 'per input' but 'per transaction', and currently you'd only pay .0005 for your one combining transaction. so just collect a bunch of mining revenue, and combine it with your one tx for .0005 fee.

yes, larger generation transactions will make the blocks that much larger. just have to see how it goes eh? Wink

JoelKatz
Legendary
*
Offline Offline

Activity: 1596
Merit: 1012


Democracy is vulnerable to a 51% attack.


View Profile WWW
July 10, 2011, 07:08:04 AM
 #19

tx fees are not 'per input' but 'per transaction',
No, they are per input. The transaction fee is based on the size of the transaction, which is wholly determined (for standard transactions) based on how many inputs and outputs they have. The context was a 'recombining' transaction with multiple inputs and one output. The size, and hence the fee, will depend on the number of inputs.

Quote
and currently you'd only pay .0005 for your one combining transaction. so just collect a bunch of mining revenue, and combine it with your one tx for .0005 fee.
No, you'll pay .0005 per input. Do the math. For example, here's a transaction somewhat like the recombining transactions I'm talking about (though in reverse):

http://blockexplorer.com/tx/d28fb4ceeab94b31daf95b87d6e7ae9c740ffba564f4fb0d3c1f4093225b4c4b
Notice the fee was .03 and the number of transaction endpoints was 62. 62x.0005=.031
The fee was roughly .0005 per endpoint, just like I said.

One possible solution would be for pool members to agree to include each other's 'recombining' transactions as high priority transactions for no fee.

nanotube (OP)
Hero Member
*****
Offline Offline

Activity: 482
Merit: 501


View Profile WWW
July 11, 2011, 12:03:39 AM
 #20

tx fees are not 'per input' but 'per transaction',
No, they are per input. The transaction fee is based on the size of the transaction, which is wholly determined (for standard transactions) based on how many inputs and outputs they have. The context was a 'recombining' transaction with multiple inputs and one output. The size, and hence the fee, will depend on the number of inputs.

Quote
and currently you'd only pay .0005 for your one combining transaction. so just collect a bunch of mining revenue, and combine it with your one tx for .0005 fee.
No, you'll pay .0005 per input. Do the math. For example, here's a transaction somewhat like the recombining transactions I'm talking about (though in reverse):

http://blockexplorer.com/tx/d28fb4ceeab94b31daf95b87d6e7ae9c740ffba564f4fb0d3c1f4093225b4c4b
Notice the fee was .03 and the number of transaction endpoints was 62. 62x.0005=.031
The fee was roughly .0005 per endpoint, just like I said.

One possible solution would be for pool members to agree to include each other's 'recombining' transactions as high priority transactions for no fee.

well, the exact fee structure is set out here: https://en.bitcoin.it/wiki/Transaction_fees . while all inputs are considered, the final fee is set based on total size and priority of the overall transaction.

it seems that given that, one can structure the consolidation transactions to minimize the fee (one obvious way would be to wait until they age, increasing tx priority score).
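The aging effect can be seen in the era's priority formula (a sketch based on the wiki page linked above: priority is the value-weighted age of the inputs divided by transaction size, with free relay above the historical bitcoind threshold of 57,600,000; the transaction sizes below are rough assumptions):

```python
# Priority = sum(input_value_in_satoshis * input_age_in_blocks) / tx_size.
# Transactions above the threshold were relayed without a fee.
COIN = 100_000_000                  # satoshis per BTC
FREE_PRIORITY = COIN * 144 / 250    # historical bitcoind constant: 57,600,000

def priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_btc, age_in_blocks) pairs."""
    return sum(v * COIN * age for v, age in inputs) / tx_size_bytes

# Consolidating ten 0.05 BTC coinbase payouts in a ~1500-byte tx
# (size is an assumption): fresh outputs pay a fee, aged ones don't.
young = priority([(0.05, 10)] * 10, 1500)
aged = priority([(0.05, 2000)] * 10, 1500)
print(young >= FREE_PRIORITY, aged >= FREE_PRIORITY)  # False True
```

So letting the small payouts sit for a couple of weeks of blocks can lift the consolidation transaction over the free threshold.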
