Bitcoin Forum
Pages: « 1 2 [3] 4 5 6 »  All
Author Topic: Think I just solved the pool problem.  (Read 19139 times)
dishwara (Legendary; Activity: 1855, Merit: 1016)
May 21, 2011, 06:28:22 AM
 #41

Quote
But a far better way would be:

1. Register for a pool.
2. Get Bitcoin address for the pool.
3. You run miner on your own local bitcoind.
4. Miner calls getwork on your bitcoind, gets block template YOU create locally! However, it gets the difficulty and generation address from the pool (to allow share-based mining, and to make sure the pool gets paid.)
5. Miner tries random nonces.
6. Miner finds share! Sends it to pool. If the share turns out to be a valid block, pool distributes winnings.

Ta-da. Now, all block generation is done by miners, not by pools, as Satoshi intended. In other words, the pool has /no/ control over the content of blocks! But pools still get block/share based mining, as pools want.


Could you please explain it step by step?
I have already registered with a name & password on slush's pool. I also created workers there, gave a bitcoin address, and am mining and receiving coins.
Now, according to the method you describe, where and what changes do I have to make, exactly? A step-by-step guide would be useful, because programming is a headache to me.
In bitcoin.conf, what rpc user, password, ip or url, and address do I have to set to make it work?
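The six-step flow quoted above can be sketched as a hypothetical miner loop. All names here (`mine_shares`, `share_target`) are illustrative, not part of any real pool API; in the proposed scheme the share target and generation address would come from the pool, while the block template is built locally:

```python
import hashlib
import struct

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine_shares(header80: bytes, share_target: int, max_nonce: int = 100_000):
    """Scan nonces over a locally built 80-byte block header; yield share hits.

    header80 comes from the miner's OWN template (bytes 76..80 hold the
    nonce); share_target is the easier-than-network target set by the pool.
    """
    base = header80[:76]
    for nonce in range(max_nonce):
        candidate = base + struct.pack("<I", nonce)
        # Interpret the hash as a little-endian integer for the comparison.
        if int.from_bytes(sha256d(candidate), "little") <= share_target:
            yield nonce, candidate  # step 6: send (nonce, header) to the pool

# Toy demo: an all-zero header and an easy target so a hit appears quickly.
header = bytes(80)
target = (1 << 256) // 1000  # roughly 1 in 1000 hashes qualifies
found = next(mine_shares(header, target), None)
```

The point of the proposal is that the pool never sees the template until a share is submitted; it only supplies the target and the payout address.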
wumpus (Hero Member; Activity: 812, Merit: 1022; "No Maps for These Territories")
May 21, 2011, 06:28:55 AM
 #42

I really like the idea. This keeps the system more democratic, as it makes the pools less like aristocrats that collect their serfs' work and submit it under their own name. The miners decide what goes in the blocks, not the pool owner.

If someone made a pool like this I'd certainly join it, and would accept paying a larger share for it than current pools.

Bitcoin Core developer [PGP] Warning: For most, coin loss is a larger risk than coin theft. A disk can die at any time. Regularly back up your wallet through File → Backup Wallet to external storage or the (encrypted!) cloud. Use a separate offline wallet for storing larger amounts.
Ian Maxwell (Full Member; Activity: 140, Merit: 101)
May 21, 2011, 06:35:13 AM
 #43

Could you please explain it step by step?
I have already registered with a name & password on slush's pool. I also created workers there, gave a bitcoin address, and am mining and receiving coins.
Now, according to the method you describe, where and what changes do I have to make, exactly? A step-by-step guide would be useful, because programming is a headache to me.
In bitcoin.conf, what rpc user, password, ip or url, and address do I have to set to make it work?

You've misunderstood. This is a suggested protocol for future mining pools. It's not something you can do with existing pools.

dishwara (Legendary; Activity: 1855, Merit: 1016)
May 21, 2011, 06:36:21 AM
 #44

Damn, now we have to wait for the pool owners to change the protocol.
But in theory it's a good one.
kjj (Legendary; Activity: 1302, Merit: 1026)
May 21, 2011, 07:46:00 AM
 #45

Damn, now we have to wait for the pool owners to change the protocol.
But in theory it's a good one.

Worse than that.  The mainline client needs to be changed, plus all of the pools, plus all of the miners.

Excellent idea, so I'm sure it'll happen.  But it will take a little while.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
slush (Legendary; Activity: 1386, Merit: 1097)
May 21, 2011, 08:17:37 AM
 #46

1. Register for a pool.
2. Get Bitcoin address for the pool.
3. You run miner on your own local bitcoind.
4. Miner calls getwork on your bitcoind, gets block template YOU create locally! However, it gets the difficulty and generation address from the pool (to allow share-based mining, and to make sure the pool gets paid.)
5. Miner tries random nonces.
6. Miner finds share! Sends it to pool. If the share turns out to be a valid block, pool distributes winnings.

Ta-da. Now, all block generation is done by miners, not by pools, as Satoshi intended. In other words, the pool has /no/ control over the content of blocks! But pools still get block/share based mining, as pools want.

This is not a new idea; I thought about it months ago, too.

There is one huge problem: performance. It does not look like it at first glance, but the client needs to transfer the _complete block data_ to the central server for _every share_, because the pool has to validate that each share is correct (it could be wrong by user intention or by a bug). So sending a short hash to the server with the nonce filled in is NOT enough.

I calculated this, and the capacity needed (for a pool with tens or hundreds of GHash/s) would be enormous. Even worse, it rises with the number of transactions in the block, so this scales even worse than a standard centralized pool. If bitcoin grows massively, this is a show stopper.
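slush's objection can be put in rough numbers. A back-of-envelope sketch under stated assumptions: one difficulty-1 share takes an expected 2^32 hashes, and the block sizes below are illustrative, not measured:

```python
def share_rate(hashrate_hs: float, share_difficulty: float = 1.0) -> float:
    """Expected shares per second: one difficulty-1 share per 2**32 hashes."""
    return hashrate_hs / (2**32 * share_difficulty)

def upload_bw(hashrate_hs: float, block_bytes: int,
              share_difficulty: float = 1.0) -> float:
    """Bytes/second the pool must ingest if every share carries a full block."""
    return share_rate(hashrate_hs, share_difficulty) * block_bytes

bw_small = upload_bw(100e9, 4_000)      # 100 GH/s pool, ~4 kB blocks: ~93 kB/s
bw_full = upload_bw(100e9, 1_000_000)   # same pool, 1 MB blocks: ~23 MB/s
```

At difficulty-1 shares, a 100 GH/s pool would see roughly 23 shares per second; the per-share payload, not the share count, is what explodes as blocks fill up.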

cuddlefish (OP) (Sr. Member; Activity: 364, Merit: 250)
May 21, 2011, 08:21:54 AM
 #47

1. Register for a pool.
2. Get Bitcoin address for the pool.
3. You run miner on your own local bitcoind.
4. Miner calls getwork on your bitcoind, gets block template YOU create locally! However, it gets the difficulty and generation address from the pool (to allow share-based mining, and to make sure the pool gets paid.)
5. Miner tries random nonces.
6. Miner finds share! Sends it to pool. If the share turns out to be a valid block, pool distributes winnings.

Ta-da. Now, all block generation is done by miners, not by pools, as Satoshi intended. In other words, the pool has /no/ control over the content of blocks! But pools still get block/share based mining, as pools want.

This is not a new idea; I thought about it months ago, too.

There is one huge problem: performance. It does not look like it at first glance, but the client needs to transfer the _complete block data_ to the central server for _every share_, because the pool has to validate that each share is correct (it could be wrong by user intention or by a bug). So sending a short hash to the server with the nonce filled in is NOT enough.

I calculated this, and the capacity needed (for a pool with tens or hundreds of GHash/s) would be enormous. Even worse, it rises with the number of transactions in the block, so this scales even worse than a standard centralized pool. If bitcoin grows massively, this is a show stopper.

Maybe. Other options:

Just send transaction IDs to the pool for verification, along with the header.

Quote
2. Get Bitcoin address for the pool.

The pool should give you N addresses: one for D=1 work, one for D=6 work, one for D=12 work, etc. D=6 work pays 6 shares, and so on. The point is that this scheme uses a lot more bandwidth to transmit shares, which is trivially corrected by increasing the share difficulty; but that alone would raise variance to unacceptable levels for slow miners. By choosing an address, miners pre-commit to a target difficulty, and their shares are credited accordingly.

The miner software can then be set up to pick the difficulty that gets it close to 1 share per minute, which should end up using less bandwidth than is used currently.
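The difficulty-selection rule in the last paragraph is plain arithmetic: a difficulty-D share takes an expected D * 2^32 hashes, so a miner can solve for the D that yields about one share per minute. A minimal sketch with an invented function name:

```python
def pick_share_difficulty(hashrate_hs: float,
                          target_shares_per_min: float = 1.0) -> int:
    """Pick a share difficulty D so the miner finds ~target_shares_per_min.

    A difficulty-D share takes an expected D * 2**32 hashes, so
    shares/min = 60 * hashrate / (D * 2**32); solve for D, floor at 1.
    """
    d = 60 * hashrate_hs / (2**32 * target_shares_per_min)
    return max(1, round(d))

gpu_diff = pick_share_difficulty(400e6)  # a ~400 MH/s 2011 GPU lands near D=6
cpu_diff = pick_share_difficulty(2e6)    # a CPU miner stays at the minimum, D=1
```

Note how a mid-range GPU of the era lands near the D=6 tier used as an example above, while slow CPU miners naturally fall back to D=1.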
slush (Legendary; Activity: 1386, Merit: 1097)
May 21, 2011, 08:36:31 AM
 #48

Other options:

Just send transaction IDs to the pool for verification, along with the header.

Quote
2. Get Bitcoin address for the pool.

The pool should give you N addresses: one for D=1 work, one for D=6 work, one for D=12 work, etc. D=6 work pays 6 shares, and so on. The point is that this scheme uses a lot more bandwidth to transmit shares, which is trivially corrected by increasing the share difficulty; but that alone would raise variance to unacceptable levels for slow miners. By choosing an address, miners pre-commit to a target difficulty, and their shares are credited accordingly.

The miner software can then be set up to pick the difficulty that gets it close to 1 share per minute, which should end up using less bandwidth than is used currently.

Better, but not good: the load is then driven not by the pool but by the miners. I see thousands of CPU miners on my pool even at the current difficulty, which is, from an economic point of view, nonsense. So how do you solve the problem of hundreds of CPU miners being able to shut down your pool by sending difficulty-1 blocks, 1 MB in size each? :)

With rising difficulty (expect a difficulty over a million soon!), one share will be almost worthless. By increasing the basic difficulty you can make it better, but will people accept a minimum difficulty of 1000? :)

Btw, it's not only a transfer problem; computing the complete block for every share is pretty hard, too. Don't forget that a pool can receive tens to hundreds of shares per second...

Basically I like the idea, but these are the reasons why I abandoned it long ago.




kjj (Legendary; Activity: 1302, Merit: 1026)
May 21, 2011, 08:54:59 AM
 #49

Other options:

Just send transaction IDs to the pool for verification, along with the header.

Quote
2. Get Bitcoin address for the pool.

The pool should give you N addresses: one for D=1 work, one for D=6 work, one for D=12 work, etc. D=6 work pays 6 shares, and so on. The point is that this scheme uses a lot more bandwidth to transmit shares, which is trivially corrected by increasing the share difficulty; but that alone would raise variance to unacceptable levels for slow miners. By choosing an address, miners pre-commit to a target difficulty, and their shares are credited accordingly.

The miner software can then be set up to pick the difficulty that gets it close to 1 share per minute, which should end up using less bandwidth than is used currently.

Better, but not good: the load is then driven not by the pool but by the miners. I see thousands of CPU miners on my pool even at the current difficulty, which is, from an economic point of view, nonsense. So how do you solve the problem of hundreds of CPU miners being able to shut down your pool by sending difficulty-1 blocks, 1 MB in size each? :)

With rising difficulty (expect a difficulty over a million soon!), one share will be almost worthless. By increasing the basic difficulty you can make it better, but will people accept a minimum difficulty of 1000? :)

Btw, it's not only a transfer problem; computing the complete block for every share is pretty hard, too. Don't forget that a pool can receive tens to hundreds of shares per second...

Basically I like the idea, but these are the reasons why I abandoned it long ago.

If you set a minimum difficulty of 1000 for your pool, and they want to participate, that pretty much means they'll accept it.

If people really can't let go of CPU mining, they can run their own mini-pool that handles difficulty-1 shares before sending those that meet the pool's criteria up to the big pool. Maybe meta-pools will spring up.

Just because you can't imagine how to scale things doesn't mean that no one can.

goatpig (Legendary; Activity: 3752, Merit: 1364; Armory Developer)
May 21, 2011, 10:38:40 AM
Last edit: May 21, 2011, 10:59:29 AM by goatpig
 #50

Quote
Technically, miners only take the pool's address and build the block header on their own, so individual miners can include or exclude any transaction they want.

Quote
Technically, the pool builds the block header, so the pool owner can include/exclude any transaction they want, or start a giant chain fork if the pool has a majority of power ([Tycho], I'm looking at you.)

Pick one.

I see your point; I'm just extrapolating into the future. Right now the priority is this fix.

Is there some sort of a .conf file you can feed to bitcoind for inclusion rules?

Better, but not good: the load is then driven not by the pool but by the miners. I see thousands of CPU miners on my pool even at the current difficulty, which is, from an economic point of view, nonsense. So how do you solve the problem of hundreds of CPU miners being able to shut down your pool by sending difficulty-1 blocks, 1 MB in size each? :)

With rising difficulty (expect a difficulty over a million soon!), one share will be almost worthless. By increasing the basic difficulty you can make it better, but will people accept a minimum difficulty of 1000? :)

Btw, it's not only a transfer problem; computing the complete block for every share is pretty hard, too. Don't forget that a pool can receive tens to hundreds of shares per second...

Basically I like the idea, but these are the reasons why I abandoned it long ago.

Add a new function to the pool software that stores the Merkle root + timestamp per account. On the miner's side, each time you add a transaction or update the timestamp, you call that function and upload your new Merkle root and timestamp once. Then, whenever you have a share to submit, you only send your nonces. The pool is waiting there with the header precalculated, ready to add the nonce and hash it, so on the server side you will be fine on CPU time. You'll need something like an extra 1 kB of memory per account (it doesn't have to be per worker, since the workers getwork() from bitcoind).

When your miners hit a full-difficulty solution, you upload the whole thing to the pool, transactions and all.
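goatpig's caching scheme might look like this on the pool side; the class and method names are invented for illustration. The pool keeps one 76-byte header prefix per account and checks each submitted nonce against it:

```python
import hashlib
import struct

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class ShareChecker:
    """Pool-side cache: one 76-byte header prefix per account.

    The miner uploads its header fields once per template change; after
    that, every share submission is just a nonce.
    """
    def __init__(self):
        self.prefix = {}  # account -> 76-byte header prefix

    def update_template(self, account, version, prev_hash, merkle_root,
                        ntime, nbits):
        assert len(prev_hash) == 32 and len(merkle_root) == 32
        self.prefix[account] = (struct.pack("<I", version) + prev_hash +
                                merkle_root + struct.pack("<II", ntime, nbits))

    def check_share(self, account, nonce, share_target):
        # Rebuild the full 80-byte header and compare against the target.
        header = self.prefix[account] + struct.pack("<I", nonce)
        return int.from_bytes(sha256d(header), "little") <= share_target

pool = ShareChecker()
pool.update_template("alice", 1, bytes(32), bytes(32), 1305964800, 0x1d00ffff)
# Demo target spans the whole range, so any nonce is accepted.
ok = pool.check_share("alice", 12345, (1 << 256) - 1)
```

The 76 bytes per account line up with goatpig's "extra 1 kb per account" ballpark; the open question (raised by slush below this post in the thread's own terms) is that hashing alone does not tell the pool whether the transactions behind that Merkle root are valid.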

slush (Legendary; Activity: 1386, Merit: 1097)
May 21, 2011, 05:12:22 PM
 #51

Goatpig, I see what you mean; not _such_ a bad idea, but still worse in terms of performance and scalability. I'll think about it a little more.

When your miners hit a full-difficulty solution, you upload the whole thing to the pool, transactions and all.

The pool still needs to know all the transactions to validate a single share, so that information must be known to the pool at the time the share is submitted.

cuddlefish (OP) (Sr. Member; Activity: 364, Merit: 250)
May 21, 2011, 05:13:39 PM
 #52

The pool still needs to know all the transactions to validate a single share, so that information must be known to the pool at the time the share is submitted.

So just send the TX ids.
goatpig (Legendary; Activity: 3752, Merit: 1364; Armory Developer)
May 21, 2011, 08:36:09 PM
 #53

The pool still needs to know all the transactions to validate a single share, so that information must be known to the pool at the time the share is submitted.

So just send the TX ids.

Or, assuming most people will include the totality of available transactions, send the IDs of the ones that are not included; that saves bandwidth and memory server-side.
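The txid-delta idea reduces to a set difference. A minimal sketch with invented names, assuming the pool already sees most transactions through normal network relay:

```python
def share_tx_delta(miner_txids: set, pool_known_txids: set):
    """What a miner must attach to a share so the pool can rebuild its tx set.

    Only the differences travel with the share: txids the miner added that
    the pool lacks, plus txids the pool knows that the miner left out.
    """
    extra = miner_txids - pool_known_txids    # pool must fetch/learn these
    omitted = pool_known_txids - miner_txids  # pool must drop these
    return extra, omitted

pool_view = {"tx_a", "tx_b", "tx_c", "tx_d"}
miner_view = {"tx_a", "tx_b", "tx_c", "tx_e"}  # skipped tx_d, added tx_e
extra, omitted = share_tx_delta(miner_view, pool_view)
```

When miner and pool views mostly agree, both sets are tiny, which is exactly goatpig's point about saving bandwidth and server-side memory.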

thefiatfreezone (Newbie; Activity: 30, Merit: 0)
May 21, 2011, 09:59:19 PM
 #54

If slush is correct and all this extra communication is needed... wouldn't that simply take out the mega-huge pools and force everyone to create lots of individual mini-pools (decentralized), the way Bitcoin was meant to be? Isn't that a good thing?
cuddlefish (OP) (Sr. Member; Activity: 364, Merit: 250)
May 21, 2011, 10:24:07 PM
 #55

If slush is correct and all this extra communication is needed... wouldn't that simply take out the mega-huge pools and force everyone to create lots of individual mini-pools (decentralized), the way Bitcoin was meant to be? Isn't that a good thing?
YES. Frankly, the pools should do exactly one thing: provide insurance. Nothing else.
gmaxwell (Moderator; Legendary; Activity: 4270, Merit: 8805)
May 21, 2011, 10:49:20 PM
 #56

Btw, it's not only a transfer problem; computing the complete block for every share is pretty hard, too. Don't forget that a pool can receive tens to hundreds of shares per second...

So you're saying that a small set of pool systems scales better than the idle CPUs of thousands of miners? That's silly. As the tx load rises, miners can simply prioritize transactions based on their hash distance from some random value, allowing TX validation to scale far beyond what the pool could support.
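The hash-distance prioritization gmaxwell mentions could look something like the sketch below. This is one possible reading of the suggestion, with invented names: each miner salts txids with its own random value and keeps the transactions whose salted hash is smallest:

```python
import hashlib
import os

def select_txs(txids, salt: bytes, max_txs: int):
    """Keep the transactions whose salted hash is smallest.

    Each miner draws its own salt, so different miners pick overlapping but
    distinct sets, and any single selection stays cheap to validate.
    """
    def distance(txid: str) -> int:
        return int.from_bytes(hashlib.sha256(salt + txid.encode()).digest(),
                              "big")
    return sorted(txids, key=distance)[:max_txs]

mempool = [f"tx{i}" for i in range(100)]
salt = os.urandom(8)
chosen = select_txs(mempool, salt, 10)
```

Because the salt is random per miner, no single machine has to validate the whole transaction backlog, which is the scaling point being made.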

Today the responses take about 181 bytes on the wire. Blocks are frequently about 4 kB, so at the moment the share difficulty would need to be 22 to send the whole block and use the same amount of traffic. If it were compressed by only sending the TX ids, it would be 354 bytes/share for 10-tx shares, or less than double what we use now.

Someday in the future, when blocks are 1 MB (the largest size clients will accept today), the 'compressed' size will be 128032 bytes/share. Share difficulty would need to be ~750 to get to the _same_ traffic levels we have now.

This could all be further reduced by having miners send only incremental updates. Basically, in that case it would only take resending each TX, along with one extra per new block (~6/hour), to set up the root. Done this way it should use no more than 2x the current bandwidth, though it would take more software to track the incremental work per miner.

But even ignoring all the things that can be done to make it more efficient: at current bulk internet transit prices ($2/Mbit/s/month), full 1 MB shares would cost the pool $0.0000064 each.

Assuming 2 MH/J, $0.05/kWh, and an exchange rate of $6/BTC, GPU mining won't be power-profitable past difficulty 10,000,000, even for people with cheap (but not stolen) power. At a share difficulty of only 12, that works out to bandwidth costs of only about $5.33 per block solved while sending full 1 MB shares without any efficiency measures. If you can't figure out how to make that work, then I'll welcome your more efficient replacements.

(The formula for break-even profitability is diff = (719989013671875 × exc × mhj) / (17179869184 × kwh), where diff is the difficulty, exc is the exchange rate in $/BTC, mhj is the number of MH per joule, and kwh is the electricity price in $/kWh.)
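The break-even formula and the per-share bandwidth cost above can both be checked directly. The break-even constants are transcribed verbatim from the post; `share_cost_usd` and its 30-day month are my own assumptions for reproducing the $0.0000064 figure:

```python
def breakeven_difficulty(exc_usd_per_btc: float, mh_per_joule: float,
                         usd_per_kwh: float) -> float:
    """Break-even mining difficulty; constants as given in the post."""
    return ((719989013671875 * exc_usd_per_btc * mh_per_joule) /
            (17179869184 * usd_per_kwh))

def share_cost_usd(share_bytes: int, usd_per_mbit_month: float = 2.0) -> float:
    """Bandwidth cost of one share at bulk transit pricing ($/Mbit/s/month).

    Assumes a 30-day month; 1 Mbit/s sustained moves 1e6/8 bytes per second.
    """
    bytes_per_mbit_month = (1e6 / 8) * 30 * 24 * 3600
    return share_bytes / bytes_per_mbit_month * usd_per_mbit_month

diff = breakeven_difficulty(6, 2, 0.05)  # ~1.0e7, matching the post's figure
cost = share_cost_usd(1_000_000)         # ~$6e-6 per full 1 MB share
```

Plugging in the post's own parameters ($6/BTC, 2 MH/J, $0.05/kWh) reproduces the stated difficulty-10,000,000 cutoff, and the transit-price calculation lands within rounding of the quoted per-share cost.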

As I write this deepbit is down and the network has gone 30 minutes without confirming a transaction.  This is nuts. I don't think the bitcoin community should continue to tolerate the reliability problems created by large pools.  You're free not to participate in an operating practice, but the network is also free to ignore blocks mined by pools which actively sabotage the stability and security of the network.
thefiatfreezone (Newbie; Activity: 30, Merit: 0)
May 21, 2011, 10:59:00 PM
 #57

So... deepbit goes down for 30 minutes and the network dies... and you all want this? One HUGE mining pool that dictates...

I'd prefer lots of small ones (no single point of failure). If Bitcoin keeps running on big pools, I don't see why people would want to use it in the future, if it is that easy to kill by crashing one single pool.

Just my opinion, but you can clearly see its death coming from dictatorship.
rezin777 (Full Member; Activity: 154, Merit: 100)
May 21, 2011, 11:00:31 PM
 #58

As I write this deepbit is down and the network has gone 30 minutes without confirming a transaction.  This is nuts. I don't think the bitcoin community should continue to tolerate the reliability problems created by large pools.  You're free not to participate in an operating practice, but the network is also free to ignore blocks mined by pools which actively sabotage the stability and security of the network.

It's quite clear that the people in the deepbit pool do not care about the health of the network. Most probably don't understand it. Unfortunately, there is no cure for stupidity. Many of us have been calling for miners to balance the pools, but our arguments fall on deaf ears.
rezin777 (Full Member; Activity: 154, Merit: 100)
May 21, 2011, 11:01:10 PM
 #59

So... deepbit goes down for 30 minutes and the network dies... and you all want this? One HUGE mining pool that dictates...

I'd prefer lots of small ones (no single point of failure). If Bitcoin keeps running on big pools, I don't see why people would want to use it in the future, if it is that easy to kill by crashing one single pool.

Just my opinion, but you can clearly see its death coming from dictatorship.

Apparently you misunderstand the point of this thread.
thefiatfreezone (Newbie; Activity: 30, Merit: 0)
May 21, 2011, 11:12:20 PM
Last edit: May 21, 2011, 11:38:21 PM by river
 #60

Apparently you misunderstand the point of this thread.

OK... never mind... I'm outy :)


EDIT: Just asking, but... if one MEGA pool goes down and the network cannot adapt immediately, then you get delays and a backlog of transactions. So the 'dictatorship' I was referring to was the MEGA pool that controls the network: if it glitches or goes down, everyone hurts. So... am I still wrong that forcing the break-up of MEGA pools into smaller ones would be better?