Author Topic: Think I just solved the pool problem.  (Read 19100 times)
LegitBit
Full Member
***
Offline Offline

Activity: 140
Merit: 100



View Profile
June 06, 2011, 04:14:54 PM
 #101

Sounds awesome; if anyone gets this going, I am willing to test.

Donate : 1EiAKUmTVtqXsaGLKQQVvLT9DDnHsT7jTZ (Block Explorer)
"With e-currency based on cryptographic proof, without the need to trust a third party middleman, money can be secure and transactions effortless." -- Satoshi
shackleford
Full Member
***
Offline Offline

Activity: 281
Merit: 100



View Profile
June 06, 2011, 04:18:55 PM
 #102

Any progress on this? Seems to be one of the most important outstanding issues.
I am waiting for permission from my employer before I proceed. Nearly have it, but bureaucracy bureaucracy.
You need your employer's permission to work on a private project? wtf?

I think she works for Google, and is planning on working on this project during her 20% time.
Correct. I was actually planning to do it outside work originally, but if Google is going to try to claim copyright on it anyway, I may as well use 20% time rather than my non-work time Smiley

http://www.leginfo.ca.gov/cgi-bin/displaycode?section=lab&group=02001-03000&file=2870-2872
Quote
   (1) Relate at the time of conception or reduction to practice of the invention to the employer's business, or actual or demonstrably anticipated research or development of the employer; or

Google... Perhaps deepbit with 75% control would be better. I would have to see the details, but by default I do not trust them.
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 06, 2011, 04:38:32 PM
Last edit: June 06, 2011, 04:55:03 PM by lizthegrey
 #103

Google... Perhaps deepbit with 75% control would be better. I would have to see the details, but by default I do not trust them.
If I'm allowed to develop it, the source will be open, you'll be welcome to inspect it for yourself, and I'll accept patches.

Lastly, I don't plan to be the only distributor. Anyone can start a distributor simply by publicizing a bitcoin address for their distributor and convincing people to start using that new distributor address; they'd obviously be responsible for actually running the distributor after each found block to disburse funds. It would just require having enough trust that people would be willing to point their miners at your distributor.

When I get this off the ground as the first distributor, I plan to escrow 50 btc with a well known community member as a show of good faith should I fail to pay out.
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 06, 2011, 04:42:26 PM
Last edit: June 06, 2011, 04:55:18 PM by lizthegrey
 #104

Transaction fees would also go to the pool, right?
Correct.

How about the "I find a solution but don't post it" attack, a.k.a. "withholding winning shares"? As miners might have to be adapted for this pooling scheme anyway, please also implement "oblivious shares"!
Doing so just hurts you, and someone else will come up with a winning share eventually. This is a generic problem with pools and I can't fix that today. Additionally, oblivious shares are tricky to get right and require centralized trust, which is something this system is intended to *avoid*.

Can the pool require its miners to always include some special transactions (for free), like payouts?
Yes. There could be published policies about only accepting proposed shares/solutions that contain specific transactions; participants in the DHT could refuse to grant credit for shares without those transactions, and the distributor's double-check could verify this.
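
A minimal sketch of what that credit check might look like on a verifying peer, assuming a hypothetical share format that carries the full list of transaction IDs in the proposed block; every name and the required-transaction set here are illustrative, not part of any existing miner or pool software:

Code:
# Hypothetical policy check a DHT node or the distributor could run before
# crediting a share. All names and the required-transaction set below are
# illustrative assumptions.

REQUIRED_TXIDS = {
    "txid-of-previous-payout-placeholder",
}

def share_satisfies_policy(share_txids, required=REQUIRED_TXIDS):
    """True only if every required transaction appears in the submitted share."""
    return required.issubset(set(share_txids))

def credit_share(share_txids, miner_address, ledger):
    """Grant one share of credit only when the published policy is met."""
    if not share_satisfies_policy(share_txids):
        return False
    ledger[miner_address] = ledger.get(miner_address, 0) + 1
    return True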
Sukrim
Legendary
*
Offline Offline

Activity: 2618
Merit: 1006


View Profile
June 06, 2011, 05:07:00 PM
 #105

Doing so just hurts you, and someone else will come up with a winning share eventually. This is a generic problem with pools and I can't fix that today. Additionally, oblivious shares are tricky to get right and require centralized trust, which is something this system is intended to *avoid*.
Not if, for example, I want to manipulate the pool down.

Imagine Tycho directing his entire pool (or even just parts of it) to a pool like this, creating insane amounts of (valid) shares but just filtering out ones that are >= current difficulty to artificially lower your income.

He would even gain BTC on payouts(!), meaning he'll save money on servers while making sure your pool gets less attractive (as the payout is less than in other pools). It then only depends on whether it would be cheaper to buy the remaining BTC (or pay them out of his own pocket/fees) to sustain a "leeching attack" like this. (Nothing against Tycho here, he just stands in for $random_huge_pool_operator - I don't suspect him in any way of doing something like that!)

In the end you'd pay a pool with hashrate X but get BTC for a pool with hashrate X - Leechers. As this attack is not really statistically detectable, people would then start to accuse you of stealing/manipulating stats etc.

In the long run it might even pay off to sacrifice a little income to become the biggest fish in the pond if you can eliminate competitors (there is no logical reason at the moment, for example, why you should mine exclusively at deepbit, yet people are doing it - nearly 50% of them!).
This is also why I think this attack is more serious than pool hopping - the latter is a statistical attack and can be countered by rating shares. This one can only be countered with oblivious shares, and as you said, those require rewrites and are likely to go wrong.
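
To make the economics concrete, here is a toy expected-value calculation for this leeching scenario (leechers submit valid shares and collect proportional payouts, but withhold every winning block); the hashrate split is an assumed example, not a measurement:

Code:
# Toy model of the "leeching" attack described above. All numbers are
# illustrative assumptions, not measurements.

def honest_income_fraction(honest_hashrate, leech_hashrate):
    """Honest miners' expected income relative to mining in a leech-free pool.

    The pool only finds blocks at the rate of the honest hashrate, while each
    proportional payout is split across honest *and* leeching shares, so
    honest income scales by honest / (honest + leechers).
    """
    return honest_hashrate / (honest_hashrate + leech_hashrate)

if __name__ == "__main__":
    # Example: 20% of the pool's apparent hashrate is leeching.
    print(honest_income_fraction(honest_hashrate=80.0, leech_hashrate=20.0))  # 0.8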

https://www.coinlend.org <-- automated lending at various exchanges.
https://www.bitfinex.com <-- Trade BTC for other currencies and vice versa.
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 06, 2011, 05:14:08 PM
 #106

Doing so just hurts you, and someone else will come up with a winning share eventually. This is a generic problem with pools and I can't fix that today. Additionally, oblivious shares are tricky to get right and require centralized trust, which is something this system is intended to *avoid*.
Not if, for example, I want to manipulate the pool down.
Imagine Tycho directing his entire pool (or even just parts of it) to a pool like this, creating insane amounts of (valid) shares but just filtering out ones that are >= current difficulty to artificially lower your income.

He would even gain BTC on payouts(!), meaning he'll save money on servers while making sure your pool gets less attractive (as the payout is less than in other pools). It then only depends on whether it would be cheaper to buy the remaining BTC (or pay them out of his own pocket/fees) to sustain a "leeching attack" like this. (Nothing against Tycho here, he just stands in for $random_huge_pool_operator - I don't suspect him in any way of doing something like that!)
Yes, I acknowledge this is a threat. I'm seeing if anyone has come up with a good solution.

In the end you'd pay a pool with hashrate X but get BTC for a pool with hashrate X - Leechers. As this attack is not really statistically detectable, people would then start to accuse you of stealing/manipulating stats etc.
This is actually not a problem. The distributed pool is explicitly designed to be independently verifiable: anyone who wishes can run a distributor or mock distributor, examine all the proofs of work, and verify or sign off on the payout before it's sent out.
Chakravanti
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 07, 2011, 06:38:22 AM
 #107

Okay, maybe this is too simple, but why not just make the largest mining pool ineligible to generate the next block, with an exception for pools holding, say, less than 20% of the network? It'd come down to shares and the odds against mining profitability to figure out a viable cap that avoids the skip, but it'd also prevent the Finney weakness by never allowing a single miner or pool to gain a majority and keep it.

It would also force more, smaller pools. The figure could even vary based on the mining population (in terms of hashing power, not individual participants) to allow larger pools to form when the population grows.

Am I missing something, or could the Finney attackers just pool-hop between their own pools? Seems to me there might be something here, but I admittedly don't know enough about what I'm really talking about yet. Maybe someone who does could turn it into something better. Tongue
DamienBlack
Jr. Member
*
Offline Offline

Activity: 56
Merit: 1


View Profile
June 07, 2011, 10:31:17 AM
 #108

I don't understand all of the nuance of the hashing system, but I think I have a possible solution for the high bandwidth problem. As I understand it, currently the pool gives you the block information once, you add nonce and hash, and when you get a share you send the nonce:

Current System (low bandwidth)

1. Pool gives block to you once
2. You add nonce and hash
3. If share, send nonce to pool for verification
4. Repeat at 2, when valid block found start back at 1

This results in low enough bandwidth to be a sustainable system. Now, people have said that the new system would require too much data to be sent, because it would look like this:

New system (high bandwidth)

1. You generate your own block
2. You add nonce and hash
3. If share, send whole block for verification
4. Repeat at 2, when valid block found start back at 1

This results in too much bandwidth, because you send the entire block with each share. But this seems thoroughly unnecessary to me. Why not do it as below:

New system (lower bandwidth)

1. You generate your own block
2. Send block to pool once
3. Add nonce and hash
4. If share, send nonce to pool for verification
5. Repeat at 3, when valid block found start back at 1

This would be almost exactly the same bandwidth as the old system. In the old system the pool sent the block data to each member once; in the new system each member sends the block data to the pool once. The pool would have to do a little more work storing the blocks that each member is working on, but I don't think that's really a show-stopper.

Is there a reason why this doesn't work?
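
As a rough back-of-envelope comparison of the per-share upload in the schemes sketched above; the block size, share rate, and framing overhead are assumed figures for illustration only:

Code:
# Back-of-envelope comparison of per-share upload in the schemes sketched
# above. All figures are illustrative assumptions, not measurements.

NONCE_SUBMISSION_BYTES = 64          # a 4-byte nonce plus assumed protocol framing
ASSUMED_BLOCK_BYTES    = 20 * 1024   # an assumed average block of ~20 KB
SHARES_PER_HOUR        = 3600        # assumed: one share per second per miner

def upload_per_hour(bytes_per_share, shares_per_hour=SHARES_PER_HOUR):
    return bytes_per_share * shares_per_hour

if __name__ == "__main__":
    print("nonce per share      :", upload_per_hour(NONCE_SUBMISSION_BYTES), "bytes/hour")
    print("full block per share :", upload_per_hour(ASSUMED_BLOCK_BYTES), "bytes/hour")
    # Sending the block once up front and then only nonces stays close to the
    # nonce-only figure, as long as the block changes far less often than
    # shares are found.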
grndzero
Sr. Member
****
Offline Offline

Activity: 392
Merit: 250


View Profile
June 07, 2011, 11:01:08 AM
 #109

1. You generate your own block
That means every bitcoin client would need a lot more connections to the network and/or built-in long polling to make sure it is working on the current block.

2. Send block to pool once
Depending on how many workers/clients are connected, it could mean a lot of storage and processing.

Ubuntu Desktop x64 -  HD5850 Reference - 400Mh/s w/ cgminer  @ 975C/325M/1.175V - 11.6/2.1 SDK
Donate if you find this helpful: 1NimouHg2acbXNfMt5waJ7ohKs2TtYHePy
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 07, 2011, 11:18:40 AM
 #110

Current System (low bandwidth)

1. Pool gives block to you once
This is incorrect. The pool sends you the partially completed hash; you never see the block you are working on. You only add the nonce and hash.

New system (lower bandwidth)
1. You generate your own block
2. Send block to pool once
3. Add nonce and hash
4. If share, send nonce to pool for verification
5. Repeat at 3, when valid block found start back at 1
This is much higher bandwidth, because the central pool master now has to process the entire block from each client every time the current block in progress changes instead of handing out precomputed data. This results in a DoS as all the clients supporting LP hit it at the exact same time with their new draft blocks.
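
For reference, the miner's inner loop in this getwork-style flow only varies a nonce inside a fixed 80-byte header and double-SHA256 hashes it, roughly as in this simplified sketch (the header bytes and the easy share target are dummy placeholders, not real work):

Code:
import hashlib
import struct

# Simplified illustration of the miner side: the 76-byte header prefix is
# handed to the miner precomputed; only the 4-byte nonce changes per attempt.
# The header prefix and share target below are dummy placeholders.

HEADER_PREFIX = b"\x00" * 76      # version, prev hash, merkle root, time, bits
SHARE_TARGET  = 1 << 242          # an easy, illustrative share target

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def find_share(header_prefix: bytes, target: int, max_nonce: int = 2**18):
    """Try nonces until a hash below the share target is found (or give up)."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)
        h = int.from_bytes(double_sha256(header)[::-1], "big")
        if h < target:
            return nonce, h
    return None

if __name__ == "__main__":
    print(find_share(HEADER_PREFIX, SHARE_TARGET))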
DamienBlack
Jr. Member
*
Offline Offline

Activity: 56
Merit: 1


View Profile
June 07, 2011, 11:54:39 AM
 #111

This is much higher bandwidth, because the central pool master now has to process the entire block from each client every time the current block in progress changes instead of handing out precomputed data. This results in a DoS as all the clients supporting LP hit it at the exact same time with their new draft blocks.

Oh, ok I see now. I did not realize that the pool gave you partially precomputed data. Now it all makes much more sense. Could the pool member send the same sort of precomputed data back to the pool? This way the pool doesn't have to process every block itself? Then when a proper solution for the difficulty is found the pool could hash the entire block for verification. Is there some reason the pool would need to "pre-hash" every block from every user itself? Can the member not be trusted to send a valid precomputed value for the block?
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 07, 2011, 12:02:19 PM
 #112

Oh, ok I see now. I did not realize that the pool gave you partially precomputed data. Now it all makes much more sense. Could the pool member send the same sort of precomputed data back to the pool? This way the pool doesn't have to process every block itself? Then when a proper solution for the difficulty is found the pool could hash the entire block for verification. Is there some reason the pool would need to "pre-hash" every block from every user itself? Can the member not be trusted to send a valid precomputed value for the block?
Most of the entire block must be sent; otherwise, you could mine for yourself or for another pool and submit block hashes as 'proof of work' without actually working on solutions that pay the pool's payment address.

Edit: oh, I see, you could verify over time rather than all at once by just caching the values and computing later. Sure, that's a possibility, but this still doesn't address the 'single point of failure for recording work' problem that my proposal addresses.
DamienBlack
Jr. Member
*
Offline Offline

Activity: 56
Merit: 1


View Profile
June 07, 2011, 12:29:14 PM
Last edit: June 07, 2011, 12:42:52 PM by DamienBlack
 #113

Most of the entire block must be sent; otherwise, you could mine for yourself or for another pool and submit block hashes as 'proof of work' without actually working on solutions that pay the pool's payment address.

So, if I understand this right, and I'm not sure I do, the main problem is that there is no way to ensure that the "generation address" is in fact the pool's address unless the pool hashes the block itself? So if the pool doesn't generate the hash for each block, you could potentially trick the pool into thinking that you are working on pool hashes when you aren't. When you actually hit a hash at the right difficulty, you claim it for yourself because you were actually using your own address. At the same time you could submit the same work to a different pool that also can't confirm that you are using the correct address. Is this why you can't trust a pool member to provide the precomputed block data?
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 07, 2011, 01:03:02 PM
 #114

Most of the entire block must be sent; otherwise, you could mine for yourself or for another pool and submit block hashes as 'proof of work' without actually working on solutions that pay the pool's payment address.

So, if I understand this right, and I'm not sure I do, the main problem is that there is no way to ensure that the "generation address" is in fact the pool's address unless the pool hashes the block itself? So if the pool doesn't generate the hash for each block, you could potentially trick the pool into thinking that you are working on pool hashes when you aren't. When you actually hit a hash at the right difficulty, you claim it for yourself because you were actually using your own address. At the same time you could submit the same work to a different pool that also can't confirm that you are using the correct address. Is this why you can't trust a pool member to provide the precomputed block data?
Correct.
DamienBlack
Jr. Member
*
Offline Offline

Activity: 56
Merit: 1


View Profile
June 08, 2011, 01:19:55 AM
 #115

So it seems that the issue is that the transaction data and the generation address get hashed, and then the nonce is added and it's all hashed again, like this:

{generation address}+{transaction data} -> {block hash}

{block hash}+{nonce} -> {final hash}

The pool currently gives you the block hash; you return the nonce of a share, and the pool only has to check the final hash. I suppose what we need to do is change the entire system (everything) so that the pool can verify the generation address. Something like this.

{transaction data} -> {transaction hash}

{generation address}+{transaction hash}+{nonce} -> {final hash}

This way a pool member can send its own transaction hash once and the nonce of each share, but the pool still gets to verify the generation address. Is this the upshot of your proposition?
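
To make the idea concrete, here is a toy model of the verification such a restructuring would allow. It deliberately mirrors the scheme sketched above, not Bitcoin's actual block hashing, and every value is a placeholder:

Code:
import hashlib

# Toy model of the restructured scheme proposed above, NOT actual Bitcoin
# hashing: the member sends {transaction hash} once, then only nonces, and
# the pool can still confirm the generation address on every share.

POOL_GENERATION_ADDRESS = b"pool-address-placeholder"
SHARE_TARGET = 1 << 242            # easy, illustrative share target

def final_hash(generation_address: bytes, tx_hash: bytes, nonce: int) -> int:
    data = generation_address + tx_hash + nonce.to_bytes(4, "little")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def pool_accepts_share(tx_hash: bytes, nonce: int) -> bool:
    """The pool recomputes the hash itself, forcing its own generation address.

    If the member actually mined with a different address, this recomputed
    hash will (almost certainly) not meet the target, so the share is rejected.
    """
    return final_hash(POOL_GENERATION_ADDRESS, tx_hash, nonce) < SHARE_TARGET

if __name__ == "__main__":
    tx_hash = hashlib.sha256(b"member's transaction set (placeholder)").digest()
    # The member would iterate nonces locally and submit the ones that qualify.
    qualifying = [n for n in range(200000) if pool_accepts_share(tx_hash, n)]
    print(qualifying[:5])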
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 08, 2011, 01:34:12 AM
 #116

I suppose what we need to do is change the entire system (everything) so that the pool can verify the generation address. Something like this.
{transaction data} -> {transaction hash}
{generation address}+{transaction hash}+{nonce} -> {final hash}
This way a pool member can send its own transaction hash once and the nonce of each share, but the pool still gets to verify the generation address. Is this the upshot of your proposition?
This change is very major and not in scope. What I am proposing is to verify the entire block upon each share, but to have a distributed network, rather than a single master, verify receipt of shares.
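
A sketch of the per-share check each verifying peer would then need to run, assuming a hypothetical parsed share format; the field names and the easy share target are illustrative assumptions, not an existing implementation:

Code:
import hashlib
from dataclasses import dataclass

# Hypothetical parsed representation of a submitted share. Field names and
# the share target are illustrative assumptions, not an existing format.

@dataclass
class ShareBlock:
    header: bytes                  # full 80-byte block header
    coinbase_output_address: str   # who the generation transaction pays
    miner_address: str             # who should receive credit for this share

SHARE_TARGET = 1 << 240            # illustrative share target (much easier than the block target)

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_share(share: ShareBlock, distributor_address: str) -> bool:
    """What a verifying peer checks before crediting a share:
    1. the generation (coinbase) output pays the distributor, and
    2. the header hash meets the share difficulty.
    A real check would also rebuild the merkle root from the transactions to
    confirm the coinbase really belongs to this header."""
    if share.coinbase_output_address != distributor_address:
        return False
    header_hash = int.from_bytes(double_sha256(share.header)[::-1], "big")
    return header_hash < SHARE_TARGET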
Chakravanti
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 08, 2011, 03:52:46 AM
 #117

I'm confused, if shares aren't signed how do you collect?
lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 08, 2011, 11:31:00 AM
 #118

I'm confused, if shares aren't signed how do you collect?
You send your shares to several other peers, who verify them and commit them to their distributed hash tables. When it's time to pay out, the distributor retrieves your shares from the DHT, verifies them, and generates the proportional payout to send you.
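
A minimal sketch of that payout step, with hypothetical data structures (a dict of per-miner share counts stands in for what would be retrieved and verified from the DHT; the addresses and fee are placeholders):

Code:
# Minimal sketch of the distributor's proportional payout step. The dict of
# verified share counts stands in for what would be retrieved from the DHT;
# all names and the fee are illustrative assumptions.

def proportional_payouts(verified_share_counts, block_reward_btc, fee_fraction=0.0):
    """Split the block reward across miners in proportion to verified shares."""
    total_shares = sum(verified_share_counts.values())
    if total_shares == 0:
        return {}
    distributable = block_reward_btc * (1.0 - fee_fraction)
    return {
        miner: distributable * count / total_shares
        for miner, count in verified_share_counts.items()
    }

if __name__ == "__main__":
    shares = {"1MinerA-placeholder": 700, "1MinerB-placeholder": 250, "1MinerC-placeholder": 50}
    print(proportional_payouts(shares, block_reward_btc=50.0))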
gsan
Member
**
Offline Offline

Activity: 72
Merit: 10


View Profile
June 08, 2011, 01:10:29 PM
Last edit: June 08, 2011, 01:30:30 PM by gsan
 #119

I had a different solution in mind, but maybe it can be merged into this one as well.

cuddlefish's solution requires running bitcoind on the miner side. What if miners who don't want to run it could opt to get work from trusted work streams?

  • Pool keeps a list of authorized channels that it gets work from.
  • Each channel supplies the server with signed block data, with the generation address set to one of the pool's addresses.
  • The miner picks as many of these channels as it likes; the pool sends work from any one of them at random, signed by the channel owner.

This way, an attack would require everyone to choose channel(s) controlled by a single entity (unfeasible, I think), traffic would stay low, and miners wouldn't have to run bitcoind.

What do you think?

(As a further step, we could separate work servers from pools.)
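
A rough sketch of the channel selection described above, using an HMAC with a shared key as a stand-in for a real public-key signature scheme; the channel names, keys, and data structures are all hypothetical:

Code:
import hmac
import hashlib
import random

# Rough sketch of the "trusted work channel" idea above. HMAC with a shared
# key stands in for a real public-key signature; all names are hypothetical.

CHANNEL_KEYS = {
    "channel-alpha": b"alpha-verification-key",
    "channel-beta":  b"beta-verification-key",
}

def sign_work(channel_key: bytes, work: bytes) -> bytes:
    return hmac.new(channel_key, work, hashlib.sha256).digest()

def pick_signed_work(subscribed_channels, pending_work):
    """Pool side: hand out work from a random channel the miner subscribed to,
    but only if its signature verifies against the channel's key."""
    channel = random.choice(subscribed_channels)
    work, signature = pending_work[channel]
    expected = sign_work(CHANNEL_KEYS[channel], work)
    if not hmac.compare_digest(signature, expected):
        raise ValueError(f"bad signature from {channel}")
    return channel, work

if __name__ == "__main__":
    work = b"block data paying a pool address (placeholder)"
    pending = {"channel-alpha": (work, sign_work(CHANNEL_KEYS["channel-alpha"], work))}
    print(pick_signed_work(["channel-alpha"], pending))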

lizthegrey
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
June 08, 2011, 02:23:54 PM
 #120

cuddlefish's solution requires running bitcoind on the miner side. What if miners who don't want to run it could opt to get work from trusted work streams?
Under my proposal, you are welcome to use someone else's local pool manager instead of your own. I'm imagining that most people, but not all, will run a local pool manager, and only a small subset will run a DHT node, but there will be enough to keep the system resilient.