Author Topic: bitHopper: Python Pool Hopper Proxy  (Read 355551 times)
lucita777
Newbie
*
Offline Offline

Activity: 39
Merit: 0


View Profile
August 16, 2011, 01:43:04 AM
 #3601

A slice is simply an amount of time allotted to mining on a given pool, so it slices time, keeps track of the slices and uses that to switch between two eligible pools

Are slice sizes calculated based on the number of shares the pool already has, or are they all equal?
Has anyone run a simulation on some sample pools with OldDefaultScheduler and DefaultScheduler to see what the difference is in efficiency and variance/standard deviation?

I tried to run a simulation on it, but the efficiency numbers I got for those schedulers (I will share them when I get back home) were very close - the difference was smaller than the standard deviation of the average daily efficiency. That result actually surprised me, since I'd think that mining in a pool which is 20% done instead of a pool which is 2% done should make a big difference in the estimated share value. The standard deviation was roughly 2x smaller for the DefaultScheduler than for the OldDefaultScheduler.

If my numbers are wrong and there is in fact a big difference in efficiency between those schedulers, it might be interesting to simulate a hybrid scheduler like this:
a) If there is a pool whose share count is less than some_fixed_ratio * difficulty (e.g. some_fixed_ratio = 20%), use OldDefaultScheduler
b) otherwise, use DefaultScheduler
In theory it should have smaller variance than OldDefaultScheduler and better efficiency than DefaultScheduler. In practice, well, who knows Smiley
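
To make the idea concrete, here is a minimal sketch of the selection rule (the pool fields and the way the schedulers are chosen are illustrative only, not bitHopper's actual API):

Code:
SOME_FIXED_RATIO = 0.20  # hypothetical threshold, as a fraction of difficulty

def pick_scheduler(pools, difficulty):
    # pools: list of dicts with a 'shares' count for the current round
    young = [p for p in pools if p['shares'] < SOME_FIXED_RATIO * difficulty]
    if young:
        return 'OldDefaultScheduler'  # a young pool exists: hop it outright
    return 'DefaultScheduler'         # otherwise fall back to slicing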
lucita777
Newbie
*
Offline Offline

Activity: 39
Merit: 0


View Profile
August 16, 2011, 01:51:34 AM
 #3602

And a side question:
Since we cannot identify the owner of a block in any way other than checking the stats on a pool's website or by guessing, what prevents pool operators from not admitting that their pool actually found a block and keeping the income for themselves?
hawks5999
Full Member
***
Offline Offline

Activity: 168
Merit: 100



View Profile WWW
August 16, 2011, 02:14:54 AM
 #3603

And a side question:
Since we cannot identify the owner of a block in any way other than checking the stats on a pool's website or by guessing, what prevents pool operators from not admitting that their pool actually found a block and keeping the income for themselves?

a) that happens
b) those pools don't last or they produce enough for their miners that they don't miss an occasional stolen block.

organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
August 16, 2011, 03:10:04 AM
 #3604


Has anyone run a simulation on some sample pools with OldDefaultScheduler and DefaultScheduler to see what the difference is in efficiency and variance/standard deviation?


I'm in the middle of a rewrite to do a simple slice atm, as suggested by joulesbeef. I'm approaching slicing as follows:

  • 'Shares' ('shares' are just a placeholder; the value of a share is determined later) are distributed equally to the pools under the *hop point*, up to the *hop point* limit of any particular pool.
  • Equal share slicing (favouring no particular hoppable pool): divide the shares contributed as above by the number of pools they were distributed to, to get the 'value' of each share.
  • Equal time slicing: divide the shares by the number of pools they were distributed to, then multiply by a pool's fraction of the combined hash speed of all pools to get the 'value' of each share.


    Of course this is an idealised version of slicing and assumes no loss of time hopping from one pool to another. A sketch of both weightings follows below.

    If I'm wrong about this please let me know before I go any further with the rewrite!
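
To illustrate what I mean by the two weightings, a rough sketch (function and argument names are made up for this post, not taken from bitHopper):

Code:
def equal_share_value(shares_contributed, n_pools):
    # Equal share slicing: every hoppable pool below the hop point gets the same weight.
    return shares_contributed / n_pools

def equal_time_value(shares_contributed, n_pools, pool_hashrate, total_hashrate):
    # Equal time slicing: additionally weight by the pool's fraction of the combined hash speed.
    return (shares_contributed / n_pools) * (pool_hashrate / total_hashrate)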


Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
macboy80
Member
**
Offline Offline

Activity: 102
Merit: 10


View Profile
August 16, 2011, 05:29:35 AM
 #3605

@ooc : Have you considered an approach like the one I mentioned in the post below? I know c00w was planning a scheduler rewrite, but I'm not sure what his priorities are. Please check it out, as I definitely think it could be helpful given the current stats of the hoppable pools.

https://bitcointalk.org/index.php?topic=26866.msg443792#msg443792

@everyone : I haven't given up on streamlining the whole stats/skin process. I have something rudimentary, but I don't have everything quite ready yet. Smiley
lucita777
Newbie
*
Offline Offline

Activity: 39
Merit: 0


View Profile
August 16, 2011, 06:20:12 AM
 #3606

Some numbers from the simulation I mentioned above:
- The miner submitted a total of 70,000,000 shares. Pools were used with share submission rates 210, 165, 11, 55, 5 and 2.5 times higher than the tested miner's rate.
- Miner submissions were not counted towards block completion (this might have had some impact on the smallest pools).
- Difficulty: 1890362. Share value as in PPS: 2.644996e-5.
- Standard deviation and variance were calculated on the average share value for batches of 21600 shares (1 GH/s * 1 day).
- OldDefaultScheduler: Average share value: 5.453634e-5. Standard deviation: 6.54271e-5. Variance: 4.2807e-09
- DefaultScheduler: Average share value: 5.316599e-5. Standard deviation: 5.39707e-5. Variance: 2.91284e-09
- HybridScheduler (<20%: OldDefaultScheduler, >20%: DefaultScheduler): Average share value: 5.02669e-5. Standard deviation: 4.86141e-5. Variance: 2.36333e-09

In all cases the standard deviation of the "daily" average was comparable to the average share value, which I'm not exactly sure how to interpret. Does it even make sense?
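
For reference, a minimal sketch of how such per-batch ("daily") statistics can be computed (illustrative only, not the actual simulation code):

Code:
import statistics

BATCH = 21600  # shares per "day" at 1 GH/s

def daily_stats(share_values):
    # share_values: simulated payout of every submitted share, in BTC
    batches = [share_values[i:i + BATCH]
               for i in range(0, len(share_values) - BATCH + 1, BATCH)]
    daily_means = [statistics.mean(b) for b in batches]
    return (statistics.mean(daily_means),      # average share value
            statistics.stdev(daily_means),     # standard deviation quoted above
            statistics.variance(daily_means))  # variance quoted above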
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
August 16, 2011, 06:31:07 AM
 #3607

Create a new role called "mine_slice". This would be for the small pools with low hashrate (user selectable). The faster pools would stay "mine".
My idea is to run the slice scheduler inside the default scheduler. So the behaviour would be: if there were "mine" pools below threshold, you would mine them under OldDefault rules. If there were no acceptable "mine" pools, all of the "mine_slice" pools would be selected as a group, running under slice rules, but only if one or more of them were below threshold. If all of these options were exhausted, we would then resort to "backup".

I thought this was an interesting idea. At first look, by minimising shares sent to low-hashrate pools you should reduce variance a bit compared to OldDefault without being completely egalitarian (a la slice), so efficiency should be better than slice but a little lower than OldDefault. I could be wrong about this.

What makes it more interesting is that you can test it - just put all your small pools onto mine_slush, but with a 0.25 penalty so they can mine to 0.43 if they have to. Mine_slush seems to work in a backup-ish fashion - it doesn't seem to jump there much if there's a good role:mine around.

My new sim is designed to be as modular as possible so once it's done I'll try to write a sim of your suggestion. You may be waiting a while though. Try it yourself with oDS and mine_slush penalty 0.25 in the meantime.

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
August 16, 2011, 07:19:33 AM
 #3608


In all cases the standard deviation of the "daily" average was comparable to the average share value, which I'm not exactly sure how to interpret. Does it even make sense?


Wow, real difficulties, hash rates etc... I applaud you. Kudos. I'm way too lazy to do that; I just use floating-point shares to make calculations easier. It means I can only get comparative efficiencies easily - if I want real-world numbers I have to go through and change values, which slows everything down.

Anyway, if mean ≈ sd it usually indicates a wide spread of values (depending on the median). I just ran byteHopper to get some results to post, using 3 pools with hash speed ratios of 0.01, 0.1 and 0.1, and no hop-off point. The first column is over twenty weeks, the second is the daily estimate over twenty days, and the final one is over one day.

                20 weeks      20 days       1 day
slow pool:
  mean          1.673466      1.422362      3.225511
  sd            0.5091813     0.7734413     5.912161
  median        1.841146      1.51612       1.590691

second pool:
  mean          1.706857      1.747548      4.391248
  sd            0.2108296     0.668324      4.405089
  median        1.701362      1.634362      2.275186

fast pool:
  mean          1.573833      1.700709      2.903397
  sd            0.09222334    0.2546248     2.972965
  median        1.576645      1.745717      1.701818

combined:
  mean          1.646658      1.644637      3.895539
  sd            0.1879099     0.4111877     5.067339
  median        1.616157      1.587286      1.874025

As you can see, as the sampling interval decreases the standard deviation (and hence the variance) increases - slowly for the slow pools and more rapidly for the fast pools. If you want to model the variation you see as a miner, use days as the time interval. If you want as close to a theoretically correct result as possible, use more rounds - I usually use about a thousand loops of a million rounds for that.
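
A toy illustration of the effect (pure illustration, nothing to do with byteHopper): draw some exponentially distributed round lengths as a stand-in for pool luck, then compare the spread of batch means for a short sampling window against a long one.

Code:
import random, statistics

def batch_sd(samples, window):
    # standard deviation of the means of consecutive, non-overlapping windows
    means = [statistics.mean(samples[i:i + window])
             for i in range(0, len(samples) - window + 1, window)]
    return statistics.stdev(means)

random.seed(1)
rounds = [random.expovariate(1.0) for _ in range(100000)]
for window in (10, 200):  # "daily" vs "multi-week" style sampling
    print(window, batch_sd(rounds, window))
# the shorter window gives a noticeably larger standard deviation of the mean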

Now, a request: chartporn please!

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
Keninishna
Hero Member
*****
Offline Offline

Activity: 556
Merit: 500



View Profile
August 16, 2011, 08:28:10 AM
 #3609

How about connecting to multiple pools at the same time and distributing the hashrate accordingly? That is, pools with lower share counts get more MH/s. That would be the ideal hopper. I suppose the scheduling would be difficult to code.
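
Something like this for the weighting, I imagine (a rough sketch, not real bitHopper code):

Code:
def hashrate_weights(round_shares):
    # round_shares: {pool_name: shares submitted so far in that pool's current round}
    # Weight pools inversely to their share count, so younger rounds get more MH/s.
    inverse = {name: 1.0 / max(shares, 1) for name, shares in round_shares.items()}
    total = sum(inverse.values())
    return {name: w / total for name, w in inverse.items()}

# e.g. hashrate_weights({'poolA': 10000, 'poolB': 500000}) gives poolA ~98% of the hashrate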
flower1024
Legendary
*
Offline Offline

Activity: 1428
Merit: 1000


View Profile
August 16, 2011, 08:30:23 AM
 #3610

How about connecting to multiple pools at the same time and distributing the hashrate accordingly? That is, pools with lower share counts get more MH/s. That would be the ideal hopper. I suppose the scheduling would be difficult to code.

There are two schedulers which actually do this: DefaultScheduler (guess what: it's the default) and AltSliceScheduler (which was "invented" by me but integrated by someone else).

Anyway: over at hoppersden there is simulation software which shows clearly that any slicing reduces variance, but at a cost - your total earnings are lowered. The best way is to always take the pool with the lowest round share count.
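
In code the greedy rule is just this (a sketch; the field name is illustrative):

Code:
def pick_pool(hoppable_pools):
    # always mine the hoppable pool with the fewest shares in its current round
    return min(hoppable_pools, key=lambda p: p['shares'])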
cirz8
Newbie
*
Offline Offline

Activity: 42
Merit: 0


View Profile
August 16, 2011, 11:12:47 AM
 #3611

Isn't it just hopping between pools, rather than submitting shares to multiple pools simultaneously?
At least that's how I read Keninishna's question.

Quote
The best way is to always take the pool with the lowest round share count.
Are the pools' hashrates important variables here?
MaGNeT
Legendary
*
Offline Offline

Activity: 1526
Merit: 1002


Waves | 3PHMaGNeTJfqFfD4xuctgKdoxLX188QM8na


View Profile WWW
August 16, 2011, 11:26:10 AM
 #3612

Isn't it just hopping between pools, rather than submitting shares to multiple pools simultaneously?
At least that's how I read Keninishna's question.

Quote
The best way is to always take the pool with the lowest round share count.
Are the pools' hashrates important variables here?

I guess not.
Low-hashrate pools make the variance bigger, but it levels out over time.
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1007


Poor impulse control.


View Profile WWW
August 16, 2011, 11:58:55 AM
 #3613

I don't have sim figures for slice vs standard yet, but if you work it out, scheduling, number of pools and pool size choice affect efficiency and variance as follows:

  • Standard scheduling has better efficiency but higher variance than slice
  • Larger pools have lower variance than smaller pools
  • A larger number of available hoppable pools leads to higher efficiency.

So a couple of choices are:
  • Have lots of pools, including the smaller ones, and use slicing to reduce variance
  • Only mine larger pools for lower variance and use OldDefaultScheduler for best efficiency

Anyone else spot other tactics?

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
deepceleron
Legendary
*
Offline Offline

Activity: 1512
Merit: 1025



View Profile WWW
August 16, 2011, 02:01:54 PM
Last edit: August 16, 2011, 05:27:24 PM by deepceleron
 #3614

I don't have sim figures for slice vs standard yet, but if you work it out, scheduling, number of pools and pool size choice affect efficiency and variance as follows:

  • Standard scheduling has better efficiency but higher variance than slice
  • Larger pools have lower variance than smaller pools
  • A larger number of available hoppable pools leads to higher efficiency.

So a couple of choices are:
  • Have lots of pools, including the smaller ones, and use slicing to reduce variance
  • Only mine larger pools for lower variance and use OldDefaultScheduler for best efficiency

Anyone else spot other tactics?

"Standard scheduling has better efficiency but higher variance than slice" - I think that's a misguided assumption there.

You are already greatly reducing your variance by spreading your love over many different pools. Even a 100 GHash pool is going to solve a block in less than 24 hours on average; with five that size, you are averaging 5 smaller-payout blocks a day versus one pool's variance. Add in hopping a "top five" pool and your daily payments are clockwork. The old scheduler wants to submit the same percentage of shares to every pool, and the same number of shares to every pool round.
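
A quick sanity check of that block rate, using the difficulty quoted earlier in the thread:

Code:
# expected seconds per block for a pool = difficulty * 2**32 / hashrate
difficulty, hashrate = 1890362, 100e9   # 100 GH/s
print(difficulty * 2**32 / hashrate / 3600)   # ~22.6 hours, i.e. under a day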

Decreasing variance at the cost of mining efficiency shouldn't enter into any consideration you make. Instantaneous share value quickly drops from about 13x to 1x within roughly 0.43 * difficulty shares, and the +28% per-pool return estimate only holds when you start mining right at the start of a new round. A pool has a five-minute round, and now you've only put in half as many shares as you could have because of sub-optimal slicing.
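
For anyone who wants to see that drop-off, the usual model for a proportional pool puts the expected value of a share submitted after x * difficulty shares, relative to the PPS-fair value, at e^x * E1(x). Treat the snippet below as a back-of-the-envelope check, not anything taken from bitHopper:

Code:
from math import exp
from scipy.special import exp1   # exponential integral E1

def relative_share_value(x):
    # x = shares already found in the round / difficulty; 1.0 = PPS-fair value
    return exp(x) * exp1(x)

for x in (0.001, 0.01, 0.1, 0.435, 1.0):
    print(x, round(relative_share_value(x), 2))
# value falls from many times fair near the round start to ~1x around x = 0.43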

If there were a time-slicing option to be made at the expense of profitability (which, only as a side effect, would reduce variance), it would be to spread random shares around to all the pools you are checking for stats and LP pushes while no pool is below 0.43 * difficulty, to justify your bandwidth and avoid being profiled as an exploitative hopper.
ahitman
Sr. Member
****
Offline Offline

Activity: 302
Merit: 250


View Profile
August 16, 2011, 03:55:32 PM
 #3615

Just wanted to remind people that we should be giving a donation to all the hopper-friendly pools. I don't keep track of my efficiencies as well as I should, but because of the friendly pools I make more than I would on PPS.

How much are you guys donating? Do you think 2% is fair for pools that don't screw hoppers around?
r2edu
Member
**
Offline Offline

Activity: 68
Merit: 10


View Profile
August 16, 2011, 05:00:38 PM
Last edit: August 16, 2011, 05:15:08 PM by r2edu
 #3616

Is anyone having problems with Ozco? It doesn't show me any "est. earnings" and I sent about 800 shares!

Edit: which of these options works better or is necessary with the "mine_deepbit" mode? Or all three at the same time?

--startLP = Seeds the LP module with known pools.
--noLP = Turns off client side longpolling*
--p2pLP = Starts up an IRC bot to validate LP based hopping

*I don't understand what this one does.
Houseonfire
Full Member
***
Offline Offline

Activity: 129
Merit: 100


View Profile
August 16, 2011, 05:11:08 PM
 #3617

Can someone help me add a running total of all the earned BTC at the top of the page, next to the speed? And also the current system time. For example:

Code:
bcpool @71MHash | 1.000 BTC Earned  |  10:00PM

And by earned I don't mean cashed out - I mean the total earned so far.
deepceleron
Legendary
*
Offline Offline

Activity: 1512
Merit: 1025



View Profile WWW
August 16, 2011, 05:36:01 PM
 #3618

Is anyone having problems with Ozco? It doesn't show me any "est. earnings" and I sent about 800 shares!

Edit: which of these options works better or is necessary with the "mine_deepbit" mode? Or all three at the same time?

--startLP = Seeds the LP module with known pools.

This is now the default; it doesn't need to be specified on the command line any more.

--noLP = Turns off client side longpolling*
*I don't understand what this one does.

I believe "client side" means that with this option bitHopper doesn't push LPs to your miners; your miners would just request new work when their queue is empty, which would increase stale shares.

--p2pLP = Starts up an IRC bot to validate LP based hopping
I would encourage people to set up pool worker accounts on at least a dozen pools and add them as 'info' before using this option; otherwise the information you share with others via the IRC LP info-sharing technique will be sub-optimal.

cuqa
Newbie
*
Offline Offline

Activity: 40
Merit: 0


View Profile
August 16, 2011, 06:08:11 PM
 #3619

I recommend removing bitcoinpool from the rotation. It's obvious that they fake their shares as a decoy and then even demand a fee. They are also unprofessional in every way.. avoid!
paraipan
In memoriam
Legendary
*
Offline Offline

Activity: 924
Merit: 1004


Firstbits: 1pirata


View Profile WWW
August 16, 2011, 06:57:42 PM
 #3620

I recommend removing bitcoinpool from the rotation. It's obvious that they fake their shares as a decoy and then even demand a fee. They are also unprofessional in every way.. avoid!

Do you have a screen capture to back that up?

BTCitcoin: An Idea Worth Saving - Q&A with bitcoins on rugatu.com - Check my rep