Bitcoin Forum
News: Latest stable version of Bitcoin Core: 0.13.1  [Torrent].
 
Pages: « 1 ... 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 [181] 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 »
  Print  
Author Topic: bitHopper: Python Pool Hopper Proxy  (Read 332427 times)
muyoso
Member
**
Offline Offline

Activity: 84



View Profile
August 15, 2011, 08:59:47 PM
 #3601


yeah, the recent changes in bithopper basically make you a 24/7 miner at deepbit. Which is fine if you want to 24/7 mine at deepbit. Otherwise, though, it's best to just not even bother with deepbit.


About half of all of my shares are going to deepbit right now.  What would be awesome is if bithopper could check with pident about once an hour, confirm that a solved block actually came from the same pool that bithopper received the LP from, and if not, auto-adjust the penalties a little bit at a time until a certain accuracy is reached.  Of course, like most ideas, I am sure this is easier said than done.

Where do we put the lp_penalty value for a pool?  Is it in user.cfg or pools.cfg?

I drink it up!
cirz8
Jr. Member
*
Offline Offline

Activity: 42


View Profile
August 15, 2011, 09:50:13 PM
 #3602

A crazy idea just popped up while I was messing around with some bash scripts.

Would it be possible to have bh mining at multiple pools simultaneously,
splitting the getwork requests between them on an every-X-getworks basis?

If one sets a pool to 'on-every-X-getwork: 10', it would receive every 10th getwork, getting 10% of the total hash speed.

Why would anyone want something crazy like that?
To direct a constant portion of one's hashing power towards a pool that one might like to support despite hopping, or for whatever other reason.

Getting bh to re-read user.cfg, either by command or by internal file-change checking, would be a nice, but not required, addition to this.
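To make the idea concrete, here is a hypothetical sketch of the counter-based splitting; 'on-every-X-getwork' is not an actual bitHopper option, and all names here are invented:

```python
def make_getwork_router(side_pools):
    """side_pools: dict pool_name -> X (send every Xth getwork there)."""
    state = {"n": 0}

    def route(current_pool):
        state["n"] += 1
        for name, every_x in side_pools.items():
            if state["n"] % every_x == 0:
                return name            # this getwork goes to the side pool
        return current_pool            # otherwise, wherever the hopper is

    return route

route = make_getwork_router({"supported_pool": 10})
targets = [route("hopped_pool") for _ in range(20)]
# 2 of 20 getworks (10%) go to supported_pool
```

A real implementation would live in bithopper's getwork handler, but the share-accounting idea is just this modulo counter.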


Another idea came up while writing this post, but it can be shot down quickly if someone has the data:
Would it be favourable to mine, for example, two sub-10%-share pools simultaneously (things change if there are more?), instead of mining one until it reaches the threshold and then jumping to one of the others if it/they are still below threshold? Or is it better to try to get lucky as early in a round as possible?

I'm not quite sure how the hopping is determined atm.
Will bh constantly try to find the pool with the lowest share count and then jump to it?
Or will it stay with the pool it has jumped to until the threshold is reached, then jump to the pool with the lowest share count?
I'm guessing "--altslicesize" has a part in this, but I don't understand what a "slice" is. Is it shares?

(The more I add to this post, the more confusing it gets, so I will just press "Post" now)  Huh

Mandatory?  123ABCcirz8CcieVh9UwThEX2vkoJF33Te
GenTarkin
Legendary
*
Offline Offline

Activity: 1918


View Profile
August 15, 2011, 10:48:54 PM
 #3603


I'm guessing "--altslicesize" has a part in this, but I don't understand what a "slice" is, is it shares?

(The more I add to this post, the more confusing it gets, so I will just press "Post" now)  Huh

A slice is simply an amount of time allotted to mining on a given pool. So it slices time: bh keeps track of slices and uses them to switch between two eligible pools.
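As a rough illustration of that time-based slicing (my sketch, not bitHopper's actual scheduler code; `slice_size` here stands in for whatever --altslicesize controls):

```python
import itertools

def slice_schedule(eligible_pools, slice_size, total_time):
    """Return (pool, start_time) pairs for a simple rotating time slice."""
    schedule = []
    rotation = itertools.cycle(eligible_pools)
    t = 0
    while t < total_time:
        schedule.append((next(rotation), t))   # mine this pool for one slice
        t += slice_size
    return schedule

plan = slice_schedule(["poolA", "poolB"], slice_size=60, total_time=300)
# poolA and poolB alternate every 60 seconds
```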

GenTarkin's MOD Kncminer Titan custom firmware! v1.0.4! <--- CLICK HERE
Donations: bitcoin- 1Px71mWNQNKW19xuARqrmnbcem1dXqJ3At || litecoin- LYXrLis3ik6TRn8tdvzAyJ264DRvwYVeEw
muyoso
Member
**
Offline Offline

Activity: 84



View Profile
August 16, 2011, 01:25:32 AM
 #3604

What would be the best way to go about tweaking the lp_penalty so that bithopper makes the most accurate guess it can about who solved a block?  Does anyone have a semi-simple method?

I drink it up!
lucita777
Jr. Member
*
Offline Offline

Activity: 39


View Profile
August 16, 2011, 01:43:04 AM
 #3605

A slice is simply an amount of time allotted to mining on a given pool. So it slices time: bh keeps track of slices and uses them to switch between two eligible pools.

Are slice sizes calculated based on the number of shares a pool already has, or are they all equal?
Did anyone run a simulation on some sample pools with OldDefaultScheduler and DefaultScheduler to see what the difference is in efficiency and variance/standard deviation?

I tried to run a simulation on it, but the efficiency numbers I got (I will share them when I get back home) for those schedulers were very close, and the difference was smaller than the standard deviation of the average daily efficiency. That result actually surprised me, since I'd think that mining in a pool which is 20% done instead of one which is 2% done should make a big difference in the estimated share value. The deviation was something like 2x smaller for the DefaultScheduler compared to OldDefaultScheduler.

If my numbers are wrong and there is in fact a big difference in efficiency between those schedulers, it might be interesting to simulate a hybrid scheduler like this:
a) If there is a pool with a share count less than some_fixed_ratio * difficulty (e.g. some_fixed_ratio = 20%), use OldDefaultScheduler
b) Otherwise, use DefaultScheduler
In theory it should have smaller variance than OldDefaultScheduler and better efficiency than DefaultScheduler. In practice, well, who knows Smiley
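The a)/b) rule could look something like this in Python; `pick_scheduler` and the data layout are illustrative only, not bitHopper code:

```python
def pick_scheduler(pool_shares, difficulty, ratio=0.20):
    """pool_shares: dict pool -> shares this round. Returns a scheduler name."""
    # a) some pool is still early in its round: hop aggressively
    if any(shares < ratio * difficulty for shares in pool_shares.values()):
        return "OldDefaultScheduler"
    # b) otherwise fall back to the lower-variance default behaviour
    return "DefaultScheduler"

pick_scheduler({"a": 100_000, "b": 900_000}, difficulty=1_890_362)
# -> "OldDefaultScheduler" (pool "a" is under 20% of difficulty)
```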
lucita777
Jr. Member
*
Offline Offline

Activity: 39


View Profile
August 16, 2011, 01:51:34 AM
 #3606

And a side question:
Since we cannot identify the finder of a block in any way other than checking the stats on a pool's website or by guessing, what prevents pool operators from simply not admitting that their pool found a block and taking the income for themselves?
hawks5999
Full Member
***
Offline Offline

Activity: 168



View Profile WWW
August 16, 2011, 02:14:54 AM
 #3607

And a side question:
Since we cannot identify the finder of a block in any way other than checking the stats on a pool's website or by guessing, what prevents pool operators from simply not admitting that their pool found a block and taking the income for themselves?

a) That happens.
b) Those pools don't last, or they produce enough for their miners that they don't miss an occasional stolen block.

organofcorti
Donator
Legendary
*
Offline Offline

Activity: 1946


Poor impulse control.


View Profile WWW
August 16, 2011, 03:10:04 AM
 #3608


Did anyone run simulation on some sample pools with OldDefaultScheduler and DefaultScheduler to see what is the different in efficiency and variation/standard deviation ?


I'm in the middle of a rewrite to do a simple slice atm, suggested by joulesbeef. I'm approaching slicing as follows:

  • 'Shares' ('shares' are just a placeholder; the value of a share is determined later) are distributed equally to pools under the *hop point*, up until any particular pool reaches its *hop point* limit.
  • Equal-share slicing (favouring no particular hoppable pool): divide the shares contributed as above by the number of pools to which they were distributed to get the 'value' of each share.
  • Equal-time slicing: divide the shares by the number of pools to which they were distributed, then multiply by a pool's fraction of the combined hash speed of all pools to get the 'value' of each share.


    Of course this is an idealised version of slicing and assumes no loss of time hopping from one pool to another.

    If I'm wrong about this please let me know before I go any further with the rewrite!
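For what it's worth, here is one possible reading of the two 'value' rules above as a Python sketch; the function name and data layout are mine, not the sim's, so treat it as a discussion aid rather than the rewrite itself:

```python
def share_values(contributed, hashrates):
    """contributed: shares sent while slicing; hashrates: pool -> hash speed.

    Returns (equal_share, equal_time): per-pool share 'value' under the
    two slicing rules described above.
    """
    n = len(hashrates)
    total_speed = sum(hashrates.values())
    # equal-share slicing: contributed shares divided evenly across pools
    equal_share = {p: contributed / n for p in hashrates}
    # equal-time slicing: additionally weighted by each pool's fraction
    # of the combined hash speed
    equal_time = {p: (contributed / n) * (hashrates[p] / total_speed)
                  for p in hashrates}
    return equal_share, equal_time

eq_s, eq_t = share_values(100, {"a": 1.0, "b": 3.0})
# eq_s: both pools 50.0; eq_t: "a" gets 12.5, "b" gets 37.5
```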


Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
macboy80
Member
**
Offline Offline

Activity: 102


View Profile
August 16, 2011, 05:29:35 AM
 #3609

@ooc : Had you considered an approach like I mentioned in the below post? I know c00w was planning a scheduler re-write, however I'm not sure what his priorities are. Please check it out, as I definitely think it could be helpful given the current stats of hoppable pools.

https://bitcointalk.org/index.php?topic=26866.msg443792#msg443792

@everyone : I haven't given up on streamlining the whole stats/skin process. I have something rudimentary, however I still don't have everything quite ready. Smiley
lucita777
Jr. Member
*
Offline Offline

Activity: 39


View Profile
August 16, 2011, 06:20:12 AM
 #3610

Some numbers from the simulation I mentioned above:
- The miner submitted a total of 70,000,000 shares. Pools had share submission ratios 210, 165, 11, 55, 5 and 2.5 times higher than the tested miner's.
- Miner submissions were not counted towards block completion (this might have had some impact on the smallest pools).
- Difficulty: 1890362. Share value as in PPS: 2.644996e-5.
- Standard deviation and variance calculated on the average share value for batches of 21600 shares (1 GH/s * 1 day).
- OldDefaultScheduler: average share value 5.453634e-5, standard deviation 6.54271e-5, variance 4.2807e-09
- DefaultScheduler: average share value 5.316599e-5, standard deviation 5.39707e-5, variance 2.91284e-09
- HybridScheduler (<20% OldDefaultScheduler, >20% DefaultScheduler): average share value 5.02669e-5, standard deviation 4.86141e-5, variance 2.36333e-09

In all cases the standard deviation of the "daily" average was comparable to the average share value, which I'm not exactly sure how to interpret. Does it even make any sense?
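For reference, the batch statistic described above (average share value per 21600-share batch, then mean and standard deviation across batches) can be sketched like this; `daily_batches` is my name, not the sim's:

```python
import statistics

def daily_batches(share_values, batch=21600):
    """Mean and standard deviation of per-batch average share value."""
    # split the share value series into full batches of `batch` shares
    batches = [share_values[i:i + batch]
               for i in range(0, len(share_values) - batch + 1, batch)]
    means = [sum(b) / len(b) for b in batches]
    return statistics.mean(means), statistics.stdev(means)
```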
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 1946


Poor impulse control.


View Profile WWW
August 16, 2011, 06:31:07 AM
 #3611

Create a new role called "mine_slice". This would be for the small pools with low hashrate (user selectable). The faster pools would stay "mine".
My idea is to run the slice scheduler inside of the default scheduler. So the behavior would be: If there were "mine" pools below threshold, you would be mining them under OldDefault rules. If there were no acceptable "mine" pools, all of the "mine_slice" pools would be selected running under slice rules as a group, but only if there was one or more below threshold. If all of these options were exhausted, we would then resort to "backup".

I thought this was an interesting idea. At first look, by minimising low-pool shares you should reduce variance a bit compared to OldDefault without being completely egalitarian (a la slice), so efficiency should be improved compared with slice, but a little less than OldDefault. I could be wrong about this.

What makes it more interesting is that you can test it: just put all your small pools on mine_slush but with a .25 penalty so they can mine to .43 if they have to. Mine_slush seems to work in a backup-ish fashion; it doesn't seem to jump there much if there's a good role:mine around.

My new sim is designed to be as modular as possible, so once it's done I'll try to write a sim of your suggestion. You may be waiting a while though. Try it yourself with oDS and mine_slush penalty 0.25 in the meantime.
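A minimal sketch of the nested behaviour being discussed; the role names follow the post, but `choose_target`, the data layout and the 0.435 threshold default are my assumptions:

```python
def choose_target(pools, difficulty, threshold=0.435):
    """pools: list of dicts with 'name', 'role', 'shares' (this round)."""
    def under(p):
        return p["shares"] < threshold * difficulty

    # 1. role:mine pools below threshold, OldDefault style (fewest shares wins)
    mine = [p for p in pools if p["role"] == "mine" and under(p)]
    if mine:
        return min(mine, key=lambda p: p["shares"])["name"]

    # 2. otherwise the whole mine_slice group, run under slice rules as one
    #    unit, but only if at least one member is still below threshold
    slice_group = [p for p in pools if p["role"] == "mine_slice"]
    if any(under(p) for p in slice_group):
        return [p["name"] for p in slice_group]

    # 3. nothing hoppable left
    return "backup"
```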

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 1946


Poor impulse control.


View Profile WWW
August 16, 2011, 07:19:33 AM
 #3612


In all cases the standard deviation of "daily" average was comparable to the to the average share value, which I'm not exactly sure how to interpret. Does it even make any sense?


Wow, real difficulties, hash rates etc.... I applaud you. Kudos. I'm way too lazy to do that; I just use floating point shares to make calculations easier. It means I can only get comparative efficiencies easily, and if I want real-world numbers I have to go through and change values, which slows everything down.

Anyway, if mean ~ sd it usually indicates a wide spread of values (depending on the median). I just ran byteHopper to get some results to post, using three pools with hash speed ratios of 0.01, 0.1 and 0.1 and no hop-off point. The first column is over twenty weeks, the second a daily estimate over twenty days, and the third over one day.

slow pool:
mean =   1.673466    1.422362    3.225511
sd =     0.5091813   0.7734413   5.912161
median = 1.841146    1.51612     1.590691

mean =   1.706857    1.747548    4.391248
sd =     0.2108296   0.668324    4.405089
median = 1.701362    1.634362    2.275186

fast pool:
mean =   1.573833    1.700709    2.903397
sd =     0.09222334  0.2546248   2.972965
median = 1.576645    1.745717    1.701818

combined:
mean =   1.646658    1.644637    3.895539
sd =     0.1879099   0.4111877   5.067339
median = 1.616157    1.587286    1.874025

As you can see, as sampling intervals decrease, the standard deviation (and hence variance) increases: slowly for the slow pools and more rapidly for the fast pools. If you want to model the variation you see as a miner, use days as the time interval. If you want as close to a theoretically correct result as possible, use more. I usually use about a thousand loops of a million rounds for that.

Now, a request: chartporn please!

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
Keninishna
Hero Member
*****
Offline Offline

Activity: 551



View Profile WWW
August 16, 2011, 08:28:10 AM
 #3613

How about connecting to multiple pools at the same time and distributing the hashrate accordingly? That is, pools with lower share counts get more MH/s. That would be the ideal hopper. I suppose the scheduling would be difficult to code.
flower1024
Hero Member
*****
Offline Offline

Activity: 854


luck is just a share away


View Profile
August 16, 2011, 08:30:23 AM
 #3614

How about connecting to multiple pools at the same time and distributing the hashrate accordingly? That is, pools with lower share counts get more MH/s. That would be the ideal hopper. I suppose the scheduling would be difficult to code.

There are two schedulers which actually do this: DefaultScheduler (guess what: it's the default) and AltSliceScheduler (which was "invented" by me but integrated by some other guy).

Anyway: at hoppersden there is simulation software which shows clearly that any slicing reduces variance, but at a cost: your total earnings are lowered. The best way is to always take the pool with the lowest round share count.
cirz8
Jr. Member
*
Offline Offline

Activity: 42


View Profile
August 16, 2011, 11:12:47 AM
 #3615

Isn't it just hopping between pools, rather than working on multiple pools simultaneously?
At least that is how I read Keninishna's question.

Quote
best way is to always take the pool with lowest round shares.
Are the pools' hashrates important variables here?

Mandatory?  123ABCcirz8CcieVh9UwThEX2vkoJF33Te
MaGNeT
Legendary
*
Offline Offline

Activity: 1050


Founder of Orlycoin | O RLY? YA RLY!


View Profile WWW
August 16, 2011, 11:26:10 AM
 #3616

Isn't it just hopping between pools, rather than working on multiple pools simultaneously?
At least that is how I read Keninishna's question.

Quote
best way is to always take the pool with lowest round shares.
Are the pools' hashrates important variables here?

I guess they aren't.
Low-hashrate pools make variance bigger, but it levels out over time.
organofcorti
Donator
Legendary
*
Offline Offline

Activity: 1946


Poor impulse control.


View Profile WWW
August 16, 2011, 11:58:55 AM
 #3617

I don't have sim figures for slice vs standard yet, but if you work it out, scheduling, the number of pools, and pool size affect efficiency and variance as follows:

  • Standard scheduling has better efficiency but higher variance than slice
  • Larger pools have lower variance than smaller pools
  • A larger number of available hoppable pools leads to higher efficiency.

So a couple of choices are:
  • Use lots of pools, including the smaller ones, and use slicing to reduce variance
  • Only mine larger pools for lower variance and use OldDefaultScheduler for best efficiency

Anyone else spot other tactics?

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
deepceleron
Legendary
*
Offline Offline

Activity: 1470



View Profile WWW
August 16, 2011, 02:01:54 PM
 #3618

I don't have sim figures for slice vs standard yet, but if you work it out, scheduling, the number of pools, and pool size affect efficiency and variance as follows:

  • Standard scheduling has better efficiency but higher variance than slice
  • Larger pools have lower variance than smaller pools
  • A larger number of available hoppable pools leads to higher efficiency.

So a couple of choices are:
  • Use lots of pools, including the smaller ones, and use slicing to reduce variance
  • Only mine larger pools for lower variance and use OldDefaultScheduler for best efficiency

Anyone else spot other tactics?

"Standard scheduling has better efficiency but higher variance than slice" - I think that's a misguided assumption there.

You are already greatly reducing your variance by spreading your love over many different pools. Even a 100Ghash pool is going to be solving a block in less than 24 hours on average; five that size, and you are averaging 5 smaller-payout blocks a day vs. one pool's variance. Add in hopping a "top five" pool and your daily payments are clockwork. The old scheduler wants to submit the same percentage of shares to every pool, and the same number of shares to every pool round.

Decreasing variance at the cost of mining efficiency shouldn't enter into any consideration you make. Instantaneous share value quickly drops from 13x to 1x within difficulty N/.43 shares, and a +28% per pool return estimate only happens when you start mining right at new round start. A pool has a five minute round, and now you've only put in half as many shares as you could have by sub-optimal slicing.

If there was a time-slicing option to be made at the expense of profitability (which, only as a side effect, would reduce variance), it would be to spread random shares around to all the pools you are checking for stats and LP pushes when no pool is <N/.43, to justify your bandwidth and avoid being profiled as an exploitative hopper.
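The hop-off point can be sanity-checked with the standard proportional-pool model (this is a general result, not bitHopper code): a share submitted when the round already holds x * difficulty shares has expected value exp(x) * E1(x) relative to a fair PPS share, and that ratio falls to about 1.0 near x = 0.43. E1 is evaluated here by brute-force numerical integration so only the stdlib is needed:

```python
import math

def share_value_ratio(x, upper=40.0, steps=200_000):
    """Expected value of a share at round position x, relative to PPS.

    Computes the integral of exp(-(n - x)) / n for n in [x, x + upper]
    with the midpoint rule; the tail beyond x + 40 is negligible.
    """
    h = upper / steps
    total = 0.0
    for i in range(steps):
        n = x + (i + 0.5) * h
        total += math.exp(-(n - x)) / n
    return total * h

share_value_ratio(0.01)    # early in the round: several times the PPS value
share_value_ratio(0.435)   # close to 1.0: past here, hopping gains nothing
```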

ahitman
Full Member
***
Offline Offline

Activity: 126


View Profile WWW
August 16, 2011, 03:55:32 PM
 #3619

Just wanted to remind people that we should be giving a donation to all the pool-hopper-friendly pools. I don't keep track of my efficiencies as well as I should, but because of the friendly pools I make more than I would on PPS.

How much are you guys donating? Do you think 2% is fair for pools that don't screw hoppers around?
r2edu
Member
**
Offline Offline

Activity: 68


View Profile
August 16, 2011, 05:00:38 PM
 #3620

Anyone with problems with Ozco? It doesn't show me any "est. earnings" and I sent about 800 shares!

Edit: which of these commands works better, or is necessary, with the "mine_deepbit" mode? Or all three at the same time?

--startLP = Seeds the LP module with known pools.
--noLP = Turns off client-side longpolling*
--p2pLP = Starts up an IRC bot to validate LP-based hopping

*I don't understand what this one does..
