Sukrim
Legendary
Offline
Activity: 2618
Merit: 1007
|
|
August 14, 2011, 01:35:12 PM |
|
...I meant actually selling the mined coins for Bitcoins, not just determining if it is worth mining (which is trivial, I only need to figure out which measure to take - 24h average, daily low or last sell/buy).
|
|
|
|
licutis
Newbie
Offline
Activity: 38
Merit: 0
|
|
August 14, 2011, 04:53:22 PM |
|
Urgh, not these scamcoins! Is mine_ixc really implemented? BTW, we need a difficulty for these too... IXC difficulty: http://bitcoinx.com/ixcoin/
|
|
|
|
muyoso
Member
Offline
Activity: 84
Merit: 10
|
|
August 14, 2011, 06:19:13 PM |
|
they paid out... once their json stats are fixed to reality I'll begin mining them again.
They aren't going to fix their JSON stats. They are doing it as an anti-hopping measure. I asked about it on their IRC and was told that.
|
I drink it up!
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
August 14, 2011, 06:27:16 PM |
|
Hop off thresholds revisited

A while ago @organofcorti claimed that our beloved hop-off threshold of 43% does not make much sense when hopping multiple pools. Unsure about this claim (like so many others), I implemented a (very crude) pool hopping simulator myself and ran some simulations, the results of which I would like to share with you.

All these simulations run one miner at approximately 1 GH/s for about 1 year on what is now called the OldDefaultScheduler, meaning that the miner always jumps to the pool with the fewest shares. There is also a threshold variable (e.g. t = 0.43): if there are no pools with less than t * difficulty shares, the miner hops to a fair pool. The simulation does not use slicing, and the pool speeds stay constant.

The standard case: one proportional pool

At first I ran the simulation with one proportional and one fair pool. This is the case @Raulo discussed in his original paper. The simulated pool was running at about 500 GH/s. The result shows the expected peak at about 0.43.

Two hoppable pools

Next I added another proportional pool. The simulated pools were running at about 210 GH/s and 165 GH/s. You can still see a peak somewhere around 0.40.

Real world simulation

Finally, I ran the simulation with six pools that were being hopped at the time I started writing the simulator:

- polmine, ~ 210 GH/s
- MtRed, ~ 165 GH/s
- triple, ~ 110 GH/s
- ozco, ~ 55 GH/s
- rfc, ~ 5 GH/s
- btcmonkey, ~ 2.5 GH/s
In this result the peak is gone, and from 0.40 up to at least 1.8 there is no visible drop in efficiency!

Conclusion

The results of my simulations agree with the findings of @organofcorti (and others). When using multiple pools for hopping (which is presently the common case), there is no need for a hop-off threshold of 0.43. However, regarding efficiency, there does not seem to be any benefit in using a higher threshold either. Choosing thresholds randomly when pool hopping makes it impossible for pool operators to identify you as a hopper by your hopping off at 0.43. I therefore suggest implementing random threshold selection from the range [0.5, 1.5] when multiple pools are being hopped.
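The scheduler rule described above (always jump to the pool with the fewest shares; fall back to a fair pool when nothing is below the threshold) can be sketched in a few lines. This is a minimal illustration of the stated rule, not bitHopper's actual code; the pool names and share counts in the example are made up:

```python
def pick_pool(pool_shares, difficulty, threshold=0.43):
    """Return the name of the pool with the fewest shares this round,
    if any pool is below threshold * difficulty; otherwise fall back
    to a fair backup pool (sketch of the OldDefaultScheduler rule)."""
    hoppable = {name: shares for name, shares in pool_shares.items()
                if shares < threshold * difficulty}
    if hoppable:
        return min(hoppable, key=hoppable.get)  # least shares wins
    return "fair_backup"

# Example with made-up numbers:
pools = {"polmine": 200_000, "MtRed": 900_000, "triple": 2_500_000}
print(pick_pool(pools, 1_690_906))  # → polmine
```

With a threshold of 0.43 and this difficulty, only polmine is below the cutoff, so the miner stays there; raise every pool above the cutoff and the function returns the backup instead.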
|
|
|
|
ahitman
|
|
August 14, 2011, 07:16:17 PM |
|
Does the efficiency stabilize because we are not going to any pool that is less than 43%? In your simulations can you say how much time the hopper would spend at the PPS pool (or how much time there was spent with no pool being less than 43%)? If the amount of time spent at PPS is a tiny percentage (because of all the available prop pools) then the efficiency would seem to be high but not because you hop away from a pool, but because you are always hopping TO a pool that has low shares.
|
|
|
|
burp
Member
Offline
Activity: 98
Merit: 10
|
|
August 14, 2011, 07:26:33 PM |
|
Does the efficiency stabilize because we are not going to any pool that is less than 43%? In your simulations can you say how much time the hopper would spend at the PPS pool (or how much time there was spent with no pool being less than 43%)? If the amount of time spent at PPS is a tiny percentage (because of all the available prop pools) then the efficiency would seem to be high but not because you hop away from a pool, but because you are always hopping TO a pool that has low shares.
Sure, I guess this is simply the reason, and it coincides with my observations so far. My backup pool almost never needs to be used, especially these days, when a huge percentage of the whole network's hashing power is in proportional pools and you can hop deepbit. With enough pools it doesn't matter whether you set the threshold to 0.43 or 1 or infinity (in which case the backup pool will never be used): there is nearly always a pool with less than 43%. Note that the efficiency still drops sharply below 43%.
|
|
|
|
deepceleron
Legendary
Offline
Activity: 1512
Merit: 1036
|
|
August 14, 2011, 08:02:41 PM |
|
Hop off thresholds revisited

Real world simulation

Finally, I ran the simulation with six pools that were being hopped at the time I started writing the simulator:

- polmine, ~ 210 GH/s
- MtRed, ~ 165 GH/s
- triple, ~ 110 GH/s
- ozco, ~ 55 GH/s
- rfc, ~ 5 GH/s
- btcmonkey, ~ 2.5 GH/s
In this result the peak is gone, and from 0.40 up to at least 1.8 there is no visible drop in efficiency!

Conclusion

The results of my simulations agree with the findings of @organofcorti (and others). When using multiple pools for hopping (which is presently the common case), there is no need for a hop-off threshold of 0.43. However, regarding efficiency, there does not seem to be any benefit in using a higher threshold either. Choosing thresholds randomly when pool hopping makes it impossible for pool operators to identify you as a hopper by your hopping off at 0.43. I therefore suggest implementing random threshold selection from the range [0.5, 1.5] when multiple pools are being hopped.

If you disable backup pooling or set the backup threshold really high, you will essentially be hopping at a random threshold anyway: a new round could start at a new pool when your current least-share pool is at 10% or 300%.

I assume you are giving each pool a table of random block-find times (randomly generate a high-precision round percentile from 0% to 100%, turn that percentile into the number of individual hashes required using the correct math, then turn that into times using the hashrate), and are then simulating the switching and the share percentages earned.

I too am curious, in your simulation, how much time is being spent mining beyond the "optimal" range, say if the threshold were 1.8? Could you run a histogram with difficulty switch points for your six-pool example, so we can see visually how long we spend in proportional rounds? I'd be interested in how long they actually tend to go.

For more interesting real-world factors to model, the analysis could be biased with a 'pool hop-in' time delay (like jumping 5 minutes after a block find), and a 'wrong pool hop' error rate reflecting what people are seeing with early block-detection methods, when they get incorrect block-finding information because they aren't getting it from the pool that actually found the block, and jump to a false positive.
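The round-length generation described here (random percentile → number of hashes → time) is inverse-CDF sampling. A sketch under the standard assumption that round lengths are exponentially distributed with mean equal to the difficulty (in shares), and using the usual 2^32 hashes-per-share convention; neither assumption is stated in the post itself:

```python
import math
import random

def round_length_shares(difficulty, rng=random.random):
    # Draw a round percentile u in [0, 1) and invert the exponential CDF
    # F(n) = 1 - exp(-n / difficulty) to get a round length in shares.
    u = rng()
    return -difficulty * math.log(1.0 - u)

def round_duration_seconds(difficulty, hashrate_hps):
    # One share is 2^32 hashes on average; convert shares to wall time.
    shares = round_length_shares(difficulty)
    return shares * 2**32 / hashrate_hps
```

Averaged over many draws, `round_length_shares` returns the difficulty, which is what makes a table of such rounds usable for simulating hopping.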
|
|
|
|
ryouiki
Newbie
Offline
Activity: 33
Merit: 0
|
|
August 14, 2011, 08:48:51 PM |
|
What the graphs show is that, for a large number of alternative pools, what you gain in efficiency per share by hopping earlier, you lose on the PPS backup. The efficiency remains constant because, as you increase the hop-off point, you no longer need to hop to backup and settle for 1.0 efficiency. Did your sim include a PPS backup, or just look at per-share efficiency?
I'd love you to post your graphs so we can get a side by side comparison.
My simulator has a PPS backup pool that pays a constant amount of BTC for each share. My interpretation: the PPS pool is barely touched when we have more proportional pools (so the threshold becomes less meaningful; however, this does not imply that a share from a >43% proportional pool has a greater profit expectation than PPS). I think we have the same shaped graphs.
|
|
|
|
r2edu
Member
Offline
Activity: 68
Merit: 10
|
|
August 14, 2011, 09:27:36 PM |
|
I'm getting good results with "mine_deepbit" in: bitclockers / polmine / bitcoinpool (testing) / bclc (testing)
but not with the pool that gives that mode its name, hehe
BTW, is Digbtc faking their stats now, or is it down?
***
About the previous Hop Thresholds conversation:
I don't really understand the simulation you're running, but doing my own quick numbers: the shorter the round, the more valuable the share, up to 1 * difficulty; after that it devalues with every share (vs. PPS). BUT before that (<100%), every share you submit helps maintain the value of all of the previous shares. Now I'm testing with lower penalties (0.9/0.8/0.7) to see some real-world results. If I get something positive, I'll post it.
***
Edit: with version 0.1.7.2-14
|
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
August 14, 2011, 09:37:36 PM |
|
In your simulations can you say how much time the hopper would spend at the PPS pool (or how much time there was spent with no pool being less than 43%)? If the amount of time spent at PPS is a tiny percentage (because of all the available prop pools) then the efficiency would seem to be high but not because you hop away from a pool, but because you are always hopping TO a pool that has low shares.
I ran a simulation at threshold 1.5 using the "real world" parameters. Out of a total of 2750 BTC earned, only 0.03 BTC (essentially 0) came from the fair pool. This is basically just a consequence of the probability of all six pools being at 1.5 * difficulty being very low. I ran the simulation again, recording how much time was spent at pools below/above a 0.43 threshold:

below 0.43: ~ 92%
above 0.43: ~ 8%
at fair pool: < 0.1%
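The "probability of all six pools being at 1.5 * difficulty is very low" point can be ballparked. Under the simplifying assumption (mine, not bb's) that each pool's current round age is exponential with mean difficulty shares and that the pools are independent, the chance that one pool is past t * difficulty is e^-t:

```python
import math

t = 1.5       # hop-off threshold
n_pools = 6

p_one_above = math.exp(-t)           # chance a single pool exceeds t * difficulty
p_all_above = p_one_above ** n_pools # all six at once
print(p_all_above)                   # ≈ 1.2e-4
```

That back-of-envelope figure, roughly 0.012% of the time, is consistent with the simulated "< 0.1% at the fair pool" above, though the hopper's own hashrate and pool correlations are ignored here.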
|
|
|
|
abracadabra
|
|
August 15, 2011, 12:43:52 AM |
|
Please add a note that even if you have an x64 processor, you need to install 32-bit Python.
64-bit Python + 32-bit SSL doesn't work.
Thanks!
|
|
|
|
r2edu
Member
Offline
Activity: 68
Merit: 10
|
|
August 15, 2011, 12:53:11 AM |
|
@c00w (and all the collaborators): I think everything is working flawlessly!!! -v0.1.7.2-14-
(except deepbit: the first time it finds a block it mines OK, but every second time the workers just die... I mean on every restart of the bh; I've disabled it because of that problem)
I'm a little confused by the lp_penalty... which pool do I have to give more time: the one with more false positives, or the others? If I have 4-5 pools on "mine_deepbit", do I have to give them all the same delay, or in a certain order?
Thanks!!
Edit: I'm looking at the bclc stats and they match the bh stats perfectly, and it's set to mine_deepbit... is this OK?
|
|
|
|
wtfman
Member
Offline
Activity: 118
Merit: 10
BTCServ Operator
|
|
August 15, 2011, 01:00:04 AM Last edit: August 15, 2011, 01:16:34 AM by wtfman |
|
Hey, I am the operator of btcserv.net and wanted to let you know that JSON stats are now available at http://btcserv.net/json/pool/ ... would be cool if the dev team could update/add whatever is necessary so the data is fetched from there. Greetings.
|
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
August 15, 2011, 01:19:09 AM |
|
I assume you are giving each pool a table of random block find times (randomly generate a high-precision round percentile 0% to 100%, turn that percentile into number of individual hashes required using correct math, turn that into times using hashrate), and then are simulating the switching and share percentages earned.
No, I am using Meni's formula.

For more interesting real-world factors to model, analysis could be biased with a 'pool hop-in' time delay (like jumping 5 minutes after block find), and a 'wrong pool hop error rate' that reflects what people are seeing with early block-detection methods, when they get incorrect blockfinding because they aren't getting info from the pool that actually found a block, and jump to a false-positive.

I am guessing all of these simply reduce efficiency.
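For readers wondering what "Meni's formula" refers to: the standard result is that a proportional-pool share submitted when the round already has x * difficulty shares has expected value proportional to e^x * E1(x), where E1 is the exponential integral, and the famous 43% hop-off point is where this factor drops below 1. The sketch below is my reconstruction of that result, not bb's code; E1 is computed numerically with only the standard library:

```python
import math

def exp1(x, steps=200000):
    # E1(x) = integral_x^inf e^-t / t dt.  Substituting t = x / s gives
    # E1(x) = integral_0^1 e^(-x/s) / s ds, evaluated by the midpoint rule.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        total += math.exp(-x / s) / s * h
    return total

def share_value_factor(x):
    # Expected value of a share at round position x, relative to a fair share.
    return math.exp(x) * exp1(x)

# The factor crosses 1 near x ≈ 0.435 -- the famous 43% threshold.
print(share_value_factor(0.40))  # slightly above 1: still better than fair
print(share_value_factor(0.50))  # below 1: worse than a fair pool
```

Solving e^x * E1(x) = 1 numerically gives x ≈ 0.435, which is where the traditional 0.43 hop-off threshold comes from.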
|
|
|
|
deepceleron
Legendary
Offline
Activity: 1512
Merit: 1036
|
|
August 15, 2011, 01:53:55 AM |
|
I assume you are giving each pool a table of random block find times (randomly generate a high-precision round percentile 0% to 100%, turn that percentile into number of individual hashes required using correct math, turn that into times using hashrate), and then are simulating the switching and share percentages earned.
No. I am using Meni's formula.

One situation this does not address is the hopping effect itself. If 100 GH/s hops on and off 100 GH/s pools, the early shares will go by twice as fast, meaning there will be less cheating time than predicted. All pools will spend more time in the > 0.43 * difficulty range, lowering the chance of there being a pool to hop to with share earnings higher than backup.
|
|
|
|
muyoso
Member
Offline
Activity: 84
Merit: 10
|
|
August 15, 2011, 02:15:35 AM |
|
Anyone able to use mine_deepbit with bitcoinpool and not have it jump when the JSON resets every few minutes?
|
I drink it up!
|
|
|
bb
Member
Offline
Activity: 84
Merit: 10
|
|
August 15, 2011, 02:16:25 AM |
|
I assume you are giving each pool a table of random block find times (randomly generate a high-precision round percentile 0% to 100%, turn that percentile into number of individual hashes required using correct math, turn that into times using hashrate), and then are simulating the switching and share percentages earned.
No. I am using Meni's formula.

One situation this does not address is the hopping effect itself. If 100 GH/s hops on and off 100 GH/s pools, the early shares will go by twice as fast, meaning there will be less cheating time than predicted. All pools will spend more time in the > 0.43 * difficulty range, lowering the chance of there being a pool to hop to with share earnings higher than backup.

This should just shift pools (and speeds) around a bit, but doesn't change the outcome.
|
|
|
|
joulesbeef
Sr. Member
Offline
Activity: 476
Merit: 250
moOo
|
|
August 15, 2011, 02:18:35 AM |
|
Noticed that btcworld.de updates stats painfully slowly on their German page, which keeps giving us that API-disable error, but they seem to update in near real time on their English page. Anyway, they seem to choose the language via JavaScript... is there any way to tell the regex to choose English first? http://btcworld.de/statistics
|
mooo for rent
|
|
|
joulesbeef
Sr. Member
Offline
Activity: 476
Merit: 250
moOo
|
|
August 15, 2011, 02:30:02 AM Last edit: August 15, 2011, 03:03:31 AM by joulesbeef |
|
hey, i am the operator of btcserv.net and wanted to let you know that now json stats are available @ http://btcserv.net/json/pool/ .. would be cool if dev team could update/add whatever necessary so the data is fetched from there. greetings.

Greetings and welcome. Thanks for stopping by, we'll get that fixed right away. Thanks for adding JSON.

So this:

[btcserv]
name: BTCServ.net
mine_address: btcserv.net:8335
api_address: http://btcserv.net/
api_method:re
api_key:([0-9]+(,[0-9]+)*) shares
api_strip:','
url: https://btcserv.net/user/stats/

should be this:

[btcserv]
name: BTCServ.net
mine_address: btcserv.net:8335
api_address: http://btcserv.net/json/pool/
api_method:json
api_key:valid
url: https://btcserv.net/user/stats/

I think that works... and can someone fix this one as well:

[bmunion]
name: BitMinersUnion.org
mine_address: pool.bitminersunion.org:8341
api_address: http://www.bitminersunion.org/stats.php
api_method: re
api_key: Shares this round:([0-9]+)<br/>
url: http://www.bitminersunion.org/

I don't know about the pull requests yet... will figure them out soon.
|
mooo for rent
|
|
|
muyoso
Member
Offline
Activity: 84
Merit: 10
|
|
August 15, 2011, 02:52:59 AM |
|
LOL. I tried for the last half hour to figure out regex scraping so that I could get bitcoinpool to work by scraping the website instead of the JSON, but it's now obvious that I have no idea what I'm doing. Has anyone got it working from the website stats? I'm trying to use re_rate or re_rateduration on it.
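For anyone else stuck on the same thing, the general idea behind this kind of regex scraping looks like the sketch below. The HTML snippet and the pattern are invented for illustration; a real pool's stats page needs its own pattern matched against its actual markup:

```python
import re

# Invented stand-in for a fragment of a pool's stats page.
html = '<td class="label">Round Shares:</td><td>123456</td>'

# Capture the digits that follow the label; group(1) is the share count.
match = re.search(r'Round Shares:</td><td>([0-9]+)</td>', html)
shares = int(match.group(1)) if match else None
print(shares)  # → 123456
```

The hard part in practice is that the pattern must be written against the page's raw HTML source (view-source, not the rendered page), which is usually where these attempts go wrong.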
|
I drink it up!
|
|
|
|