Aseras
|
|
April 02, 2013, 12:50:12 PM |
|
I haven't talked to ckolivas today, but looking at the linux box, he's pulled p2pool down and set up a bunch of other things. He's in Poland, I'm in the USA, so we are about 10 hours off from each other. He's going to primarily bring cgminer for Avalon up to the current codebase. p2pool compatibility is my special request and I'm sure we'll be screwing with it for some time. We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain. The ball is rolling, just not quickly yet.
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
April 02, 2013, 02:47:18 PM |
|
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains. Various theoretical designs were discussed.
We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency). Thus, I argue for around 30 seconds, which would imply a hard fork at some point.
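Back-of-the-envelope support for that argument (my own model, not from this thread: assume new shares arrive as a Poisson process and the full 1.5 s work-return delay is dead time):

```python
import math

def expected_doa_fraction(latency_s, share_interval_s):
    # Probability that at least one new share arrives on the chain
    # during our device's work-return latency, making the work stale.
    return 1.0 - math.exp(-latency_s / share_interval_s)

# ~14% DOA at the current 10 s target, ~5% at a 30 s target
ten = expected_doa_fraction(1.5, 10)
thirty = expected_doa_fraction(1.5, 30)
```

The exact numbers depend on the model, but the trend holds: tripling the share interval cuts latency-induced stale work roughly threefold.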
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
Aseras
|
|
April 02, 2013, 03:05:17 PM |
|
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains. Various theoretical designs were discussed.
We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency). Thus, I argue for around 30 seconds, which would imply a hard fork at some point. I'm going to play with it today. ckolivas and xiangfu were in #cgminer today, and I got a new build of cgminer working with bugfixes from ckolivas. I'll see what we can come up with. Baby steps, but we are moving forward.
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
April 02, 2013, 04:53:41 PM |
|
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain. Maybe allow Avalon (or other high-power device) users to set share difficulty as high as they want? The easiest way I see is to add another marker to the username, e.g. "*". Shares found this way should be saved in the chain at that higher difficulty. This way shares will NOT come in so often, "normal" share difficulty will stay at a sane level for smaller miners, and high-hashpower users will be paid more for higher-difficulty shares. This proposal would "only" need minor changes in the code and we would not need a separate share chain or hard fork. Of course there should be "some" protection against messing with the code, e.g. there should be at least 2 shares reported at the same higher share difficulty from the same node/user/address.
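A minimal sketch of the username-marker idea (the `*<diff>` syntax, the floor at the pool's current share difficulty, and the function name are all my assumptions, not part of the proposal as written):

```python
def parse_worker_name(name, pool_share_diff):
    # Hypothetical: '1SomeAddr*5000' requests share difficulty 5000.
    # Requests below the pool's current share difficulty are ignored,
    # so the marker can only raise, never lower, a worker's difficulty.
    addr, sep, diff_str = name.partition('*')
    if not sep:
        return name, pool_share_diff
    try:
        requested = float(diff_str)
    except ValueError:
        return addr, pool_share_diff
    return addr, max(requested, pool_share_diff)
```

Scoring each such share at its stated difficulty keeps payouts proportional while thinning the share chain for high-hashrate devices.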
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
April 02, 2013, 05:21:33 PM |
|
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain. Maybe allow Avalon (or other high-power device) users to set share difficulty as high as they want?
There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.
|
|
|
|
maqifrnswa
|
|
April 02, 2013, 05:52:30 PM |
|
This proposal would "only" need minor changes in the code and we would not need a separate share chain or hard fork. Of course there should be "some" protection against messing with the code, e.g. there should be at least 2 shares reported at the same higher share difficulty from the same node/user/address.
I think there is another concern: Avalons have high work-return latency. The hard fork would come from moving to a 30-second share target to compensate for the latency issues. That alone would cause a 73% increase in variance across the board. Large miners (ASICs) might not care, since their variance is low to begin with, but it might be too much to swallow for small miners who are already experiencing higher variance. However, if the 3x increase in target time is combined with a 3x increase in the percentage of bitcoin hashing rate attributed to p2pool (thanks to ASICs now being able to mine), then the small miners won't even notice the change in variance and there can be just one pool: the new 30-second one. Or you can make it a command line flag on p2pool and let the market decide. Nothing would stop the smaller miners from choosing the 30-second pool if the variance is lower there.
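The 73% figure falls out of simple Poisson share statistics (my derivation, assuming a small miner's payout spread over a fixed window is dominated by share-count variance):

```python
import math

def relative_sd(expected_shares):
    # Share counts over a fixed window are ~Poisson, so the relative
    # standard deviation of a miner's earnings is 1/sqrt(E[shares]).
    return 1.0 / math.sqrt(expected_shares)

# Tripling the share target time cuts expected shares by 3x:
n = 90.0  # any expected share count; the ratio is independent of it
increase = relative_sd(n / 3) / relative_sd(n)  # = sqrt(3) ~ 1.73
```

So "variance" here is really standard deviation, and a matching 3x growth in p2pool's hashrate would cancel the increase exactly, as argued above.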
|
|
|
|
Aseras
|
|
April 02, 2013, 06:27:16 PM |
|
I think I got p2pool working on Avalon with stratum... maybe. It's been hashing at full speed for the last couple of minutes. In main.py:

serverfactory = switchprotocol.FirstByteSwitchFactory({'{': stratum.StratumServerFactory(wb)}, web_serverfactory)

Same workaround as the p2pool avalon branch, to disable work caching.
|
|
|
|
gyverlb
|
|
April 02, 2013, 06:37:54 PM |
|
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain. Maybe allow Avalon (or other high-power device) users to set share difficulty as high as they want?
There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc. I'm not sure the argument is valid. It rests on the assumption that a GPU on an ASIC pool will get such high variance that it will be a deterrent. The problem with this line of thinking is that it doesn't scale: a GPU in a small ASIC pool today is the same as an ASIC in a big ASIC pool tomorrow. The problem is small relative hashrate; it will exist even with a balanced p2pool (with everybody in the same ballpark) once it grows.
|
|
|
|
gyverlb
|
|
April 02, 2013, 06:47:35 PM |
|
Another solution involving several pools:
The p2pool network could more or less automatically organize itself in subpools to avoid the problems of a too large pool. A node should target a pool where it gets a percentage of the hashrate in a range suited for low variance.
The problem I see is how a new subpool could be automatically created (it should be done cooperatively, to avoid a single node being alone on a subpool). A node could be connected to the old and new pools at the same time, balancing its hashrate among them progressively (monitoring other nodes' hashrate rising) to make the transition less risky for its variance.
|
|
|
|
PatMan
|
|
April 02, 2013, 07:54:47 PM |
|
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain. Maybe allow Avalon (or other high-power device) users to set share difficulty as high as they want?
There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc. I'm not sure this is a good option; it may cause more problems than it solves. Far better to have the one pool for everyone, I think... keeps it simple too.
|
|
|
|
-ck
Legendary
Offline
Activity: 4312
Merit: 1649
Ruu \o/
|
|
April 02, 2013, 09:05:52 PM |
|
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains. Various theoretical designs were discussed.
We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency). Thus, I argue for around 30 seconds, which would imply a hard fork at some point. No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Aseras
|
|
April 02, 2013, 09:17:42 PM |
|
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains. Various theoretical designs were discussed.
We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency). Thus, I argue for around 30 seconds, which would imply a hard fork at some point. No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons. It affects more devices: the BFL singles have the same issue, and the minirigs have a workaround but it's much the same. I will bet money the BFL SC units, when/if they come out, will as well. The way the current ASICs are designed is the issue: they are clusters of many small devices with overhead. Anyway, the issue is kinda moot right now, since it appears the Avalons will work on p2pool once you disable work caching on stratum. No need to create a fork to test. But it would be nice to not lose 20-30% to DOA. In the long run, once ASICs hit the mainstream, everyone else will have them too and it will even out.
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
April 02, 2013, 09:35:40 PM |
|
Another way would be to create 3 types of shares. For the current pool hash rate, share difficulty is about 700. Make one share type with 1/3 of the standard share difficulty and one type with 3x the standard share difficulty. Each type has to be scored according to its difficulty. The pool would take sd1+ shares from a worker and calculate its hash rate, then decide which share type that worker should use. We should also be able to force standard or higher share difficulty using a prefix or postfix in the worker name (but not allow dropping to type 1). Lower share difficulty should only be enabled on the pool side. Also, if a node is making lots of low-difficulty shares, the pool should punish those shares as invalid (in case someone messes with the code). This way we avoid too-high share difficulty and let both smaller and bigger miners mine on p2pool. This change would require a hard fork, of course.
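One way the pool-side tier choice could look (a sketch: the 1/3x / 1x / 3x tiers are from the proposal, but the 30-second per-worker target and the selection rule are my assumptions):

```python
TARGET_INTERVAL_S = 30.0  # assumed per-worker share interval target

def pick_share_tier(worker_hashrate, base_diff):
    # Choose whichever of {base/3, base, 3*base} gives this worker a
    # share interval closest to the target.  Scoring would then weight
    # each share by its difficulty, so payouts stay proportional.
    tiers = (base_diff / 3.0, base_diff, 3.0 * base_diff)
    interval = lambda d: d * 2**32 / worker_hashrate
    return min(tiers, key=lambda d: abs(interval(d) - TARGET_INTERVAL_S))
```

With base difficulty 700, a ~100 GH/s worker lands on the standard tier, a ~1 TH/s worker on the high tier, and small miners on the low tier.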
|
|
|
|
-ck
Legendary
Offline
Activity: 4312
Merit: 1649
Ruu \o/
|
|
April 02, 2013, 09:41:15 PM Last edit: April 02, 2013, 10:21:09 PM by ckolivas |
|
There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.
It affects more devices: the BFL singles have the same issue, and the minirigs have a workaround but it's much the same. I will bet money the BFL SC units, when/if they come out, will as well. The way the current ASICs are designed is the issue: they are clusters of many small devices with overhead. No, the BFL SC devices do not suffer this.
|
|
|
|
gyverlb
|
|
April 02, 2013, 10:04:57 PM |
|
Another way would be to create 3 types of shares. For the current pool hash rate, share difficulty is about 700. Make one share type with 1/3 of the standard share difficulty and one type with 3x the standard share difficulty. Each type has to be scored according to its difficulty. This way we avoid too-high share difficulty and let both smaller and bigger miners mine on p2pool. This change would require a hard fork, of course. There's an easier way: just make the code compute the difficulty needed to cap a miner at a maximum number of shares in the PPLNS window, limiting its share usage while keeping variance at an acceptable level (200 shares should probably be enough: I get a little more than 100 and variance is OK). But someone (I think it was gmaxwell on IRC) pointed out to me that there's a flaw: even if the pool tries to enforce this rule, there's nothing preventing a user from mining with several payout addresses to lower his variance even more, defeating the mechanism. I think it would be a sane default for p2pool to behave that way, though. The current PPLNS setup only allows for ~40 miners with low variance (low at least by my own definition). (I'm not sure of the N in PPLNS, as it involves both 24h and the estimated time to get a block, but there are 8640 shares in one day.)
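The "compute the difficulty for a maximum number of shares in the window" rule is straightforward to state in code (a sketch; the function name is mine, and the gmaxwell caveat about splitting hashrate across payout addresses still applies):

```python
def min_share_diff(worker_hashrate, window_s, max_shares):
    # Smallest share difficulty such that the worker is expected to
    # find at most max_shares shares in the PPLNS window.  A
    # difficulty-1 share takes 2**32 hashes on average.
    return worker_hashrate * window_s / (max_shares * 2**32)
```

For example, a 100 GH/s worker capped at 200 shares over a 24-hour window needs a share difficulty around 10,000; nothing stops that worker from mining as two 50 GH/s addresses at half that, which is exactly the flaw above.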
|
|
|
|
Aseras
|
|
April 03, 2013, 01:22:31 AM |
|
I still don't see why a slightly longer share chain long poll time would be bad. It would let the slower miners get a share in easier before being cut off and helps higher latency. It would benefit just about everyone. It's still PPLNS so the payouts would still be equal to the balance of shares.
|
|
|
|
kano
Legendary
Offline
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
|
|
April 03, 2013, 02:10:06 AM |
|
I still don't see why a slightly longer share chain long poll time would be bad. It would let the slower miners get a share in easier before being cut off and helps higher latency. It would benefit just about everyone. It's still PPLNS so the payouts would still be equal to the balance of shares.
Longer LP means higher difficulty ...
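To put numbers on that (the ~300 GH/s pool hashrate is my illustrative assumption, chosen because it roughly reproduces the ~700 share difficulty mentioned earlier in the thread):

```python
def share_diff(pool_hashrate, share_interval_s):
    # Pool-wide share difficulty that yields one share per interval on
    # average (2**32 hashes per difficulty-1 share).
    return pool_hashrate * share_interval_s / 2**32

# At ~300 GH/s the 10 s chain sits near a ~700 share difficulty;
# tripling the interval triples the difficulty at any hashrate.
d10 = share_diff(300e9, 10)
d30 = share_diff(300e9, 30)
```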
|
|
|
|
Aseras
|
|
April 03, 2013, 11:38:24 AM |
|
I still don't see why a slightly longer share chain long poll time would be bad. It would let the slower miners get a share in easier before being cut off and helps higher latency. It would benefit just about everyone. It's still PPLNS so the payouts would still be equal to the balance of shares.
Longer LP means higher difficulty ... Yes, but it will be a pittance compared to ASIC hashrates. I'm using +32 difficulty now. I would go even higher, but p2pool has problems or the work goes DOA because of age.
|
|
|
|
PatMan
|
|
April 03, 2013, 11:53:22 AM |
|
On a side note - what's the record for the longest period without finding a block? I have a feeling it's gonna be broken shortly...... (we usually find one when I mention something about it - let's hope it happens again.....soon )
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
|
April 03, 2013, 12:04:43 PM |
|
On a side note - what's the record for the longest period without finding a block? I have a feeling it's gonna be broken shortly...... (we usually find one when I mention something about it - let's hope it happens again.....soon ) Over 4 days.... looks like we will have lots of short blocks soon ;]
|
|
|
|
|