Bitcoin Forum
June 19, 2019, 06:13:00 PM *
News: Latest Bitcoin Core release: 0.18.0 [Torrent] (New!)
 
Pages: « 1 ... [239] ... 814 »
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2578723 times)
stevegee58 (Legendary; Activity: 922, Merit: 1003)
April 01, 2013, 01:29:26 AM  #4761

Is there a site like p2pool.info for LTC?

You are in a maze of twisty little passages, all alike.
kano (Legendary; Activity: 2856, Merit: 1175; "Linux since 1997 RedHat 4")
April 01, 2013, 09:09:27 AM  #4762

Just get an Avalon for ckolivas and cgminer will be hashing Avalon on all working pools as quickly as possible.
I don't want an Avalon as I have made clear for quite a while now.
So no, it's not get "cgminer team" an Avalon, it's get "ckolivas" an Avalon.

This is happening next week, if plans go well.

And Kano, take what you can get. Just because you have beef with the Avalon team, don't screw with everyone else. If anything, we need people like you to get the Avalons operating like they should. Instead, we have this half-assed build of cgminer because they wanted it all in-house. Better to just cut them out and move on.

Bitcoin is going to ASIC. BFL may be close, or one post away from bankruptcy. There's no one else. Might as well get your hands on what's out there.
Sorry, I'm quite serious that I do not wish to be involved in any way with GitSyncom or the companies he is a lowly employee of.
If I had an Avalon I would be supporting them and I will not do that.

They ignored the GPL license for cgminer for a long time until they released the source, and made up excuses unrelated to cgminer for why they didn't release the code at first.
GitSyncom also directly stated that he thought my suggestion that I needed hardware to properly support it was just an excuse to get "free hardware"; when I pointed this out to him last year, his thoughts were:
https://bitcointalk.org/index.php?topic=142083.msg1513358#msg1513358
"I'll reject you on sheer principle fucking level."

I don't mind helping Xiangfu or CKolivas with the implementation, but I will be leaving any non-USB-specific code directly up to them (as they are of course well able to deal with it).

I'm not motivated by money above my own conscience, and since I cannot in good conscience accept an Avalon, the monetary gain is irrelevant.
I have been offered 2 already and turned them both down. One you will see in one of the Avalon threads, and the other in a PM.

I'm not sure if you consider this to be yet another offer - but either way - I'm not interested in it.

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
Discord support invite at https://kano.is/ Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
Aseras (Hero Member; Activity: 658, Merit: 500)
April 01, 2013, 03:48:54 PM  #4763

Quote from: kano
Sorry, I'm quite serious that I do not wish to be involved in any way with GitSyncom or the companies he is a lowly employee of.
[...]
I'm not sure if you consider this to be yet another offer - but either way - I'm not interested in it.

I totally understand - I watched the whole thing develop. bitsyncom was a total dick about it all. xiangfu is OK, but he's very quiet, doesn't talk much, and what he does say is quite hard to follow sometimes.

That said, I do wish you might reconsider and help US out.

ckolivas has been working all day on my units. He had a crash course in L2TP under Ubuntu last night :D Anyway, he's in now, mining away while he tries out new things. They are making ~9 BTC per day, so by the end of the week at > $100/BTC he should make out well, and hopefully we'll soon have a much-improved cgminer on Avalon.
maqifrnswa (Sr. Member; Activity: 454, Merit: 250)
April 01, 2013, 07:09:17 PM (last edit: April 01, 2013, 07:34:06 PM by maqifrnswa)  #4764

...
The highest p2pool would let me go is 6535. Any higher number just comes back as 6535.
...
Better get that fixed fast ...
I think it is in getwork.py:
Code:
'target': pack.IntType(256).pack(self.share_target).encode('hex'),


Regarding minimum difficulty:
rav3n_pl has helped point me to what's going on in worker.py
Code:
        if desired_pseudoshare_target is None:
            target = 2**256-1
            if len(self.recent_shares_ts_work) == 50:
                hash_rate = sum(work for ts, work in self.recent_shares_ts_work[1:])//(self.recent_shares_ts_work[-1][0] - self.recent_shares_ts_work[0][0])
                if hash_rate:
                    target = min(target, int(2**256/hash_rate))
        else:
            target = desired_pseudoshare_target
        target = max(target, share_info['bits'].target)

The last line shows that if the desired_pseudoshare_target (the difficulty served to your miner, taken from the "username+desired_pseudoshare_target" login to the server) is harder (higher difficulty, i.e. lower target) than the current p2pool network difficulty (share_info['bits'].target), then the work served to your miner uses the current p2pool network difficulty instead.

rav3n_pl made the point that unless you plug something into the p2pool network that hashes at > 1000% of the current network hashrate, you will submit shares to your local p2pool instance at a rate of < 1 share per second. Most servers should be able to handle that fairly easily.

So the difficulty "bug" does not appear to be one, unless someone else has something to add.
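rav3n_pl's share-rate point can be sketched numerically. This is a back-of-the-envelope check with assumed figures (the 1 TH/s pool hashrate and 10-second share interval are illustrative, not measured):

```python
def pseudoshares_per_second(hashrate_hps, target):
    # expected rate at which a miner's hashes fall below `target`
    # (a larger 256-bit target means easier shares)
    return hashrate_hps * target / 2**256

# Assume the pool finds one share every 10 s at 1 TH/s, so the share
# target satisfies: pool_hashrate * target / 2**256 == 1/10.
pool_hashrate = 1e12
share_target = 2**256 / (10 * pool_hashrate)

# Even a miner at 1000% of the pool's hashrate only submits about
# one share per second to its local node:
big_miner = 10 * pool_hashrate
print(pseudoshares_per_second(big_miner, share_target))  # ~1.0 share/s
```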

And thank you, @Aseras, for donating your machine to help get Avalon working on p2pool.


EDIT: I realize that Aseras may also have been talking about the maximum difficulty returned to the p2pool network (which should have no connection to server load).
From data.py, get_transaction:
Code:
bits = bitcoin_data.FloatingInteger.from_target_upper_bound(math.clip(desired_target, (pre_target3//10, pre_target3)))
So the difficulty returned to the network is whichever is easier (lower difficulty): the desired target from "username/desired_target", or 10 times the current p2pool network difficulty.

That's why you were getting 6535: the network difficulty was 653.5, and it wouldn't let you set a share difficulty more than 10x that.
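The 10x cap can be restated in difficulty units. A minimal sketch (the `clip` helper mirrors what p2pool's `math.clip` does; the numbers are the ones from this post):

```python
def clip(x, bounds):
    # clamp x into [low, high], like p2pool's math.clip
    low, high = bounds
    return max(low, min(x, high))

def clamp_share_difficulty(desired_diff, network_diff):
    # Target and difficulty are inversely proportional, so clipping the
    # target to (net_target // 10, net_target) is the same as clipping
    # the difficulty to (network_diff, 10 * network_diff).
    return clip(desired_diff, (network_diff, 10 * network_diff))

# Network difficulty 653.5: asking for 20000 gets you 6535 back.
print(clamp_share_difficulty(20000, 653.5))  # 6535.0
```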
Aseras (Hero Member; Activity: 658, Merit: 500)
April 01, 2013, 09:12:12 PM  #4765

Quote from: maqifrnswa
So the difficulty "bug" does not appear to be one, unless someone else has something to add.
[...]
That's why you were getting 6535: the network difficulty was 653.5 and it wouldn't let you set a target greater than 10x harder.

Yes, I think you've found the "problem". The issue is that the ASICs NEED a higher difficulty, or they are going to kill the smaller miners in p2pool.

I also think that as difficulty greatly increases, we are going to need a longer long-poll time as well - maybe 20 or 30 seconds.
jgarzik (Legendary; Activity: 1554, Merit: 1005)
April 02, 2013, 02:14:06 AM  #4766

Yes I think you've found the "problem" The issue is, the asics NEED a higher difficulty, or they are going to kill the smaller miners in p2pool.

Yes, this was established the day the first Avalon arrived :)

Quote
I also think as difficulty greatly increases, we are going to need a longer long-poll time as well. maybe 20 or 30 seconds.

On IRC, an ASIC-only p2pool share chain idea was floated, with a higher difficulty by default and a longer time between shares.


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
wtogami (Sr. Member; Activity: 263, Merit: 250)
April 02, 2013, 02:52:18 AM  #4767

Quote from: jgarzik
On IRC, an ASIC-only p2pool share chain idea was floated, with a higher difficulty by default and a longer time between shares.

It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

If you appreciate my work please consider making a small donation.
BTC:  1LkYiL3RaouKXTUhGcE84XLece31JjnLc3      LTC:  LYtrtYZsVSn5ymhPepcJMo4HnBeeXXVKW9
GPG: AEC1884398647C47413C1C3FB1179EB7347DC10D
Aseras (Hero Member; Activity: 658, Merit: 500)
April 02, 2013, 12:50:12 PM  #4768

I haven't talked to ckolivas today, but looking at the Linux box, he's pulled p2pool down and set up a bunch of other things. He's in Poland, I'm in the USA; we are about 10 hours off from each other.

He's going to primarily bring cgminer for Avalon up to the current codebase. p2pool compatibility is my special request, and I'm sure we'll be screwing with it for some time.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why it would need to fork for everyone in the end, but it would be a hard fork, and everyone would need to upgrade to a version where everyone is on the same share chain.

The ball is rolling, just not quickly yet :P

jgarzik (Legendary; Activity: 1554, Merit: 1005)
April 02, 2013, 02:47:18 PM  #4769

It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.
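The latency argument can be put in rough numbers. This is a crude model with assumed figures, not anything from p2pool's code: work that spends `latency` seconds in flight is dead on arrival whenever the share chain advances first, so the stale fraction is roughly latency over share interval:

```python
def doa_fraction(work_latency_s, share_interval_s):
    # crude estimate: results returned `work_latency_s` late are stale
    # roughly this often when new shares arrive every `share_interval_s`
    return work_latency_s / share_interval_s

print(doa_fraction(1.5, 10))  # 0.15 -> ~15% DOA at 10-second shares
print(doa_fraction(1.5, 30))  # 0.05 -> ~5% DOA at 30-second shares
```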

Aseras (Hero Member; Activity: 658, Merit: 500)
April 02, 2013, 03:05:17 PM  #4770

Quote from: jgarzik
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.


I'm going to play with it today. ckolivas and xiangfu were in #cgminer today, and got a new build of cgminer working with bugfixes from ckolivas. I'll see what we can come up with.

Baby steps, but we are moving forward.
rav3n_pl (Legendary; Activity: 1360, Merit: 1000; "Don`t panic! Organize!")
April 02, 2013, 04:53:41 PM  #4771

We are concerned that Avalons will push the share difficulty sky-high. Share difficulty is raised when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set their share difficulty as high as they want?
The easiest way I see is to add another marker to the username, e.g. "*".
Shares found this way should be saved in the chain at that higher difficulty.
This way shares will NOT come in as often, the "normal" share difficulty will stay at a sane level for smaller miners, and high-hash-power users will be paid more for their higher-difficulty shares.
This proposal would "only" need minor changes in the code, and we would not need a separate share chain or a hard fork.
Of course there should be "some" protection against tampering with the code, e.g. requiring at least 2 shares reported at the same higher share difficulty from the same node/user/address.
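A sketch of how the "*" marker might look. Everything here (the parsing, the function names, the 5000 figure) is a hypothetical illustration of the proposal, not p2pool code:

```python
def parse_worker(name, pool_share_diff):
    # Hypothetical syntax for the proposal above: 'address*5000' pins
    # the share difficulty to 5000, but never below the pool's own.
    if '*' in name:
        address, wanted = name.split('*', 1)
        return address, max(float(wanted), pool_share_diff)
    return name, pool_share_diff

def payout_weight(share_diff):
    # Scoring shares proportionally to their difficulty keeps expected
    # payout fair: half as many shares at twice the difficulty earn the same.
    return share_diff

print(parse_worker('someaddress*5000', 700.0))  # ('someaddress', 5000.0)
print(parse_worker('someaddress', 700.0))       # ('someaddress', 700.0)
```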

1Rav3nkMayCijuhzcYemMiPYsvcaiwHni  Bitcoin stuff on my OneDrive
My RPC CoinControl for any coin https://bitcointalk.org/index.php?topic=929954
Some stuff on https://github.com/Rav3nPL/
jgarzik (Legendary; Activity: 1554, Merit: 1005)
April 02, 2013, 05:21:33 PM  #4772

We are concerned that Avalons will push the share difficulty sky-high. Share difficulty is raised when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set their share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.


maqifrnswa (Sr. Member; Activity: 454, Merit: 250)
April 02, 2013, 05:52:30 PM  #4773

This proposal would "only" need minor changes in the code, and we would not need a separate share chain or a hard fork.
Of course there should be "some" protection against tampering with the code, e.g. requiring at least 2 shares reported at the same higher share difficulty from the same node/user/address.

I think there is another concern: Avalons have high work-return latency. The hard fork would come from moving to a 30-second-per-share target to compensate for the latency issues; that alone would cause a ~73% increase in variance across the board. Large miners (ASICs) might not care, since their variance is low to begin with, but it might be too much to swallow for small miners who are already experiencing higher variance.

However, if the 3x increase in target time is combined with a 3x increase in the percentage of the bitcoin hashrate attributed to p2pool (thanks to ASICs now being able to mine), then the small miners won't even notice the change in variance, and there can be just one pool: the new 30-second one.

Or you could add a command-line flag to p2pool and let the market choose/decide. Nothing would stop the smaller miners from choosing the 30-second pool if the variance is lower there.
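The 73% figure corresponds to the relative standard deviation of Poisson share counts; a quick check (the 300-share window is an arbitrary assumed number, and the ratio is independent of it):

```python
import math

def relative_stddev(expected_shares):
    # share counts in a payout window are roughly Poisson distributed,
    # so the relative spread is sigma/mean = 1/sqrt(N)
    return 1 / math.sqrt(expected_shares)

n = 300.0  # assumed expected shares per payout window at 10 s/share
ratio = relative_stddev(n / 3) / relative_stddev(n)  # 30 s vs 10 s shares
print(ratio)  # ~1.73, i.e. about a 73% increase in relative spread
```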
Aseras (Hero Member; Activity: 658, Merit: 500)
April 02, 2013, 06:27:16 PM  #4774

I think I got p2pool working on avalon with stratum... maybe.

It's hashing at full speed for the last couple of minutes :D

In main.py:

Code:
serverfactory = switchprotocol.FirstByteSwitchFactory({'{': stratum.StratumServerFactory(wb)}, web_serverfactory)

Same workaround as the p2pool avalon branch, to disable work caching.
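The first-byte switch works because a Stratum client opens with a JSON-RPC line (first byte `{`), while an HTTP getwork request starts with a method name like `GET` or `POST`. A minimal sketch of the dispatch idea (not the actual Twisted factory p2pool uses):

```python
def pick_protocol(first_byte):
    # Stratum sends newline-delimited JSON objects, so the very first
    # byte on the socket is '{'; anything else is treated as HTTP getwork.
    if first_byte == b'{':
        return 'stratum'
    return 'http-getwork'

print(pick_protocol(b'{'))  # stratum
print(pick_protocol(b'P'))  # http-getwork (e.g. "POST / HTTP/1.1")
```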
gyverlb (Hero Member; Activity: 896, Merit: 1000)
April 02, 2013, 06:37:54 PM  #4775

We are concerned that Avalons will push the share difficulty sky-high. Share difficulty is raised when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set their share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.

I'm not sure the argument is valid. It is based on the assumption that a GPU on an ASIC pool will see such high variance that it acts as a deterrent.

The problem with this line of thinking is that it doesn't scale: a GPU in a small ASIC pool today is in the same position as an ASIC in a big ASIC pool tomorrow. The real problem is small relative hashrate, and it will exist even in a balanced p2pool (with everybody in the same ballpark) once it grows.

P2pool tuning guide
Trade BTC for €/$ at bitcoin.de (referral), it's cheaper and faster (acts as escrow and lets the buyers do bank transfers).
Tip: 17bdPfKXXvr7zETKRkPG14dEjfgBt5k2dd
gyverlb (Hero Member; Activity: 896, Merit: 1000)
April 02, 2013, 06:47:35 PM  #4776

Another solution involving several pools:

The p2pool network could more or less automatically organize itself into subpools, to avoid the problems of a too-large pool. A node would target a subpool where its percentage of the hashrate falls in a range suited to low variance.

The problem I see is how a new subpool would be created automatically (it should be done cooperatively, to avoid a single node being alone on a subpool). A node could connect to the old and new pools at the same time and shift its hashrate between them progressively (monitoring the other nodes' hashrate rising) to make the transition less risky for its variance.
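gyverlb's node-side selection rule could be sketched like this; the band thresholds and pool names are invented for illustration:

```python
def pick_subpool(my_hashrate, subpools, lo=0.001, hi=0.05):
    # Join the smallest subpool where our hashrate fraction would land
    # in a low-variance band [lo, hi]; None means a new subpool should
    # be created cooperatively with other overflowing nodes.
    for name, rate in sorted(subpools.items(), key=lambda kv: kv[1]):
        fraction = my_hashrate / (rate + my_hashrate)
        if lo <= fraction <= hi:
            return name
    return None

subpools = {'big': 2000e9, 'small': 20e9}  # hashrates in H/s (assumed)
print(pick_subpool(1e9, subpools))  # small (about 4.8% of that subpool)
```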

PatMan (Hero Member; Activity: 924, Merit: 1000; "Watch out for the "Neg-Rep-Dogie-Police".....")
April 02, 2013, 07:54:47 PM  #4777

We are concerned that Avalons will push the share difficulty sky-high. Share difficulty is raised when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set their share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.



I'm not sure this is a good option - it may cause more problems than it solves. Far better to have one pool for everyone, I think... it keeps things simple, too.

"When one person is deluded it is called insanity - when many people are deluded it is called religion" - Robert M. Pirsig.  I don't want your coins, I want change.
Amazon UK BTC payment service - https://bitcointalk.org/index.php?topic=301229.0 - with FREE delivery!
http://www.ae911truth.org/ - http://rethink911.org/ - http://rememberbuilding7.org/
-ck (Moderator, Legendary; Activity: 2884, Merit: 1167; "Ruu \o/")
April 02, 2013, 09:05:52 PM  #4778

Quote from: jgarzik
It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.

No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.

Developer/maintainer for cgminer and ckpool/ckproxy.
ZERO FEE Pooled mining at ckpool.org, 1% Fee Solo mining at solo.ckpool.org
-ck
Aseras (Hero Member; Activity: 658, Merit: 500)
April 02, 2013, 09:17:42 PM  #4779

Quote from: -ck
No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.

It affects more devices: the BFL Singles have the same issue, and the Minirigs have a workaround but are sort of the same as well. I will bet money that the BFL SC, when/if they come out, will too. The way the current ASICs are designed is the issue: they are clusters of many small devices with overhead.

Anyway, the issue is kinda moot right now, since it appears the Avalons will work on p2pool once you disable work caching on stratum as well. No need to create a fork to test. It would be nice not to lose 20-30% to DOA, but in the long run, once ASICs hit the mainstream, everyone else will have them too and it will even out.
rav3n_pl (Legendary; Activity: 1360, Merit: 1000; "Don`t panic! Organize!")
April 02, 2013, 09:35:40 PM  #4780

Another way would be to create 3 types of shares.
For the current pool hashrate, share difficulty is about 700.
Make one share type with 1/3 of the standard share difficulty, and one with 3x the standard share difficulty. Each type has to be scored according to its difficulty.
The pool would take sd1+ shares from a worker, calculate its hashrate, and then decide which share type that worker should use.
We should also be able to force the standard or higher share difficulty using a prefix or postfix in the worker name (but not allow dropping to type 1).
The lower share difficulty should only be enabled on the pool side.
Also, if a node is producing lots of low-difficulty shares, the pool should punish those shares as invalid (in case someone messes with the code).
This way we avoid a too-high share difficulty and allow both smaller and bigger miners to mine on p2pool :)
This change would require a hard fork, of course.
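The three-tier idea could be sketched as follows; the hashrate thresholds are invented for illustration, and the sd 700 figure is the one quoted above:

```python
def share_tier(worker_hashrate, pool_hashrate, standard_diff=700.0):
    # Hypothetical tiering: the node estimates a worker's hashrate from
    # its sd1+ shares, then assigns one of three share difficulties.
    fraction = worker_hashrate / pool_hashrate
    if fraction < 0.01:       # small miner: more, easier shares
        return standard_diff / 3
    if fraction > 0.10:       # big miner (e.g. Avalon): fewer, harder shares
        return standard_diff * 3
    return standard_diff

print(share_tier(1e9, 1e12))    # 1/3 of standard difficulty
print(share_tier(200e9, 1e12))  # 2100.0 (3x standard)
```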
