yslyung
Legendary
Offline
Activity: 1500
Merit: 1002
Mine Mine Mine
|
|
January 02, 2015, 06:35:50 AM Last edit: January 02, 2015, 05:56:28 PM by yslyung |
|
I thought I'd read you could set your pseudo share difficulty to just about any value? My nodes seem to be limited to 1,000,000.
M
Did you try? Any example? Thanks.

@PatMan - we're on a roll, keep the blocks rolling in!

Another question of the day. If I append "/1000" to a miner's username, I see this on the console (it also matches the diff shown under Current Share Difficulty):

New work for worker! Difficulty: 999.984741 Share difficulty: 13464960.940531 Total block value: 25.016444 BTC including 115 transactions

If I do NOT append the /1000 to the miner's username (which is a BTC address), I see this on the console (the value is different & much higher than the Current Share Difficulty):

New work for worker! Difficulty: 999.984741 Share difficulty: 25433607.461485 Total block value: 25.016424 BTC including 119 transactions

So which is which - should I append it or not? Confused here, yes. I've looked through the web & all, but a finer explanation would be greatly appreciated, as would being pointed in the right direction with better understanding.

The way I see it: if my miner is capable of 1 TH/s & it's getting shares at the Current Diff (/1000), it has a better chance of finding a share faster; but the other way round (without /1000), the same 1 TH/s faces a higher difficulty, meaning a lower chance, or a longer wait, per share.

HEHEHEHHE I found a block! for p2p: BITCOIN BLOCK FOUND by 1JKhNWfS4D7PPzpvKpqwCny8qeySJta7fh! https://blockchain.info/block/000000000000000011ca2e3e4607bf793b191de7a2810cd07cdc3a53006dc6b2
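For what it's worth, the intuition in that question can be checked with a rough back-of-the-envelope calculation (a sketch, assuming the usual convention that difficulty 1 averages about 2**32 hashes per share; expected payout per unit time is the same either way, only the variance changes):

```python
# Back-of-the-envelope: expected time for a miner to find one share
# at a given share difficulty (difficulty 1 ~ 2**32 hashes on average).
def expected_seconds_per_share(hashrate_hs, share_difficulty):
    return share_difficulty * 2**32 / hashrate_hs

one_ths = 1e12  # 1 TH/s in hashes per second

# the two share difficulties from the console output above
low = expected_seconds_per_share(one_ths, 13464960.940531)
high = expected_seconds_per_share(one_ths, 25433607.461485)

print(low / 3600, high / 3600)  # roughly 16 vs 30 hours
```

So at 1 TH/s the lower difficulty means a share roughly every 16 hours on average versus roughly every 30 hours at the higher one - more frequent but proportionally smaller shares, with identical expected earnings.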
|
|
|
|
Duce
|
|
January 02, 2015, 05:30:15 PM |
|
I thought I'd read you could set your pseudo share difficulty to just about any value? My nodes seem to be limited to 1,000,000.
M
Did you try? Any example? Thanks.

@PatMan - we're on a roll, keep the blocks rolling in!

Another question of the day. If I append "/1000" to a miner's username, I see this on the console (it also matches the diff shown under Current Share Difficulty):

New work for worker! Difficulty: 999.984741 Share difficulty: 13464960.940531 Total block value: 25.016444 BTC including 115 transactions

If I do NOT append the /1000 to the miner's username (which is a BTC address), I see this on the console (the value is different & much higher than the Current Share Difficulty):

New work for worker! Difficulty: 999.984741 Share difficulty: 25433607.461485 Total block value: 25.016424 BTC including 119 transactions

So which is which - should I append it or not? Confused here, yes. I've looked through the web & all, but a finer explanation would be greatly appreciated, as would being pointed in the right direction with better understanding. The way I see it: if my miner is capable of 1 TH/s & it's getting shares at the Current Diff (/1000), it has a better chance of finding a share faster; but the other way round (without /1000), the same 1 TH/s faces a higher difficulty, meaning a lower chance, or a longer wait, per share.

This may help with your question: https://bitcointalk.org/index.php?topic=18313.msg9643415#msg9643415. The one just below from JB also gives some insight.
|
|
|
|
windpath
Legendary
Offline
Activity: 1258
Merit: 1027
|
|
January 02, 2015, 06:55:02 PM |
|
The arguments are share difficulty and pseudo-share difficulty:

<bitcoin_address>/<share>+<pseudo_share>

Setting <share> to 0 defaults to the lowest p2pool diff; setting <share> to any number less than the current p2pool diff will result in the minimum current diff.

All setting pseudo-share does is smooth out your graphs a little; it does not improve mining. <pseudo_share> is recommended to be calculated as your hash rate in KH/s times 0.00000116, i.e. for an S3 that actually runs at 440 GH/s: 440,000,000 * 0.00000116 = 510.4

To answer your question directly, here is the code from work.py, starting at line 299:

    if desired_pseudoshare_target is None:
        target = 2**256-1
        local_hash_rate = self._estimate_local_hash_rate()
        if local_hash_rate is not None:
            # limit to 1 share response every second by modulating pseudoshare difficulty
            target = min(target, bitcoin_data.average_attempts_to_target(local_hash_rate * 1))
    else:
        target = desired_pseudoshare_target
    target = max(target, share_info['bits'].target)
So you should be able to set it as high as you want...
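To put numbers on the rule of thumb above (a sketch, not p2pool code; the 2**32 factor is the usual hashes-per-difficulty-1 convention):

```python
def recommended_pseudoshare_diff(hashrate_khs):
    # rule of thumb from the post: hash rate in KH/s times 0.00000116
    return hashrate_khs * 0.00000116

s3_khs = 440_000_000  # an S3 actually running 440 GH/s, expressed in KH/s
diff = recommended_pseudoshare_diff(s3_khs)
print(diff)  # ~510.4

# That pseudo-share difficulty works out to a pseudo-share every few
# seconds at this hash rate - enough to keep the local graphs smooth:
seconds_per_pseudoshare = diff * 2**32 / (s3_khs * 1000.0)
print(seconds_per_pseudoshare)  # ~5 seconds
```

Which matches the point that pseudo-shares only smooth the graphs: they arrive often enough to give a steady local hash-rate estimate, without touching real share accounting.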
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
January 02, 2015, 08:07:01 PM |
|
The arguments are share difficulty and pseudo share difficulty:
<bitcoin_address>/<share>+<pseudo_share>
Setting <share> to 0 defaults to the lowest p2pool diff; setting <share> to any number less than the current p2pool diff will result in the minimum current diff.
All setting pseudo-share does is smooth out your graphs a little; it does not improve mining.
<pseudo_share> is recommended to be calculated as your hash rate in KH/s times 0.00000116
i.e. for an S3 that actually runs at 440GH/s: 440,000,000 * 0.00000116 = 510.4
M, to answer your question directly, here is the code from work.py, starting at line 299:
So you should be able to set it as high as you want...
I never understood the point of the /parm, just the +parm. I tried setting my miners to +7000000 and they all came up as 999,999. I got the same result with +5000000. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
PatMan
|
|
January 02, 2015, 08:15:00 PM |
|
Following up from my earlier post about my half-dead S5: I had a pleasant surprise when UPS came knocking on my door with a replacement hashing board today - I didn't even realise that BitmainWarranty had despatched it! Straight away I got to work: stripped the S5, removed the offending board, slid in the new one & put it back together - it didn't take long at all thanks to the "open air" design......

The result: running for 3 hours now without a hitch - lovely!

Respect & thanks are due to BitmainWarranty, who were super fast in arranging the replacement board without any hassle, as well as friendly & understanding as usual - unlike a certain other "representative", who should take a leaf out of BitmainWarranty's book on customer relations instead of barking at customers. If he did, both Bitmain & himself would be far better for it. Faith restored
|
|
|
|
windpath
Legendary
Offline
Activity: 1258
Merit: 1027
|
|
January 02, 2015, 08:50:28 PM |
|
The arguments are share difficulty and pseudo share difficulty:
<bitcoin_address>/<share>+<pseudo_share>
Setting <share> to 0 defaults to the lowest p2pool diff; setting <share> to any number less than the current p2pool diff will result in the minimum current diff.
All setting pseudo-share does is smooth out your graphs a little; it does not improve mining.
<pseudo_share> is recommended to be calculated as your hash rate in KH/s times 0.00000116
i.e. for an S3 that actually runs at 440GH/s: 440,000,000 * 0.00000116 = 510.4
M, to answer your question directly, here is the code from work.py, starting at line 299:
So you should be able to set it as high as you want...
I never understood the point of the /parm. Just the +parm. I tried setting my miners to +7000000 and they all come up as 999,999. I got the same result with +5000000. M

I wonder if the limitation is in CGMiner?
|
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
January 02, 2015, 09:12:09 PM |
|
The arguments are share difficulty and pseudo share difficulty:
<bitcoin_address>/<share>+<pseudo_share>
Setting <share> to 0 defaults to the lowest p2pool diff; setting <share> to any number less than the current p2pool diff will result in the minimum current diff.
All setting pseudo-share does is smooth out your graphs a little; it does not improve mining.
<pseudo_share> is recommended to be calculated as your hash rate in KH/s times 0.00000116
i.e. for an S3 that actually runs at 440GH/s: 440,000,000 * 0.00000116 = 510.4
M, to answer your question directly, here is the code from work.py, starting at line 299:
So you should be able to set it as high as you want...
I never understood the point of the /parm. Just the +parm. I tried setting my miners to +7000000 and they all come up as 999,999. I got the same result with +5000000. M

I wonder if the limitation is in CGMiner?

On the p2pool node I see a work size of 999,999. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
jonnybravo0311
Legendary
Offline
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
|
|
January 02, 2015, 09:57:50 PM |
|
The arguments are share difficulty and pseudo share difficulty:
<bitcoin_address>/<share>+<pseudo_share>
Setting <share> to 0 defaults to the lowest p2pool diff; setting <share> to any number less than the current p2pool diff will result in the minimum current diff.
All setting pseudo-share does is smooth out your graphs a little; it does not improve mining.
<pseudo_share> is recommended to be calculated as your hash rate in KH/s times 0.00000116
i.e. for an S3 that actually runs at 440GH/s: 440,000,000 * 0.00000116 = 510.4
M, to answer your question directly, here is the code from work.py, starting at line 299:
So you should be able to set it as high as you want...
I never understood the point of the /parm. Just the +parm. I tried setting my miners to +7000000 and they all come up as 999,999. I got the same result with +5000000. M

I wonder if the limitation is in CGMiner?

On the p2pool node I see a work size of 999,999. M

It's limited in p2pool. You guys are looking in the wrong spot. From bitcoin/networks.py:

    SANE_TARGET_RANGE = (2**256//2**32//1000000 - 1, 2**256//2**32 - 1)
Here's the implementation of its usage in work.py:

    if desired_pseudoshare_target is None:
        target = 2**256-1
        local_hash_rate = self._estimate_local_hash_rate()
        if local_hash_rate is not None:
            # limit to 1 share response every second by modulating pseudoshare difficulty
            target = min(target, bitcoin_data.average_attempts_to_target(local_hash_rate * 1))
    else:
        target = desired_pseudoshare_target
    target = max(target, share_info['bits'].target)

    for aux_work, index, hashes in mm_later:
        target = max(target, aux_work['target'])
    target = math.clip(target, self.node.net.PARENT.SANE_TARGET_RANGE)
The difference between '+' and '/' is that '+' sets the pseudo-share difficulty (i.e. the shares that p2pool cares about recording from this particular BTC address), while '/' can affect your payouts if you set it higher than p2pool's minimum share difficulty.

Set '+' too low and you'll see really pretty graphs with far fewer spikes, because the node starts to care about more and more meaningless shares. You're also likely to see a whole lot more of those "Miner submitted share more than once" messages in your logs.

Set '/' lower than the current p2pool share difficulty and it's meaningless. Set it above the current p2pool share difficulty and, even if your miner finds a share that would satisfy the requirements and end up on the chain, it will not be counted unless it also meets the difficulty you set with the '/' parameter. The benefit is that each share you DO submit is more heavily weighted than the "normal" shares, so it's worth more. At this point in the game, unless you're bringing a few hundred TH/s onto a node, there's no need to use this parameter at all.
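A quick sanity check of where the 999,999 ceiling comes from (a sketch; p2pool's code uses 2**256 // 2**32 as its difficulty-1 reference, so converting the clamp's endpoints back to difficulties shows the limits):

```python
# SANE_TARGET_RANGE from bitcoin/networks.py
MIN_TARGET = 2**256 // 2**32 // 1000000 - 1   # hardest target allowed
MAX_TARGET = 2**256 // 2**32 - 1              # easiest target allowed

def target_to_difficulty(target):
    # p2pool-style conversion: difficulty 1 corresponds to 2**256 // 2**32
    return (2**256 // 2**32) / target

print(target_to_difficulty(MIN_TARGET))  # just over 1,000,000
print(target_to_difficulty(MAX_TARGET))  # just over 1
```

So any '+' value above a million gets clipped by math.clip, and the node reports it as 999,999.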
|
Jonny's Pool - Mine with us and help us grow! Support a pool that supports Bitcoin, not a hardware manufacturer's pockets! No SPV cheats. No empty blocks.
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
January 02, 2015, 10:34:14 PM |
|
It's limited in p2pool. You guys are looking in the wrong spot. From bitcoin/networks.py:

    SANE_TARGET_RANGE = (2**256//2**32//1000000 - 1, 2**256//2**32 - 1)
Here's the implementation of its usage in work.py:

    if desired_pseudoshare_target is None:
        target = 2**256-1
        local_hash_rate = self._estimate_local_hash_rate()
        if local_hash_rate is not None:
            # limit to 1 share response every second by modulating pseudoshare difficulty
            target = min(target, bitcoin_data.average_attempts_to_target(local_hash_rate * 1))
    else:
        target = desired_pseudoshare_target
    target = max(target, share_info['bits'].target)

    for aux_work, index, hashes in mm_later:
        target = max(target, aux_work['target'])
    target = math.clip(target, self.node.net.PARENT.SANE_TARGET_RANGE)
The difference between '+' and '/' is that '+' sets the pseudo-share difficulty (i.e. the shares that p2pool cares about recording from this particular BTC address), while '/' can affect your payouts if you set it higher than p2pool's minimum share difficulty. Set '+' too low and you'll see really pretty graphs with far fewer spikes, because the node starts to care about more and more meaningless shares. You're also likely to see a whole lot more of those "Miner submitted share more than once" messages in your logs. Set '/' lower than the current p2pool share difficulty and it's meaningless. Set it above the current p2pool share difficulty and, even if your miner finds a share that would satisfy the requirements and end up on the chain, it will not be counted unless it also meets the difficulty you set with the '/' parameter. The benefit is that each share you DO submit is more heavily weighted than the "normal" shares, so it's worth more. At this point in the game, unless you're bringing a few hundred TH/s onto a node, there's no need to use this parameter at all.

I was trying to set the pseudo-share size high to reduce my bandwidth usage. I'm still constricted by my DSL bandwidth. Thanks for confirming what I saw. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
jonnybravo0311
Legendary
Offline
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
|
|
January 02, 2015, 10:44:27 PM |
|
I was trying to set the pseudo share size high to reduce my bandwidth usage. I'm still constricted by my DSL bandwidth. Thanks for proving what I saw. M

Your miners and your node aren't on the same network, I presume? Can you run your p2pool node on the same local network as your miners? The vast majority of the bandwidth consumed by p2pool is the broadcasting of share chain data, not the data from your miners. I don't know how you've got things set up (e.g. co-located miners but the p2pool node on your home PC, or miners in your home but the node on a VPS). Bandwidth limitations suck - sorry you're suffering from them.

EDIT: one thing you can do, if you haven't already, is make sure you aren't accepting inbound traffic on port 9333. Yes, it'll turn off incoming connections to your node, but it'll definitely reduce your overall usage. You can also limit your connections to bitcoind and/or disallow traffic on port 8333.
|
Jonny's Pool - Mine with us and help us grow! Support a pool that supports Bitcoin, not a hardware manufacturer's pockets! No SPV cheats. No empty blocks.
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
January 02, 2015, 11:35:00 PM |
|
I was trying to set the pseudo share size high to reduce my bandwidth usage. I'm still constricted by my DSL bandwidth. Thanks for proving what I saw. M

Your miners and your node aren't on the same network I presume? Can you run your p2pool node on the same local network as your miners? The vast majority of the bandwidth being consumed by p2pool is the broadcasting of the share chain data, not the data from your miners. I don't know how you've got things setup (like co-located miners, but running the p2pool node from your home PC, or miners in your home, but running the p2pool node on a VPS). Bandwidth limitations suck. Sorry you're suffering from it EDIT: one thing you can do, if you haven't, is to ensure you aren't accepting traffic on 9333. Yeah, it'll turn off incoming connections to your node, but it'll definitely reduce your overall usage. You can also limit your connections to bitcoind and/or not allow traffic on port 8333.

Nothing is coming in, and outgoing connections are limited. I have a VPS node, but it's on the other side of the country. I get 16% stale if I use it compared to 1.5% local. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
jonnybravo0311
Legendary
Offline
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
|
|
January 02, 2015, 11:41:55 PM |
|
I was trying to set the pseudo share size high to reduce my bandwidth usage. I'm still constricted by my DSL bandwidth. Thanks for proving what I saw. M

Your miners and your node aren't on the same network I presume? Can you run your p2pool node on the same local network as your miners? The vast majority of the bandwidth being consumed by p2pool is the broadcasting of the share chain data, not the data from your miners. I don't know how you've got things setup (like co-located miners, but running the p2pool node from your home PC, or miners in your home, but running the p2pool node on a VPS). Bandwidth limitations suck. Sorry you're suffering from it EDIT: one thing you can do, if you haven't, is to ensure you aren't accepting traffic on 9333. Yeah, it'll turn off incoming connections to your node, but it'll definitely reduce your overall usage. You can also limit your connections to bitcoind and/or not allow traffic on port 8333.

Nothing is coming in, and outgoing connections are limited. I have a VPS node, but it's on the other side of the country. I get 16% stale if I use it compared to 1.5% local. M

If your miners and node are local, then that traffic is not going outbound, so it wouldn't impact your bandwidth limitations. I figured you'd already done the connection limit and shut down the inbound ports, but thought I'd mention it in case anyone else happened to have bandwidth limitations and hadn't done so yet.
|
Jonny's Pool - Mine with us and help us grow! Support a pool that supports Bitcoin, not a hardware manufacturer's pockets! No SPV cheats. No empty blocks.
|
|
|
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
|
|
January 03, 2015, 12:00:18 AM |
|
If your miners and node are local, then that traffic is not going outbound, so it wouldn't impact your bandwidth limitations. I figured you'd already done the connection limit and shut down the inbound ports, but thought I'd mention it in case anyone else happened to have bandwidth limitations and hadn't done so yet.
I've been trying different things. The large share size was when I was trying to use my VPS. M
|
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
|
|
|
windpath
Legendary
Offline
Activity: 1258
Merit: 1027
|
|
January 03, 2015, 03:25:34 AM |
|
It's limited in p2pool. You guys are looking in the wrong spot. From bitcoin/networks.py: SANE_TARGET_RANGE = (2**256//2**32//1000000 - 1, 2**256//2**32 - 1)
Aha! Thanks
|
|
|
|
IYFTech
|
|
January 03, 2015, 10:43:27 AM |
|
Following up from my earlier post about my half dead S5, I had a pleasant surprise when UPS came knocking on my door with a replacement hashing board today - I didn't even realise that BitmainWarranty had despatched it! Straight away I got to work, stripped the S5, removed the offending board, slid in the new one & put it back together - it didn't take long at all due to the "open air" design...... The result: Running for 3 hours now without a hitch - lovely! Respect & thanks are due to BitmainWarranty who was super fast in arranging the replacement board without a hassle, as well as friendly & understanding as usual, unlike a certain other "representative" - who should take a leaf out of BitmainWarranty's book on customer relations instead of barking at customers. If he did so, Bitmain & himself would be far better for it. Faith restored

Nice one! So we needn't have spent hours with a hangover trying to fix it the day before, then? Thanks for the New Year's Eve party, dude - great fun! I can highly recommend PatMan's good lady's fried breakfast in the morning - absolute class!
|
|
|
|
PatMan
|
|
January 03, 2015, 01:00:39 PM |
|
I can highly recommend PatMans good lady's fried breakfast in the morning - absolute class!

Flattery will get you everywhere......
|
|
|
|
idonothave
|
|
January 03, 2015, 01:07:55 PM |
|
I can highly recommend PatMans good lady's fried breakfast in the morning - absolute class! Flattery will get you everywhere......

Can you please share the real power consumption at the wall of this S5?
|
|
|
|
PatMan
|
|
January 03, 2015, 01:14:01 PM |
|
I can highly recommend PatMans good lady's fried breakfast in the morning - absolute class! Flattery will get you everywhere...... Can You please share real consumption at the wall of this S5?

I don't have the equipment to measure it, I'm afraid - but you should find that in the Bitmain S5 thread
|
|
|
|
idonothave
|
|
January 03, 2015, 01:28:21 PM |
|
I can highly recommend PatMans good lady's fried breakfast in the morning - absolute class! Flattery will get you everywhere...... Can You please share real consumption at the wall of this S5? I don't have the equipment to measure it I'm afraid - but you should find that on the Bitmain S5 thread

OK, I have found this:

"After running for an hour, it settled into: Miner Speed: 1155 GH/s BTCGuild Speed: 1145 GH/s Power: 675 W (750 W Bronze PSU)"

This is probably what we can believe (unfortunately the guy is mining at the wrong pool).
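Taking those quoted figures at face value, the implied at-the-wall efficiency is easy to check (a sketch; the numbers come from the quoted post, not a measurement):

```python
# Quoted S5 figures: 1155 GH/s at 675 W from the wall
watts = 675.0
gigahash = 1155.0
print(watts / gigahash)  # ~0.58 W per GH/s
```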
|
|
|
|
grn
|
|
January 04, 2015, 02:31:19 AM |
|
P2pool on Windows 8 problem: when I try to view the stats page at 127.0.0.1:9332 I get this.
Installed:
Python 2.7
Twisted 14.0.2
zope.interface 4.1.2
pywin32-218.win32-py2.7.exe
WMI-1.4.9.win32.exe
web.Server Traceback (most recent call last):
exceptions.TypeError: Can only pass-through bytes on Python 2

C:\Python27\lib\site-packages\twisted\web\server.py:189 in process
    188  self._encoder = encoder
    189  self.render(resrc)
    190  except:
C:\Python27\lib\site-packages\twisted\web\server.py:238 in render
    237  try:
    238  body = resrc.render(self)
    239  except UnsupportedMethod as e:
C:\Python27\lib\site-packages\twisted\web\resource.py:250 in render
    249  raise UnsupportedMethod(allowedMethods)
    250  return m(request)
    251
C:\Python27\lib\site-packages\twisted\web\static.py:631 in render_GET
    630
    631  producer.start()
    632  # and make sure the connection doesn't get closed
C:\Python27\lib\site-packages\twisted\web\static.py:710 in start
    709  def start(self):
    710  self.request.registerProducer(self, False)
    711
C:\Python27\lib\site-packages\twisted\web\http.py:873 in registerProducer
    872  else:
    873  self.transport.registerProducer(producer, streaming)
    874
C:\Python27\lib\site-packages\twisted\internet\abstract.py:112 in registerProducer
    111  if not streaming:
    112  producer.resumeProducing()
    113
C:\Python27\lib\site-packages\twisted\web\static.py:720 in resumeProducing
    719  # .resumeProducing again, so be prepared for a re-entrant call
    720  self.request.write(data)
    721  else:
C:\Python27\lib\site-packages\twisted\web\server.py:217 in write
    216  data = self._encoder.encode(data)
    217  http.Request.write(self, data)
    218
C:\Python27\lib\site-packages\twisted\web\http.py:1002 in write
    1001  # Backward compatible cast for non-bytes values
    1002  value = networkString('%s' % (value,))
    1003  l.extend([name, b": ", value, b"\r\n"])
C:\Python27\lib\site-packages\twisted\python\compat.py:364 in networkString
    363  if not isinstance(s, str):
    364  raise TypeError("Can only pass-through bytes on Python 2")
    365  # Ensure we're limited to ASCII subset:

exceptions.TypeError: Can only pass-through bytes on Python 2
Any Ideas?
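For what it's worth, the last frame of that traceback is Twisted's networkString rejecting a unicode value where it expects a native byte string on Python 2. A minimal sketch of the failing check (an illustration, not Twisted's actual module):

```python
def network_string(s):
    # Mimics the Python 2 branch of twisted.python.compat.networkString:
    # a byte string passes through; anything else raises TypeError.
    if not isinstance(s, bytes):
        raise TypeError("Can only pass-through bytes on Python 2")
    return s

network_string(b"text/html")      # fine
try:
    network_string(u"text/html")  # a unicode header value triggers the error above
except TypeError as e:
    print(e)
```

So somewhere a unicode header value is being handed to Twisted 14.0.2's HTTP layer; since p2pool predates that Twisted release, a version mismatch seems likely, and trying an older Twisted release with the same setup may be worth a shot.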
|
How is that Lexical analysis working out bickneleski?
|
|
|
|