Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591623 times)
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 08, 2012, 10:44:10 AM
Last edit: July 08, 2012, 11:08:38 AM by tucenaber
 #2901

No. Smiley
P2pool gets its work from bitcoind. Each node does the same, but not all nodes have exactly the same work, because not all txes are in all parts of the bitcoin network at the same time.
So if your node finds a block, it is assembled the way your bitcoind sees it. Then your "block found" and your closing tx are spread across the p2pool nodes.
So it IS possible that you see a block value of 61, I see the same block @ 55, and someone else sees only 50.
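To illustrate the quoted point, a toy sketch (hypothetical, not p2pool code; fee numbers invented) of why nodes can disagree about the value of the same block:

Code:
# Toy illustration (not p2pool code): each node assembles the block from the
# transactions it has seen, so the value it reports (subsidy + known fees)
# differs from node to node until everything propagates.
SUBSIDY = 50  # BTC block subsidy at the time of this thread

def block_value_seen_by(known_fees):
    # A node's view of the block value is the subsidy plus the fees of
    # the transactions it happens to have in its mempool.
    return SUBSIDY + sum(known_fees)

print(block_value_seen_by([6, 5]))  # 61: this node saw both fee-paying txes
print(block_value_seen_by([5]))     # 55: this one saw only one of them
print(block_value_seen_by([]))      # 50: this one saw neither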

Good info, thanks.

What still puzzles me a little is why I consistently saw blocks in the low 60s.  It wasn't a one-time deal; it stayed that way until we found a block, then it dropped.  And it was that way long enough for my payout to increase from .9 to 1.2, and as we know, the payout doesn't change rapidly.  I don't know how long it takes this info to propagate across the network.  That implies there's something wrong with my logic, or propagation is pretty slow.

BTW, I'm still regularly getting dupe submission messages.  I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares out of the total, and the actual good shares are only 91% of what was submitted.

I have also confirmed that the same hashes show up in cgminer's sharelog and that they are never submitted more than once. (EDIT: not true; occasionally the same hash is submitted three times)

Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches duplicate submissions from the same get_work. Am I correct to conclude that this means the problem is with cgminer?

Code:
    def get_work(self, pubkey_hash, desired_share_target, desired_pseudoshare_target):

        # ...

        # created afresh on every get_work call, so it only tracks hashes
        # submitted against this particular work unit
        received_header_hashes = set()

        def got_response(header, request):

            # ...

            elif header_hash in received_header_hashes:
                print >>sys.stderr, 'Worker %s @ %s submitted share %064x more than once!' % (request.getUser(), request.getClientIP(), header_hash)
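In other words, since received_header_hashes is rebuilt on every get_work call, the same hash arriving against two different work units passes the check both times. A minimal sketch of that behaviour (hypothetical code, just mimicking the closure above):

Code:
# Minimal sketch: a duplicate-check set living inside the closure only sees
# submissions against the work unit it was created for.
def make_got_response():
    received_header_hashes = set()  # fresh set per get_work, as in p2pool
    def got_response(header_hash):
        if header_hash in received_header_hashes:
            return 'duplicate'
        received_header_hashes.add(header_hash)
        return 'accepted'
    return got_response

work_1 = make_got_response()  # first get_work
work_2 = make_got_response()  # second get_work
print(work_1(0xdeadbeef))     # accepted
print(work_1(0xdeadbeef))     # duplicate: caught within the same work unit
print(work_2(0xdeadbeef))     # accepted: same hash, different work unit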
mdude77
Legendary

Activity: 1540
Merit: 1001



July 08, 2012, 12:12:59 PM
 #2902

BTW, I'm still regularly getting dupe submission messages.  I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares out of the total, and the actual good shares are only 91% of what was submitted.

I have also confirmed that the same hashes show up in cgminer's sharelog and that they are never submitted more than once. (EDIT: not true; occasionally the same hash is submitted three times)

Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches duplicate submissions from the same get_work. Am I correct to conclude that this means the problem is with cgminer?

I think the problem is in p2pool, because two of my miners are on cgminer, one is on phoenix, and I get dupes on all 3.

M

I mine at Kano's Pool because it pays the best and is completely transparent!  Come join me!
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 08, 2012, 12:15:33 PM
 #2903

I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
forrestv (OP)
Hero Member

Activity: 516
Merit: 643


July 08, 2012, 01:23:12 PM
 #2904

I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished if it's slow? That would mean that there's no work actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?
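For reference, a minimal sketch of the kind of check described above (hypothetical names, not the actual P2Pool commit):

Code:
# Sketch: flag work whose timestamp was rolled outside the advertised window.
# issued_ntime is the timestamp handed out in the getwork response;
# X-Roll-NTime is advertised as "expire=10", i.e. a 10-second window.
def rolled_improperly(submitted_ntime, issued_ntime, expire=10):
    delta = submitted_ntime - issued_ntime
    # Rolling backwards, or further forward than the window allows, means
    # the miner ignored X-Roll-NTime and may be redoing (losing) work.
    return delta < 0 or delta > expire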

1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 08, 2012, 01:38:07 PM
 #2905

I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?

I have two machines and both show this phenomenon, but one has a much higher hashrate and therefore it is more noticeable there. One is a desktop machine with a single 5850; the other is a dedicated rig with 4 x 5850 + 10 x Icarus. Both are running linux and I have the latest version of both p2pool and cgminer.

Thanks for looking into it.
kano
Legendary

Activity: 4466
Merit: 1798


Linux since 1997 RedHat 4


July 08, 2012, 01:42:27 PM
 #2906

Also look at the java API stats; they may show some interesting numbers (though I have no idea what values to expect from p2pool).

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
mdude77
Legendary

Activity: 1540
Merit: 1001



July 08, 2012, 02:30:34 PM
 #2907

I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished if it's slow? That would mean that there's no work actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

q6600 = my workstation, one 7870 on cgminer.  p2pool is running here.
miner1 = 4x7870 on cgminer
miner2 = 4x5870 on phoenix

I don't see miner2 that often, but I do see it.

M

I mine at Kano's Pool because it pays the best and is completely transparent!  Come join me!
Peterae
Newbie

Activity: 22
Merit: 0


July 08, 2012, 06:15:52 PM
 #2908

Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for the week and a 22Mhs dead mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx
Smoovious
Hero Member

Activity: 504
Merit: 500

Scattering my bits around the net since 1980


July 08, 2012, 10:50:00 PM
 #2909

Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for the week and a 22Mhs dead mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx

The chart is averaged out over the period the chart is displaying, while the stats shown in the console are averaged out over the past 10 minutes or so, based on your diff-1 share submissions, so luck will play a part too.
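A toy example (invented numbers) of why the two averages rarely agree:

Code:
# Toy example: the same lumpy share stream averaged over two window sizes.
# Short windows swing with luck; long windows smooth it out.
minute_rates = [250, 400, 120, 310, 0, 520] * 240  # ~a day of MH/s samples
long_avg = sum(minute_rates) / float(len(minute_rates))
short_avg = sum(minute_rates[-10:]) / 10.0  # "past 10 minutes or so"
print(long_avg, short_avg)  # they differ even with steady hardware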

-- Smoov
rav3n_pl
Legendary

Activity: 1361
Merit: 1003


Don't panic! Organize!


July 08, 2012, 11:34:11 PM
 #2910

Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for the week and a 22Mhs dead mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx

On p2pool.info you can see the last 24 hours' hashrate, measured by the shares accepted by the pool.
On your own chart you can see hour/day/week/month rates measured by diff=1 shares.
Depending on your luck, the rate reflected in pool shares can be higher or lower than the diff-1 share rate.
To compare what you see on p2pool.info with the local page, always use the local last-day view.

1Rav3nkMayCijuhzcYemMiPYsvcaiwHni  Bitcoin stuff on my OneDrive
My RPC CoinControl for any coin https://bitcointalk.org/index.php?topic=929954
Some stuff on https://github.com/Rav3nPL/
organofcorti
Donator
Legendary

Activity: 2058
Merit: 1007


Poor impulse control.


July 08, 2012, 11:35:01 PM
 #2911

Can someone please explain to me why my local rate chart shows a total mean area of 314Mhs for the week and a 22Mhs dead mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx


You're confusing the area (total megahashes, Mh) with the rate (megahashes per second, MHps). The actual rate you should expect from your given stats is: 314*86400*7/10^6 = 189.9 MHps. This is quite close to the 194 MHps you quoted.
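A quick check of that arithmetic:

Code:
# Sanity check of the conversion above (units as organofcorti uses them).
rate = 314 * 86400 * 7 / 1e6
print(rate)  # 189.9072, close to the ~194 MHps the pool page reports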

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
mdude77
Legendary

Activity: 1540
Merit: 1001



July 09, 2012, 01:12:55 AM
 #2912

I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

On the other hand, I am seeing it a lot more from my two cgminer instances than my phoenix instance.  Considering my phoenix instance is around 1.6GH/s, and my local 7870 is 660MH/s, you'd think I'd see a lot more from phoenix than I do from the 7870.  But I don't.

M

I mine at Kano's Pool because it pays the best and is completely transparent!  Come join me!
Peterae
Newbie

Activity: 22
Merit: 0


July 09, 2012, 05:40:34 AM
 #2913

Thanks for the explanations, that certainly clears that up  Grin
Ente
Legendary

Activity: 2126
Merit: 1001



July 09, 2012, 07:43:11 AM
 #2914


2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares out of the total, and the actual good shares are only 91% of what was submitted.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente
organofcorti
Donator
Legendary

Activity: 2058
Merit: 1007


Poor impulse control.


July 09, 2012, 03:04:00 PM
 #2915


2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares out of the total, and the actual good shares are only 91% of what was submitted.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think P2Pool only counts valid shares? If not, you might notice 9% less payout, but reporting would be accurate.

Bitcoin network and pool analysis 12QxPHEuxDrs7mCyGSx1iVSozTwtquDB3r
follow @oocBlog for new post notifications
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 09, 2012, 03:33:15 PM
 #2916


I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares out of the total, and the actual good shares are only 91% of what was submitted.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think P2Pool only counts valid shares? If not, you might notice 9% less payout, but reporting would be accurate.

It counts valid shares and dead-on-arrivals, but not duplicates, which are just dropped. So yes, you are right about that.
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 09, 2012, 05:32:41 PM
 #2917

Quote
2012-07-09 19:31:07.554921 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:13.877677 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:16.723297 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:22.509470 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:30.192033 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
Angry
Now what?
rav3n_pl
Legendary

Activity: 1361
Merit: 1003


Don't panic! Organize!


July 09, 2012, 05:51:28 PM
 #2918

Looks like you have updated to the fresh version that checks and warns about double-sending Smiley

1Rav3nkMayCijuhzcYemMiPYsvcaiwHni  Bitcoin stuff on my OneDrive
My RPC CoinControl for any coin https://bitcointalk.org/index.php?topic=929954
Some stuff on https://github.com/Rav3nPL/
RandomQ
Hero Member

Activity: 826
Merit: 500



July 09, 2012, 09:56:58 PM
 #2919

I love the latest from cgminer 2.5.0 change log.

----
I've also created my own workaround for the biggest problem with existing bitforce devices - they can now abort work as soon as a longpoll hits, which means literally half as much work on average wasted across longpoll as previously, and a much lower reject rate. Note these devices are still inefficient across longpoll since they don't even have the support the minirig devices have - and they never will according to bfl. This means you should never mine with them on p2pool.
----

P2pool- The Anti BFL pool
 Grin
tucenaber
Sr. Member

Activity: 337
Merit: 252


July 11, 2012, 11:31:57 AM
 #2920

So I have been having problems, and now at least I think I know what's going on.

The problem is this:

p2pool increments the getwork timestamp at every request and assumes that the miner will respect X-Roll-NTime which is set to "expire=10". cgminer (and apparently phoenix) doesn't respect that. Sometimes there is a hash collision from two different getwork requests, both rolled past 10 seconds. (see the cgminer thread)

(The check forrestv added yesterday to warn about this is broken, since it only catches 1/12 of the shares rolled past 10 seconds, but that's not very important.)

Now, my question is: what do I do about it? One reason I get more of these than other people, I suspect, is that my miners produce many shares per minute.

- Would the best short term solution be to increase the local difficulty?
- ckolivas suggested setting --scan-time in cgminer to 10 but that doesn't make much of a difference.
- hack p2pool to increase the 12 second increment to something higher? That seems a bit risky...

other suggestions?
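To make the suspected failure mode concrete, a simplified sketch (hypothetical values; a header reduced to just a (merkle_root, ntime) pair):

Code:
# Simplified sketch of the collision: p2pool keeps the merkle root and
# advances ntime by 12 seconds for each successive getwork. A miner that
# rolls ntime past the expire=10 window can land exactly on the header of
# a later getwork, so the same header hash gets submitted twice.
base_ntime = 1341790000

getwork_1 = ('merkle_root_abc', base_ntime)       # first getwork
getwork_2 = ('merkle_root_abc', base_ntime + 12)  # next getwork, +12 seconds

rolled = (getwork_1[0], getwork_1[1] + 12)  # miner rolls 12s past the window
print(rolled == getwork_2)  # True: identical header -> duplicate share hash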