eroxors
Legendary
Offline
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
|
|
April 30, 2012, 02:46:52 PM |
|
Quote from: eroxors
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales. I'm beginning to wonder if it is some sort of network congestion, since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1; the one with higher stales is the newer 2.3.2.

Quote from the reply:
A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool. The discarded % is irrelevant: it is the amount of work cgminer requested but discarded BEFORE starting to work on it, which is simply a metric relating GPU hashing power to the LP interval. If your GPU is 300 MH/s (each GPU is what matters), then it will complete a getwork in ~15 s, but the LP will occur earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads = 1), but you will still have a lot of discarded work. One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).

OK, thanks. I'm updating cgminer now. I booted up GUIMiner and it shows 0 stales after 50 submitted shares, but it shows "connection problems" every 10 seconds instead of 350 MH/s, so this may be a network thing. Thanks for the help.
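The discarded-work arithmetic in the reply above can be checked with a quick sketch. The 300 MH/s rate and the roughly 10-second p2pool longpoll interval are taken from the discussion (the exact interval varies); the 2^32-nonce getwork unit is the standard one:

```python
# Rough model of why cgminer discards queued work on p2pool.
# Assumptions: one getwork unit covers the full 2^32 nonce space,
# and p2pool sends a longpoll roughly every 10 seconds (approximate).
GETWORK_HASHES = 2 ** 32
gpu_rate = 300e6          # 300 MH/s per GPU, as in the post
lp_interval = 10.0        # approximate p2pool longpoll interval (s)

time_per_getwork = GETWORK_HASHES / gpu_rate
# The longpoll arrives before a single getwork can even finish, so
# anything sitting in a deep queue is stale before it starts.
print('seconds per getwork: %.1f' % time_per_getwork)
print('longpoll arrives first: %s' % (lp_interval < time_per_getwork))
```

This is why the discarded percentage says nothing about efficiency: it only reflects how much queued work each longpoll invalidates.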
|
|
|
|
eroxors
Legendary
Offline
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
|
|
April 30, 2012, 06:27:09 PM |
|
[quotes the stales exchange above]

The new cgminer didn't fix it, though I am down from 20% stales to 15%... switching to GUIMiner for a few hours to test.
|
|
|
|
check_status
Full Member
Offline
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
|
|
April 30, 2012, 07:13:09 PM |
|
What is your cgminer command line, eroxors? A sample similar to mine:

    $ sudo ./cgminer -o http://p2pmining.com:9332 -u <BTC address> -p relacks -I 6 -g 1 -k phatk -v 2 -w 128 --auto-fan --temp-target 68 --gpu-engine 875 --gpu-mem 250

The bolded items (-I 6 and -g 1), for p2pool, need to be set lower than they would be for other pools. On other pools the threads setting, -g, can be set to 2 or more. For p2pool, start intensity at 8 and lower it until you find good results.
|
For Bitcoin to be a true global currency, the value of BTC always needs to rise. If BTC became the global currency and the money supply = 100 trillion, then 1.00 BTC = $4,761,904.76. P2Pool Server List | How To's and Guides Mega List | 1EndfedSryGUZK9sPrdvxHntYzv2EBexGA
|
|
|
eroxors
Legendary
Offline
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
|
|
April 30, 2012, 08:38:31 PM |
|
[quotes check_status's command-line question above]

Currently: cgminer -g 1 -I 4
|
|
|
|
eroxors
Legendary
Offline
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
|
|
April 30, 2012, 10:04:24 PM |
|
[quotes the stales exchange above]

I've been on GUIMiner for a few hours; stales are at 7.5% and dropping. Anyone else on GUIMiner getting the "connection problems" under GPU speed? Another thing: GUIMiner shows 0 stale shares, but the site is still showing a percentage... any ideas why?
|
|
|
|
JayCoin (OP)
|
|
April 30, 2012, 11:41:29 PM |
|
I've been on guiminer for a few hours, stales are at 7.5% and dropping. Anyone else on guiminer getting the "connection problems" under gpu speed?
Another thing, GUIMiner shows 0 stale shares, but the site is still showing a percentage... any ideas why?
Those are DOA shares. That means a new p2pool share was found before you submitted your work. Because the work you submit may still solve a block, it is still accepted by the pool.
|
Hello There!
|
|
|
JayCoin (OP)
|
|
May 01, 2012, 12:09:27 AM |
|
To promote openness, I thought I would release the Python code I add to the p2pool code. The rest of the pool is in PHP. Later, I may record who actually finds shares and blocks so people interested in that data can see it.

At the beginning of main.py, and then in WorkerBridge.got_response() in main.py, after p2pool verifies that the proof-of-work hash is < the target:

    #Enter Share to database
    try:
        dbf_user = request.getUser()
        dbuser_items = dbf_user.split('+')
        db_diff = bitcoin_data.target_to_difficulty(target) * 1000
        proxy_db = MySQLdb.connect(host="localhost", user="mysqluser", passwd="mysqlpassword", db="p2pmining")
        pdb_c = proxy_db.cursor()
        pdb_c.execute("""INSERT INTO miner_data (id, address, hashrate, timestamp, difficulty, ontime)
                         VALUES (NULL, %s, %s, UNIX_TIMESTAMP(), %s, %s)""",
                      (dbuser_items[0], db_diff * on_time, db_diff, on_time))
        proxy_db.close()
    except:
        log.err(None, 'Error with database:')
    ###

That is it.
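The insert logic above can be exercised standalone. This is a sketch using sqlite3 instead of MySQL so it runs without a server; the miner_data schema here is inferred from the INSERT's column list (the real MySQL schema is not shown in the thread, so treat the column types as assumptions):

```python
# Self-contained sketch of the share-logging insert, using sqlite3.
# Schema is an assumption inferred from the INSERT's column list.
import sqlite3
import time

db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE miner_data (
    id INTEGER PRIMARY KEY,
    address TEXT,
    hashrate REAL,
    timestamp INTEGER,
    difficulty REAL,
    ontime INTEGER)""")

def log_share(user, difficulty, on_time):
    # user is "address+extra"; only the address part is stored
    address = user.split('+')[0]
    db_diff = difficulty * 1000
    # hashrate column gets db_diff only for on-time shares (db_diff * on_time)
    db.execute("""INSERT INTO miner_data
                  (id, address, hashrate, timestamp, difficulty, ontime)
                  VALUES (NULL, ?, ?, ?, ?, ?)""",
               (address, db_diff * on_time, int(time.time()), db_diff, on_time))

log_share('1ExampleAddr+0.5', 1.0, True)   # hypothetical user string
row = db.execute('SELECT address, difficulty, ontime FROM miner_data').fetchone()
print(row)
```

The bare except in the original swallows all database errors so a logging failure never blocks share processing; a narrower except clause would be safer in new code.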
|
|
|
|
check_status
Full Member
Offline
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
|
|
May 01, 2012, 02:27:09 AM |
|
So just after:

    def got_response(header, request):
        assert header['merkle_root'] == merkle_root
        header_hash = bitcoin_data.hash256(bitcoin_data.block_header_type.pack(header))
        pow_hash = net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(header))
        on_time = current_work.value['best_share_hash'] == share_info['share_data']['previous_share_hash']

you replace the existing 'try:' and 'except:' sections with your code? I was looking through the diff stuff with the hex addresses. If we use it, should we comment it with credit to JayCoin?

Suggestions: Would you add BTC paid out per 24 hours and BTC paid out total? On the current miners page, maybe list firstbits instead of full addresses? (So more stats can fit.) Would it be possible to have a subpool subsidy that a 6 GH miner could donate to those under 1 GH? Do you plan on having a drawing to give away a 5830 or 6870 for the subpool hashers who have hashed for at least, say, one week between date to date?
|
|
|
|
JayCoin (OP)
|
|
May 01, 2012, 03:39:00 AM |
|
Here is the code in between p2pool code for further clarification:

    if pow_hash > target:
        print 'Worker %s submitted share with hash > target:' % (request.getUser(),)
        print '    Hash:   %56x' % (pow_hash,)
        print '    Target: %56x' % (target,)
    elif header_hash in received_header_hashes:
        print >>sys.stderr, 'Worker %s @ %s submitted share more than once!' % (request.getUser(), request.getClientIP())
    else:
        received_header_hashes.add(header_hash)
        #Enter Share to database
        try:
            dbf_user = request.getUser()
            dbuser_items = dbf_user.split('+')
            db_diff = bitcoin_data.target_to_difficulty(target) * 1000
            proxy_db = MySQLdb.connect(host="localhost", user="mysqluser", passwd="mysqlpassword", db="p2pmining")
            pdb_c = proxy_db.cursor()
            pdb_c.execute("""INSERT INTO miner_data (id, address, hashrate, timestamp, difficulty, ontime)
                             VALUES (NULL, %s, %s, UNIX_TIMESTAMP(), %s, %s)""",
                          (dbuser_items[0], db_diff * on_time, db_diff, on_time))
            proxy_db.close()
        except:
            log.err(None, 'Error with database:')
        #End Enter Share Code
        pseudoshare_received.happened(bitcoin_data.target_to_average_attempts(target), not on_time, user)
        self.recent_shares_ts_work.append((time.time(), bitcoin_data.target_to_average_attempts(target)))
        while len(self.recent_shares_ts_work) > 50:
            self.recent_shares_ts_work.pop(0)
        local_rate_monitor.add_datum(dict(work=bitcoin_data.target_to_average_attempts(target), dead=not on_time, user=user))

Notoriety in the comments would be cool.
|
|
|
|
JayCoin (OP)
|
|
May 01, 2012, 03:48:29 AM |
|
[quotes check_status's suggestions above]

I will definitely put in the BTC-paid data for each miner soon. I like your idea of using firstbits for the addresses. As for your third suggestion, I don't think that is possible with the current setup. No plans on drawings yet.
|
|
|
|
JayCoin (OP)
|
|
May 01, 2012, 11:12:14 PM |
|
Just ordered a few more GPUs. A 5870,5970 and a new 7970.
|
|
|
|
pyroim
Newbie
Offline
Activity: 17
Merit: 0
|
|
May 02, 2012, 12:22:52 AM |
|
At that price, why not an FPGA? From what I've seen, the good open-source ones will keep their value.
|
|
|
|
JayCoin (OP)
|
|
May 02, 2012, 01:23:15 AM |
|
Quote from: pyroim
At that price, why not an FPGA? From what I've seen, the good open-source ones will keep their value.

There are way more people buying GPUs than FPGAs, so they will be easier to sell later on. The 7970 was only $450 from Amazon.
|
|
|
|
check_status
Full Member
Offline
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
|
|
May 02, 2012, 11:29:12 AM |
|
The graphs look pretty good.
|
|
|
|
eroxors
Legendary
Offline
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
|
|
May 02, 2012, 07:43:32 PM |
|
[quotes the stales exchange above]

I tried with cgminer 2.3.6; stales go up to between 15 and 20% with it. GUIMiner keeps them between 10 and 15%. I'm going to try BAMT later this week, but I'm testing another pool until then to make sure it's not a problem with my network. Thanks for all the help.
|
|
|
|
JayCoin (OP)
|
|
May 03, 2012, 01:07:59 AM |
|
Help simplify this SQL statement for daily miner payout graph data. It works, but is there a simpler way? It got crazy when I had to exclude orphaned blocks.

    SELECT DATE_FORMAT(FROM_UNIXTIME(t1.time), '%Y-%m-%d') day,
           UNIX_TIMESTAMP(DATE_FORMAT(FROM_UNIXTIME(t1.time), '%Y-%m-%d')) timestamp,
           SUM(share * amount) btc
    FROM (SELECT time, payouts.amount AS share, pool_blocks.amount
          FROM payouts, pool_blocks,
               (SELECT MAX(time) AS mtime FROM pool_blocks) maxt
          WHERE payouts.txid = pool_blocks.txid
            AND pool_blocks.time > UNIX_TIMESTAMP() - 60*60*24*30
            AND address = '1KgFh9kWBpz4TsX92xcx78VQ2Fo1jP2Ddx'
            AND NOT (maxt.mtime > pool_blocks.time AND pool_blocks.confirmations = 1)
         ) t1
    GROUP BY day

SQL is fun.
|
|
|
|
weex
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
May 03, 2012, 02:02:54 AM |
|
Quote from: JayCoin
SQL is fun

Your love for it shows. I generally prefer to break up statements like this into a few separate queries and use the scripting language running the whole thing to glue the results together. I think it's much easier to read and maintain that way, but that's me.

I just started with the pool a couple of days ago, and I'm sorry if this is already answered, but... how long after a block is confirmed do payouts go out? Also, how does it work when I was only mining for some of the time during the first round? Finally, how do you mix the altcoin dividends into the payments? Do you sell them periodically?

Thanks for this neat form of pool.
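The break-it-up approach weex describes can be sketched against JayCoin's query. This is a toy using sqlite3 with made-up data; the table and column names follow the original query, but the schema details are assumptions:

```python
# Sketch of splitting the nested payout SQL into two plain queries
# glued together in the host language. Data here is invented.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(':memory:')
db.executescript("""
CREATE TABLE pool_blocks (txid TEXT, time INTEGER, amount REAL, confirmations INTEGER);
CREATE TABLE payouts (txid TEXT, address TEXT, amount REAL);
INSERT INTO pool_blocks VALUES ('a', 1335830400, 50.0, 120), ('b', 1335916800, 50.0, 1);
INSERT INTO payouts VALUES ('a', '1Addr', 0.01), ('b', '1Addr', 0.02);
""")

# Query 1: newest block time, so a stale 1-confirmation block
# (an orphan candidate) can be excluded in plain code.
(max_time,) = db.execute('SELECT MAX(time) FROM pool_blocks').fetchone()
good_txids = [txid for txid, t, conf in
              db.execute('SELECT txid, time, confirmations FROM pool_blocks')
              if not (t < max_time and conf == 1)]

# Query 2: one flat row per payout; the per-day grouping and the
# share * amount multiplication happen in Python instead of SQL.
daily = {}
for txid, t, share, amount in db.execute(
        'SELECT payouts.txid, time, payouts.amount, pool_blocks.amount '
        'FROM payouts JOIN pool_blocks ON payouts.txid = pool_blocks.txid'):
    if txid in good_txids:
        day = datetime.fromtimestamp(t, tz=timezone.utc).strftime('%Y-%m-%d')
        daily[day] = daily.get(day, 0.0) + share * amount

print(daily)
```

Each query on its own is trivial to read, at the cost of pulling the rows into the application instead of aggregating in the database.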
|
|
|
|
JayCoin (OP)
|
|
May 03, 2012, 02:37:21 AM |
|
[quotes weex's questions above]

After a block is confirmed 120 times, your reward is available for payout. Once your balance is greater than 0.5, it is automatically paid. You can do an instant payout at any time by signing a short message with your bitcoin address and submitting the signature.

Altcoins are split between the miners who have registered their altcoin addresses. The way the altcoins are paid out is the same as for bitcoins.

This pool is PPLNS with a 24-hour history. That means any shares submitted in the 24 hours before a block is found are used when calculating the reward. This prevents pool hoppers from gaining an unfair advantage.

Thanks for trying my pool!
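The PPLNS-with-24-hour-window scheme described above can be sketched as a toy model. This assumes the pool simply weights each miner by the count of shares submitted inside the window, which is how the post describes it; the real pool's share weighting may differ:

```python
# Toy PPLNS payout split with a 24-hour window: only shares from the
# last day count when a block is found, and each miner's reward is
# proportional to their share count. Simplified model, not the
# pool's actual code.
DAY = 24 * 60 * 60

def pplns_split(shares, block_time, block_reward):
    """shares: list of (miner, timestamp) tuples."""
    recent = [m for m, t in shares if block_time - DAY <= t <= block_time]
    total = len(recent)
    payout = {}
    for m in recent:
        payout[m] = payout.get(m, 0.0) + block_reward / total
    return payout

now = 1_000_000
shares = [('alice', now - 100), ('alice', now - 200),
          ('bob', now - 300),
          ('carol', now - 2 * DAY)]   # outside the window, excluded
print(pplns_split(shares, now, 50.0))
```

A pool hopper who leaves early simply watches their shares age out of the window, which is why the scheme removes the hopping incentive.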
|
|
|
|
Tittiez
|
|
May 03, 2012, 10:16:18 AM |
|
Of course, as soon as I switch to another pool due to P2Pool luck, there are 7 blocks in a day. Ah, hell.
|
|
|
|
RandomQ
|
|
May 03, 2012, 12:36:48 PM |
|
I'm in kind of the same boat; I was in the process of moving my mining boxes, so I only had half of my boxes online yesterday. But I do want to say: great job on the website. It looks great, and I love all the new features.
|
|
|
|
|