ghost
Newbie
Offline
Activity: 34
Merit: 0
April 27, 2011, 12:00:54 AM
The problems I was having with slush's pool have gone away and it will now connect. I noticed this happening with other miners as well, so I don't think it was ever anything specific to Phoenix.
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 01:08:38 AM
I've just realized that with Phoenix miner, I don't even have to turn my Bitcoin client on! Lol?

To mine in a pool? You shouldn't have to turn your Bitcoin client on at all unless you're mining solo. The pool just benefits from your contributed computing power and pays you a percentage cut. You don't have to leave Bitcoin running to receive the payments, either.

Good work on the miner guys... donation coming your way.
PS: I don't know whether to love you or hate you for single-handedly making the network hashrate 10-15% bigger.

Thanks! And don't worry, I think I hate us too.

Are rejected shares the same as stale shares?
Rejected shares are either stale or invalid. But unless you have broken hardware (or have overclocked too far) or buggy software producing invalid solutions, rejected shares are stale. "Rejected" can mean any number of things; it just means the system you were mining for (a pool server or your Bitcoin client) rejected the work. A Bitcoin client will reject only invalid or stale work, but a pool could reject for other reasons (duplicate work, account setting errors, an internal server problem, etc.). However, since invalid work is prevented by a double-check in Phoenix (even with unstable hardware), and duplicate work won't occur unless the kernel has bugs, rejected shares are usually going to be stale... unless there are bugs. So, pretty much the wordy version of what Raulo just said.

i get this message every time i start phoenix on ubuntu:
/home/noodles/phoenix-1.2/KernelInterface.py:139: DeprecationWarning: struct integer overflow masking is deprecated
hashInput = pack('>76sI', staticData, nonce)
/home/noodles/phoenix-1.2/KernelInterface.py:148: DeprecationWarning: struct integer overflow masking is deprecated
formattedResult = pack('<76sI', range.unit.data[:76], nonce)
it just spits out that warning and starts to work anyway, but from time to time, a miner just stops after the work queue is empty, like it did about 1 hour ago:
[27/04/2011 00:20:39] Result: 83228c5b accepted
[27/04/2011 00:20:39] Warning: work queue empty, miner is idle
and i have to restart it (and again get the warning shown above)

Now that's an interesting error! I'm guessing this is Python 2.7? We haven't done any testing on 2.7 yet. A little Googling should tell me what's going on with the DeprecationWarnings; it's nothing serious, but apparently we're doing something in there that the Python team prefers we not do in the future, so we'll fix that in 1.3.

As for the work queue stalling, which should be totally unrelated to the warnings... What's your aggression set at?
Sometimes the aggression is so high that it runs the queue clean out of work on every loop. We're working on this, but for now you can use -q 2 or -q 3 to increase the size of the queue so that doesn't happen in the future.
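For what it's worth, that DeprecationWarning comes from handing struct.pack an integer outside the 32-bit range of the 'I' format. Here is a minimal sketch of one possible fix, masking the nonce explicitly before packing; this is my own illustration, not necessarily the change that went into 1.3:

```python
from struct import pack

def pack_share(static_data, nonce):
    # Python 2.x's struct masked out-of-range 'I' values with a
    # DeprecationWarning; Python 3 raises struct.error outright.
    # Masking the nonce to 32 bits keeps the intended behavior on
    # both. (Sketch only, not Phoenix's actual fix.)
    return pack('>76sI', static_data, nonce & 0xFFFFFFFF)

# A nonce that somehow came back as 2**32 + 1 still packs as 1:
result = pack_share(b'\x00' * 76, 2**32 + 1)
print(result[-4:])
```

The masking makes the wraparound explicit instead of relying on struct's (deprecated) silent truncation.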
BitLex
April 27, 2011, 02:17:59 AM
What's your aggression set at?
Aggression is set to 8; it hasn't stopped since. I'll try setting it lower, or setting -q, if it happens again.
Noitev
April 27, 2011, 06:31:14 AM
It'd be cool if this miner showed what difficulty each potential hash >= 1 would solve, so like "347 diff not met / not sent, 2 diff not met / not sent", etc. It'd be more interesting to see how close each potential hash is... lol
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 07:01:21 AM
It'd be cool if this miner showed what difficulty each potential hash >= 1 would solve, so like "347 diff not met / not sent, 2 diff not met / not sent", etc. It'd be more interesting to see how close each potential hash is... lol
Neat idea, but it's probably not practical enough to add to the main code. Since it's open-source software, you're free to modify it to add that feature yourself, but I don't think it would help enough people to make it worthwhile. If you're curious anyway: the hex displayed when submitting shares is actually the fifth through eighth bytes of the share hash. You can calculate the difficulty from it by dividing 4294967040 by the displayed hex value. For example, 3351e8c0 is 861006016, and 4294967040/861006016 = 4.988 difficulty.
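That arithmetic can be wrapped in a one-liner; `share_difficulty` is just my own name for the rule of thumb in the post above:

```python
def share_difficulty(displayed_hex):
    # Divide 4294967040 (0xFFFFFF00) by the four hash bytes Phoenix
    # displays, per the rule of thumb in the post.
    return 4294967040 / int(displayed_hex, 16)

print(round(share_difficulty('3351e8c0'), 3))  # ~4.988
```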
shackleford
April 27, 2011, 07:27:21 AM
As for the work queue stalling, which should be totally unrelated to the warnings... What's your aggression set at? Sometimes the aggression is so high that it runs the queue clean out of work on every loop. We're working on this, but for now you can use -q 2 or -q 3 to increase the size of the queue so that doesn't happen in the future.
I receive the "miner is idle" message as well. I get it when going above aggression 13 (about 408 Mhash/sec for me). I know people are saying that anything above 10 was unnecessary, but I was seeing a bump in hashrate every time I raised it, up to the point where the DOS window would no longer update the hash rate. In my mind, just because it doesn't say it's hashing higher doesn't mean it isn't. -q 2 or 3 made no difference in the idle messages. Great work; just the other day I was trying to push my card, and this miner takes me further than I thought possible while staying stable. Tipped
pizzaman
Newbie
Offline
Activity: 19
Merit: 0
April 27, 2011, 08:28:57 AM
Started today with a 5970 XFX Black Edition, stock firmware, OC 805/1200. I'm not completely sure about all the settings and OC yet; these are my results. Running solo, two parallel Phoenixes with:

phoenix.exe -u http://xxxx:xxxx@localhost:8332 PLATFORM=0 DEVICE=0 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10 -v
phoenix.exe -u http://xxxx:xxxx@localhost:8332 PLATFORM=0 DEVICE=1 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10 -v

guiminer: 290x2, ~580 Mhash/sec with -v -d 0 -f 1 -w128
phoenix: 330x2, ~660 Mhash/sec at 90 degC

Is this normal?

[27/04/2011 18:18:36] Result didn't meet full difficulty, not sending
[27/04/2011 18:18:47] Result didn't meet full difficulty, not sending
[27/04/2011 18:18:48] Server gave new work; passing to WorkQueue
[27/04/2011 18:18:59] Result didn't meet full difficulty, not sending
[27/04/2011 18:19:01] Server gave new work; passing to WorkQueue
...
[27/04/2011 18:20:24] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:31] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:33] Server gave new work; passing to WorkQueue

A bit puzzled about all those "not sending" warnings.

[27/04/2011 18:20:44] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:47] Server gave new work; passing to WorkQueue
[331.20 Mhash/sec] [0 Accepted] [0 Rejected] [RPC]
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 08:34:05 AM Last edit: April 27, 2011, 08:55:15 AM by CFSworks
Started today with a 5970 XFX Black Edition, stock firmware, OC 805/1200. I'm not completely sure about all the settings and OC yet; these are my results. Running solo, two parallel Phoenixes with:
phoenix.exe -u http://xxxx:xxxx@localhost:8332 PLATFORM=0 DEVICE=0 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10 -v
phoenix.exe -u http://xxxx:xxxx@localhost:8332 PLATFORM=0 DEVICE=1 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10 -v
guiminer ~580 Mhash/sec, phoenix ~590 Mhash/sec
Is this normal?
[27/04/2011 18:18:36] Result didn't meet full difficulty, not sending
[27/04/2011 18:18:47] Result didn't meet full difficulty, not sending
[27/04/2011 18:18:48] Server gave new work; passing to WorkQueue
[27/04/2011 18:18:59] Result didn't meet full difficulty, not sending
[27/04/2011 18:19:01] Server gave new work; passing to WorkQueue
...
[27/04/2011 18:20:24] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:31] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:33] Server gave new work; passing to WorkQueue
A bit puzzled about all those "not sending" warnings.
[27/04/2011 18:20:44] Result didn't meet full difficulty, not sending
[27/04/2011 18:20:47] Server gave new work; passing to WorkQueue
[331.20 Mhash/sec] [0 Accepted] [0 Rejected] [RPC]

That's normal. -v turns on "verbose mode", which shows you debug messages; that mode is mostly intended for when you're trying to hunt down a problem with your miner.

EDIT: Those "not sending" debug messages are part of Phoenix's normal behavior. The poclbm kernel actually reports only difficulty=1 hashes to the Phoenix framework; it is then up to Phoenix to check whether each one meets full network difficulty. When that check fails (as it will most of the time), Phoenix logs a debug message and continues on its way.

Everything else is fine, but since you're mining solo, you might want an askrate so your miner switches sooner after the block changes. Changing your miners to this should do the trick:
phoenix.exe -u http://xxxx:xxxx@localhost:8332/;askrate=10 PLATFORM=0 DEVICE=0 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10
phoenix.exe -u http://xxxx:xxxx@localhost:8332/;askrate=10 PLATFORM=0 DEVICE=1 -k poclbm VECTORS WORKSIZE=128 BFI_INT AGGRESSION=10
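The two-tier check described above (the kernel reports difficulty-1 hashes, the miner only submits those that also meet the full target) can be sketched roughly like this. The `classify` helper is my own illustration, not Phoenix's actual internals; the constant is the standard Bitcoin difficulty-1 target:

```python
# Standard Bitcoin difficulty-1 target (hash interpreted as a 256-bit int).
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def classify(hash_int, network_difficulty):
    """Illustrative sketch of the miner-side share check, not Phoenix code."""
    if hash_int > DIFF1_TARGET:
        return 'not a share'   # the kernel wouldn't even report this hash
    if hash_int > DIFF1_TARGET // network_difficulty:
        return 'not sending'   # meets difficulty 1, but not full difficulty
    return 'submit'            # meets full network difficulty

print(classify(DIFF1_TARGET // 2, 100))
```

Since full difficulty is far above 1, almost every reported share lands in the 'not sending' branch, which is why solo mining with -v shows that message constantly.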
TurdHurdur
April 27, 2011, 08:42:18 AM
So, what's the average donation been?
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 08:47:13 AM
So, what's the average donation been?
Feel free to look at the address in BlockExplorer: 129ZQG33GmqYRVSCw2hw7zmDUCvvMsuGbC

That 10 came as a surprise, but it's otherwise been pretty modest. We're grateful for the support all the same!
Grinder
Legendary
Offline
Activity: 1284
Merit: 1001
April 27, 2011, 08:54:12 AM
The problem is that since everybody will be using this miner from now on, the difficulty will rise, and soon the payout will be the same as with the old miners. It does make the Bitcoin network more secure, though.
jedi95 (OP)
April 27, 2011, 09:02:27 AM
We have identified a possible cause of the increased invalid/stale/rejected shares.
The FASTLOOP setting has the kernel do 8 internal loops before getting additional work from the main queue. This process is not interrupted when new work is pushed (through LP or otherwise), so using this option can extend the amount of time the miner spends running stale work. This should not be a problem if FASTLOOP is used as intended, with AGGRESSION set to 8 or lower, since the total delay is then less than a second.
With higher AGGRESSION settings FASTLOOP can extend the time it takes the kernel to get new work to more than 10 seconds.
The issue is worsened slightly by a minor bug in the Phoenix framework which will be addressed in 1.3.
Phoenix Miner developer. Donations appreciated at: 1PHoenix9j9J3M6v3VQYWeXrHPPjf7y3rU
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 10:15:06 AM
i get this message every time i start phoenix on ubuntu:
/home/noodles/phoenix-1.2/KernelInterface.py:139: DeprecationWarning: struct integer overflow masking is deprecated
hashInput = pack('>76sI', staticData, nonce)
/home/noodles/phoenix-1.2/KernelInterface.py:148: DeprecationWarning: struct integer overflow masking is deprecated
formattedResult = pack('<76sI', range.unit.data[:76], nonce)
it just spits out that warning and starts to work anyway, but from time to time, a miner just stops after the work queue is empty, like it did about 1 hour ago:
[27/04/2011 00:20:39] Result: 83228c5b accepted
[27/04/2011 00:20:39] Warning: work queue empty, miner is idle
and i have to restart it (and again get the warning shown above)

To follow up: apparently the DeprecationWarning is something a little more serious. For some reason, PyOpenCL is returning either a negative nonce or one greater than 2^32. The NumPy array that nonces are placed in is designated as a uint32 array, so there is absolutely no way that should be possible... and yet it's happening. Maybe there's a bug in your version of PyOpenCL or NumPy? I have absolutely no idea what's wrong, sorry. However, I don't think this is a benign issue; it could be seriously impacting your hashrate and/or your ability to send in shares. We'll probably have to have a more in-depth conversation via PM about this. I'm just as interested in getting to the bottom of it as you are.
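The invariant being described can be demonstrated directly. This is an illustrative sketch (not Phoenix code): a value read back from a NumPy uint32 array is always in 0..2**32-1, so a negative or too-large nonce implies the value didn't really come from that array, pointing at a bug further down the stack:

```python
import numpy as np

# Illustrative sketch: uint32 storage bounds every readable nonce.
nonces = np.zeros(4, dtype=np.uint32)
nonces[0] = 0xFFFFFFFF          # the largest representable nonce

for n in nonces:
    v = int(n)
    # If this ever fired, the value couldn't have come from the uint32
    # array, suggesting a PyOpenCL/NumPy bug as discussed above.
    assert 0 <= v < 2**32

print(int(nonces[0]))
```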
jedi95 (OP)
April 27, 2011, 10:40:13 AM Last edit: April 27, 2011, 10:50:32 AM by jedi95
Version 1.3 has been released.
Changes:
1. Kernel performance improvements on ATI hardware without BFI_INT enabled (3-10 Mhash/sec)
2. Added a warning on startup if FASTLOOP is enabled with AGGRESSION set to 9 or higher
3. The kernel's work cache is cleared when a new block is started (reduces invalid/stale shares)
4. Results are checked against the current block before being sent (prevents sending stale work if the block changed while the kernel was processing it)
5. Various minor bugfixes
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
April 27, 2011, 10:52:28 AM
congrats on your miner: insane speed!
One question: I'm using Phoenix to mine solo on one GPU, and on slush's pool on the other. The slush one reports "working on block #" when switching to a new block. The miner connected to my local bitcoin (solo) doesn't do that. Why not?
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
jedi95 (OP)
April 27, 2011, 11:10:21 AM
congrats on your miner: insane speed!
One question: I'm using Phoenix to mine solo on one GPU, and on slush's pool on the other. The slush one reports "working on block #" when switching to a new block. The miner connected to my local bitcoin (solo) doesn't do that. Why not?
This message is only displayed when the RPC server sends X-Blocknum in the header. A local bitcoin or bitcoind instance doesn't provide it.
CFSworks
Member
Offline
Activity: 63
Merit: 10
April 27, 2011, 11:11:22 AM
As jedi95 said, slush's pool includes an extra header in the response that indicates the block number, which Phoenix simply displays. bitcoind doesn't send this header in its getwork() responses (and, as of this writing, neither do any of the other pools), but when I have time to sit down and adjust the internals a bit, I can make Phoenix query the solo Bitcoin client for its block number.
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
April 27, 2011, 11:13:21 AM
congrats on your miner: insane speed!
One question: I'm using Phoenix to mine solo on one GPU, and on slush's pool on the other. The slush one reports "working on block #" when switching to a new block. The miner connected to my local bitcoin (solo) doesn't do that. Why not?
This message is only displayed when the RPC server sends X-Blocknum in the header. A local bitcoin or bitcoind instance doesn't provide it.

Aaawright. Thanks for clearing that up for me.
kindle
Member
Offline
Activity: 84
Merit: 10
April 27, 2011, 11:52:51 AM
Hi, I have a question. You mentioned that it's good to set askrate=10 when mining solo because of possible block changes. I recall that the default for poclbm or Diablo was 5. Does the askrate for Phoenix use the same scale as those two? Additionally, could askrate=5 cause the miner to leave work incomplete? For example, if the miner has not completely hashed the current work when new work arrives and dumps the current work midway, and this cycle continues, would it affect the probability of finding a solution?
jedi95 (OP)
April 27, 2011, 12:21:40 PM
Hi, I have a question. You mentioned that it's good to set askrate=10 when mining solo because of possible block changes. I recall that the default for poclbm or Diablo was 5. Does the askrate for Phoenix use the same scale as those two? Additionally, could askrate=5 cause the miner to leave work incomplete? For example, if the miner has not completely hashed the current work when new work arrives and dumps the current work midway, and this cycle continues, would it affect the probability of finding a solution?
Work (2^32 nonces) is always checked completely unless the block changes before the entire nonce range has been checked. As for full getwork responses: if the queue is already full when more work is received, the oldest work is discarded. This doesn't affect the chances of finding a block, since the new work is just as likely to contain a solution.
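The discard-oldest behavior described above can be pictured with a bounded queue; this is a toy sketch (Phoenix's real WorkQueue is more involved), with maxlen=2 standing in for running with -q 2:

```python
from collections import deque

# Toy model of a bounded work queue: appending to a full deque
# silently drops the oldest entry, just as described above.
queue = deque(maxlen=2)
for work in ['work1', 'work2', 'work3']:
    queue.append(work)  # 'work1' is dropped when 'work3' arrives

print(list(queue))  # ['work2', 'work3']
```

Raising -q just enlarges the window, so high-aggression kernels are less likely to drain the queue before fresh work arrives.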