fullzero (OP)
July 26, 2017, 01:44:45 AM
Hi Fullzero
I want to share with you a GPU failure that the watchdog is not able to detect
wdog screen:
GPU UTILIZATION: Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU
/home/m1/IAmNotAJeep_and_Maxximus007_WATCHDOG: line 44: [: Unable: integer expression expected
(...the same "line 44: [: <word>: integer expression expected" error repeats once for every word of the nvidia-smi message...)
Tue Jul 25 16:57:01 CEST 2017 - All good! Will check again in 60 seconds
GPU UTILIZATION: Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU
(...the same block of errors repeats...)
Tue Jul 25 16:58:01 CEST 2017 - All good! Will check again in 60 seconds
the miner shows/detects only 6 GPUs out of 7
nvidia-smi doesn't work:
$ nvidia-smi
Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU
temp screen:
Provided power limit 75.00 W is not a valid power limit which should be between 115.00 W and 291.00 W for GPU 00000000:0A:00.0
Terminating early due to previous errors.
Tue Jul 25 17:01:07 CEST 2017 - All good, will check again soon
GPU 0, Target temp: 61, Current: 60, Diff: 1, Fan: 75, Power: 123.46
GPU 1, Target temp: 61, Current: 60, Diff: 1, Fan: 63, Power: 124.62
GPU 2, Target temp: 61, Current: 59, Diff: 2, Fan: 77, Power: 119.23
GPU 3, Target temp: 61, Current: 60, Diff: 1, Fan: 68, Power: 120.72
GPU 4, Target temp: 61, Current: 59, Diff: 2, Fan: 57, Power: 124.26
GPU 5, Target temp: 61, Current: Unable, Diff: 61, Fan: to, Power: determine
/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 125: [: Unable: integer expression expected
/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 158: [: the: integer expression expected
/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 171: [: to: integer expression expected
GPU 6, Target temp: 61, Current: 55, Diff: 6, Fan: 50, Power: 126.76
Tue Jul 25 17:01:37 CEST 2017 - Restoring Power limit for gpu:6. Old limit: 125 New limit: 75 Fan speed: 50
Provided power limit 75.00 W is not a valid power limit which should be between 115.00 W and 291.00 W for GPU 00000000:0A:00.0
Terminating early due to previous errors.
Tue Jul 25 17:01:37 CEST 2017 - All good, will check again soon
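The invalid power limit happens because the temperature-control script tries to restore a 75 W limit on a card whose driver only accepts 115-291 W. A minimal sketch of a guard (a hypothetical helper, not the actual nvOC code) would clamp the requested value into the range the GPU reports:

```shell
#!/bin/bash
# Hypothetical helper (not the actual nvOC code): clamp a requested
# power limit into the [min, max] range the driver reports, so
# "Provided power limit ... is not a valid power limit" cannot occur.
# In real use, min/max would be parsed from: nvidia-smi -q -d POWER
clamp_power() {
  local want="$1" min="$2" max="$3"
  if   [ "$want" -lt "$min" ]; then echo "$min"
  elif [ "$want" -gt "$max" ]; then echo "$max"
  else echo "$want"
  fi
}

clamp_power 75 115 291    # prints 115: the 75 W request is raised to the minimum
clamp_power 150 115 291   # prints 150: already in range
```

With a clamp like this, the restore step would set 115 W instead of failing outright and leaving the card at its old limit.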
I believe this is the exact problem that Maxximus007 recently made a new code block to resolve.
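The `[: integer expression expected` spam happens because the script runs an integer test like `[ "$UTIL" -lt N ]` on a value that is actually the whole "Unable to determine the device handle..." sentence. An illustrative sketch of the kind of guard that fixes this (not Maxximus007's actual code) validates the value before comparing:

```shell
#!/bin/bash
# Illustrative sketch, not the actual watchdog code: decide what to do
# based on the utilization string nvidia-smi returned. A non-numeric
# value (e.g. "Unable to determine the device handle ... GPU is lost")
# means the GPU dropped off the bus and the rig should reboot instead
# of spamming "[: integer expression expected" errors.
check_util() {
  local util="$1"
  if ! [[ "$util" =~ ^[0-9]+$ ]]; then
    echo "REBOOT"          # nvidia-smi error text, not a number
  elif [ "$util" -lt 30 ]; then
    echo "RESTART_MINER"   # numeric but too low: miner likely hung
  else
    echo "OK"
  fi
}

check_util "Unable"   # prints REBOOT
check_util "5"        # prints RESTART_MINER
check_util "97"       # prints OK
```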
mnh_license@proton.me
https://github.com/hartmanm
How difficulty adjustment works: every 2016 blocks, the network adjusts the current difficulty to the estimated difficulty in an attempt to keep the block generation time at 10 minutes (600 seconds). Thus the network re-targets the difficulty over a total time of: 2016 blocks * 10 minutes per block = 20160 minutes = 336 hours = 14 days. When the network hashrate is increasing, a difficulty period (2016 blocks) should take less than 14 days. How much less can be estimated by comparing the network hashrate growth over the period against what the network hashrate was at the beginning of the period. This is only an estimate, because you cannot account for "luck", but you can calculate reasonably well using explicitly delimited stochastic ranges. The easy way to think about this is to look at this graph and see how close to 0 the current data points are on its y axis: if the blue line is above 0, the difficulty period should take less than 14 days; if it is below, it should take more. http://bitcoin.sipa.be/growth-10k.png
fullzero (OP)
July 26, 2017, 01:46:32 AM Last edit: July 27, 2017, 12:29:52 AM by mprep
I have an NV 1060. How can I change the Performance Level from 2 to 3 in Nvidia X Server Settings?
in 1bash set:
GPUPowerMizerMode_Adjust="YES"
GPUPowerMizerMode=3
or open a new guake tab and enter:
sudo nvidia-settings -a [gpu:0]/GPUPowerMizerMode=3
changing the number in [gpu:0] for the GPU you are adjusting.

Hi fullzero, any fix for this issue? Of the GPUPowerMizer modes 0-1-2-3, Ubuntu doesn't detect the third one. Screen as evidence: http://imgur.com/a/wp7kx Thanks for all your hard work, and to the other hard contributors too.
EDIT: how can I grab some logs when it soft crashes? I would like to check this.
EDIT2: My main GPU0, with the display attached, gets a hashrate drop of about -3 MH/s at most over a period. Is that from the display? Would switching to "REMOTE" + ssh fix the issue?

Hey, I have the same issue, but my assumption is that the Nvidia GUI is showing levels 1/2/3 BUT the Nvidia command line tool is showing 0/1/2, and (I am guessing) that 0=1, 1=2 and 2=3.

This is very possible.

No, the Nvidia GUI is showing 4 lines, levels 0/1/2/3. And when 1bash starts, level 3 shows for a short time. From the Nvidia GUI we have 0 to 3; Linux has only 3 settings for PowerMizer, 0 to 2. I assume the "3" state is only for Windows? I put a negative OC on core & clock, plus full power (150W); same deal, "3" is not a valid value.

There is a difference between performance mode and performance levels. Performance mode 0 is "prefer maximum performance", mode 1 is "adaptive" and mode 2 is "auto". Performance level is the thing with the clock speeds (adaptive clocking), 1 to 4, which you can see in the GUI. It looks like a by-design "feature" of the Nvidia driver: every time you start the miner software the performance level goes to 3 and the card clock will go down. Overclocking settings will also affect the clock speed at a lower performance level.
Example:
level 1: 200 MHz and 400 MHz
level 2: 400 MHz and 100 MHz
level 3: 1200 MHz and 3000 MHz
level 4: 1700 MHz and 4000 MHz
After overclocking GPU +100 / memory +1000 it will look like this:
level 1: 300 MHz and 1400 MHz
level 2: 500 MHz and 1100 MHz
level 3: 1300 MHz and 4000 MHz
level 4: 1800 MHz and 5000 MHz
The only thing you do with overclocking is make your card's clock speeds as fast as they should be. It would be super if someone had a solution for that. Btw, nvidia-smi -ac 5000,1800 won't work in my case (application clocking). There is also a post on the Nvidia dev forum with the same problems.

the -ac option is for Maxwell and older GPUs only; it will not work with 1000 series GPUs
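To apply a PowerMizer mode across a multi-GPU rig, the nvidia-settings call above just has to be repeated per GPU index. A small sketch (the function name is made up, and a real script would read the GPU count from nvidia-smi) that generates the commands rather than running them:

```shell
#!/bin/bash
# Sketch: emit one nvidia-settings command per GPU index. Generating
# the commands keeps the logic testable without a GPU present; in
# practice you would run the emitted lines (or pipe them to bash).
powermizer_cmds() {
  local num_gpus="$1" mode="$2" i
  for ((i = 0; i < num_gpus; i++)); do
    echo "sudo nvidia-settings -a [gpu:$i]/GPUPowerMizerMode=$mode"
  done
}

powermizer_cmds 3 1
# prints:
# sudo nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1
# sudo nvidia-settings -a [gpu:1]/GPUPowerMizerMode=1
# sudo nvidia-settings -a [gpu:2]/GPUPowerMizerMode=1
```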
Hi guys, while I use v18 I get this error. The VGA params are the same as in v17 (7x1070, power limit 100, proc: -200, mem: +1100). In v17 it was stable as a rock for weeks. The problems:
- I use the watchdog param but it is not restarting - it only restarted once, after 8 hours
- ctrl+c does not work
- in the first 10 minutes I can reboot from the console
- I can not use TeamViewer
Please help me find a solution. Thank you in advance!

I'm not sure what is causing this; can you try using Claymore and see if it has the same problem:
GENOILorCLAYMORE="CLAYMORE"
Are you using the bug fix 1bash and files listed at the top of the OP? There were some bugs in v0018, and it is a good idea to update to the bug fix version. Have you set:
Hi guys! Those of you with a 1080 Ti mining LBC, how did you set up 1bash to mine this coin? I still can't beat an issue with worker authentication. What should I paste in 1bash? Please advise. Thank you!
I responded to your PM; let me know if you get it to work.

Could I also get this answer? I'm having the same stupid issues with any pools that require workers. If you prefilled the 1bash with your own working worker info it might help idiots like me get started. I did notice that I got an authentication stratum error when just trying to run with your prefilled LBC information.

When using suprnova, this is what each variable is:
LBC_POOL="stratum+tcp://lbry.suprnova.cc:6256"
add a workername with a password of: x
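Putting the suprnova answer together, a 1bash fragment could look like the following. Only LBC_POOL is confirmed from the thread; the other variable names and the account.worker placeholder are guesses for illustration (suprnova uses account.worker logins that you register on the pool, not wallet addresses):

```shell
# Hypothetical 1bash fragment for LBRY on suprnova.
# Only LBC_POOL is confirmed; LBC_WORKER/LBC_PASSWORD are
# illustrative names, and "myaccount.worker1" is a placeholder
# you would register under your suprnova account first.
COIN="LBC"
LBC_POOL="stratum+tcp://lbry.suprnova.cc:6256"
LBC_WORKER="myaccount.worker1"
LBC_PASSWORD="x"
```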
Maxximus007
July 26, 2017, 09:02:52 AM
Hi Fullzero
I want to share with you a GPU failure that the watchdog is not able to detect [...the same watchdog and temperature-control logs quoted in full earlier in the thread...] I believe this is the exact problem that Maxximus007 recently made a new code block to resolve.

Yes, this was the exact same error message that prevented a restart. The new code block resolves this, and will restart the rig upon receiving this message.
argonaute
Newbie
Offline
Activity: 50
Merit: 0
July 26, 2017, 09:56:13 AM
Hi Fullzero
I want to share with you a GPU failure that the watchdog is not able to detect [...the same watchdog and temperature-control logs quoted in full earlier in the thread...] I believe this is the exact problem that Maxximus007 recently made a new code block to resolve.

Yes, this was the exact same error message that prevented a restart. The new code block resolves this, and will restart the rig upon receiving this message.

Cool, thanks guys
ivoldemar
Newbie
Offline
Activity: 23
Merit: 0
July 26, 2017, 10:25:50 AM Last edit: July 26, 2017, 10:43:09 AM by ivoldemar
Good afternoon!
I'll start with the configuration of my equipment: GPU - 13x Colorful P106-100 Mining Edition (without video outputs), or 12 + 1 MSI 1060/6GB (with video output); motherboard: H110 Pro BTC.
I downloaded and installed nvOC version 18. Starting with the first run, the system immediately began to show an error and go into a reboot. (Screenshots attached)
http://prntscr.com/fzo8uv
http://prntscr.com/fzobub
If I quickly close this error window and try to start the miner, the system starts and mines. But the system does not respond to overclocking: increasing the memory clock, setting the fan speeds, the power limit - nothing is applied and the system can only run at stock. I tried connecting via the integrated graphics, and also through the card with a video output; neither worked. I suspect this is because of the xorg-related error that pops up at the very beginning. The same error message also appears on the one MSI 1060/6GB card. I tried running the same setup on an H81 Pro BTC motherboard + net install Ubuntu + Nvidia driver version 384.47 and everything was fine. Tell me how to solve this problem? Thanks in advance, and my apologies for any incorrect text - I used a translator.

nvOC doesn't support integrated graphics. If you connect a monitor to the integrated graphics you will enter an infinite loop where the system xorg gets corrupted, then 1bash restores the xorg and reboots. If you have connected a monitor to integrated graphics previously, then after switching to the primary GPU (the one connected to the 16x slot) you will need to allow the system to reboot once; on the second boot after the change it should work.

WTF?
http://prntscr.com/g02inr
http://prntscr.com/g02iuz
http://prntscr.com/g02j15
UPDATE: The system finally started. But overclocking is applied only to the card the monitor is plugged into. The remaining P106 cards do not overclock. Screenshots enclosed:
http://prntscr.com/g03joa
http://prntscr.com/g03jt8
http://prntscr.com/g03jyl
It's also unclear why, with manual fan control turned on and set to 100%, the system still adjusts the fans automatically.

If you want all the fans set to 100% ensure:
Maxximus007_AUTO_TEMPERATURE_CONTROL="NO"
Also, I changed 1bash to hopefully better support the P106 GPUs in the bug fix version linked at the top of the OP. I don't have any of these, so the changes were based on what other members have reported. Looking at your screenshots, it looks like there is a problem either with the image on the SSD or with the SSD itself. I would try reimaging and using the bug fix 1bash and files. BTW this rig looks good; I want to make one of these myself.

Thank you! The fix helped the system run smoothly. But overclocking and any tuning actions are still not applied! A couple of screenshots as always:
http://prntscr.com/g0ggi9
http://prntscr.com/g0gh50
http://prntscr.com/g0gh8s
http://prntscr.com/g0ghcz
darkfortedx
Newbie
Offline
Activity: 9
Merit: 0
July 26, 2017, 02:52:36 PM Last edit: July 27, 2017, 12:30:12 AM by mprep
My rigs keep restarting after a few minutes - some after 5 mins, some after 30. I'm not sure why this keeps happening.
I'm using all 1070s with core 100 / mem 1050; I also tried 150-200 / 1100-1600. No luck.

What is your power limit on them? For 1070s I would push around 120-125 with a moderate/heavy OC. Anything less can cause stability problems.

My power limits are all at 125.

In the home directory there is a file named: open it with gedit and tell me what is there

It says utilization is too low. Reviving did not work so restarting machine

Are you sure the client is connecting correctly to the pool? If you disable the watchdog:
IAmNotAJeep_and_Maxximus007_WATCHDOG="NO"
what happens when you try to mine? Also, what COIN are you mining?

Yeah, the pool is connecting. If I disable the watchdog it still does it. I'm trying to mine EXP and SC and switch around to ETH/ETC.
I use nvOC for dual mining: DUAL_ETC_PASC
for ETC I can use only the etc.nanopool.org or etc.ethermine.org servers; on all other ETC pool servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalid login"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me find a solution.

I get this too with various coins if I change my pool. I'm not sure why?
WarwickNZ
Newbie
Offline
Activity: 21
Merit: 0
July 26, 2017, 04:01:23 PM Last edit: July 26, 2017, 04:17:46 PM by WarwickNZ
Hello fullzero, after a bunch of research and trying, I built ccminer 2.2 successfully on nvOC 0018. The previous build break is due to the OpenSSL version:
m1@m1-desktop:~$ openssl version
OpenSSL 1.1.1-dev xx XXX xxxx
You used a dev build of OpenSSL in nvOC, and ccminer doesn't support it. I downloaded the source code of OpenSSL v1.0.2l from openssl.org, built it, and created a few soft links to replace v1.1.1; it finally works. Not sure if this change breaks other things. I also built the alexis78 fork, https://github.com/alexis78/ccminer; it requires one more step: replace cuda-7.5 with cuda-8.0 in configure.sh. I'm looking forward to these miners in nvOC 0019. Using Linux requires some technical background; fortunately I was a Windows developer.

Thanks car1999; this was helpful. The problem with tpruvot 2.2 on nvOC was a file in the OpenSSL headers: bn.h. I swapped it with the version it was looking for and this resolved all the related errors. Also, the CUDA version does need to be changed in configure.sh as you described, but for 1000 series GPUs Makefile.am must also be altered with the correct arch flag: 61 for the 1000 series. I compiled both the alexis78 and tpruvot 2.2 ccminer clients and added a download link to them on the OP. I will include them in v0019 of course.

Hi, I just tried these out following your instructions and just get "illegal instruction" from the miner; any thoughts? Thanks
salfter
July 26, 2017, 05:39:39 PM Last edit: July 26, 2017, 06:15:35 PM by salfter
I compiled both alexis78 and Tpuvot 2.2 ccminer clients and added at download link to them on the OP. I will include them in v0019 of course.
The alexis78 miner appears to be OK (at least /home/m1/ASccminer/ccminer --help works), but the tpruvot miner exits on my rig with an illegal-instruction error. My rig runs on a Celeron G3920 (Skylake core). (The help output worked for the alexis78 miner too, but when I tried mining with it, it fell over.)
WarwickNZ
Newbie
Offline
Activity: 21
Merit: 0
July 26, 2017, 05:54:48 PM
G4560 CPU for me; "illegal instruction" from ASccminer on --benchmark too
salfter
July 26, 2017, 06:01:59 PM
I use nvOC for dual mining: DUAL_ETC_PASC; for ETC I can use only etc.nanopool.org or etc.ethermine.org servers, on all other ETC pool servers I receive: ETH: Authorization failed [...] Please help me to find a solution.
Even I get this with various coins if I change my pool. I'm not sure why?

Different stratum implementations. The Genoil miner knows how to deal with these:
-SP, --stratum-protocol <n>  Choose which stratum protocol to use:
  0: official stratum spec: ethpool, ethermine, coinotron, mph, nanopool (default)
  1: eth-proxy compatible: dwarfpool, f2pool, nanopool
  2: EthereumStratum/1.0.0: nicehash
The Claymore miner doesn't have this option...though you mention using it with Nanopool and Ethermine, and I've used it briefly with NiceHash, so that covers the types that Genoil knows about. Perhaps it adjusts automatically, but still runs into issues with some pools.
|
|
|
|
fullzero (OP)
|
|
July 26, 2017, 07:47:33 PM Last edit: July 27, 2017, 12:30:54 AM by mprep |
|
Good afternoon! I'll start with my hardware configuration: GPUs - 13x Colorful P106-100 Mining Edition (no video outputs), or 12 + 1 MSI 1060 6GB (with video output); motherboard - H110 Pro BTC. I downloaded and installed nvOC v0018. From the very first run, the system immediately began showing an error and rebooting (screenshots attached): http://prntscr.com/fzo8uv http://prntscr.com/fzobub If you quickly close the error window and try to start the miner, the system starts and mines, but it does not respond to overclocking: raising the memory clock, setting fan speeds, power limit - nothing is applied, and the system can only run at stock. I tried connecting through the integrated graphics and also through the card with a video output; neither worked. I suspect this is because of the xorg-related error that pops up at the very beginning. The same error message also appears with one MSI 1060 6GB card. I tried the same hardware on an H81 Pro BTC motherboard with a clean Ubuntu install and Nvidia driver 384.47, and everything was fine. Can you tell me how to solve this problem? Thanks in advance, and apologies for any mistakes in the text - I used a translator.

Quote from fullzero: nvOC doesn't support integrated graphics. If you connect a monitor to the integrated graphics you will enter an infinite loop where the xorg config gets corrupted, then 1bash restores the xorg config and reboots. If you have previously connected a monitor to the integrated graphics, then after switching to the primary GPU (the one in the 16x slot) you will need to let the system reboot once; on the second boot after the change it should work.

WTF? http://prntscr.com/g02inr http://prntscr.com/g02iuz http://prntscr.com/g02j15 UPDATE: The system finally started. But overclocking is applied only to the card the monitor is plugged into; the remaining P106 cards do not overclock. Screenshots attached: http://prntscr.com/g03joa http://prntscr.com/g03jt8 http://prntscr.com/g03jyl Another odd thing: manual fan control is turned on and set to 100%, but the system still adjusts the fans automatically.

Quote from fullzero: If you want all the fans set to 100%, ensure: Maxximus007_AUTO_TEMPERATURE_CONTROL="NO". I also changed 1bash to hopefully better support the P106 GPUs in the bug fix version linked at the top of the OP. I don't have any of these cards, so the changes were based on what other members have reported. Looking at your screenshots, it looks like there is a problem either with the image on the SSD or with the SSD itself; I would try reimaging and using the bug fix 1bash and files. BTW this rig looks good; I want to make one of these myself.

Thank you! The fix helped the system run smoothly. But overclocking and any other tuning still do not apply. A couple of screenshots, as always: http://prntscr.com/g0ggi9 http://prntscr.com/g0gh50 http://prntscr.com/g0gh8s http://prntscr.com/g0ghcz

Quote from fullzero: I might have to make a custom xorg.conf for the P106-100. They appear to use typical PCIe addressing, but I am guessing the problem is related to their lack of outputs. It is hard to know without some P106-100s to test with. I will add this to the list, and update 1bash to conditionally use that xorg.conf if a P106-100 is detected.
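The conditional detection mentioned above could be sketched like this. The device-name string and the actions taken are assumptions for illustration (the real 1bash implementation is not shown in the thread), and the GPU list here is a hard-coded stand-in for the nvidia-smi query:

```shell
# Hypothetical sketch: detect a P106-100 and pick an xorg.conf accordingly.
# GPU_LIST stands in for: nvidia-smi --query-gpu=name --format=csv,noheader
GPU_LIST="GeForce GTX 1060
P106-100"
if echo "$GPU_LIST" | grep -q "P106-100"
then
    echo "P106-100 detected: would copy the headless xorg.conf into place"
else
    echo "no P106-100: keep the default xorg.conf"
fi
```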
My rigs keep restarting after a few minutes - some after 5 minutes, some after 30. I'm not sure why this keeps happening.
I'm using all 1070s with core 100 / memory 1050; I also tried 150-200 / 1100-1600. No luck.
Quote from fullzero: What is your power limit on them? For 1070s I would push around 120-125 with a moderate/heavy OC; anything less can cause stability problems.

My power limits are all at 125.

Quote from fullzero: In the home directory there is a file named: open it with gedit and tell me what is there.

It says "utilization is too low. Reviving did not work so restarting machine".

Quote from fullzero: Are you sure the client is connecting correctly to the pool? If you disable the watchdog: IAmNotAJeep_and_Maxximus007_WATCHDOG="NO" - what happens when you try to mine? Also, what COIN are you mining?

Yeah, the pool is connecting; if I disable the watchdog it still reboots. I'm trying to mine EXP and SC and switch around to ETH/ETC.

I'm not sure what you mean by "switch around to ETH/ETC". Pools that support different Ethash coins usually have different pool connection urls and use different ports.

,"message":"Invalidlogin"}}

It looks like you might be using a pool that requires you to make a worker before you can connect. If you aren't connecting to the pool with a worker name that has already been added on the pool end, it will reject your connection.

Quote from akokkon: I use nvOC for dual mining: DUAL_ETC_PASC
for ETC I can use only etc.nanopool.org or etc.ethermine.org servers, in all other ETC POOL servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalidlogin"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me to find a solution.
Quote: Even I get this with various coins if I change my pool. I'm not sure why?

Quote from salfter: Different stratum implementations. The Genoil miner knows how to deal with these:

-SP, --stratum-protocol <n>  Choose which stratum protocol to use:
  0: official stratum spec: ethpool, ethermine, coinotron, mph, nanopool (default)
  1: eth-proxy compatible: dwarfpool, f2pool, nanopool
  2: EthereumStratum/1.0.0: nicehash

The Claymore miner doesn't have this option... though you mention using it with Nanopool and Ethermine, and I've used it briefly with NiceHash, so that covers the types that Genoil knows about. Perhaps it adjusts automatically, but still runs into issues with some pools.

When using Genoil it is as salfter has explained.
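On the watchdog reboots discussed in this exchange: the "integer expression expected" spam seen when a GPU is lost happens because nvidia-smi returns an error string ("Unable to determine the device handle ... GPU is lost") where the script's [ ... ] test expects a number. A minimal sketch of a guard against that, assuming a variable holding the raw reading - the function name, threshold, and messages are made up for illustration, not the actual nvOC watchdog code:

```shell
# Hedged sketch: validate the utilization reading before the numeric test.
check_util() {
    UTIL_RAW="$1"                     # stand-in for nvidia-smi's utilization output
    UTIL_NUM="${UTIL_RAW%%[!0-9]*}"   # keep only the leading digits, if any
    if [ -z "$UTIL_NUM" ]; then
        echo "unreadable utilization ('$UTIL_RAW'): GPU may be lost, reboot"
    elif [ "$UTIL_NUM" -lt 10 ]; then
        echo "utilization $UTIL_NUM too low: revive miner"
    else
        echo "all good"
    fi
}
check_util "85"
check_util "Unable to determine the device handle for GPU"
```

The point is simply that a non-numeric reading is handled explicitly instead of being fed to -lt, which is what floods the log in the report at the top of this page.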
|
|
|
|
mnh_license@proton.me https://github.com/hartmanm How difficulty adjustment works: every 2016 blocks, the Network adjusts the current difficulty to the estimated difficulty in an attempt to keep the block generation time at 10 minutes (600 seconds). The Network therefore re-targets after a total difficulty time of: 2016 blocks * 10 minutes per block = 20160 minutes / 60 minutes = 336 hours / 24 hours = 14 days. When the Network hashrate is increasing, a difficulty period (2016 blocks) should take less than 14 days. How much less can be estimated by comparing the current Network hashrate against what it was at the beginning of the difficulty period (2016 blocks). This is only an estimate, because you cannot account for "luck", but you can calculate reasonably well using explicitly delimited stochastic ranges. The easy way to think about this is to look at this graph and see how close to 0 the current data points are on its y axis: if the blue line is above 0, the difficulty period (2016 blocks) should take less than 14 days; if it is below, it should take more. http://bitcoin.sipa.be/growth-10k.png
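The retarget arithmetic above can be checked directly; a quick sketch (the 2016-block period and 10-minute target are from the text, the faster block time is a made-up example figure):

```shell
# The 2016-block retarget period at a constant 10-minute block time:
blocks=2016
minutes=$((blocks * 10))        # 20160 minutes
days=$((minutes / 60 / 24))     # 14 days
echo "retarget period: $days days"

# If hashrate growth makes blocks arrive at ~8.8 minutes on average
# (example figure), scale with integer tenths to stay in plain shell:
fast_minutes=$((blocks * 88 / 10))
echo "estimated period at faster pace: $((fast_minutes / 60 / 24)) days"
```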
|
|
|
mikespax
|
|
July 26, 2017, 07:57:08 PM |
|
ty fullzero! suprnova login help worked. I'm now one step closer to that 13x 1080ti mining rig I've always wanted
|
Bitrated user: mikespax.
|
|
|
fullzero (OP)
|
|
July 26, 2017, 07:59:13 PM |
|
Quote from car1999: Hello fullzero, after a bunch of research and trial I built ccminer 2.2 successfully on nvOC 0018. The previous build break was due to the OpenSSL version (m1@m1-desktop:~$ openssl version -> OpenSSL 1.1.1-dev). [...]

Quote: Hi, I just tried these out following your instructions and just get "illegal instruction" from the miner, any thoughts? Thanks

What coin are you mining when this occurs?

Quote from salfter: The alexis78 miner appears to be OK (at least /home/m1/ASccminer/ccminer --help works), but the tpruvot miner exits on my rig with an illegal-instruction error. My rig runs on a Celeron G3920 (Skylake core). (The help output worked for the alexis78 miner, but when I tried mining with it, it fell over.)

What coin are you mining?

Quote from WarwickNZ: G4560 CPU for me; "illegal instruction" on ASccminer on --benchmark too.

I only tested these on a v0018 image (different from the one I compiled on), on the same computer I compiled them on. I didn't think compiling the miner was CPU specific. I will test on some of my rigs that have CPUs similar to the ones you have both reported using, and recompile the clients if necessary. Also, what GPUs are you using on these rigs?
|
|
|
|
|
|
|
WarwickNZ
Newbie
Offline
Activity: 21
Merit: 0
|
|
July 26, 2017, 08:37:54 PM |
|
Quote from fullzero: I only tested these on a v0018 image (different from the one I compiled on), on the same computer I compiled them on. I didn't think compiling the miner was CPU specific. I will test on some of my rigs that have similar CPUs to the ones you both reported, and recompile the clients if necessary. Also, what GPUs are you using on these rigs?

This is just when attempting to launch the miner from a terminal; I'm unable to get either of them to mine, or even attempt to mine, anything. I've since tried updating OpenSSL and building it myself, but I can't seem to get the build to pick up the new version, so it still throws the build error... lacking the Linux skills. Mining with Gigabyte 1070s on nvOC v18, G4560 CPU (don't know if that's relevant).
|
|
|
|
akokkon
Newbie
Offline
Activity: 15
Merit: 0
|
|
July 26, 2017, 09:42:05 PM |
|
I use nvoc for dual mining. My selection is DUAL_ETC_PASC, but ...
for ETC I can use only etc.nanopool.org or etc.ethermine.org mining pool servers, in all other ETC mining pool servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalidlogin"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me to find a solution.
|
|
|
|
fullzero (OP)
|
|
July 26, 2017, 09:49:39 PM Last edit: July 27, 2017, 12:31:09 AM by mprep |
|
Quote from WarwickNZ: This is just when attempting to launch the miner from a terminal; I'm unable to get either of them to mine anything. I've since tried updating OpenSSL and building it myself, but I can't get the build to pick up the new version, so it still throws the build error. Mining with Gigabyte 1070s on nvOC v18, G4560 CPU.

I tested and had the same results, so I re-compiled both on a G1840. This should allow for all CPUs; please test the updated download link on the OP and let me know.
I use nvoc for dual mining. My selection is DUAL_ETC_PASC, but ...
for ETC I can use only etc.nanopool.org or etc.ethermine.org mining pool servers, in all other ETC mining pool servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalidlogin"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me to find a solution.
Tell me what pools you are using.
|
|
|
|
|
|
|
akokkon
Newbie
Offline
Activity: 15
Merit: 0
|
|
July 26, 2017, 09:55:22 PM |
|
I use nvoc for dual mining. My selection is DUAL_ETC_PASC, but ...
for ETC I can use only etc.nanopool.org or etc.ethermine.org mining pool servers, in all other ETC mining pool servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalidlogin"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me to find a solution.
Quote from fullzero: Tell me what pools you are using.

I am trying to use:
ETC_POOL="etc2.91pool.com:8009"
ETC_POOL="stratum-eu3.coin-miners.info:8008"
ETC_POOL="stratum-de.coin-miners.info:8008"
ETC_POOL="etc.pool.zet-tech.eu:8008"
|
|
|
|
fullzero (OP)
|
|
July 26, 2017, 10:20:53 PM |
|
I use nvoc for dual mining. My selection is DUAL_ETC_PASC, but ...
for ETC I can use only etc.nanopool.org or etc.ethermine.org mining pool servers, in all other ETC mining pool servers I receive the message:
ETH: Authorization failed : {"id":2,"jsonrpc":"2.0","result":null,"error":{"code":-1,"message":"Invalidlogin"}} Stratum - reading socket failed, disconnect ETH: Job timeout, disconnect, retry in 20 sec...
Please help me to find a solution.
Quote: Tell me what pools you are using.

Quote from akokkon: I am trying to use:
ETC_POOL="etc2.91pool.com:8009"
ETC_POOL="stratum-eu3.coin-miners.info:8008"
ETC_POOL="stratum-de.coin-miners.info:8008"
ETC_POOL="etc.pool.zet-tech.eu:8008"

For the first pool, try:
ETC_POOL="etc1.91pool.com:8008"

Second and third: "stratum-eu3.coin-miners.info:8008" is a BTC mining pool; the ETC pool server is etc-de.ethteam.com:8008, so try:
ETC_POOL="etc-de.ethteam.com:8008"
with an ETC_WORKER.

Fourth: go to line 2692 (hit ctrl + f, type DUAL_ETC_PASC, and press the down arrow until you are at the correct code block):

screen -dmS miner $HCD -epool $ETC_POOL -ewal $ETCADDR -epsw x -dpool $PASC_POOL -dwal $ADDR -dpsw x -dcoin pasc -dbg -1 $ETC_EXTENTION_ARGUMENTS

This pool uses crazy syntax, so you have to change this line to:

screen -dmS miner $HCD -epool $ETC_POOL -ewal $ETC_ADDRESS -eworker $ETC_WORKER -epsw x -esm 0 -dpool $PASC_POOL -dwal $ADDR -dpsw x -dcoin pasc -dbg -1 $ETC_EXTENTION_ARGUMENTS
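To see what the adjusted Claymore line actually sends to the pool, here is the expansion with example values plugged in. The -epool/-ewal/-eworker/-esm flags are from the post above; the wallet, worker, and pool values below are made-up placeholders:

```shell
# Illustration of how the two command lines differ once the 1bash
# variables expand (wallet/worker/pool values are made-up examples):
ETC_POOL="etc-de.ethteam.com:8008"
ETC_ADDRESS="0x1234abcd"
ETC_WORKER="rig1"
# original form: wallet only
echo "-epool $ETC_POOL -ewal $ETC_ADDRESS -epsw x"
# adjusted form: explicit worker name plus -esm 0 stratum mode
echo "-epool $ETC_POOL -ewal $ETC_ADDRESS -eworker $ETC_WORKER -epsw x -esm 0"
```

The difference matters for pools that reject logins unless the worker is passed separately (or pre-registered on the pool side), which is what the "Invalidlogin" error above points at.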
|
|
|
|
|
|
|
WarwickNZ
Newbie
Offline
Activity: 21
Merit: 0
|
|
July 26, 2017, 11:03:17 PM Last edit: July 26, 2017, 11:27:59 PM by WarwickNZ |
|
Quote from fullzero: I tested and had the same results, so I re-compiled both on a G1840. This should allow for all CPUs; please test the updated download link on the OP and let me know.

Thanks. The new builds didn't work initially until I gave the file executable permissions; now it runs the benchmark, but it refuses to start stratum due to an "illegal instruction". I'm trying to mine Signatum with these switches:

./ccminer -a skunk -o stratum+tcp://sigt.suprnova.cc:7106 -u USERNAME -p PASSWORD -i 25

It would be awesome if we can get this going, for everyone's benefit - by far the most profitable coin to mine right now.

EDIT: I just noticed that some of the skunk algo files are missing from the TPccminer folder, which were present in the git clone of https://github.com/tpruvot/ccminer/tree/linux I did when trying to get it going.
Would doing another build of the latest version sort the issue?
|
|
|
|
fullzero (OP)
|
|
July 27, 2017, 01:37:56 AM |
|
Quote from WarwickNZ: The new builds didn't work initially until I gave the file executable permissions; now it runs the benchmark, but it refuses to start stratum due to an "illegal instruction". I'm trying to mine Signatum [...] Would doing another build of the latest version sort the issue?

I think you may have the login and worker syntax incorrect, or you have not created the worker you are using beforehand at suprnova.

Add execute permission with the command:

chmod 755 '/home/m1/TP_2_2/ccminer'

then add the following to 1bash:

SIGT_WORKER="nvOC"
SIGT_ADDRESS="fullzero22"
SIGT_POOL="stratum+tcp://sigt.suprnova.cc:7106"

if [ $COIN == "SIGT" ]
then
HCD='/home/m1/TPccminer/ccminer'
ADDR="$SIGT_ADDRESS.$SIGT_WORKER"

screen -dmS miner $HCD -a skunk -o $SIGT_POOL -u $ADDR -p x -i 25

if [ $LOCALorREMOTE == "LOCAL" ]
then
screen -r miner
fi

BITCOIN="theGROUND"
while [ $BITCOIN == "theGROUND" ]
do
sleep 60
done
fi

or use the line:

'/home/m1/TP_2_2/ccminer' -a skunk -o stratum+tcp://sigt.suprnova.cc:7106 -u fullzero22.nvOC -p x -i 25

in a guake terminal. Works for me:
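Since the miner silently failing to launch came down to a missing execute bit, a pre-flight check like the following could catch it before the screen session is started. This is a hypothetical sketch, demonstrated on a temporary stand-in file since the real /home/m1/TP_2_2/ccminer path only exists on an nvOC rig:

```shell
# Hypothetical pre-flight check before launching a miner binary.
# A mktemp file stands in for the real miner path on an nvOC rig.
HCD="$(mktemp)"            # stand-in for: /home/m1/TP_2_2/ccminer
if [ ! -x "$HCD" ]; then
    chmod 755 "$HCD"       # same fix applied manually in the post above
fi
[ -x "$HCD" ] && echo "miner binary is executable"
rm -f "$HCD"
```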
|
|
|
|
|
|
|
|