papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
September 11, 2017, 08:57:13 AM
Is it possible to compile the latest ccminer KlausT for Linux? I'm getting 1000+ kH/s on NeoScrypt with a 1070 on Windows with it, while SPccminer gives 800 kH/s on nvOC.

m1@m1-desktop-101:~/Downloads/ccminer-klaust/ccminer-klaust$ ./ccminer --version
ccminer 8.13-KlausT (64bit) for nVidia GPUs
Compiled with GCC 5.4 using Nvidia CUDA Toolkit 8.0
Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.
ccminer v8.13-KlausT libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.32 librtmp/2.3
I compiled it, but it gives me an error:

[2017-09-09 15:28:05] GPU #3: waiting for data
[2017-09-09 15:28:05] GPU #0: waiting for data
[2017-09-09 15:28:05] GPU #4: waiting for data
[2017-09-09 15:28:05] GPU #5: waiting for data
[2017-09-09 15:28:05] GPU #1: waiting for data
[2017-09-09 15:28:06] Stratum difficulty set to 256
[2017-09-09 15:28:06] Stratum difficulty set to 64
[2017-09-09 15:28:06] hub.miningpoolhub.com:20510 neoscrypt block 1876969
Cuda error in func 'neoscrypt_cpu_init_2stream' at line 1439 : invalid device symbol.
Cuda error in func 'neoscrypt_cpu_init_2stream' at line 1428 : driver shutting down.
I believe I compiled this miner for v0019; it should be under the directory KTminer. Might be a different version.

Unfortunately we don't have KTccminer; we have KX for skunk-krnlx, TP for tpruvot 2.2, and AS for alexis78. Can you please give it a shot? The hash rate difference is around 20% with a 1070: 1000+ on Windows with KlausT ccminer and 800 with SP and KX ccminer on nvOC.

It (KTccminer) was supposed to be in nvOC19, but got omitted somehow! Please check the below post by 'salfter'; I've tried to add it for MPH (to mine XMR), and it is working as expected for me on nvOC19. Give it a go and let me know if it works for you.

I finally got around to upgrading to v19 from v17. I'd had some trouble with v18 and went back to v17, but v19's being much better behaved. I put my config in (configured to use my MiningPoolHub switcher and Maxximus007's fan-control code) and let her rip. There is one issue I ran across: no Cryptonight miner. I noticed it when my mining rig's fans all slowed down. To get things running again in a hurry, I copied the mining software from my v17 setup. It's in this tarball: https://alfter.us/wp/wp-content/uploads/2017/09/KTccminer-cryptonight.tar.xz Unpack it to ~m1. It'll unpack to a Git repo into which you can pull updates (I last updated it just a few days ago) that has already been configured and built. I might've built it for Pascal (GeForce 10-series) GPUs only; I don't recall for sure, so if you're using older GPUs and it doesn't work for you, you might need to rebuild it. There was also an upstream change in this miner that requires a change in 3main. The "-a cryptonight" option is no longer accepted, so load 3main into the editor of your choice, search for "KTccminer-cryptonight" (it's near the beginning of the heredoc that gets written to mph_conf.json), and delete "-a cryptonight" from that line. Beyond that, the only other changes I made were to set the hostname equal to the worker name (changes in /etc/hostname and /etc/hosts), change the timezone from Eastern to Pacific, and revise the speed and power figures in the mph_conf.json heredoc to match my current configuration (up from 3 GPUs to 4). As I write this, I'm imaging my v19 setup so I can copy it over to the (physically smaller) stick with v17.

Thanks a lot, will give it a shot and post the results. Here are the configs for equihash coins with failover pools; see if you want to use them in the next version: https://www.dropbox.com/s/pabya5cohtqh6j1/nvOC-equihash-failover-pools.rar?dl=0
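For anyone who wants to try compiling it themselves, the KlausT fork builds with the usual ccminer autotools steps. A minimal sketch, assuming CUDA 8.0 is already on the nvOC image; treat the repo URL, package list, and paths as assumptions:

# fetch and build the KlausT fork (sketch)
sudo apt-get install -y build-essential automake autoconf libcurl4-openssl-dev libssl-dev
git clone https://github.com/KlausT/ccminer ~/KTccminer
cd ~/KTccminer
./autogen.sh
./configure
make -j"$(nproc)"
./ccminer --version
# If the miner later dies with "invalid device symbol" (as in the log above),
# the CUDA kernels were likely built for the wrong compute capability; trim
# the -gencode list in Makefile.am to your cards (compute_61 for a GTX 1070)
# and rebuild.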
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 09:08:39 AM
Zpool SIB X11-GHOST with ccminer Alexis78 :
1bash:

# ZPOOL uses your BTC_ADDRESS
ZPOOL_SIB_POOL="stratum+tcp://sib.mine.zpool.ca:5033"
3main:

if [ $COIN == "ZPOOL_SIB" ]
then
HCD='/home/m1/ASccminer/ccminer'

screen -dmS miner $HCD -a sib -o $ZPOOL_SIB_POOL -u $BTC_ADDRESS -p $WORKERNAME,c=BTC

if [ $LOCALorREMOTE == "LOCAL" ]
then
screen -r miner
fi

BITCOIN="theGROUND"
while [ $BITCOIN == "theGROUND" ]
do
sleep 60
done
fi
Don't know whether we should use sib or x11ghost in the miner line:

screen -dmS miner $HCD -a sib -o $ZPOOL_SIB_POOL -u $BTC_ADDRESS -p $WORKERNAME,c=BTC

Please check and let me know.

Will test this out and confirm; thanks a lot for all those coin updates. I will add all your stuff into a single file, test them one by one, and add them to my next version. I would suggest you add a disclaimer at the top/bottom of the new coins you are going to add in future, saying whether they have been tested or not; it will give new users a rough idea of what they are getting into if they want to give it a go. I really appreciate you adding all these coins; it helps me a lot; can't thank you enough for this.
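One quick way to settle the sib vs. x11ghost question is ccminer's offline benchmark mode: whichever algo name the alexis78 build doesn't support will be rejected at startup. A small sketch, assuming the ASccminer path from 3main:

# try both algo names offline; the unsupported one errors out immediately
/home/m1/ASccminer/ccminer -a sib --benchmark
/home/m1/ASccminer/ccminer -a x11ghost --benchmark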
Doftorul
Newbie
Offline
Activity: 16
Merit: 0
September 11, 2017, 09:23:18 AM
Hi guys, can someone give some hints or explain what is wrong with my rigs, since only gpu0 can receive/execute the fan speed being set? Below is the error I see every time any fan other than gpu0's gets set:
ERROR: Error assigning value 50 to attribute 'GPUTargetFanSpeed' (m1-desktop:0[fan:1]) as specified in assignment '[fan:1]/GPUTargetFanSpeed=50' (Unknown Error).
I am currently using rigs with 3 x 1080 Ti and 3 x 1070s, with the first release of v019 installed on an SSD. I tried manually to do:
nvidia-xconfig --enable-all-gpus
nvidia-xconfig --cool-bits=4
nvidia-settings -a [gpu:1]/GPUFanControlState=1
nvidia-settings -a [fan:1]/GPUTargetFanSpeed=50
but the error is the same. Other than my being unable to set the fan speeds, the rigs are chugging along; however, the fans of the cards other than gpu0 run at the factory default, as the minimum fan speed set in 1bash never gets applied, with or without the automated fan-control feature.
Edit: in the NVIDIA X Server Settings I can see the 'Enable fan control' option for gpu0, but it is missing for the other GPUs.
Does the error happen if you set the speed to higher values like 60-65 too? I think I had the same problem at low values.

Thank you for your answer! Yes, the problem persists even if I set higher fan values, or 100%... In Thermal Settings in the NVIDIA X Server Settings utility, only gpu0 has the fan control option enabled; the other GPUs are missing this feature.
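To rule out a one-off typo, a loop like the following applies the same control-state/speed pair to every fan. A minimal sketch; it assumes coolbits is already enabled for all GPUs and that gpu and fan indices line up one-to-one:

# set 50% on every detected GPU fan
NUM=$(nvidia-smi --query-gpu=count --format=csv,noheader | head -1)
for i in $(seq 0 $((NUM - 1))); do
  nvidia-settings -a "[gpu:$i]/GPUFanControlState=1" \
                  -a "[fan:$i]/GPUTargetFanSpeed=50"
done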
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 09:41:53 AM
Have you set CPU mining to off?

Yep, for sure. It looks like 3main selects only one GPU, for a reason I don't know.

This is due to a problem with the 3main implementation wi$em@n found, but I haven't fixed it yet. You can do this manually for now by finding this area in 3main:

if [ $COIN == "XMR" ]
then
HCD='/home/m1/xmr/stakGPU/bin/xmr-stak-nvidia'
ADDR="$XMR_ADDRESS.$XMR_WORKER"
cat <<EOF >/home/m1/xmr/stakGPU/bin/config.txt
"gpu_threads_conf" : [ { "index" : 0, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, ],
"use_tls" : false, "tls_secure_algo" : true, "tls_fingerprint" : "",
"pool_address" : "$XMR_POOL", "wallet_address" : "$ADDR", "pool_password" : "x",
"call_timeout" : 10, "retry_time" : 10, "giveup_limit" : 0,
"verbose_level" : 4,
"h_print_time" : 60,
"output_file" : "",
"httpd_port" : 0,
"prefer_ipv4" : true EOF
cd /home/m1/xmr/stakGPU/bin
screen -dmS miner $HCD
if [ $LOCALorREMOTE == "LOCAL" ]
then
screen -r miner
fi

and adding an additional index block per GPU to the "gpu_threads_conf"; the following would be for 4x GPUs:

"gpu_threads_conf" : [
{ "index" : 0, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, },
{ "index" : 1, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, },
{ "index" : 2, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, },
{ "index" : 3, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, },
],

Thanks a lot, fullzero. I'm now able to use my 7x GPUs with your modification; unfortunately watchdog is not working anymore, as it says GPU utilisation is too low. Not a big deal, thanks.

I will look into a watchdog conflict with multiple GPUs while using xmr-stak.

I noticed that xmr-stak takes a while to load up all the GPUs with work: on 13-card systems this can take up to 3-4 minutes (it also depends on the thread/block count in the xmr-stak config.txt; the settings from fullzero load under 2G of data, but you can tweak those numbers higher, resulting in higher hash rates but also a longer initialization time). Once stak loads all the cards, and assuming your OC settings are stable, it will behave itself for days. I meant to modify watchdog to recognize that stak is running and increase the initialization time appropriately, but I still need to migrate to v0019. For now, if you want to run watchdog, changing the initial timeout to 360 seconds should do the trick. Once all the cards are loaded, stak will spit out a bunch of submits to the pool in one shot.

I've seen an issue with WATCHDOG while using the keccak algo (for MAXCOIN) on MPH; GPU utilization for this algo is always around 90, and WATCHDOG always restarts the mining, and eventually the whole RIG, based on that usage! I remember one user complained about keccak and asked how to disable it (probably thinking it is because of this low-utilization issue).
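Rather than hand-editing the JSON for each rig, a small generator can emit one index block per detected GPU. A sketch using the same thread/block settings as above; detecting the GPU count with nvidia-smi -L is an assumption, not part of 3main:

# print a gpu_threads_conf entry per GPU for xmr-stak's config.txt
GPU_COUNT=$(nvidia-smi -L | wc -l)
echo '"gpu_threads_conf" : ['
for i in $(seq 0 $((GPU_COUNT - 1))); do
  echo "  { \"index\" : $i, \"threads\" : 32, \"blocks\" : 18, \"bfactor\" : 8, \"bsleep\" : 10, \"affine_to_cpu\" : false, },"
done
echo '],'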
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
September 11, 2017, 09:42:26 AM
Then maybe something is wrong with the image/OS. Sometimes starting from scratch is easier than trying to solve the problem.

P.S. Is GPU PowerMizer Mode set in 1bash? GPUPowerMizerMode_Adjust="YES"
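For reference, that 1bash switch corresponds to a plain nvidia-settings attribute, so it can also be tested by hand (a sketch for GPU 0; mode 1 is "Prefer Maximum Performance"):

# force PowerMizer to "Prefer Maximum Performance" on GPU 0
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"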
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
September 11, 2017, 10:21:32 AM
It's ccminer-cryptonight by KlausT, and it gets an even lower hash rate on NeoScrypt than the others:

[2017-09-11 14:47:31] GPU #5: GeForce GTX 1070, 668.44 H/s (618.11 H/s avg)
m1@m1-desktop-101:~/KTccminer-cryptonight$ ./ccminer --help
*** ccminer-cryptonight 2.04 (64 bit) for nVidia GPUs by tsiv and KlausT ***
Built with GCC 5.4 using the Nvidia CUDA Toolkit 8.0
IAmNotAJeep
Newbie
Offline
Activity: 44
Merit: 0
September 11, 2017, 11:33:07 AM
if [ $LOCALorREMOTE == "LOCAL" ] then screen -r miner fi and adding an additional: gpu_threads_conf" : [ { "index" : 0, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 1, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 2, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 3, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, ], index block per GPU to the "gpu_threads_conf" as the above would be for 4x gpus. thanks a lot Fullzero I'm now able to use my 7x GPU with your modification unfortunately watchdog is not working anymore as it saying that GPU utilisation is to low not a big deal thanks I will look into a watchdog conflict with multiple gpus while using xmr-stak. I noticed that xmr-stack takes a while to load up all the gpu's with work - on 13 card systems this can take up to 3-4 minutes (it also depends on the thread/block count in xmr-stak config.txt - the settings from fullzero load under 2G of data, but you can tweak those numbers higher getting, resulting higher hash rates, but also increasing the initialization time). Once stak loads all the cards, and assuming your OC settings are stable - it will behave itself for days. I meant to modify watchdog to recognize stak is running and increase the initialization time appropriately, but I still need to migrate to v0019. for now if you want to run watchdog, change the initial timeout to 360 seconds it should do the trick. Once all the cards are loaded stak will spit out a bunch of submits to the pool in one shot. I've seen issue with WATCHDOG while using keccak algo (for MAX COIN) on MPH; the GPU utilization for this coin is algo is always around 90 and WATCHDOG always restarts the mining eventually the whole RIG based on that usage! I remember one of the user complained about keccak; asked about how to disable it (thinking its probably because of this utilization low issue). Yes I remember seeing a post like that, I think someone also answered that this can be remedied by lowering the threshold in watchdog which should work.
ivoldemar
Newbie
Offline
Activity: 23
Merit: 0
September 11, 2017, 01:35:06 PM
Hello! I set up a new farm on Gigabyte P106 cards, and here's the story: http://prntscr.com/gjsvn4 (I'm the one who posted the image from the production farm on P106). I tried writing a clean image; there is also an error with Nvidia. The monitor did not connect, and changing risers and cards did not help.
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 01:47:32 PM
if [ $LOCALorREMOTE == "LOCAL" ] then screen -r miner fi and adding an additional: gpu_threads_conf" : [ { "index" : 0, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 1, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 2, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, { "index" : 3, "threads" : 32, "blocks" : 18, "bfactor" : 8, "bsleep" : 10, "affine_to_cpu" : false, }, ], index block per GPU to the "gpu_threads_conf" as the above would be for 4x gpus. thanks a lot Fullzero I'm now able to use my 7x GPU with your modification unfortunately watchdog is not working anymore as it saying that GPU utilisation is to low not a big deal thanks I will look into a watchdog conflict with multiple gpus while using xmr-stak. I noticed that xmr-stack takes a while to load up all the gpu's with work - on 13 card systems this can take up to 3-4 minutes (it also depends on the thread/block count in xmr-stak config.txt - the settings from fullzero load under 2G of data, but you can tweak those numbers higher getting, resulting higher hash rates, but also increasing the initialization time). Once stak loads all the cards, and assuming your OC settings are stable - it will behave itself for days. I meant to modify watchdog to recognize stak is running and increase the initialization time appropriately, but I still need to migrate to v0019. for now if you want to run watchdog, change the initial timeout to 360 seconds it should do the trick. Once all the cards are loaded stak will spit out a bunch of submits to the pool in one shot. I've seen issue with WATCHDOG while using keccak algo (for MAX COIN) on MPH; the GPU utilization for this coin is algo is always around 90 and WATCHDOG always restarts the mining eventually the whole RIG based on that usage! I remember one of the user complained about keccak; asked about how to disable it (thinking its probably because of this utilization low issue). Yes I remember seeing a post like that, I think someone also answered that this can be remedied by lowering the threshold in watchdog which should work. Ah! My bad, don't remember seeing the solution, will try to lower the threshold and see how it goes. I would also like to thank you so much for your amazing work. It inspires a lot for people like me 
Doftorul
Newbie
Offline
Activity: 16
Merit: 0
September 11, 2017, 04:48:08 PM
The PowerMizer setting makes no difference. However, I think there is something odd here. When I run this manually:

m1@m1-desktop:~$ lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation Device 1b81 (rev a1)
20:00.0 VGA compatible controller: NVIDIA Corporation Device 1b81 (rev a1)
30:00.0 VGA compatible controller: NVIDIA Corporation Device 1b81 (rev a1)

Then in the xorg.conf generated in an attempt to debug the issue I have:

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "GeForce GTX 1070"
    BusID "PCI:1:0:0"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "GeForce GTX 1070"
    BusID "PCI:32:0:0"
EndSection

Section "Device"
    Identifier "Device2"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "GeForce GTX 1070"
    BusID "PCI:48:0:0"
EndSection

Shouldn't there be some sort of relation between what the lspci command lists and the xorg.conf device IDs?
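There is in fact a direct relation: lspci prints bus numbers in hexadecimal, while the xorg.conf BusID uses decimal, so bus 0x20 is 32 and 0x30 is 48; the three Device sections above do match the three cards. A one-line conversion check:

# lspci bus IDs are hex; xorg.conf BusID wants decimal
printf 'PCI:%d:%d:%d\n' 0x20 0x0 0x0   # prints PCI:32:0:0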
elsystem
Newbie
Offline
Activity: 46
Merit: 0
September 11, 2017, 04:56:01 PM
Is it possible to CPU mine Zcoin and GPU mine simultaneously on nvOC?

Anyone have an answer to this? fullzero, assuming it's not currently possible to CPU mine Zcoin, can you please add it as a feature in the next version?
Hostels
Newbie
Offline
Activity: 31
Merit: 0
September 11, 2017, 06:35:52 PM
Guys, on P106 cards Maxximus007_AUTO_TEMPERATURE_CONTROL doesn't work. When the temp reaches TARGET_TEMP and goes above it, the script does nothing. If anyone has it working on P106, please write how.
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
September 11, 2017, 07:35:32 PM
I think it's better not to waste your time on finding the solution to the problem, as there shouldn't be one: start from scratch. Set your BIOS, connect one GPU, boot, copy your 1bash (there is a bug that won't copy it from the temp partition), reboot, and check that everything is OK. Shut down, connect the rest of the GPUs, and restart while the first one is still connected; it may reboot with an xorg error. After the restart all should be OK.
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
September 11, 2017, 07:40:24 PM
What are your target temp and minimum fan speed? What are the individual temp limits (even if it's set to NO)? Manual fan = NO? Maxximus auto temp = YES?
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 08:02:05 PM
If it's not currently possible to CPU mine ZCOIN, what is the point of adding it??
elsystem
Newbie
Offline
Activity: 46
Merit: 0
September 11, 2017, 09:24:57 PM
It is possible to CPU mine Zcoin; that's the whole point. I don't think nvOC is configured to CPU mine Zcoin while GPU mining other coins; that's the reason I asked fullzero if he can add it...
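For what it's worth, a CPU miner can simply be launched in its own screen session alongside the GPU miner. A minimal sketch, assuming a cpuminer build that supports Zcoin's Lyra2z algorithm under a path like /home/m1/cpuminer-opt, and hypothetical ZCOIN_POOL and ZCOIN_ADDRESS variables defined in 1bash:

# run the CPU miner in a second screen session next to the GPU "miner" session
CPU_HCD='/home/m1/cpuminer-opt/cpuminer'
screen -dmS cpuminer $CPU_HCD -a lyra2z -o $ZCOIN_POOL -u $ZCOIN_ADDRESS.$WORKERNAME -p x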
Doftorul
Newbie
Offline
Activity: 16
Merit: 0
September 11, 2017, 09:50:32 PM Last edit: September 12, 2017, 08:59:40 PM by Doftorul
Well... I can build a complete rig for less than USD 50 using older HP Compaq business PCs like the dc7800 or dc7900. Everything works OK with the dc7900; this issue seems to be related to the dc7800. The dc7800 has 3 PCIe slots like its bigger sibling the dc7900, the only difference being the chipset: the dc7900 has a Q45 chipset while the dc7800 has a Q35 chipset. I'll try tomorrow with a dc7900 using the same GPU cards to see if it works. These older motherboards might have glitches with the latest Ubuntu...

EDIT: SOLUTION: In /etc/X11/xorg.conf, in each Device section, there should be a "Screen 0" added right before the EndSection. That fixes it. Apparently if the busID of the VGA is >16, nvOC doesn't attach the screen to the cards, and hence there is no power nor fan control enabled for those cards. Tested the fix with old motherboards using Q35 and Q45 chipsets.
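For illustration, here is what the fix looks like applied to one of the Device sections quoted earlier (a sketch based on the description above; adjust Identifier and BusID per card):

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "GeForce GTX 1070"
    BusID "PCI:32:0:0"
    Screen 0
EndSection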
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 10:04:23 PM
Sorry for the misunderstanding; ZCOIN CPU mining isn't profitable at all (from my understanding). XMR used to be a little profitable, but that has changed in the last 3 weeks; let's wait for fullzero's opinion.
damNmad
Full Member
 
Offline
Activity: 378
Merit: 104
nvOC forever
September 11, 2017, 10:18:38 PM
Anyone else having issues with MPH_SWITCHER?? It was working fine but stopped suddenly; I can't see any error trace!!
Hostels
Newbie
Offline
Activity: 31
Merit: 0
September 12, 2017, 01:24:57 AM
Maxximus007_AUTO_TEMPERATURE_CONTROL="YES"
MANUAL_FAN="NO"

# Maxximus007_AUTO_TEMPERATURE_CONTROL
TARGET_TEMP=65
__FAN_ADJUST=5    # Adjustment size in percent
POWER_ADJUST=0    # Adjustment size in watts (I don't want to lower the PL, but I tried 1, 2, 3, 4, 5 and nothing happens)

# Difference in actual temperature allowed before action: works only if current is BELOW target temp
ALLOWED_TEMP_DIFF=3

# Restore original power limit if fan speed is lower than this percentage
RESTORE_POWER_LIMIT=99

# lowest fan speed that will be used
MINIMAL_FAN_SPEED=60

INDIVIDUAL_TARGET_TEMPS="NO"
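As a quick way to see whether fan control responds at all on the P106 cards outside the script, the core of the auto-temperature loop can be reproduced by hand. A minimal sketch for GPU 0; it assumes an X server with coolbits enabled, and the polling interval and step are illustrative, not the actual Maxximus007 code:

# poll the temperature and nudge the fan up, mimicking the auto-temp logic
TARGET=65; STEP=5
while true; do
  TEMP=$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits -i 0)
  FAN=$(nvidia-settings -q "[fan:0]/GPUTargetFanSpeed" -t)
  if [ "$TEMP" -gt "$TARGET" ]; then
    NEW=$((FAN + STEP)); [ "$NEW" -gt 100 ] && NEW=100
    nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=$NEW"
  fi
  sleep 10
done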