Bitcoin Forum
Author Topic: HOW TO SET UP OVERCLOCKING AND FAN CONTROL ON UBUNTU 16.04 FOR NVIDIA CARDS  (Read 54990 times)
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 12, 2016, 05:05:25 PM
Last edit: April 12, 2018, 01:50:09 PM by thevictimofuktyranny
Merited by allyouracid (3)
 #1

This is a quick guide for setting up multiple Nvidia GPUs on Ubuntu 16.04 LTS and 17.10 LTS with the full desktop.

Enabling all GPUs with overclocking and fan control.

CURRENTLY, THIS GUIDE (UBUNTU 16.04.4 LTS, THE LATEST VERSION) REQUIRES ONE NVIDIA GPU TO BE CONNECTED TO A MONITOR.

IT IS NOW SUPER-SIMPLE.

These are the steps now:

1) Install Ubuntu 16.04 LTS or 17.10 LTS

2) Update the operating system via the Software Centre. REBOOT

3) Go to Additional Drivers and switch to the CPU drivers, if not automatically loaded. (If you have problems with the CPU drivers, switch back to the Ubuntu default.) REBOOT

4) Go to Additional Drivers and switch to the Nvidia drivers - I recommend the default 378 driver optimised for the Ubuntu OS. REBOOT

5) Open a Terminal and enter each line:

sudo update-grub

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

(You can use other coolbits values; 31 is frequently used as well.)
 
REBOOT

Fan control and overclocking are now enabled.
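As an aside on the coolbits numbers used above: coolbits is a bitmask. Per the NVIDIA driver README (verify against the README for your driver version), 4 enables manual fan control, 8 enables clock offsets, and 16 enables overvoltage; the two lowest bits (1 and 2) relate to clock controls on older GPUs. So the two common values break down as:

```shell
# Coolbits bit values (per the NVIDIA driver README):
#   4  = manual fan control
#   8  = clock offsets (overclocking)
#   16 = overvoltage
# Bits 1 and 2 relate to clock controls on older GPUs.
echo $((4 + 8 + 16))          # 28, the value used in this guide
echo $((1 + 2 + 4 + 8 + 16))  # 31, everything enabled
```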

To finish, you will create a startup .sh file for each GPU so the overclocks and fan speeds are loaded when you log into Ubuntu 16.04 LTS.

Create some empty documents on the Ubuntu Desktop and call them whatever you like. Make sure each filename ends in .sh.

Paste in:

#!/bin/bash

nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'

nvidia-settings -a '[gpu:0]/GPUFanControlState=1'

nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'

Amend the clocks (GPU and memory) and fan speeds to whatever you're comfortable with. Make a separate document for each GPU by changing the gpu and fan index numbers for each card.

Save each file, then open its Properties and mark it as executable.

Go to Startup Applications and ADD each .sh to the programs you run when you log in.
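Putting the startup-file steps together, here's a sketch that generates one such .sh per GPU. The GPU count, offsets and fan speed are placeholder values, and it writes into the current directory, so substitute your own numbers and move the files wherever you keep them:

```shell
#!/bin/bash
# Sketch: generate one startup script per GPU. NUM_GPUS, the offsets
# and the fan speed are placeholders -- substitute your own values.
NUM_GPUS=2
CLOCK_OFFSET=100   # MHz on the core
MEM_OFFSET=100     # MHz on the memory transfer rate
FAN_SPEED=80       # percent

for i in $(seq 0 $((NUM_GPUS - 1))); do
    # The heredoc expands $i and the settings into each generated file.
    cat > "oc-gpu$i.sh" <<EOF
#!/bin/bash
nvidia-settings -a '[gpu:$i]/GPUGraphicsClockOffset[3]=$CLOCK_OFFSET'
nvidia-settings -a '[gpu:$i]/GPUMemoryTransferRateOffset[3]=$MEM_OFFSET'
nvidia-settings -a '[gpu:$i]/GPUFanControlState=1'
nvidia-settings -a '[fan:$i]/GPUTargetFanSpeed=$FAN_SPEED'
EOF
    chmod +x "oc-gpu$i.sh"
done
```

The generated files are already marked executable; add each one to Startup Applications as described above.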

--------------------------------------------------------------------------Problems setting nvidia-xconfig for multi-GPU rigs try this work-around---------------------------

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in and all the GPUs will have overclocking and fan control enabled.


-------------------------------------------------------------------------Reducing Watts Used By the GPUs---------------------------------------------------------------------------------------------------

Set the Nvidia driver to persistence mode (you must be root - open a terminal and enter "sudo -i"):

nvidia-smi -pm 1

First, ask nvidia-smi what the max and min power limits are:

nvidia-smi -i 0 -q -d POWER

This will show the MAX and MIN power allowed.

GTX 750 Ti as an example:
MIN POWER: 30 W
MAX POWER: 38.5 W

Then, you can reduce the watts to the MIN POWER allowed:

sudo nvidia-smi -pl 30

This gives you a net reduction of 22%.

Tested on Ubuntu, with max GPU load via running Unigine Heaven 4 Benchmark at MIN POWER.

For rigs with identical GPUs, you can set the power limit for all the cards at the same time with:
nvidia-smi -pm 1
sudo nvidia-smi -pl 30
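The same idea can be written as a loop over the GPU indices nvidia-smi reports, shown here in dry-run form: it only echoes the commands, so you can inspect them before deleting the echo to apply them. The power limit and the fallback index list are example values:

```shell
#!/bin/bash
# Sketch: build a power-limit command for every GPU index. Dry-run --
# it echoes the commands rather than running them. POWER_LIMIT is an
# example value; check your card's allowed range with nvidia-smi first.
POWER_LIMIT=30

# One index per line from nvidia-smi when available; otherwise fall
# back to a hand-written list (here 0 and 1) so the loop is testable.
gpu_indices() {
    nvidia-smi --query-gpu=index --format=csv,noheader 2>/dev/null \
        || printf '0\n1\n'
}

gpu_indices | while read -r idx; do
    echo sudo nvidia-smi -i "$idx" -pl "$POWER_LIMIT"
done
```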

---------------------------------------------------------------------Losing Share Efficiency after Updating the OS Security-----------------------------------------------------------------------

Switch to the stock Ubuntu non-Nvidia drivers - Reboot.
On the next boot, switch back to the 378 drivers - Reboot.
Re-enable overclocking and fan control.
Share efficiency will be restored to expected rates.
 

-------------------------------------------------------------------------PSU Capacitor Ageing--------------------------------------------------------------------------------------------------------------

The principal effect of this is a loss of efficiency. A PSU running at 88% efficiency when new may run closer to 78% after 5 years.

Naturally, this leads to more wasted watts; depending on your location's electricity pricing, buying a new PSU could be a worthwhile undertaking.

An extra 80 watts wasted on an 800-watt load works out to about $84 a year (at $0.12 per kilowatt-hour).
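The arithmetic behind that figure, assuming the extra 80 W is drawn 24/7 for a full year:

```shell
# 80 W extra draw, 24 h/day, 365 days, at $0.12 per kWh
awk 'BEGIN {
    kwh = 80 * 24 * 365 / 1000        # 700.8 kWh per year
    printf "%.1f kWh/year -> $%.2f/year\n", kwh, kwh * 0.12
}'
# prints: 700.8 kWh/year -> $84.10/year
```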


----------------------------------------------------------Old Method as Reference Material and No Longer Needed-----------------------------------------------------------------------------

Install Ubuntu 16.04, enabling the software update options for the Ubuntu development team and third parties.

On reboot after installation, open Ubuntu Software - update the OS via Ubuntu Software (important: use the OS tool, not a terminal) and reboot.

Next, open Software & Updates, then Additional Drivers, and install the Nvidia 367.57 drivers - these include extra tweaks from the Ubuntu development team for max GPU performance. Unfortunately, they do not allow overclocking, but you will fix this later on.

Next, go to Search Your Computer and bring up the Nvidia Control Panel. Open X Server Configuration and save the configuration file.

Next, open a terminal and enter the following:

sudo update-grub
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Reboot

This will enable all GPUs with screens and fan control on all GPUs.

Now, let's get the latest drivers from Nvidia, and CUDA 8 (or whatever CUDA version your mining software needs) from their website. Save these downloads to the default Downloads folder.

Next, disable the Nvidia 367.57 driver so you can install the latest Nvidia drivers: go back into Software & Updates, then Additional Drivers, and select the Nouveau display drivers. Apply changes.

Reboot


--------------------------------INSTALL DRIVERS VIA ADDITIONAL DRIVERS UBUNTU 16.04LTS----------------------------
Install the latest drivers with the following instructions:

Press Control Alt F2 to get to a non-desktop (text console) display.

Log in.

Switch off x-server with:

sudo service lightdm stop

Go to downloads folder with:

cd ~/Downloads

ls

This will display the driver file name; run it with:

sudo sh ./<the NVIDIA file name listed>

There will be two error messages, but select continue installation and say yes at the prompts.

Then, switch the x-server back on with:

sudo service lightdm start

Reboot
---------------------------------------------------------------------------------------------------------------------------------------------------
Now, when you go back to the Nvidia Control Panel, it will show overclocking is enabled on all GPU's.

Next, install CUDA 8 by opening a terminal in the Downloads folder:

sudo sh cuda_8.0.44_linux.run

Press Control C to fast-forward to the end of the EULA, then "accept".

Say "No" to installing the drivers (trying to install them while you have active Nvidia drivers will wreck the OS) and "Yes" to the toolkit, link and samples.
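One detail the post leaves out: CUDA 8's default install prefix is /usr/local/cuda-8.0, and the toolkit binaries and libraries aren't on the search paths until you add them. A typical addition to ~/.bashrc (a sketch assuming the default prefix; adjust the paths if you installed elsewhere):

```shell
# Assumes the default CUDA 8 prefix; change the paths if you chose another.
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```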
pallas
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
December 12, 2016, 07:32:27 PM
 #2

Thanks for this guide!

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 12, 2016, 07:42:49 PM
 #3

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.
Nikolaj
Sr. Member
****
Offline Offline

Activity: 445
Merit: 255


View Profile
December 24, 2016, 12:14:58 PM
 #4

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.

Thank you, nice guide Wink

ps: Merry Christmas
xPwnK
Newbie
*
Offline Offline

Activity: 41
Merit: 0


View Profile
December 24, 2016, 02:30:57 PM
 #5

Thank you for this guide! I'm sure this will help people who are new to ubuntu.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 24, 2016, 08:02:56 PM
 #6

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.

Thank you, nice guide Wink

ps: Merry Christmas

Merry Christmas as well
crysx
Sr. Member
****
Offline Offline

Activity: 364
Merit: 260


--- ChainWorks Industries ---


View Profile WWW
December 26, 2016, 05:57:51 AM
 #7

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.

Thank you, nice guide Wink

ps: Merry Christmas

Merry Christmas as well

have a happy Christmas mate ...

when i get back into it all in the next couple of days - ill see what ( if any ) of this guide works with redhat based systems ...

once again - no one has any guide or intention of releasing anything for rhel based systems ( like fedora ) even though it is one of the highest rated and most used systems on the planet ...

rhel itself is used in the majority of the corporate systems backend - and no one has tapped into that market to supply to those people and systems as well ... except cwi of course Wink ...

tanx and njoi the time of this joyous occasion ...

#crysx

ChainWorks Industries . grn - Ga2TFVPW3y2vd9vMdqLWfid9hf8RPSQV19 . exchange - https://bleutrade.com/exchange/GRN/BTC/ . email - crysx@gnxs.com .
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 27, 2016, 08:41:27 AM
 #8

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.

Thank you, nice guide Wink

ps: Merry Christmas

Merry Christmas as well

have a happy Christmas mate ...

when i get back into it all in the next couple of days - ill see what ( if any ) of this guide works with redhat based systems ...

once again - no one has any guide or intention on releasing anything for rhel based systems ( like feora ) even though it is onf of the highest rated and used system on the planet ...

rhel itself is used in the majority of the corporate systems backend - and no one has tapped into that market to supply to those people and systems as well ... except cwi of course Wink ...

tanx and njoi the time of this joyous occasion ...

#crysx

The latest Mint version is based on Ubuntu 16.04LTS - so I'd reckon that will be similar process
crysx
Sr. Member
****
Offline Offline

Activity: 364
Merit: 260


--- ChainWorks Industries ---


View Profile WWW
December 27, 2016, 09:01:45 AM
 #9

Thanks for this guide!

No problems, mate Smiley

It sorts out a lot of problems people have been having getting the max GPU performance on Linux installations and enabling all the GPUs overclocking and fan controls.

Thank you, nice guide Wink

ps: Merry Christmas

Merry Christmas as well

have a happy Christmas mate ...

when i get back into it all in the next couple of days - ill see what ( if any ) of this guide works with redhat based systems ...

once again - no one has any guide or intention on releasing anything for rhel based systems ( like feora ) even though it is onf of the highest rated and used system on the planet ...

rhel itself is used in the majority of the corporate systems backend - and no one has tapped into that market to supply to those people and systems as well ... except cwi of course Wink ...

tanx and njoi the time of this joyous occasion ...

#crysx

The latest Mint version is based on Ubuntu 16.04LTS - so I'd reckon that will be similar process

yup ...

debian based systems like that are fundamentally different to the rhel based systems ...

from installs to support in repos - some things are similar - but very few ... most things between the different distributions are completely different - but usually not impossible to do ...

the type of mining farm that is currently being devised is unique to cwi and thefarm ... it is being done no where else in the world and the cooling design i am designing for the miners themselves - is just as unique ... so when an oc is stable for a particular type of card or chipset - then the cooling design can be implemented to better keep the gpu cards themselves much cooler - and the mining much more simplified but highly optimized with our miner ...

so this is one of the things i would like to see happen within the coming 12months of 2017 come to fruition ... alongside all the other things we have - this will make a handy addition to the knowledgebase we are also putting together ... this will also simplify the installation procedures which almost NO developmental projects out there support off the bat - redhat systems ...

community members and devs alike seem to think the word linux is synonymous with ubuntu / debian ... its not ...

will let you know how we go in the coming months ...

#crysx

QuintLeo
Legendary
*
Offline Offline

Activity: 1498
Merit: 1030


View Profile
December 27, 2016, 09:12:08 AM
 #10

Ubuntu variants like Xubuntu should work the same.

 Xubuntu has that nice "startup" stuff built into the XFCE desktop though, and doesn't make you put ".sh" at the end of shell scripts to run them, though they DO need to be chmodded to executable status.


 Most mining software programmers write for Ubuntu (I'm not sure why that started but it's definitely the default by now).
 Getting most mining software to work on non-Ubuntu (or at least non-Debian) variants therefore becomes a major chore at best.

 I just wish Grub didn't make "cloning" an existing working installation a PITA (I can clone a working Slackware setup with a single "dd" command, trivially easy and built into ANY distro, because of its use of LILO).

 I suspect it wouldn't be such a PITA if grub didn't force the use of that stupid "UUID" .... GARBAGE ....


I'm no longer legendary just in my own mind!
Like something I said? Donations gratefully accepted. LYLnTKvLefz9izJFUvEGQEZzSkz34b3N6U (Litecoin)
1GYbjMTPdCuV7dci3iCUiaRrcNuaiQrVYY (Bitcoin)
crysx
Sr. Member
****
Offline Offline

Activity: 364
Merit: 260


--- ChainWorks Industries ---


View Profile WWW
December 27, 2016, 09:45:45 AM
 #11

Ubuntu varients like XUbuntu should work the same.

 XUbuntu has that nice "startup" stuff built into the XFCE desktop though, and doesn't make you put ".sh" at the end of shell scripts to run them, though they DO need to be CHMODed to an executeable status.


 Most mining software programmers write to Ubuntu (I'm not sure why that started but it's definitely the default by now).
 Getting most mining sofware to work on non-Ubuntu (or at least non-Debian) varients therefore becomes a major chore at best.

 I just wish Grub didn't make "cloning" an existing working installation a PITA (I can clone a working Slackware setup with a single "DD" command, trivial easy and built into ANY distro, because of it's use of LILO).

 I suspect it wouldn't be such a PITA if grub didn't force the use of that stupid "UUID" .... GARBAGE ....



hehehe ...

i actually couldnt agree with you more ...

debian based distros are now the 'norm' - but that doesnt mean that a multitude of rhel based systems are not in use today - because they are ... infact in much more businesses and corporates than any other distribution ...

but come crypto mining - and debian based systems become something that is used throughout - and it blows my mind why ... not that i am against these systems - just that im agaians the use of the term 'linux' to describe ubuntu - rather than ALL distributions ...

anyway - there are so many that have made it a warcry now - that newbies that come into the miner scene seem to get led down into a spiral rabbit hole and never come out the same when it comes to distros of any sort ... ive been a redhat guy ( yup - ive heard it all before and heard all the condolence speeches ) but redhat systems are the number ONE and primary business systems in the world to date ... so in a crypto sense - it may not be too much of an accolade - but in the business world ( which makes up more than 88% of the linux systems globally ) it matters that redhat is the stable and common system ... which means if crypto mining actually took anything really seriously with regards to implementation in this field - they may just have some corporate grunt on cryptos side ...

but alas ...

the details of the instructions for oc and linux of this thread however - are actually quite good ...

tanx for that victim Smiley ...

#crysx

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 27, 2016, 09:48:52 AM
 #12

Ubuntu varients like XUbuntu should work the same.

 XUbuntu has that nice "startup" stuff built into the XFCE desktop though, and doesn't make you put ".sh" at the end of shell scripts to run them, though they DO need to be CHMODed to an executeable status.


 Most mining software programmers write to Ubuntu (I'm not sure why that started but it's definitely the default by now).
 Getting most mining sofware to work on non-Ubuntu (or at least non-Debian) varients therefore becomes a major chore at best.

 I just wish Grub didn't make "cloning" an existing working installation a PITA (I can clone a working Slackware setup with a single "DD" command, trivial easy and built into ANY distro, because of it's use of LILO).

 I suspect it wouldn't be such a PITA if grub didn't force the use of that stupid "UUID" .... GARBAGE ....



Well, the Ubuntu development team has extra optimisations to ensure max performance with GPU drivers. There are quite a few decent games playable on Ubuntu now.

For example: I tested a straight install of the latest Nvidia drivers, without first installing the Ubuntu officially tested Nvidia 367.57 drivers via Software & Updates and then Additional Drivers.

I found that the straight install of the latest Nvidia drivers became unstable and would hash at 40% of the GPU's actual max hashrate.

Therefore, the Ubuntu development team's extra installs make the Nvidia drivers stable and achieve max GPU performance, and these carry over when updating to the latest Nvidia drivers.
crysx
Sr. Member
****
Offline Offline

Activity: 364
Merit: 260


--- ChainWorks Industries ---


View Profile WWW
December 27, 2016, 09:51:48 AM
 #13

Ubuntu varients like XUbuntu should work the same.

 XUbuntu has that nice "startup" stuff built into the XFCE desktop though, and doesn't make you put ".sh" at the end of shell scripts to run them, though they DO need to be CHMODed to an executeable status.


 Most mining software programmers write to Ubuntu (I'm not sure why that started but it's definitely the default by now).
 Getting most mining sofware to work on non-Ubuntu (or at least non-Debian) varients therefore becomes a major chore at best.

 I just wish Grub didn't make "cloning" an existing working installation a PITA (I can clone a working Slackware setup with a single "DD" command, trivial easy and built into ANY distro, because of it's use of LILO).

 I suspect it wouldn't be such a PITA if grub didn't force the use of that stupid "UUID" .... GARBAGE ....



Well, the Ubuntu development team have extra optimisation to ensure the max performance happens with GPU drivers. There are quite a few decent games playable on Ubuntu now.

For example: I tested a straight install of the latest Nvidia drivers, without installing via Software & Updates , and then Additional Drivers the Ubuntu officially tested Nvidia 367.57 drivers.

I found that the straight install of latest Nvidia drivers became unstable and would hash at 40% of the GPUs actual max hashrate speed.

Therefore, the Ubuntu development teams have extra installs, which make Nvidia drivers stable and achieve the max GPU performance, which carry when updating too the latest Nvidia drivers.

hence ...

backing what i was saying ...

if dev teams would actually support redhat systems as much as debian based systems - we would have a multitude of systems to choose from - and not a finite set of systems ...

though i totally agree with ubuntus support structure and setup - i am saddened by the lack of support for almost any other distro other that ubuntu / debian based ...

which we will hopefully change this coming year Wink ...

#crysx

painmaker
Member
**
Offline Offline

Activity: 71
Merit: 10


View Profile
December 30, 2016, 02:19:38 PM
 #14

hi there and thanks for this writeup! pretty much covers what i've tried/observed so far.
besides setting coolbits to 28 (which i don't think is necessary as a value of 16 should suffice unless one wants to fiddle with GPU voltages) i've also come across the xorg.conf setting of

  Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1"

which ought to help set a performance level where GPUMemoryTransferRateOffset can be set, which in my case is only the card's highest performance level (can be checked with nvidia-settings -q GPUPerfModes -t)!

however, after booting the card seems to be in the highest perf-level, but as soon as i start my miner the perf-level goes back to the second highest level, where the GPUMemoryTransferRateOffset cannot be set.
any idea what i might do wrong on this?  Huh

setting fanspeed and GPUGraphicsClockOffset seems to work fine as both are settable not only in the highest perf-level...
driver used on my debian system is 375.20.

cheers!
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 30, 2016, 02:50:29 PM
 #15

hi there and thanks for this writeup! pretty much covers what i've tried/observed so far.
besides setting coolbits to 28 (which i don't think is necessary as a value of 16 should suffice unless one wants to fiddle with GPU-voltages) i've also come across the xorg.conf setting of

 Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1

which ought to help set a performance-level where GPUMemoryTransferRateOffset can bet set which is in my case only the cards highest performance-level (can be checked with nvidia-settings -q GPUPerfModes -t)!

however, after booting the card seems to be in the highest perf-level but as soon as i start my miner the perf-leves goes back to the second highest level where the GPUMemoryTransferRateOffset cannot be set.
any idea what i might do wrong on this?  Huh

setting fanspeed and GPUGraphicsClockOffset seems to work fine as both are settable not only in the highest perf-level...
driver used on my debian-system is 375.20.

cheers!

Firstly, this does not work:

Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1"

I would delete it; I tried it and all it did was mess up my settings.

Nvidia Compute always drops the memory to the 2nd memory profile setting, which is a legacy problem from the launch of the 10 series cards.

You need to set up a .sh overclocking profile that adds a 404 MHz overclock (or whatever your GPU is short by) to the memory, so the gaming memory performance of 8008 MHz is applied to Compute tasks.
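A sketch of that arithmetic: take the gaming memory rate the card should run at, subtract the rate you actually observe under Compute (the 7604 MHz here is a hypothetical reading; check yours in nvidia-settings), and use the difference as the offset:

```shell
# Hypothetical numbers: gaming rate 8008 MHz, observed compute rate 7604 MHz.
GAMING_RATE=8008
COMPUTE_RATE=7604
OFFSET=$((GAMING_RATE - COMPUTE_RATE))
echo "$OFFSET"   # prints 404

# The line that would then go in the startup .sh (echoed here, not run):
echo nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=$OFFSET"
```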

vv181
Legendary
*
Offline Offline

Activity: 1932
Merit: 1273


View Profile
December 30, 2016, 02:54:45 PM
 #16

thanks for the guide!
btw please make overvolting guide on linux.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 30, 2016, 03:03:56 PM
 #17

thanks for the guide!
btw please make overvolting guide on linux.


The professional overclockers on YouTube have said that Nvidia 10 series cards don't get a worthwhile boost in higher-MHz stability from adding more voltage.
painmaker
Member
**
Offline Offline

Activity: 71
Merit: 10


View Profile
December 30, 2016, 03:06:03 PM
 #18

[SNIP]
[SNIP]
Nvidia Compute always drops memory to 2nd memory profile setting, which is legacy problem from the launch of 10 series cards.

You need to set up .sh overclocking profile that will add 404mhz (or whatever you GPU is short at) overclock to the memory to get the gaming memory performance of 8008mhz applied to the Compute tasks.  
thanks a lot for your reply! i'm not sure i understand you correctly what you mean by 'set up .sh overclocking profile'.
i assume you mean i need to create a bash-script (.sh) which gets executed during startup (just as you posted in you guide)?
this is exactly what i have done and this leads to the described behaviour in my case (level 2 (without clock-offsets) of a total of 3 levels).

without the RegistryDwords-option, the card is stuck in level 2 (of 3) right from the beginning.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 30, 2016, 03:19:11 PM
 #19

[SNIP]
[SNIP]
Nvidia Compute always drops memory to 2nd memory profile setting, which is legacy problem from the launch of 10 series cards.

You need to set up .sh overclocking profile that will add 404mhz (or whatever you GPU is short at) overclock to the memory to get the gaming memory performance of 8008mhz applied to the Compute tasks.  
thanks a lot for your reply! i'm not sure i understand you correctly what you mean by 'set up .sh overclocking profile'.
i assume you mean i need to create a bash-script (.sh) which gets executed during startup (just as you posted in you guide)?
this is exactly what i have done and this leads to the described behaviour in my case (level 2 (without clock-offsets) of a total of 3 levels).

without the RegistryDwords-option, the card is stuck in level 2 (of 3) right from the beginning.

Yeah, a bash script.

Then it is a problem with your driver installation.

Or did you accidentally say "Yes" to installing the Nvidia drivers when installing CUDA 8 - that will wreck everything.

Or you've got a corrupted Ubuntu OS installation.

Anyway, I would zero-wipe your SSD and start again.
ken-ray
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
January 29, 2017, 06:56:30 AM
 #20

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
January 29, 2017, 10:27:56 PM
 #21

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration



Well, you can do that, but when I tested it, some mining software lost a lot of hashrate after 4-7 hours of mining and hashed at 60% less.



ken-ray
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
January 30, 2017, 04:19:12 AM
 #22

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration



Well, you can do that, but when I tested it, some mining software lost a lot of after 4-7 hours of mining and hashed at 60% less.





I mostly run folding, results may vary I guess.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
January 30, 2017, 07:00:40 AM
 #23

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration



Well, you can do that, but when I tested it, some mining software lost a lot of after 4-7 hours of mining and hashed at 60% less.





I mostly run folding, results may vary I guess.

Just as a correction: I tested with the 375 drivers, to be accurate.

Lots of people mining cryptos overclock the core and memory, and the later drivers include better optimisations for that.

I don't have time to go back and test older drivers - gaming at the moment.
kopija
Sr. Member
****
Offline Offline

Activity: 308
Merit: 250


View Profile
April 15, 2017, 09:47:39 AM
 #24

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration



Well, you can do that, but when I tested it, some mining software lost a lot of after 4-7 hours of mining and hashed at 60% less.





I mostly run folding, results may vary I guess.

No they may not.
 I run both folding and mining software and these four lines are all that is needed.
No need for complications in the OP.

we are nothing but a smart contracts on a cosmic blockchain
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
April 16, 2017, 03:25:06 PM
 #25

Install Ubuntu 16.10

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration



Well, you can do that, but when I tested it, some mining software lost a lot of after 4-7 hours of mining and hashed at 60% less.





I mostly run folding, results may vary I guess.

No they may not.
 I run both folding and mining software and these four lines are all that is needed.
No need for complications in the OP.


It is an old guide - some of it does not apply anymore.
lost_post
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 17, 2017, 06:07:47 PM
 #26

Thanks a lot for your guide.
I tried to do it step by step. Three times. But had no success.
The first difference is: after installing Ubuntu (ubuntu-16.04.2-desktop-amd64), when I enable the nvidia driver for the first time, I already have the latest version of the nvidia drivers. I can't install 367.57. Is that ok?
After installing the nvidia drivers I can open nvidia settings, but I don't know where to go for "And X Server Configuration and save the configuration file." I tried to reboot, and after that I can see temps, fan rpm, and GPU clocks.
So then, when I disabled the nvidia driver and enabled the Nouveau driver, I started to install the latest nvidia driver one more time. One time I had an error that it could not install the nvidia drivers because the Nouveau driver was enabled. Am I doing this right?
After successful installation of cuda (without drivers) I tried to overclock the gpu, and got a lot of errors: "Failed to connect to Mir: Failed to connect to server socket: No such file or directory. Unable to init server: Could not connect: Connection refused." I tried to install mir, but it was a bad idea.
What am I doing wrong? Can you give me some help?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 18, 2017, 03:07:32 AM
Last edit: June 18, 2017, 03:27:17 AM by thevictimofuktyranny
 #27

Thanks a lot, for you guide.
I tryied to do it step by step. Three times. But had no succsesfull.
First difference is - after install Ubuntu (ubuntu-16.04.2-desktop-amd64), when I enable nvidia driver first time, I already have last version of nvidia drivers. I can't install  367.57 . Is it ok?
After install nvidia drivers I can open nvidia settings, but i don't know where to do to "And X Server Configuration and save the configuration file. " I tryed to reboot and after that I can see temps, fans rpm, clock of gpu's.
So then, when I disabled nvidia driver and enabled Nouveau driver, I start to install last nvidia driver one more time. One time I had error that because of enabled Nouveau driver, it can not install nvidia drivers. Is my job right?
After succesfull instalation cuda (without drivers) I tryed to overclock gpu, and have a lot of eerors. Failed to connect to Mir: Failed to connect to server socket: No such file or directory. Unable to init server: Could not connect: Connection refused. I tryed to install mir, but it was a bad idea.
What am I doing wrong? Can you give me some help?

The guide is pretty old. It's super-simple now Smiley

These are the steps now:

1) Install Ubuntu 16.04 LTS or 17.04

2) Update the Operating System via Software Centre. REBOOT

3) Go to Additional Drivers and switch to the CPU drivers, if not automatically loaded. REBOOT

4) Go to Additional Drivers and switch to the Nvidia drivers. REBOOT

5) Open a Terminal and enter each line:

sudo update-grub

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

REBOOT

Fans control and overclocking is now enabled.

I've amended the first page, which explains how to set up automatic fan speeds and overclocks.
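The startup file from the first page can be created straight from the terminal. A sketch, assuming a single card (gpu:0/fan:0) and the example offsets from the opening post; note the shebang must be #!/bin/bash, and GPUGraphicsClockOffset is the recognised core-clock attribute name (the GPUGraphicsMemoryOffset name quoted later in this thread triggers an "Unrecognized attribute name" error):

```shell
# Write the per-GPU overclock/fan startup script and mark it executable.
# The +100 offsets and 80% fan speed are just examples - tune for your card.
cat > gpu0_oc.sh <<'EOF'
#!/bin/bash
# [3] targets the highest performance level on Pascal cards
nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'
EOF
chmod +x gpu0_oc.sh
```

Then add the file to Startup Applications so it runs when you log in, as the opening post describes.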


lost_post
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 18, 2017, 09:34:40 AM
Last edit: June 18, 2017, 10:56:38 AM by lost_post
 #28

Thanks a lot for the answer.
Do I need to install CUDA 8, or is it already installed with the Additional Drivers or the OS?
I can use overclocking and fan control using nvidia-settings, right?

I did all you said.
When I try to start nvidia_gpus_oc.sh
Quote
#!/bin/bash
nvidia-settings -a '[gpu:0]/GPUGraphicsMemoryOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'

I get errors:
Quote
ERROR: Error parsing assignment '[gpu:0]/GPUGraphicsMemoryOffset[3]=100'
       (Unrecognized attribute name).
ERROR: Error assigning value 80 to attribute 'GPUTargetFanSpeed'
       (Rig02:0[fan:0]) as specified in assignment
       '[fan:0]/GPUTargetFanSpeed=80' (Unknown Error).

Then I tried changing fan:0 to gpu:0, and the second error went away, but the fan didn't start.

When I add the same commands for gpu:1, I get one more error for gpu:1
Quote
ERROR: Error parsing assignment '[gpu:1]/GPUGraphicsMemoryOffset[3]=100'
       (Unrecognized attribute name).

ERROR: Unable to load info from any available system

ERROR: Error assigning value 80 to attribute 'GPUTargetFanSpeed'
       (Rig02:0[fan:1]) as specified in assignment
       '[fan:1]/GPUTargetFanSpeed=80' (Unknown Error).

What is wrong now?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 18, 2017, 11:34:01 AM
 #29

Thanks a lot for answer.
Do i need to install cuda8 or it is already instaleted with Additional Drivers or OS ?
I can use oveclock and fan control using nvidia-settings, right?

I did all you said.
When I try to start nvidia_gpus_oc.sh
Quote
#!/bin/bash
nvidia-settings -a '[gpu:0]/GPUGraphicsMemoryOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'

I get errors:
Quote
ERROR: Error parsing assignment '[gpu:0]/GPUGraphicsMemoryOffset[3]=100'
       (Unrecognized attribute name).
ERROR: Error assigning value 80 to attribute 'GPUTargetFanSpeed'
       (Rig02:0[fan:0]) as specified in assignment
       '[fan:0]/GPUTargetFanSpeed=80' (Unknown Error).

Then I tryed to change fan:0 to gpu:0, and have no second error. But fan didn't start.

When I add same commands for gpu1 I have one more error ofr gpu1
Quote
ERROR: Error parsing assignment '[gpu:1]/GPUGraphicsMemoryOffset[3]=100'
       (Unrecognized attribute name).

ERROR: Unable to load info from any available system

ERROR: Error assigning value 80 to attribute 'GPUTargetFanSpeed'
       (Rig02:0[fan:1]) as specified in assignment
       '[fan:1]/GPUTargetFanSpeed=80' (Unknown Error).

What is wrong now?

You don't need to install CUDA 8 - it's already included in the drivers.

You can set the fans one by one: go to Nvidia X Server Settings, tick Enable GPU Fan Settings, slide the bar to the desired speed and apply.

Here is a downloadable working .sh file for you to check against - sorry about the adverts on the site, and the link is only valid for 30 days:

https://ufile.io/miuj2

Finally, you may have custom fan controls made by your board partner that are incompatible with X Server Settings - so they cannot be set via an .sh file.
lost_post
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 18, 2017, 02:39:40 PM
 #30

I tried one more time. I tried your oc.sh - the same errors.
I can't set the clocks or fan speed. The same errors.
Via Nvidia X Server Settings I also can't set clocks or fan speed - I can only read them, or set the PowerMizer profile. Is that OK?
Maybe it needs a special motherboard BIOS configuration? Maybe I must disable the internal GPU and use only the Nvidia one?
I tried to view the available clocks via nvidia-smi and it shows N/A. Is that normal?
I am using an MSI GTX 1070 8G Gaming X. Maybe I need to flash some "unlocked BIOS" to change the clocks?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 18, 2017, 02:45:26 PM
 #31

I tried one more time.  Tried your oc.sh - the same errors.
I can't set clock or fan speed. The same errors.
Via Nvidia X Server Settings I also can't to set clocks or fans speed. Just read, or set PowerMizer profile. Is it ok?
Maybe special configuration of MB bios? Maybe I must disable internal GPU and use NVIDIA?
I tried to view available clocks via nvidia-smi and see that it is N/A. Is it normal?
I am using MSI GTX1070 8G Gaming X. Maybe I need to flash some "unlocked bios" to change clocks?


How many GPUs?

What processor and chipset?

Have you tried Ubuntu 17.10? It uses kernel 4.10, which may have better compatibility with newer hardware.
lost_post
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 18, 2017, 04:57:36 PM
 #32

I did not try 17.10. I will try Ubuntu 17.10.
I have an Asus Z270-P motherboard, an Intel Celeron G4500 CPU and 4 GB RAM. Right now I have two MSI GTX 1070 Gaming X GPUs, but I am planning to use 8. Or 6.
Any suggestions?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 18, 2017, 05:46:01 PM
 #33

I did not try 17.10. I will try Ubuntu 17.10.
I have mb Asus Z-270-P, cpu intel celeron G4500, 4gb RAM, now i have two GPUs MSI GTX1070 Gaming X, but I am planing to use 8. Or 6.
Any suggestions?

Your chipset is very new, even though the CPU is from the previous generation. Ubuntu 17.10, with the newer 4.10 kernel, should be more compatible.

Remember to update the BIOS as well.

Afterwards, go into the BIOS, find the PCH option under the PCI-Express speed, and change it from AUTO to Gen 1 or Gen 2.

I've got 3 versions of Ubuntu 16.04 LTS with different kernels - some work and some don't.
lost_post
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
June 18, 2017, 07:16:27 PM
 #34

Did you mean 17.04? The Ubuntu site has no link to 17.10, only 17.04.

The PCH option under the PCI-Express speed works on the PCI-E x16 slot 0, but I am using PCI-E x1 slots 2 and 3.
Does it change anything?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 18, 2017, 07:38:15 PM
 #35

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example, on the Asus Prime H270 Plus, when it is not set to Gen 1 (i.e. on either Gen 2 or Gen 3), you get display errors before the login page.

And only 2 cards are recognised - even though the motherboard supports 4 GPUs on PCI-E x1.
Brucelats
Sr. Member
****
Offline Offline

Activity: 326
Merit: 250



View Profile
June 19, 2017, 12:14:24 AM
 #36

Hey!

Thanks for the guide. I managed to set up all 4 of my GPUs so they can be overclocked and the fan speed adjusted.

But I have the same problem as a guy earlier in the thread, and we didn't get a clear answer.

When my miner starts (Claymore dual miner), all my GPUs go back to Level 2 performance. If they were on Level 3, their clock would be higher - their "base" clock. For example, my card has an 8000 memory clock; when I add 400 to the memory clock on Level 2, on Level 3 it would have 8.4k, and so on. It feels like I am not using my cards' full potential.

How can I unlock Level 3 so they mine on that level?

Cheers!

I use Ubuntu 16.04.

Brucelats
Sr. Member
****
Offline Offline

Activity: 326
Merit: 250



View Profile
June 19, 2017, 12:24:51 AM
 #37

I tried one more time.  Tried your oc.sh - the same errors.
I can't set clock or fan speed. The same errors.
Via Nvidia X Server Settings I also can't to set clocks or fans speed. Just read, or set PowerMizer profile. Is it ok?
Maybe special configuration of MB bios? Maybe I must disable internal GPU and use NVIDIA?
I tried to view available clocks via nvidia-smi and see that it is N/A. Is it normal?
I am using MSI GTX1070 8G Gaming X. Maybe I need to flash some "unlocked bios" to change clocks?




I use 3x Asus Dual 8GB OC-version GTX 1070 and an MSI GTX 1070 Gaming X.


All my cards work and I can OC them. I can set memory offsets and core offsets. I have Ubuntu 16.04.

It all worked after I ran 4 lines of code:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-370 nvidia-cuda-toolkit
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration

Although I eventually added newer Nvidia drivers and did some updates, it worked like this as well.

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 19, 2017, 12:29:56 AM
 #38

Hey!

Thanks for guide. I mannaged to set all my 4 GPUs to be able to get overclocked, and to adjust fan speed.

But i have problem again like a guy before, but we didnt got clear answer.


When my miner starts (Claymoore dual miner) all my GPU-s go back to Level 2 performance. If they were on LV 3 their clock would be higher, their "base" clock. For example my card have 8000 memory clock when i add 400 to mem clock on level 2, on level 3 it would hav 8.4k and so on. Feels like i am not using my cards full potential.


How can i unlock LV 3 so they mine on that level.


Cheers!


I use ubuntu 16.04

Level 2 is automatic for compute workloads, as specified by Nvidia. Only gaming workloads will run at the higher memory clock (Level 3) automatically.

It's just the memory that down-clocks - the original release (last year) had heat issues running Micron memory ICs.

This was fixed with a BIOS update earlier this year, but the compute memory down-clock has not been changed, because lots of people have not updated their BIOS. LOL
Brucelats
Sr. Member
****
Offline Offline

Activity: 326
Merit: 250



View Profile
June 19, 2017, 12:39:12 AM
 #39

Hey!

Thanks for guide. I mannaged to set all my 4 GPUs to be able to get overclocked, and to adjust fan speed.

But i have problem again like a guy before, but we didnt got clear answer.


When my miner starts (Claymoore dual miner) all my GPU-s go back to Level 2 performance. If they were on LV 3 their clock would be higher, their "base" clock. For example my card have 8000 memory clock when i add 400 to mem clock on level 2, on level 3 it would hav 8.4k and so on. Feels like i am not using my cards full potential.


How can i unlock LV 3 so they mine on that level.


Cheers!


I use ubuntu 16.04

Level 2 is automatic for Compute workloads as specified by Nvidia. Only gaming workloads will run at the higher memory clock automatically (Level 3).

It's just the memory, that down-clocks - the originally release (last year) had heat issues running Micron memory ICs.

This was fixed with bios update earlier this year, but the Compute memory down-clock has not been changed, because lot's of people have not updated their bios. LOL


Ahh, OK, I see. So it's not possible yet to get to Level 3.

Are there any other Linux tweaks to get higher speeds besides just slamming on OC numbers? I run my MSI at +1400 from 2000 on the memory clock. It's stable and not overheating - around 40 Celsius at 60% fan.

I am wondering whether I can get the full potential from all these GTX cards I have and plan to buy.

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 19, 2017, 08:30:55 AM
Last edit: June 19, 2017, 07:41:45 PM by thevictimofuktyranny
 #40

Hey!

Thanks for guide. I mannaged to set all my 4 GPUs to be able to get overclocked, and to adjust fan speed.

But i have problem again like a guy before, but we didnt got clear answer.


When my miner starts (Claymoore dual miner) all my GPU-s go back to Level 2 performance. If they were on LV 3 their clock would be higher, their "base" clock. For example my card have 8000 memory clock when i add 400 to mem clock on level 2, on level 3 it would hav 8.4k and so on. Feels like i am not using my cards full potential.


How can i unlock LV 3 so they mine on that level.


Cheers!


I use ubuntu 16.04

Level 2 is automatic for Compute workloads as specified by Nvidia. Only gaming workloads will run at the higher memory clock automatically (Level 3).

It's just the memory, that down-clocks - the originally release (last year) had heat issues running Micron memory ICs.

This was fixed with bios update earlier this year, but the Compute memory down-clock has not been changed, because lot's of people have not updated their bios. LOL


Ahh ok. I see. So it's not possible yet to get to level 3.

Are there any other Linux tweaks possible to get higher speeds beside just slaming OC numbers? I run my MSI at +1400 from 2000 on memory clock. It's stable and not overheating, at around 40 celsius at 60% fan.

I am wondering can i get full potential from all these GTX i have and plan to buy?

How well overclocking works out is pretty much based on using the card less than 24/7 over many years.

Say gaming - a person plays games 4 hours a day - overclocking will still degrade the GPU, but because the card is not under full load 24/7, it will last well past the warranty period.

I'm not overclocking myself (you don't need to when you mine for accumulation) - so perhaps consult some threads from last year and see what the lifespan is on overclocked Nvidia GPUs.

Back in the day, AMD kept releasing new GPU architectures every year or so! And those old releases, like Hawaii, were engineered well over the necessary specification to keep warranty events low. So those GPUs tended to last, even with 16% overclocks.

But the current GPUs are all on 2-year architecture cycles, so they will not be like those architectures.

Try this Nvidia thread, which has a lot of people overclocking their GPUs:

https://bitcointalk.org/index.php?topic=826901.0;topicseen
Brucelats
Sr. Member
****
Offline Offline

Activity: 326
Merit: 250



View Profile
June 21, 2017, 02:23:14 AM
 #41

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04LTS.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example:  Asus Prime H270 Plus - when it is not set to Gen 1 - on either Gen 2 or Gen 3 you get display errors, before the log in page.

And, only 2 cards are recognised - even though motherboard support is for 4 GPUs on PCI-E 1X.

Hey! Do you have any clues how to make my Asus Z270-P motherboard recognise 5 or more GPUs? I can do 4 but not more Sad I am desperate and not sure what to do. I use Ubuntu 16.04, desktop version, downloaded from the Ubuntu site.

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 21, 2017, 10:17:06 AM
 #42

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04LTS.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example:  Asus Prime H270 Plus - when it is not set to Gen 1 - on either Gen 2 or Gen 3 you get display errors, before the log in page.

And, only 2 cards are recognised - even though motherboard support is for 4 GPUs on PCI-E 1X.

Hey! Do you have any clues how to make my Asus Z270-P Motherboard recognize 5 or more GPU-s. I can do 4 but not more Sad i am desperate. Not sure what to do. I use ubuntu 16.04 downloaded from ubuntu site. Desktop version

Make sure you have BIOS version 0609.

Find DMI/OPI Configuration and change it to Gen 1.

Find PEG Port Configuration and change it to Gen 1.

Then, after installing the Nvidia drivers on Ubuntu, go back into the BIOS, find Above 4G Decoding and change it to Enabled.

thesmokingman
Full Member
***
Offline Offline

Activity: 212
Merit: 100



View Profile
June 21, 2017, 11:37:45 AM
 #43

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04LTS.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example:  Asus Prime H270 Plus - when it is not set to Gen 1 - on either Gen 2 or Gen 3 you get display errors, before the log in page.

And, only 2 cards are recognised - even though motherboard support is for 4 GPUs on PCI-E 1X.

Hey! Do you have any clues how to make my Asus Z270-P Motherboard recognize 5 or more GPU-s. I can do 4 but not more Sad i am desperate. Not sure what to do. I use ubuntu 16.04 downloaded from ubuntu site. Desktop version

Have Bios version 0609.

Find DMI/OPI Configuration and change to Gen 1

Find PEG Port Configuration and change to Gen 1

Then after installing drivers on  Ubuntu Nvidia drivers go back into bios and find Above 4G Decoding and change it to Enabled



I see you say install the drivers, then go back and enable Above 4G Decoding. What about if you're using a prebuilt OS with drivers already loaded, like pimpOS?

PIMP your AMD & Nvidia Farm. Finally a Multi-Miner Linux 16.04 Distro with Web Monitoring and On the Fly Algo-Switching. My PIMP AMD/Nvidia FARM: https://miner.farm/farmer/554/farmstatus
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
June 21, 2017, 11:48:08 AM
 #44

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04LTS.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example:  Asus Prime H270 Plus - when it is not set to Gen 1 - on either Gen 2 or Gen 3 you get display errors, before the log in page.

And, only 2 cards are recognised - even though motherboard support is for 4 GPUs on PCI-E 1X.

Hey! Do you have any clues how to make my Asus Z270-P Motherboard recognize 5 or more GPU-s. I can do 4 but not more Sad i am desperate. Not sure what to do. I use ubuntu 16.04 downloaded from ubuntu site. Desktop version

Have Bios version 0609.

Find DMI/OPI Configuration and change to Gen 1

Find PEG Port Configuration and change to Gen 1

Then after installing drivers on  Ubuntu Nvidia drivers go back into bios and find Above 4G Decoding and change it to Enabled



I see you say install the drivers then go back and enable Above 4G encoding. What about if you're using a prebuilt OS with drivers already loaded like pimpOS?

Try it! It takes less than 1 hour to install a Linux OS.

It's just to avoid getting a no display event (black screen) when you don't have drivers installed.
thesmokingman
Full Member
***
Offline Offline

Activity: 212
Merit: 100



View Profile
June 21, 2017, 05:29:08 PM
 #45

Did you mean 17.04 ? Ubuntu site can't have link to 17.10. Only 17.04

PCH option under the PCI-Express speed works on PCI-E X16 slot 0. But I am using PCI-E X1 slot 2,3.
Does it change anything?

Yeah, 17.04LTS.

You need to set all the slots to Gen 1 or Gen 2, whichever one works. Otherwise, you'll get weird Nvidia display errors.

For example:  Asus Prime H270 Plus - when it is not set to Gen 1 - on either Gen 2 or Gen 3 you get display errors, before the log in page.

And, only 2 cards are recognised - even though motherboard support is for 4 GPUs on PCI-E 1X.

Hey! Do you have any clues how to make my Asus Z270-P Motherboard recognize 5 or more GPU-s. I can do 4 but not more Sad i am desperate. Not sure what to do. I use ubuntu 16.04 downloaded from ubuntu site. Desktop version

Have Bios version 0609.

Find DMI/OPI Configuration and change to Gen 1

Find PEG Port Configuration and change to Gen 1

Then after installing drivers on  Ubuntu Nvidia drivers go back into bios and find Above 4G Decoding and change it to Enabled



I see you say install the drivers then go back and enable Above 4G encoding. What about if you're using a prebuilt OS with drivers already loaded like pimpOS?

Try it! It takes less than 1 hour to install a Linux OS.

It's just to avoid getting a no display event (black screen) when you don't have drivers installed.

Yep, I've been trying everything but can't seem to get past a black screen with 4G enabled on the prebuilt OS I'm using. I was just asking since you mentioned installing the drivers first, then enabling 4G Decoding. Thanks for outlining what you did to get this to work. I've been using xorg.conf to enable overclocking, so I'm interested in giving this a shot and comparing the two methods.

Thanks and have a good one!

alextheman12
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
July 17, 2017, 10:42:53 PM
 #46

How do you set this up for multiple GPUs? You wrote that I need to create the .sh file for each GPU, but it doesn't specify what to add to each .sh file in order for it to execute on each GPU. What am I missing?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
July 18, 2017, 01:00:30 PM
 #47

How do you set this for multiple GPU's you put that I need to set the .sh file for each GPU? But it doesn't specifiy what to add to each .sh file in oreder for it to execute on each GPU? What am I missing?

Create multiple files and just change gpu:0 to gpu:1 (and fan:0 to fan:1), and so on.
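Rather than editing each copy by hand, the per-GPU files can be generated in one go. A sketch, assuming the example offsets from the opening post and the GPUGraphicsClockOffset attribute name (the GPUGraphicsMemoryOffset name quoted in the errors above is not a recognised attribute); NGPUS is a hypothetical variable - set it to your card count:

```shell
# Generate one overclock/fan startup script per GPU by substituting the index.
NGPUS=2
for i in $(seq 0 $((NGPUS - 1))); do
  cat > "gpu${i}_oc.sh" <<EOF
#!/bin/bash
nvidia-settings -a '[gpu:${i}]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -a '[gpu:${i}]/GPUMemoryTransferRateOffset[3]=100'
nvidia-settings -a '[gpu:${i}]/GPUFanControlState=1'
nvidia-settings -a '[fan:${i}]/GPUTargetFanSpeed=80'
EOF
  chmod +x "gpu${i}_oc.sh"
done
```

Each generated file can then be added to Startup Applications, as the guide explains.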
xMines
Newbie
*
Offline Offline

Activity: 37
Merit: 0


View Profile
July 18, 2017, 02:57:07 PM
 #48

Thanks for the guide. Is it also possible to set a power limit?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
July 18, 2017, 04:23:01 PM
Last edit: July 24, 2017, 01:10:20 AM by thevictimofuktyranny
 #49

thanks for the guide, is it also possible to set a powerlimit?

Set the Nvidia drivers to persistent state (you must be root - open a terminal and enter "sudo -i"):

nvidia-smi -pm 1

First, ask nvidia-smi what the max and min power limits are:

nvidia-smi -i 0 -q -d POWER

This will show the MAX and MIN power allowed.

GTX 750 Ti as an example:
MIN POWER 30 W
MAX POWER 38.5 W

Then you can reduce the watts to the MIN power allowed:

sudo nvidia-smi -pl 30

This gives you a net reduction of 22%.

Tested on Ubuntu, with max GPU load, by running the Unigine Heaven 4 benchmark at MIN power.
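As a quick arithmetic check of the 22% figure: dropping the GTX 750 Ti from its 38.5 W ceiling to its 30 W floor gives 1 - 30/38.5 ≈ 0.22, e.g.:

```shell
# Percentage power reduction from the card's default ceiling to the minimum.
awk 'BEGIN { printf "%.0f%%\n", (1 - 30/38.5) * 100 }'   # prints 22%
```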
Xeonus
Newbie
*
Offline Offline

Activity: 52
Merit: 0


View Profile
July 28, 2017, 09:58:25 AM
 #50

Thanks for the guide, it was very helpful. Can you control the fan speed of 2 or more cards?
For some reason, I can only control the fans of one card, not both (I have a 980 Ti and a 970 installed; the 980 Ti is adjustable). It works flawlessly otherwise.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
July 28, 2017, 11:24:13 AM
 #51

Thanks for the guide, was very helpful. Can you control fan speed of 2 or more cards?
For some reason, I can only control fans of one card, but not both (I have a 980 Ti and 970 installed, 980 Ti is adjustable). Works flawless otherwise.

Assuming your fan controls are built to Nvidia spec, this activates both card controls:
 
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

To do it via Nvidia X Server Settings:

Select GPU 0 or GPU 1.

Under Thermal Settings, move the slider to the desired fan speed and click Apply.

Or, you can create an .sh profile and attach it to startup as the guide explains.
Xeonus
Newbie
*
Offline Offline

Activity: 52
Merit: 0


View Profile
July 28, 2017, 12:13:37 PM
 #52

Thanks for the guide, was very helpful. Can you control fan speed of 2 or more cards?
For some reason, I can only control fans of one card, but not both (I have a 980 Ti and 970 installed, 980 Ti is adjustable). Works flawless otherwise.

Assuming your fan controls are built to Nvidia spec, this activates both card controls:
 
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Doing it via the Nvidia X Server Settings

GPU 0 or GPU 1

Thermal Settings, move slider to the desired fan speed and click apply.

Or, you can create sh profile and attach it to the startup as the guide explains.
It was actually my mistake. I installed the second card after I modified xorg.conf with your command, so it did not add the string to the second GPU entry in the config file. I reran the command and rebooted. It works as expected now. Thanks a lot Smiley
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
July 29, 2017, 10:10:34 PM
 #53

Does not work for me. The xorg.conf is reset after every boot and no oc settings showing.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
July 30, 2017, 12:59:44 PM
 #54

Does not work for me. The xorg.conf is reset after every boot and no oc settings showing.

What card and how many?

What CPU?

What motherboard?

Directly plugged into motherboard or USB 3 risers?

What Nvidia Drivers?

What PCI-Express Gen?
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
August 03, 2017, 12:23:17 PM
 #55

2x 1060
6600k
MSI Z170-A Pro
directly
375 driver from ubuntu repo
Gen3

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 03, 2017, 02:34:00 PM
 #56

2x 1060
6600k
MSI Z170-A Pro
directly
375 driver from ubuntu repo
Gen3



Try Gen 1 and Gen 2.

Usually, you need to try older PCI-E speeds.
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
August 04, 2017, 01:10:57 PM
 #57

No it doesn't matter with 2 cards. Works perfectly in Windows. Just can't OC in Linux.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 04, 2017, 09:12:12 PM
 #58

No it doesn't matter with 2 cards. Works perfectly in Windows. Just can't OC in Linux.

A couple of options:

a) try Ubuntu 17.04 - it has an updated kernel that may work better with your CPU and motherboard.

b) try Nvidia's newer drivers:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update

Then go to Additional Drivers - every Nvidia driver release for Ubuntu up to the recent 384.59 will be available for install.

Sometimes, you'll need to open Ubuntu Software and update the X Server control panel for that driver.

Remember, after switching drivers and rebooting you will need to re-enter:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
August 09, 2017, 09:36:18 PM
 #59

No it doesn't matter with 2 cards. Works perfectly in Windows. Just can't OC in Linux.

A couple of options:

a) try Ubuntu 17.04LTS - it has an updated kernel that may be better with your CPU and motherboard.

b) try Nvidia' newer drivers

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update.

Then go to additional drivers - an every Nvidia Ubuntu driver release up to the recent 384.59 will be available for install.

Sometimes, you'll need to click open Ubuntu Software and update X Server control panel for that driver.

Remember, after switching drivers and rebooting you will need to re-enter:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration


I tried but still the same problem Sad
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 10, 2017, 11:39:07 PM
 #60

No it doesn't matter with 2 cards. Works perfectly in Windows. Just can't OC in Linux.

A couple of options:

a) try Ubuntu 17.04LTS - it has an updated kernel that may be better with your CPU and motherboard.

b) try Nvidia' newer drivers

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update.

Then go to additional drivers - an every Nvidia Ubuntu driver release up to the recent 384.59 will be available for install.

Sometimes, you'll need to click open Ubuntu Software and update X Server control panel for that driver.

Remember, after switching drivers and rebooting you will need to re-enter:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration


I tried but still the same problem Sad

Not a lot can be done - especially as you won't try using Gen 2 or Gen 1!
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
August 11, 2017, 04:47:39 PM
 #61

No it doesn't matter with 2 cards. Works perfectly in Windows. Just can't OC in Linux.

A couple of options:

a) try Ubuntu 17.04LTS - it has an updated kernel that may be better with your CPU and motherboard.

b) try Nvidia' newer drivers

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update.

Then go to additional drivers - an every Nvidia Ubuntu driver release up to the recent 384.59 will be available for install.

Sometimes, you'll need to click open Ubuntu Software and update X Server control panel for that driver.

Remember, after switching drivers and rebooting you will need to re-enter:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration


I tried but still the same problem Sad

Not a lot can be done - especially, as you won't try using Gen 2 or Gen 1!
I tried this too but it didn't change anything.
I noticed that OC works on a card with a monitor plugged in, but this is not a solution for mining.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 12, 2017, 11:50:37 AM
 #62

I tried this too but it didn't change anything.
I noticed that OC works on a card with a monitor plugged in, but that is not a solution for mining.

The following command is meant to enable overclocking on the additional GPUs without a dummy monitor plug:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Sometimes, non-reference engineering on a motherboard can prevent Ubuntu from installing or commands from working - some motherboard manufacturers don't submit their non-reference designs to Ubuntu for inclusion in a kernel revision.

However, you can buy an HDMI dummy monitor plug from Amazon or eBay for $8.39; if you are willing to wait a bit, you can get one shipped from a Chinese seller on eBay for cheaper. That will get overclocking working on the 2nd GPU.
bmartin44
Member
**
Offline Offline

Activity: 85
Merit: 100


View Profile
August 12, 2017, 01:34:43 PM
 #63

Very detailed guide. I'm just wondering: how do I select which GPU to change the configuration for?

nvidia-smi -i 0 -q -d POWER

Does -i 0 select GPU 0?

And my mining rig runs Ubuntu 14.04 - will it work?

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 12, 2017, 04:23:56 PM
 #64

Very detailed guide. I'm just wondering: how do I select which GPU to change the configuration for?

nvidia-smi -i 0 -q -d POWER

Does -i 0 select GPU 0?

And my mining rig runs Ubuntu 14.04 - will it work?

No idea - I've not done much testing on the reduced power settings. I simply tested it with a single card.

And I don't use legacy OSes like 14.04LTS or Windows 7, because they take too long to set up and update.

Ubuntu 16.04LTS can be installed and updated, with an install of CUDA 8, in under 30 minutes.
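For reference, nvidia-smi's -i flag takes a 0-based GPU index, so power limits can be set per card. A minimal sketch that only prints the commands rather than running them - the helper name, card count, and wattage below are illustrative, not from this thread:

```shell
#!/bin/bash
# Print the per-GPU power-limit commands for inspection; run the printed
# lines (with sudo) to actually apply them.
print_power_limit_cmds() {
    local count="$1" watts="$2" i
    for i in $(seq 0 $((count - 1))); do
        # -i selects the GPU by 0-based index; -pl sets the limit in watts
        echo "sudo nvidia-smi -i $i -pl $watts"
    done
}

print_power_limit_cmds 2 120
```

Piping the output to "sudo sh" would apply the limits to every card in one go.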
bmartin44
Member
**
Offline Offline

Activity: 85
Merit: 100


View Profile
August 13, 2017, 03:40:02 PM
 #65


I have googled around and yes, "-i 0" means GPU 0.

I wish I could use Ubuntu 16.04, but my mining rig was already built on Ubuntu 14.04. I will spend some time one day to re-install it. In the meantime, I will try to overclock and will update my status here later.

Thanks for your info anyway.

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 17, 2017, 05:01:14 PM
 #66

I tried this too but it didn't change anything.
I noticed that OC works on a card with a monitor plugged in, but that is not a solution for mining.

Had a chance to look into this problem and found a workaround that does not involve dummy monitor plugs.

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in, and all the GPUs will have overclocking and fan control enabled.
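If the workaround does not seem to stick, it can help to check that the Coolbits option actually survived in xorg.conf after logging back in, since the file is sometimes regenerated. A minimal sketch - check_coolbits is a made-up helper name, and the path is the usual one but may differ on your setup:

```shell
#!/bin/bash
# Report whether the Coolbits option is present in the given xorg.conf.
check_coolbits() {
    local conf="$1"
    if grep -q '"Coolbits"' "$conf" 2>/dev/null; then
        echo "Coolbits is set in $conf"
    else
        echo "Coolbits missing from $conf - re-run nvidia-xconfig"
    fi
}

check_coolbits /etc/X11/xorg.conf
```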
jdash
Newbie
*
Offline Offline

Activity: 34
Merit: 0


View Profile
August 17, 2017, 08:26:37 PM
 #67

Can I ask if it supports undervolting of cards? Thanks.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 17, 2017, 10:11:20 PM
 #68

Can I ask if it supports undervolting of cards? Thanks.

I believe "cool-bits=31" allows you to overvolt and undervolt. You'll need to google the ins and outs of undervolting an Nvidia GPU - I have no experience of it myself.

It is easier to simply set the max watts you're willing to let the GPUs use whilst mining.
bmartin44
Member
**
Offline Offline

Activity: 85
Merit: 100


View Profile
August 22, 2017, 11:24:49 AM
 #69

Can I ask if it supports undervolting of cards? Thanks.

I believe "cool-bits=31" allows you to overvolt and undervolt. You'll need to google the ins and outs of undervolting an Nvidia GPU - I have no experience of it myself.

It is easier to simply set the max watts you're willing to let the GPUs use whilst mining.

I don't quite understand the cool-bits setting - that's the reason why I still haven't OC'd my cards. I googled around and found that some cards require cool-bits=24 and some 28. What is this actually? Is there any detailed info about the cool-bits values?
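For what it's worth, the NVIDIA driver documents Coolbits as a bitmask: the value you pass is the sum of the feature bits you want enabled. A quick sketch of the arithmetic (bit meanings are per the driver README - double-check against your driver version):

```shell
# Coolbits is a bitmask; add up the feature bits you want enabled.
FAN_CONTROL=4     # manual fan control (GPUFanControlState / GPUTargetFanSpeed)
CLOCK_OFFSETS=8   # overclocking via per-level clock offsets
OVERVOLTAGE=16    # overvoltage control
echo $((FAN_CONTROL + CLOCK_OFFSETS + OVERVOLTAGE))       # 28, as used in this guide
echo $((3 + FAN_CONTROL + CLOCK_OFFSETS + OVERVOLTAGE))   # 31 adds the two legacy bits (1 and 2)
```

So 28 enables fans, clock offsets, and overvoltage; a card "requiring" 24 would just be 8 + 16 without manual fan control.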

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
August 22, 2017, 01:02:48 PM
 #70

I have Coolbits set to 31 but I don't have any under/overvolt settings in nvidia-settings…

I've got no interest in this feature, as I said before! It is a command-terminal feature only.

Some websites say it allows undervolting, but most say it only allows overvolting, up to 37500 microvolts.
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
September 19, 2017, 08:08:22 PM
 #71

Had a chance to look into this problem and found a workaround that does not involve dummy monitor plugs.

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in, and all the GPUs will have overclocking and fan control enabled.
After logging out I got stuck on the login screen. I can't log in again, and after a reboot the xorg.conf is reset to default (no OC).
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
September 19, 2017, 08:30:58 PM
 #72

Had a chance to look into this problem and found a workaround that does not involve dummy monitor plugs.

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in, and all the GPUs will have overclocking and fan control enabled.
After logging out I got stuck on the login screen. I can't log in again, and after a reboot the xorg.conf is reset to default (no OC).

Try it again - it is a workaround, and sometimes it takes a couple of tries.
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
September 19, 2017, 08:45:33 PM
 #73

Had a chance to look into this problem and found a workaround that does not involve dummy monitor plugs.

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in, and all the GPUs will have overclocking and fan control enabled.
After logging out I got stuck on the login screen. I can't log in again, and after a reboot the xorg.conf is reset to default (no OC).

Try it again - it is a workaround, and sometimes it takes a couple of tries.

Tried it 100 times - not possible.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
September 19, 2017, 08:47:11 PM
 #74

Need to find out about the setup then!

CPU?

USB 3 Risers?

Motherboard?

Number of GPUs?

Type of GPUs?

Ubuntu Version?

Do you have one GPU acting as the display output or are you using onboard graphics?

PCI-E settings?
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
September 19, 2017, 09:04:49 PM
 #75

Need to find out about the setup then!

CPU?

USB 3 Risers?

Motherboard?

Number of GPUs?

Type of GPUs?

Ubuntu Version?

Do you have one GPU acting as the display output or are you using onboard graphics?

PCI-E settings?

6600K
MSI Z170 Pro
2x 1060, 2x 1050 Ti (USB3 Risers)
Ubuntu 17.04
onboard output (tried GPU but doesn't work either)
Gen1, 4G decoding
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
September 19, 2017, 09:20:29 PM
 #76

6600K
MSI Z170 Pro
2x 1060, 2x 1050 Ti (USB3 Risers)
Ubuntu 17.04
onboard output (tried GPU but doesn't work either)
Gen1, 4G decoding


You need identical models for Ubuntu, e.g. all GTX 1060s or all GTX 1050 Tis.

In your situation, you will be limited to using Windows 10.

Did you get all 4 cards installed, or were some missing in Nvidia X Server?
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
September 19, 2017, 10:12:39 PM
 #77


You need identical models for Ubuntu, e.g. all GTX 1060s or all GTX 1050 Tis.

In your situation, you will be limited to using Windows 10.

Did you get all 4 cards installed, or were some missing in Nvidia X Server?


All cards are working without OC.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
September 20, 2017, 12:10:55 AM
 #78


All cards are working without OC.


OK - overclocking and fan control can only be enabled when a monitor is connected to the GPU in the primary PCI-Express slot.

Set it to PEG in the BIOS. Make sure the PCH slots are set to Gen1, and check if you also have the option to set the individual slots to Gen1.

Try starting with a GTX 1060 in the primary PCI-Express slot.

If that does not work, then try starting with a GTX 1050 Ti in the primary PCI-E slot.

As soon as you get a display output from one GPU connected to a monitor, you will be able to set fan controls and overclocking.

Also, see if disabling Above 4G Decoding helps - it sometimes causes a black screen.

ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
September 20, 2017, 12:44:13 PM
 #79

OK - overclocking and fan control can only be enabled when a monitor is connected to the GPU in the primary PCI-Express slot.

Set it to PEG in the BIOS. Make sure the PCH slots are set to Gen1, and check if you also have the option to set the individual slots to Gen1.

Try starting with a GTX 1060 in the primary PCI-Express slot.

If that does not work, then try starting with a GTX 1050 Ti in the primary PCI-E slot.

As soon as you get a display output from one GPU connected to a monitor, you will be able to set fan controls and overclocking.

Also, see if disabling Above 4G Decoding helps - it sometimes causes a black screen.



Does it matter if I use DP or HDMI?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
September 20, 2017, 01:36:33 PM
 #80

Does it matter if I use DP or HDMI?

Try it out, but Ubuntu tends to have a few issues with HDMI.
e97
Jr. Member
*
Offline Offline

Activity: 58
Merit: 1


View Profile
October 28, 2017, 10:53:52 PM
 #81

Is there a way to do this without X?

The closest I've seen is launching X with a dummy display and running nvidia-settings there.

https://devtalk.nvidia.com/default/topic/789888/set-fan-speed-without-an-x-server-solved-/

https://sites.google.com/site/akohlmey/random-hacks/nvidia-gpu-coolness#TOC-Faking-a-Head-for-a-Headless-X-Server
banet
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
October 29, 2017, 02:43:21 AM
 #82

Nice guide. I used it for this pendrive os. https://ba.net/zcash-eth-miner-os/
e97
Jr. Member
*
Offline Offline

Activity: 58
Merit: 1


View Profile
November 02, 2017, 02:57:22 AM
 #83

Does anyone have instructions on how to do this over SSH?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
November 02, 2017, 04:32:43 PM
 #84

Does anyone have instructions on how to do this over SSH?

Nope. Been busy gaming this Fall Season!
e97
Jr. Member
*
Offline Offline

Activity: 58
Merit: 1


View Profile
November 05, 2017, 03:47:30 AM
 #85

Figured this out, thanks to a few posts around the web and here:


    1. Clean install ubuntu-16.04.3-server
    2. Install cuda-9 using these instructions: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=deblocal
    3. Run sudo nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0"
    4. Reboot

    5. Commands for overclocking over SSH:


    sudo nvidia-smi -pm 1 # persistence mode

    sudo nvidia-smi -pl 150 # power limit


    Turn off LEDs
    sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPULogoBrightness=0

    Core clock and memory transfer speed (mem clock?)

    sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=-100

    sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1900

    Fans
    sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPUFanControlState=1
    sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [fan:0]/GPUTargetFanSpeed=45
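The per-card nvidia-settings calls above can be generated in a loop instead of typed out per GPU. A sketch assuming the same lightdm XAUTHORITY path as in the post - build_fan_cmds and the fan speed are illustrative names/values, and it only prints the commands (pipe the output to "sudo sh" to apply them):

```shell
#!/bin/bash
# Print the fan-control commands for each GPU index from 0 to count-1.
NV='sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings'
build_fan_cmds() {
    local count="$1" speed="$2" i
    for i in $(seq 0 $((count - 1))); do
        echo "$NV -a [gpu:$i]/GPUFanControlState=1"
        echo "$NV -a [fan:$i]/GPUTargetFanSpeed=$speed"
    done
}

build_fan_cmds 2 45
```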
davidwillis
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
November 19, 2017, 05:11:26 PM
 #86

Hi,

I just went through this guide.  Thanks!

But I am running into a slight problem: I only seem to have overclock adjustment on 4 of my 6 GPUs.

When I try to adjust the other two, it does not confirm the setting is set or give an error. Also, in the nvidia-settings GUI, there are no adjustable settings on two of the GPUs.

I have looked at xorg.conf, and it looks like it has the same settings for all 6 GPUs, so I have no idea why only 4 are adjustable (they are all Gigabyte GTX 1070s).

Here is my xorg.conf
Quote
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 384.90  (buildmeister@swio-display-x86-rhel47-05)  Tue Sep 19 18:13:03 PDT 2017

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    Screen      1  "Screen1" RightOf "Screen0"
    Screen      2  "Screen2" RightOf "Screen1"
    Screen      3  "Screen3" RightOf "Screen2"
    Screen      4  "Screen4" RightOf "Screen3"
    Screen      5  "Screen5" RightOf "Screen4"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor2"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor3"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor4"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor5"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:1:0:0"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:2:0:0"
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:4:0:0"
EndSection

Section "Device"
    Identifier     "Device3"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:6:0:0"
EndSection

Section "Device"
    Identifier     "Device4"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:8:0:0"
EndSection

Section "Device"
    Identifier     "Device5"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:9:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Monitor        "Monitor2"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen3"
    Device         "Device3"
    Monitor        "Monitor3"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen4"
    Device         "Device4"
    Monitor        "Monitor4"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen5"
    Device         "Device5"
    Monitor        "Monitor5"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection
ste1
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
November 19, 2017, 06:21:01 PM
 #87

Quote
    3. Run sudo nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0"

This command never works. OC settings still hidden.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
November 21, 2017, 04:21:05 PM
 #88

Interesting - I tested with six GTX 1060 3GB cards and there were no issues.

Try using the 378 drivers - those have been validated and optimised for the OS by Ubuntu.

There are various bugs in Nvidia driver releases that have not been optimised by the Ubuntu OS developers.

Should that fail, you can always get a couple of cheap HDMI dummy monitor plugs for those 2 cards.
ya5h
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
November 21, 2017, 08:44:49 PM
 #89

Hello,

Went through this guide to OC my 1060 cards, but was not able to do it.

After running "sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration", the xorg.conf file is reset to default on reboot. The Coolbits option is not found in the file, and overclocking is not enabled on the GPUs.
Is there any possible solution for this? Please let me know. Thanks.

Using Ubuntu 16.04, nvidia-384 drivers.
xorg.conf file after coolbits command:
Code:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 384.90  (buildmeister@swio-display-x86-rhel47-05)  Tue Sep 19 18:13:03 PDT 2017

Section "ServerLayout"
    Identifier     "layout"
    Screen      0  "Screen0"
    Screen      1  "Screen1" RightOf "Screen0"
    Screen      2  "Screen2" RightOf "Screen1"
    Inactive       "intel"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor2"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 6GB"
    BusID          "PCI:2:0:0"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 6GB"
    BusID          "PCI:3:0:0"
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 6GB"
    BusID          "PCI:6:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Monitor        "Monitor2"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

xorg.conf file after reboot:
Code:
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0@0:2:0"
    Option "AccelMethod" "None"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:2@0:0:0"
    Option "ConstrainCursor" "off"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:3@0:0:0"
    Option "ConstrainCursor" "off"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:6@0:0:0"
    Option "ConstrainCursor" "off"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection
givemesummer
Newbie
*
Offline Offline

Activity: 73
Merit: 0


View Profile
November 22, 2017, 03:31:26 AM
 #90

When running Optiminer on Ubuntu, the performance level of all GPUs changes from the maximum (3) to 2 in Nvidia X Server Settings, so the memory clock becomes 7604 instead of 8008 (GTX 1060). Is this somehow fixable? I couldn't find any info.
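A possible explanation, offered with the caveat that I haven't verified it on this exact setup: compute workloads (CUDA/OpenCL miners) run Pascal cards in the P2 state, i.e. performance level 2, which uses a lower stock memory clock. A workaround some miners report is applying the memory offset at level [2] instead of [3] - the offset value below is only an example:

```shell
# Compute workloads run at performance level 2 (P2), so target that level.
LEVEL=2
OFFSET=400   # example offset; tune for your cards
echo "nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[$LEVEL]=$OFFSET'"
```

Run the printed command (with Coolbits enabled as in this guide) to apply the offset at the level the miner actually uses.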
reb0rn21
Legendary
*
Offline Offline

Activity: 1898
Merit: 1024


View Profile
November 22, 2017, 10:21:38 PM
 #91

I did not see this guide at first. I had problems making it run without a monitor and managing OC on all cards; I had to create and edit xorg.conf in /etc/X11, but first create it in /usr/share/X11/xorg.conf.d/xorg.conf. It works now, but since the miners are far from me, I cannot test the way explained here.

go6ooo1212
Legendary
*
Offline Offline

Activity: 1512
Merit: 1000


quarkchain.io


View Profile
November 23, 2017, 12:07:39 AM
 #92

I'm searching for a similar solution. How can I stop the system from auto-resetting xorg.conf on reboot? Maybe after that, edits to xorg.conf will remain permanent and the attached display won't be needed...
radeone
Full Member
***
Offline Offline

Activity: 169
Merit: 100


View Profile WWW
November 24, 2017, 11:17:24 AM
 #93

Can't all of this be done through the nvidia-smi command? I don't run my GPUs with an X server installed; I just install Ubuntu Server 16.10 and the Nvidia drivers and go. But I need more power efficiency, and I can't seem to get my overclocks right.

radeone
Full Member
***
Offline Offline

Activity: 169
Merit: 100


View Profile WWW
November 24, 2017, 11:42:31 AM
 #94

I run nvidia-smi -pl 110 and shave off about 15-20 watts per card, losing only a marginal amount of hashing power while gaining efficiency. This is with my GTX 1060: about 296 sol/s at 110 W, down from 120-125 W.
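For several cards, the same power limit can be applied per GPU index. A minimal sketch (the GPU count and wattage are assumed example values; review the printed commands before piping them to a shell on a machine with the NVIDIA driver installed):

```shell
#!/bin/sh
# Sketch: print one nvidia-smi power-limit command per GPU index.
# emit_pl_cmds COUNT WATTS
emit_pl_cmds() {
    count=$1; watts=$2; i=0
    while [ "$i" -lt "$count" ]; do
        echo "nvidia-smi -i $i -pl $watts"
        i=$((i + 1))
    done
}

# Example: three cards capped at 110 W (review, then pipe to `sudo sh`):
emit_pl_cmds 3 110
```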

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
November 24, 2017, 06:38:50 PM
 #95

Hello,

I went through this guide to OC my 1060 cards, but was not able to get it working.

After the command "sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration", the xorg.conf file is reset to default on reboot. The Coolbits option is not in the file and overclocking is not enabled on the GPUs.
Is there any possible solution for this? Please let me know. Thanks

I'm using Ubuntu 16.04 with the nvidia-384 drivers.
(quoted xorg.conf listings trimmed; they are identical to the ones in the original post above)


This method was tested with the Ubuntu-validated 378 drivers - the performance level with the 384 drivers is worse, and they generate more heat per card.

Here is a workaround for the reset problem:

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in and all the GPUs will have overclocking and fan control enabled.
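To avoid hand-writing one startup .sh per card, the per-GPU nvidia-settings lines can be generated in a loop. A sketch with assumed example offsets (the core-clock attribute in nvidia-settings is `GPUGraphicsClockOffset`; adjust the values for your own cards before using the output):

```shell
#!/bin/sh
# Sketch: print the startup overclock/fan lines for one GPU index.
# gen_oc_lines GPU CORE_OFFSET MEM_OFFSET FAN_PCT
gen_oc_lines() {
    gpu=$1; core=$2; mem=$3; fan=$4
    echo "nvidia-settings -a '[gpu:$gpu]/GPUGraphicsClockOffset[3]=$core'"
    echo "nvidia-settings -a '[gpu:$gpu]/GPUMemoryTransferRateOffset[3]=$mem'"
    echo "nvidia-settings -a '[gpu:$gpu]/GPUFanControlState=1'"
    echo "nvidia-settings -a '[fan:$gpu]/GPUTargetFanSpeed=$fan'"
}

# Example: identical settings for three cards, redirect into a startup script:
for g in 0 1 2; do gen_oc_lines "$g" 100 400 80; done
```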
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
November 24, 2017, 06:46:00 PM
 #96

When running Optiminer on Ubuntu, the performance level of all GPUs drops from the maximum (3) to 2 (shown in Nvidia X Server Settings), so the memory runs at 7604 instead of 8008 (GTX 1060). Is this somehow fixable? I couldn't find any info.

No, this was hard-set by Nvidia for compute workloads due to a memory-instability issue with the Micron modules at launch. A later BIOS update fixed the issue, but because not everyone updated their BIOS, the memory still defaults down to the lower speed on Ubuntu.

This is why the guide shows how to enable overclocking and you can set the correct memory speed or set a higher speed.
davidwillis
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
November 25, 2017, 06:44:56 PM
 #97

How do you install the 378 drivers? Ubuntu wants to update to the latest ones.
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
November 28, 2017, 04:13:39 PM
Last edit: December 03, 2017, 05:01:02 PM by thevictimofuktyranny
 #98

How do you install the 378 drivers? Ubuntu wants to update to the latest ones.

It appears they have been withdrawn, which is weird.

You can access the Nvidia Driver Archive and install via that method; I also have the old method for installing them listed below the current drivers.
munkam
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
November 28, 2017, 05:01:01 PM
 #99

Hi all,

The 378 drivers can be found at:

http://us.download.nvidia.com/XFree86/Linux-x86_64/378.13/NVIDIA-Linux-x86_64-378.13.run

For my French mates, just change us.download.nvidia.com to fr.download.nvidia.com

Usually, any driver version can be found at:

http://us.download.nvidia.com/XFree86/Linux-x86_64/<Version number>/NVIDIA-Linux-x86_64-<Version number>.run

Additional info at:

http://us.download.nvidia.com/XFree86/Linux-x86_64/<Version number>/README

But the config instructions in the README are often outdated.
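The URL pattern above can be filled in with a small helper (the version number is whatever archive build you want; the pattern itself is exactly as quoted):

```shell
#!/bin/sh
# Sketch: build the NVIDIA archive download URL for a given driver version.
driver_url() {
    echo "http://us.download.nvidia.com/XFree86/Linux-x86_64/$1/NVIDIA-Linux-x86_64-$1.run"
}

driver_url 378.13
# Fetch with e.g.: wget "$(driver_url 378.13)"
```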

davidwillis
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
November 30, 2017, 04:15:33 AM
 #100

Thanks.  Is there anything I need to do to install that driver?  Do I need to remove the old one first?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 03, 2017, 05:02:28 PM
 #101

Thanks.  Is there anything I need to do to install that driver?  Do I need to remove the old one first?

It depends on how you are installing them; usually you need to be running the default non-Nvidia Ubuntu driver, called Nouveau, first.
k3rt
Newbie
*
Offline Offline

Activity: 18
Merit: 0


View Profile
December 04, 2017, 05:19:48 PM
 #102

This is the output from nvidia-smi -q -d CLOCK:

Code:
==============NVSMI LOG==============

Timestamp                           : Mon Dec  4 19:05:48 2017
Driver Version                      : 384.98

Attached GPUs                       : 2
GPU 00000000:01:00.0
    Clocks
        Graphics                    : 1657 MHz
        SM                          : 1657 MHz
        Memory                      : 3802 MHz
        Video                       : 1480 MHz
    Applications Clocks
        Graphics                    : N/A
        Memory                      : N/A
    Default Applications Clocks
        Graphics                    : N/A
        Memory                      : N/A
    Max Clocks
        Graphics                    : 1974 MHz
        SM                          : 1974 MHz
        Memory                      : 4004 MHz
        Video                       : 1708 MHz
    Max Customer Boost Clocks
        Graphics                    : N/A
    SM Clock Samples
        Duration                    : 4.36 sec
        Number of Samples           : 100
        Max                         : 1733 MHz
        Min                         : 1620 MHz
        Avg                         : 1682 MHz
    Memory Clock Samples
        Duration                    : 4.36 sec
        Number of Samples           : 100
        Max                         : 3802 MHz
        Min                         : 3802 MHz
        Avg                         : 3802 MHz
    Clock Policy
        Auto Boost                  : N/A
        Auto Boost Default          : N/A

- Does Graphics: 1657 MHz stand for the GPU core clock?
- Does Max Clocks / Graphics: 1974 MHz mean I can raise it to that without any harm to the GPU?

I've noticed that changing GPUGraphicsMemoryOffset changes Graphics precisely. Changing GPUMemoryTransferRateOffset affects Memory a bit oddly, e.g.:

Code:
Attribute 'GPUMemoryTransferRateOffset' (ubuntu:0[gpu:0]) assigned value 400.

will set Memory to 3999 (is 4000 the max value?).

Can someone with more experience provide some explanation?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
December 06, 2017, 06:12:56 PM
 #103

This is the output from nvidia-smi -q -d CLOCK:

(quoted nvidia-smi clock listing trimmed; it is identical to the one in the post above)
- Does Graphics: 1657 MHz stand for the GPU core clock?
- Does Max Clocks / Graphics: 1974 MHz mean I can raise it to that without any harm to the GPU?

I've noticed that changing GPUGraphicsMemoryOffset changes Graphics precisely. Changing GPUMemoryTransferRateOffset affects Memory a bit oddly, e.g.:

Code:
Attribute 'GPUMemoryTransferRateOffset' (ubuntu:0[gpu:0]) assigned value 400.

will set Memory to 3999 (is 4000 the max value?).

Can someone with more experience provide some explanation?

So, let's say the memory is running at 7600 MHz. You add 400 MHz via GPUMemoryTransferRateOffset to reach 8000 MHz.
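The arithmetic is simply target clock minus current clock, which can be sketched as:

```shell
#!/bin/sh
# Sketch: required offset (MHz) = target clock - current clock.
needed_offset() {
    echo $(( $1 - $2 ))    # args: target, current
}

needed_offset 8000 7600    # prints 400
```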
jaromiradamek
Newbie
*
Offline Offline

Activity: 23
Merit: 0


View Profile
December 23, 2017, 05:08:18 PM
 #104

Every time I try to overclock or change the fan speed, I get "The control display is undefined".

The only things that work are enabling persistence mode: "nvidia-smi -pm 1"
and setting the TDP: "nvidia-smi -i 0 -pl 151"



Fan speed control does not work:
-----------------------
root@ja:~# nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
Unable to init server: Could not connect: Connection refused

ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.
------------------------


Overclocking does not work:
-----------------------
root@jatrovka:~# nvidia-settings -a '[gpu:0]/GPUGraphicsMemoryOffset[3]=100'
Unable to init server: Could not connect: Connection refused

ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.
--------------------------




Do you know what to do?

I'm using Ubuntu 17.10 with the proprietary Nvidia drivers:

root@jatrovka:~# nvidia-smi
Sat Dec 23 18:03:58 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:04:00.0 Off |                  N/A |
| 69%   69C    P2   158W / 220W |    540MiB /  8111MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:0C:00.0  On |                  N/A |
| 60%   70C    P2   243W / 250W |    601MiB / 11172MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:0D:00.0 Off |                  N/A |
| 65%   78C    P2   248W / 250W |    592MiB / 11172MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       914      G   /usr/lib/xorg/Xorg                            10MiB |
|    0      1016      G   /usr/bin/gnome-shell                           8MiB |
|    0      1344      C   ./zm                                         509MiB |
|    1       914      G   /usr/lib/xorg/Xorg                            15MiB |
|    1      1344      C   ./zm                                         573MiB |
|    2       914      G   /usr/lib/xorg/Xorg                             7MiB |
|    2      1344      C   ./zm                                         573MiB |
+-----------------------------------------------------------------------------+


========================================
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 384.90  (buildmeister@swio-display-x86-rhel47-05)  Tue Sep 19 18:13:03 PDT 2017

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    Screen      1  "Screen1" RightOf "Screen0"
    Screen      2  "Screen2" RightOf "Screen1"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection
Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor2"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:4:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1080 Ti"
    BusID          "PCI:12:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1080 Ti"
    BusID          "PCI:13:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Monitor        "Monitor2"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection
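The "control display is undefined" error usually means nvidia-settings was run from a shell (e.g. over SSH or a root console) that has no X display set. A common workaround - an assumption here, not something confirmed in this thread - is to point the command at the running X server explicitly; the username and display `:0` below are example values:

```shell
#!/bin/sh
# Sketch: prefix nvidia-settings with the display and Xauthority of the
# logged-in desktop user so it can reach the running X server.
oc_cmd() {
    user=$1
    echo "DISPLAY=:0 XAUTHORITY=/home/$user/.Xauthority nvidia-settings -a '[gpu:0]/GPUFanControlState=1'"
}

oc_cmd miner    # review the printed command, then run it as root
```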

raoulus
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
December 31, 2017, 11:19:49 PM
 #105

I'm searching for a similar solution. How can I stop the system from auto-resetting xorg.conf on reboot? Maybe after that, edits to xorg.conf will remain permanent and the attached display won't be needed...

In my case, gpu-manager was rewriting xorg.conf. You can check the logs: /var/log/gpu-manager.log

I solved it by disabling gpu-manager at startup, adding nogpumanager to the GRUB boot options.
See https://askubuntu.com/a/732004 for details
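The GRUB change can be sketched like this; it is demonstrated on a string copy of the stock kernel-options line (an assumed default), while the real edit goes in /etc/default/grub followed by `sudo update-grub`:

```shell
#!/bin/sh
# Sketch: append `nogpumanager` to the kernel command line so gpu-manager
# stops rewriting xorg.conf at boot. Demonstrated on a copy of the line.
stock='GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"'
patched=$(echo "$stock" | sed 's/"$/ nogpumanager"/')
echo "$patched"
```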
NetopyrMan
Member
**
Offline Offline

Activity: 144
Merit: 10


View Profile
January 11, 2018, 09:52:11 PM
 #106

THIS!!!

You absolutely don't know how much I like this guide - it solved everything...

Only one thing: I realized that (at least) for the 1070 Ti, Coolbits needs to be set to 12; then I can OC every card.

THANKS, THANKS and again THANKS
zx_master
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
January 19, 2018, 12:33:11 PM
 #107

BTW, once you are able to control your fans manually, you can set up a tool to do it in a proper (smart) way. It was designed especially for rigs:
https://github.com/ktsol/karlson
penailija
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
January 30, 2018, 08:59:19 PM
 #108

Hi guys,

I've been trying to overclock my GPUs on linux for way too long now and I'm about to lose my mind.

I've tried all of the advice and tips in this thread numerous times. I have tried running Ubuntu, Lubuntu, Kubuntu, Xubuntu and Debian - both the 17.10 and 16.04 versions of each.

And no matter what I do and what version of Linux I'm running, the same problem persists: xorg.conf resets every time I reboot the system or even just log out.

Does anyone have any suggestions anymore? I have tried every possible solution I've come across while trying to troubleshoot this but nothing has worked.

My setup:

Asrock H110 pro btc+
Intel i7-7700K
4GB Crucial Ballistix DDR4-2400
5 x Zotac Geforce GTX1080ti (blower model)
2 x Corsair RM850X
32gb usb3.0 stick/some random hdd (tried running on both of these)

I'm literally clueless. All help is much appreciated.
dejan_p
Member
**
Offline Offline

Activity: 132
Merit: 11


View Profile
January 30, 2018, 09:08:14 PM
 #109

^
How did you install the drivers? Did you just install them? Have you blacklisted the other driver?

I had that problem when I used to install nvidia+cuda via the downloaded file (just by running the .run file, if I remember correctly) - the system would work until rebooting.

I solved it by installing the drivers via the repository (apt-get etc.), NOT via the install file.
pallas
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
January 30, 2018, 09:22:41 PM
 #110

A new driver is out (390) but, apparently, there's still no nvidia-smi clock-setting support :-/

penailija
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
January 31, 2018, 01:34:31 PM
 #111

^
how did you install the drivers? did you just install them, have you blacklisted the other driver?

i've had that problem when used to install nvidia+cuda via the provided downloaded file (just by running .sh file if i remember correctly) - the system would work until rebooting

i solved that by installing the drivers via repository (apt-get & stuff), NOT via the install file

I've tried both ways of installing the drivers and it doesn't seem to have any effect. What exactly do you mean by blacklisting? I've just selected the proprietary Nvidia driver from the driver manager.

Should I be able to adjust the PowerMizer settings immediately after modifying the xorg.conf file with Coolbits? Or does overclocking only get enabled once the system reboots with the modified xorg.conf?
penailija
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
February 01, 2018, 07:36:15 PM
 #112

Ok, I feel kind of noobish to admit this...

The problem was that I had my display connected to the motherboard while configuring the rig. I've done this with mining rigs before and it has not been a problem. When I switched it to one of the GPUs, everything started working perfectly.
miner.31
Newbie
*
Offline Offline

Activity: 8
Merit: 0


View Profile
February 13, 2018, 07:53:39 AM
 #113

Hi guys.
I'm new and need some help.
I'm not experienced with Ubuntu, but I have to go with it because, for some reason, Win10 only recognizes 4 GPUs.
Maybe I'm in the wrong place, but I hope someone will help me.
I have:
Asus Mining Expert
Intel G4400
4GB Ballistix RAM
5 x 1080 Ti

I want to mine Ethereum and of course overclock my GPUs.
Please help me; I will make a donation. Thanks
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
February 17, 2018, 03:57:14 PM
 #114

Hi guys. I'm new and need some help... I want to mine Ethereum and of course overclock my GPUs. Please help me; I will make a donation. Thanks

Normal stuff - set everything to PCI-Express Gen 1 or Gen 2 in the BIOS.

Enable Above 4G Decoding.

Set 16GB of virtual memory.

 
miner.31
Newbie
*
Offline Offline

Activity: 8
Merit: 0


View Profile
February 18, 2018, 12:05:20 AM
 #115

Hi guys. I'm new and need some help... I want to mine Ethereum and of course overclock my GPUs. Please help me; I will make a donation. Thanks

Normal stuff - set everything to PCI-Express Gen 1 or Gen 2. Enable 4G. Set 16GB of virtual memory.
 
I followed your advice and I'm mining with 5 GPUs, but the problem now is that I can't overclock all 5 GPUs, only the main one. I know I must change the xorg.conf file... I tried everything, but I have no permission. Could you help me with the configuration of all the GPUs? Thanks
miner.31
Newbie
*
Offline Offline

Activity: 8
Merit: 0


View Profile
February 20, 2018, 06:42:39 PM
 #116

Hi guys. I'm new and need some help... I want to mine Ethereum and of course overclock my GPUs. Please help me; I will make a donation. Thanks

Normal stuff - set everything to PCI-Express Gen 1 or Gen 2. Enable 4G. Set 16GB of virtual memory.

Hi man... I need help please.
I was mining fine for about 24 hours when my ethminer crashed with this error: Error CUDA mining: an illegal memory access was encountered

Please help me... I don't know what to do... it was working so nicely.


 
WaveRiderx
Member
**
Offline Offline

Activity: 168
Merit: 39


View Profile
February 22, 2018, 04:45:19 AM
 #117

Thanks for the post.

Is this still the proper way to do it, or is there a newer way? I would like to get my rigs set up on Linux.

Also, can you make OC settings for each card individually, or is it a global setting? A few of my rigs have cards with mixed frequencies - I had to buy whatever I could because of the video-card drought. Can you set the power limit with this too?
DrJury
Member
**
Offline Offline

Activity: 122
Merit: 10


View Profile
February 22, 2018, 08:42:02 AM
Last edit: February 22, 2018, 09:34:14 AM by DrJury
 #118

Overclocking & fan control is only enabled for the first GPU?!

*********************************************
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 390.25  (buildmeister@swio-display-x86-rhel47-03)  Wed Jan 24 20:46:04 PST 2018

Section "ServerLayout"
    Identifier     "layout"
    Screen      0  "nvidia" 0 0
    Inactive       "intel"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "intel"
    Driver         "modesetting"
    Option         "AccelMethod" "None"
    BusID          "PCI:0@0:2:0"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:1@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:2@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:3@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:4@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:5@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:6@0:0:0"
    Option         "Coolbits" "31"
EndSection

Section "Screen"
    Identifier     "intel"
    Device         "intel"
    Monitor        "Monitor0"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
    Option         "ConstrainCursor" "off"
    SubSection     "Display"
        Depth       24
        Modes      "nvidia-auto-select"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    Option         "AllowEmptyInitialConfiguration" "on"
    Option         "IgnoreDisplayDevices" "CRT"
EndSection

thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
March 04, 2018, 03:57:46 PM
 #119

Thanks for the post.

Is this still the proper way to do it, or is there a new way? I'd like to get set up in Linux with my rigs.

Also, can you make OC settings for each card individually, or is it a global setting? A few of my rigs have cards with mixed frequencies; I had to buy what I could because of the video card drought. Can you set the power limit with this too?

You can set up individual overclocking and fan profiles per card: create a .sh file for each card and have them run when you log into the OS via the startup applications program.
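As a sketch, one such per-card startup file might look like the following. The GPU index, offset values, and the power limit are placeholders to adjust per card, and the power limit (which the poster above asked about) is set through nvidia-smi rather than nvidia-settings and needs root:

```shell
#!/bin/bash
# Startup script for card 0 only; make one file per GPU with its own values.
# The offsets below are examples, not recommendations.
nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=200'
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'
# Power limit (watts) is per GPU and goes through nvidia-smi instead:
sudo nvidia-smi -i 0 -pl 100
```

Mark the file executable (chmod +x) before adding it to startup applications.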
The Godfather
Newbie
*
Offline Offline

Activity: 41
Merit: 0


View Profile
March 04, 2018, 08:08:29 PM
 #120

I reinstalled Ubuntu 20 times and never could overclock my Nvidia cards... in nvidia-settings there is fan control but no core clock offset... tried everything. Any new tutorial?
thevictimofuktyranny (OP)
Legendary
*
Offline Offline

Activity: 1092
Merit: 1004


View Profile
March 04, 2018, 08:28:51 PM
 #121

I reinstalled Ubuntu 20 times and never could overclock my Nvidia cards... in nvidia-settings there is fan control but no core clock offset... tried everything. Any new tutorial?

Well, there is a new Ubuntu 16.04.04 LTS - I've not had time to test it as a mining platform, but it is working perfectly on my test PC.

16.04.03 had a lot of bugs and issues around CPUs and GPUs, so you may find the new version works correctly.
zloy_hulk
Newbie
*
Offline Offline

Activity: 40
Merit: 0


View Profile
April 12, 2018, 11:43:08 AM
 #122

Hello!
Please help me with one problem! I don't know how to enable OC on an Nvidia P106/P104 system with the monitor connected to integrated graphics.

I installed different Nvidia drivers, but every time I had the same problem - adding options like Coolbits doesn't work. When I restart the X server or reboot after changing xorg.conf, it resets to the default. If I make the file read-only, the X server crashes at startup.
I tried a lot of variants, but nothing works.

Commands like this do nothing for me:
sudo nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0" --custom-edid="DFP-0:/etc/X11/dfp-edid.bin"

On another system with regular GTX 10xx cards everything works fine.

My system is a basic one, for testing only:

Asus H110M-K
Intel Celeron G3900
4GB DDR4
1x GPU P104-100 (in the 1st PCI-e slot)

I have a monitor connected to the integrated graphics.

How can I unlock overclocking on the Nvidia P104?
x001tk
Newbie
*
Offline Offline

Activity: 29
Merit: 0


View Profile
April 14, 2018, 03:08:04 AM
 #123

Hello!
Please help me with one problem! I don't know how to enable OC on an Nvidia P106/P104 system with the monitor connected to integrated graphics.

I installed different Nvidia drivers, but every time I had the same problem - adding options like Coolbits doesn't work. When I restart the X server or reboot after changing xorg.conf, it resets to the default. If I make the file read-only, the X server crashes at startup.
I tried a lot of variants, but nothing works.

Commands like this do nothing for me:
sudo nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0" --custom-edid="DFP-0:/etc/X11/dfp-edid.bin"

On another system with regular GTX 10xx cards everything works fine.

My system is a basic one, for testing only:

Asus H110M-K
Intel Celeron G3900
4GB DDR4
1x GPU P104-100 (in the 1st PCI-e slot)

I have a monitor connected to the integrated graphics.

How can I unlock overclocking on the Nvidia P104?


https://miningclub.info/threads/majning-na-linux-mint-ubuntu-nvidia-amd-majnery-razgon-ehtakij-how-to-dlja-novichkov.19005/page-27#post-902545

Is that you, bro?
If not, maybe it will be helpful.
ken-ray
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
May 28, 2018, 05:32:07 PM
 #124

If using a monitor attached to the Intel graphics, or SSH, try these commands. This worked for me with Xubuntu 18.04 and the Nvidia 390 drivers.

sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration
sudo systemctl restart lightdm.service
sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a 'GPUFanControlState=1' -a 'GPUTargetFanSpeed=100'

This sets the fan to 100%, but overclocking works the same way.


I found the answer here.

https://www.linuxquestions.org/questions/linux-general-1/how-to-run-nvidia-settings-remotely-4175599391/

Maxhodl
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
June 26, 2018, 05:16:36 PM
 #125

Thanks to the OP, I configured my Ubuntu 16.04 for overclocking and fan control; however, a few questions.


1. I had to connect one of the GPUs to the monitor in order to configure OC. Now that everything is done, can I go back to using the integrated graphics? That worked smoothly for me, whereas with the Nvidia GPU driving the monitor the mouse cursor lags (even when it is not mining), and it is frustrating. Has anyone come across this? Any solution?

2. Is there any way to control a fan curve instead of a fixed % fan speed?

Thanks, again.
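For what it's worth, question 2 can be approximated with a small polling loop. This is a sketch, not a tested miner tool: the temperature breakpoints, poll interval, and the DISPLAY/XAUTHORITY values (taken from the lightdm commands earlier in this thread) are assumptions you'd adjust for your own setup.

```python
import os
import shutil
import subprocess
import time

def fan_speed_for_temp(temp_c):
    """Map GPU temperature (C) to a target fan %, as a simple step curve."""
    if temp_c < 50:
        return 40
    if temp_c < 60:
        return 60
    if temp_c < 70:
        return 80
    return 100

def apply_fan_curve(gpu=0, interval_s=10):
    # Assumes lightdm with an X server on :0, as elsewhere in this thread.
    env = {**os.environ,
           "DISPLAY": ":0",
           "XAUTHORITY": "/var/run/lightdm/root/:0"}
    while True:
        # nvidia-smi prints the bare temperature with these query flags.
        temp = int(subprocess.check_output(
            ["nvidia-smi", "-i", str(gpu),
             "--query-gpu=temperature.gpu", "--format=csv,noheader"]))
        speed = fan_speed_for_temp(temp)
        subprocess.run(
            ["nvidia-settings",
             "-a", f"[gpu:{gpu}]/GPUFanControlState=1",
             "-a", f"[fan:{gpu}]/GPUTargetFanSpeed={speed}"],
            env=env, check=False)
        time.sleep(interval_s)

if shutil.which("nvidia-smi"):  # only loop on a machine with the driver
    apply_fan_curve()
```

Run it as root from a startup script the same way as the fixed-speed commands; the loop simply re-applies a new target speed every few seconds.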
MinersRus
Member
**
Offline Offline

Activity: 214
Merit: 24


View Profile
November 21, 2018, 04:52:27 AM
Last edit: November 25, 2018, 06:05:24 AM by MinersRus
 #126

I am having great difficulty trying to overclock the memory of the GTX 750 in my rig.


After much research this is where I am at:

From a shellinabox terminal to the rig I do this command:

DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[0]=200

The result is this message: The attribute 'GPUMemoryTransferRateOffset' specified in assignment '[gpu:0]/GPUMemoryTransferRateOffset[0]=100' cannot be assigned (it is a read-only attribute).

Any ideas in how to solve this issue?

-------------------------------------------------------------------

EDIT SOLVED: the issue was I needed to use index 1 not 0 as in:

DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[1]=200

This guide states to use index 3, but if I use either 3 or 2 the command doesn't return anything; when I use 1 I get this:

Attribute 'GPUMemoryTransferRateOffset' (H110-1:0[gpu:0]) assigned value 200.

and with this command: nvidia-smi -i 0 -q -d CLOCK

I can see the memory speed has changed from 2505 to 2605, i.e. by 100. Effective memory MHz is twice that, so 2505 == 5010 MHz and 2605 == 5210 MHz, or 200 MHz faster.
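That arithmetic is easy to sanity-check: nvidia-smi reports the memory command clock, the effective (DDR) rate is double that, and the offset is applied in effective MHz, which is why a +200 offset shows up as +100 in the reported reading:

```python
# GDDR5 effective transfer rate is double the clock nvidia-smi reports.
def effective_mhz(reported_mhz):
    return 2 * reported_mhz

offset = 200                     # value passed to GPUMemoryTransferRateOffset
reported_before = 2505           # nvidia-smi reading before the offset
reported_after = reported_before + offset // 2   # offset is in effective MHz

print(effective_mhz(reported_before))  # 5010
print(effective_mhz(reported_after))   # 5210, i.e. 200 MHz faster effective
```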

-------------------------------------------------------------------

Rig Spec's:

ASRock H110 Pro BTC+ Motherboard
Celeron G3930 CPU
4GB DDR4 Memory
180GB NVMe SSD
13x Nvidia GTX 750 GPUs

Running Ubuntu 16.04.5 LTS

Coolbits is set to 28 for all the GPUs

Nvidia Driver: Driver Version: 396.54
Rabinovitch
Legendary
*
Offline Offline

Activity: 2030
Merit: 1076


BTCLife.global participant


View Profile
March 06, 2019, 08:30:14 AM
Last edit: March 06, 2019, 09:02:30 AM by Rabinovitch
 #127

If using a monitor attached to the Intel graphics, or SSH, try these commands. This worked for me with Xubuntu 18.04 and the Nvidia 390 drivers.

sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration
sudo systemctl restart lightdm.service
sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a 'GPUFanControlState=1' -a 'GPUTargetFanSpeed=100'

This sets the fan to 100%, but overclocking works the same way.


I found the answer here.

https://www.linuxquestions.org/questions/linux-general-1/how-to-run-nvidia-settings-remotely-4175599391/

OK, but another guy and I are looking for an answer on how to set the GPU fan speed to any custom value on Ubuntu 18.x Server, i.e. with no X, no GUI, no DE...

https://superuser.com/q/1398591/1004977

Could anyone please help us?

P.S. I found it, and it seems to be a working solution for our case! https://bitcointalk.org/index.php?topic=2432849.msg24914477#msg24914477

From Siberia with love! Hosting by Rabinovitch!
Fundraising for BOINC Farm
I professionally deploy mining farms (ASIC, GPU, BURST, STORJ, Filecoin) and fit out premises for mining.