Bitcoin Forum
Author Topic: [OS] nvOC easy-to-use Linux Nvidia Mining  (Read 418242 times)
Temporel
Full Member
***
Offline Offline

Activity: 224
Merit: 100


View Profile
November 04, 2017, 02:23:48 PM
 #5241

Hi,
I'm trying out nvOC.
In the configuration file I can see where to set the parameters for each GPU and each pool, but for the pools I don't see where to enter the worker password.
There is the pool name, the port, the coin address, and the worker name, but no worker password...
Thank you.

Usually pools don't check the worker password, and it's best to keep the default "x".
If anyone wants to mine for you ... let them mine ... lol
Yes, I know... "-p x", something else, or nothing.
But I need this option: some of my workers have passwords for other reasons and the pool checks them. If the password is wrong, I cannot connect the worker (connection error message).
How can I do this with nvOC?

Not sure why people are still spreading this, but some pools do require the right password. I spent hours trying to figure out why I couldn't mine on a pool using nvOC until I replaced the default password with "x" (nvOC v19 had "z" for some reason).
So use the password you have in your account (like the ones you set up on Suprnova); other pools probably don't care, though.
papampi
Full Member
***
Offline Offline

Activity: 686
Merit: 140


Linux FOREVER! Resistance is futile!!!


View Profile WWW
November 04, 2017, 04:58:59 PM
 #5242

A few updates I've needed to make since installing 1.4 that may help others:

Many errors in my syslog referencing a timeout waiting for a device. There is a hardcoded SanDisk drive in /etc/fstab that should be commented out:
Code:
UUID=55184403759586FB /mnt/55184403759586FB auto nosuid,nodev,nofail,x-gvfs-show,ro 0 0
/dev/disk/by-id/usb-SanDisk_Cruzer_Blade_4C530001260812105231-0:0-part1 /mnt/usb-SanDisk_Cruzer_Blade_4C530001260812105231-0:0-part1 auto nosuid,nodev,nofail,x-gvfs-show,ro 0 0
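Commenting those entries out can be done non-interactively; a sketch, assuming GNU sed (back up fstab first, since a broken fstab can stop the rig from booting):

```shell
# Back up fstab, then comment out the two hardcoded SanDisk lines.
# ('&' in the replacement re-inserts the matched text after the '#'.)
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i -e 's|^UUID=55184403759586FB|#&|' \
            -e 's|^/dev/disk/by-id/usb-SanDisk|#&|' /etc/fstab
```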

Also, if you set wattage limits in the triple digits (more than 100 W), the temp script seems to throw errors now. Changing:
Code:
echo -n 117.00| tail -c -5 | head -c -3

to

Code:
echo -n 117.00| tail -c -6 | head -c -3

seems to bring things back to normal.
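For anyone curious why the byte count matters: a quick sketch of what those pipelines do to two- versus three-digit wattages, plus a width-independent alternative (splitting on the decimal point with `cut`) that avoids counting bytes at all:

```shell
# The byte-count approach breaks when the wattage gains a digit:
echo -n 83.00  | tail -c -5 | head -c -3   # -> "83"  (works for 2 digits)
echo -n 117.00 | tail -c -5 | head -c -3   # -> "17"  (drops the leading digit)
echo -n 117.00 | tail -c -6 | head -c -3   # -> "117" (the fix above)

# Width-independent alternative: keep everything before the decimal point.
echo -n 117.00 | cut -d. -f1               # -> "117"
echo -n 83.00  | cut -d. -f1               # -> "83"
```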

Still seeing lower hashrates than I did on version 1.1, because the cards use only about 80% of their available power limit, and I'm not sure why. Also seeing hostname errors and plenty of other small things on a vanilla build (I literally changed just 5 items in 1bash). v1.1 was rock solid, and while I love the idea of these new features, I value the stability of the mining operation above all else, and it seems like we're going backwards a bit on that front.

Nice catch on the triple digits. I just installed 1.4 on one of my 1070 rigs and was going crazy over why it kept popping up "old power limit 25 .... new 125".
Then I remembered your post.

Saved my day.

Stubo
Member
**
Offline Offline

Activity: 224
Merit: 13


View Profile
November 04, 2017, 05:23:02 PM
 #5243

So, in my effort to thoroughly understand nvOC at a reasonably low level, I am stumped by one question. What is calling/launching 2unix at boot and again if 3main is killed and 2unix "falls out"?

It appears to be part of gnome-session but I am not yet familiar enough with Ubuntu to find it. I see the auto login setup in /etc/lightdm/lightdm.conf but I don't see anything launching that explicitly in .bashrc or .profile. What am I missing?

Thanks in advance.

Guake terminal is involved

Yes. I found gnome-terminal.desktop and guake:guake.desktop in /home/m1/.config/autostart but I am still not putting all of the pieces of the puzzle together yet. I don't see any mention of 2unix being launched there.

For those of you that care, I found it:

m1        1943  1703  0 12:02 ?        00:00:00 python2 -m guake.main
m1        1999  1943  0 12:02 ?        00:00:00 gnome-pty-helper
m1        2000  1943  0 12:02 pts/16   00:00:00 /bin/bash
m1        2085  1943  0 12:02 pts/18   00:00:00 /bin/bash
m1        2316  1943  0 12:03 pts/21   00:00:00 /bin/bash
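The listing above can be reproduced on any rig by walking a process's parent chain with `ps`; a small sketch, using the current shell's PID as a stand-in for the 2unix shell's:

```shell
# Walk the parent chain of a process to see what launched it.
pid=$$                                         # substitute the PID of interest
while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
    ps -o pid=,comm= -p "$pid"                 # print this process
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')   # hop to its parent
done
```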




kk003
Member
**
Offline Offline

Activity: 117
Merit: 10


View Profile
November 04, 2017, 10:08:34 PM
 #5244

So, in my effort to thoroughly understand nvOC at a reasonably low level, I am stumped by one question. What is calling/launching 2unix at boot and again if 3main is killed and 2unix "falls out"?

It appears to be part of gnome-session but I am not yet familiar enough with Ubuntu to find it. I see the auto login setup in /etc/lightdm/lightdm.conf but I don't see anything launching that explicitly in .bashrc or .profile. What am I missing?

Thanks in advance.

Guake terminal is involved

Yes. I found gnome-terminal.desktop and guake:guake.desktop in /home/m1/.config/autostart but I am still not putting all of the pieces of the puzzle together yet. I don't see any mention of 2unix being launched there.

For those of you that care, I found it:

m1        1943  1703  0 12:02 ?        00:00:00 python2 -m guake.main
m1        1999  1943  0 12:02 ?        00:00:00 gnome-pty-helper
m1        2000  1943  0 12:02 pts/16   00:00:00 /bin/bash
m1        2085  1943  0 12:02 pts/18   00:00:00 /bin/bash
m1        2316  1943  0 12:03 pts/21   00:00:00 /bin/bash



Thanks. Did you find the location of guake.main?
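One way to find it is to ask the interpreter itself where a module lives; a sketch (the `guake` line assumes guake is importable under that python2, as it is on nvOC; the stdlib `os` line shows the same shape on any system):

```shell
# On nvOC, guake runs under python2, so (assuming guake is importable):
#   python2 -c 'import guake; print(guake.__file__)'
# The same one-liner against a stdlib module, to show the shape:
python3 -c 'import os; print(os.__file__)'
```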
codereddew12
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
November 05, 2017, 07:07:26 AM
 #5245

By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I down-volted all of the cards; each 1070 is right around 100 W and the 1060s are roughly 80 W. It's been stable for over a week now; just looking for some feedback on this setup.
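A rough budget check for the numbers in that post (GPU draw only; risers, CPU, and motherboard overhead are not counted, so real wall load is somewhat higher):

```shell
# GPU draw vs. PSU capacity for the rig described:
gpu=$((10 * 100 + 2 * 80))   # ten 1070s at ~100 W, two 1060s at ~80 W
cap=$((2 * 850))             # two EVGA G3 850W units
awk -v g="$gpu" -v c="$cap" \
    'BEGIN { printf "%d W / %d W = %.0f%% load\n", g, c, 100 * g / c }'
# prints: 1160 W / 1700 W = 68% load
```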
ComputerGenie
Hero Member
*****
Offline Offline

Activity: 1092
Merit: 552


Retired IRCX God


View Profile
November 05, 2017, 07:23:29 AM
 #5246

By the way, a little off topic, but do you guys think it's ok to run 12 GPUs (all 1070s except for 2 which are 1060s) on two EVGA G3 850W PSUs? I downvolted all of the cards and each 1070 is right around 100W/piece and 1060s are roughly 80W. Been stable now for over a week, just to get some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

If you have to ask "why?", you wouldn`t understand my answer.
Always be on the look out, because you never know when you'll be stalked by hit-men that eat nothing but cream cheese....
codereddew12
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
November 05, 2017, 07:38:08 AM
 #5247

By the way, a little off topic, but do you guys think it's ok to run 12 GPUs (all 1070s except for 2 which are 1060s) on two EVGA G3 850W PSUs? I downvolted all of the cards and each 1070 is right around 100W/piece and 1060s are roughly 80W. Been stable now for over a week, just to get some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

What do you mean exactly? I have a pretty small setup (~20 GPUs), so I just try to consolidate whenever possible.
Stubo
Member
**
Offline Offline

Activity: 224
Merit: 13


View Profile
November 05, 2017, 07:41:02 AM
 #5248

By the way, a little off topic, but do you guys think it's ok to run 12 GPUs (all 1070s except for 2 which are 1060s) on two EVGA G3 850W PSUs? I downvolted all of the cards and each 1070 is right around 100W/piece and 1060s are roughly 80W. Been stable now for over a week, just to get some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060's and just run the 1070's correctly.
papampi
Full Member
***
Offline Offline

Activity: 686
Merit: 140




View Profile WWW
November 05, 2017, 07:55:03 AM
 #5249

By the way, a little off topic, but do you guys think it's ok to run 12 GPUs (all 1070s except for 2 which are 1060s) on two EVGA G3 850W PSUs? I downvolted all of the cards and each 1070 is right around 100W/piece and 1060s are roughly 80W. Been stable now for over a week, just to get some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060's and just run the 1070's correctly.

Agree.
80 W for a 1060 is not that low, but 100 W for a 1070 is too low.
What are your hash rates with the 1070s?
I run my 1070 rig at 125 W, oc 125, cc 600, getting 460-470 sol/s,
and my 1060 rig at 85 W, oc 125, cc 600, getting 300 sol/s.
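Per-GPU limits like these can be applied with the stock NVIDIA tools; a dry-run sketch that only echoes the commands (drop the `echo`s to actually apply them; assumes `nvidia-smi` and, for clock offsets, a running X server with Coolbits enabled, as nvOC sets up — the wattage and offset values here just mirror the post above):

```shell
GPUS=2          # number of cards in the rig
PL=125          # power limit in watts
CORE_OC=125     # graphics clock offset
for i in $(seq 0 $((GPUS - 1))); do
    echo sudo nvidia-smi -i "$i" -pl "$PL"
    echo nvidia-settings -a "[gpu:$i]/GPUGraphicsClockOffset[3]=$CORE_OC"
done
```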

JayneL
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
November 05, 2017, 10:41:13 AM
 #5250

Hi guys, can you help me add more algos to SALTER_NICEHASH? I want to add cryptonight so that it can automatically switch to that algo. pls tnx tnx tnx Grin

Hi Fullzero, or anyone else here, did you figure it out? I tried copying the settings of another algo and changing them, but it gets buggy and I got an insane income result lol
Stubo
Member
**
Offline Offline

Activity: 224
Merit: 13


View Profile
November 05, 2017, 10:41:49 AM
 #5251

Installed v19-1.4 yesterday on a new rig to test a new card.

sudo: unable to resolve host gtx1080ti-r1

Check your /etc/hosts file. It would appear that you are missing the entry for your miner, gtx1080ti-r1.

m1@Miner2:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       Miner2

# The following lines are desirable for IPv6 capable hosts
.
.
.

In the example, my host is named Miner2 and I have the necessary entry for it in my hosts file. Hope this helps.
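The missing entry can also be added non-interactively; a sketch, assuming the rig's name lives in /etc/hostname (as on stock Ubuntu):

```shell
# sudo complains "unable to resolve host <name>" when the name in
# /etc/hostname has no /etc/hosts entry. Add the 127.0.1.1 line if missing:
host=$(cat /etc/hostname)
grep -q "^127.0.1.1[[:space:]].*$host" /etc/hosts || \
    echo "127.0.1.1       $host" | sudo tee -a /etc/hosts
```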

codereddew12
Newbie
*
Offline Offline

Activity: 36
Merit: 0


View Profile
November 05, 2017, 10:43:51 AM
 #5252

By the way, a little off topic, but do you guys think it's ok to run 12 GPUs (all 1070s except for 2 which are 1060s) on two EVGA G3 850W PSUs? I downvolted all of the cards and each 1070 is right around 100W/piece and 1060s are roughly 80W. Been stable now for over a week, just to get some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060's and just run the 1070's correctly.

Agree,
80 W for 1060 is not so low, but 100 for 1070 is too low,
what are your hash rates with 1070 ?
I run my 1070 rig at 125 W, oc 125, cc 600 getting 460-470 sol/s
and 1060 rig with 85W, oc 125, cc, 600 getting 300 sol/s

Been mining ETH and my hashrates have been 30-31 MH/s per 1070. I never thought you could set a PL "too low" as long as it's within the supported wattage of the card. Of course, some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, which is why I say the average is roughly 100 W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
fk1
Full Member
***
Offline Offline

Activity: 216
Merit: 100


View Profile
November 05, 2017, 12:02:19 PM
 #5253

Hi! I am currently using nvOC 1.3 and the nicehash salfter script, which is great. I also use Telegram, and sometimes I see two Telegram messages: one saying utilization is 0, and another two minutes later with utilization 100%. I guess the rig is restarting, but I am not sure why. Is there a logfile you can suggest I look at? tyvm

Edit: found 5_restartlog but it's empty
papampi
Full Member
***
Offline Offline

Activity: 686
Merit: 140




View Profile WWW
November 05, 2017, 12:11:51 PM
 #5254

Hi! I am currently using nvOC 1.3 and nicehash salfter script sich is great. I also use telegram and sometimes I see two telegram messages that utilizations i 0 and another one two mins later with utilization 100%. I guess the rig is restarting but i am not sure why. Is there any logfile you can suggest me to take a look at? tyvm

e: found 5_restartlog but its empty

check this in 1bash

Code:
CLEAR_LOGS_ON_BOOT="NO"        	# YES NO

fk1
Full Member
***
Offline Offline

Activity: 216
Merit: 100


View Profile
November 05, 2017, 12:15:13 PM
 #5255

tyvm! Smiley
ComputerGenie
Hero Member
*****
Offline Offline

Activity: 1092
Merit: 552


Retired IRCX God


View Profile
November 05, 2017, 01:18:02 PM
 #5256

...Been mining ETH and my hashrates have been 30-31 MH/s per 1070. I never thought you could set a PL "too low" as long as it's within the supported wattage of the card. Of course, some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, which is why I say the average is roughly 100 W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
I'll get my "DOH!" for mining ETH with that many NVIDIA cards out of the way right off the bat.

That being out of the way:

Granted, in the real world the loss isn't 1:1; however, for ease of math, we'll pretend it is.
If you have ten 150 W TDP cards and you lower the output by 30%, then you have taken a 1500 W set of cards down to 1000 W. That 500 W reduction equals the total power required to run 3.3333 cards at full power (for ease of math we will call this 3 cards). So you have the effective rate of 7 cards while 10 cards sit on the rack. To what end?

Yes, it's at the lower end of stable, but what is the point?

Not counting the 1060s and your other rig(s) that make up your other 8 cards....
Even if my numbers are off by half, and we pretend you paid wholesale ($375) prices for those cards, you have $624 worth of cards sitting idle to save $438 per year in consumption, while giving up 49% of your potential earnings (by running cards at hashrates as low as 30 when they can hit as high as 58).

It's something that makes less and less sense the more and more cards you run.

fullzero (OP)
Hero Member
*****
Offline Offline

Activity: 882
Merit: 1009



View Profile
November 05, 2017, 05:17:17 PM
 #5257

Been busy lately; I will try to respond to the PMs I haven't gotten to, and to posts in the thread, either tonight or tomorrow.

I will explain how the execution logic works in nvOC.

There are some problems with the newest Nvidia driver, so I will roll it back for the next update.


How difficulty adjustment works: every 2016 blocks, the network adjusts the current difficulty to the estimated difficulty in an attempt to keep the block generation time at 10 minutes (600 seconds). The network therefore re-targets after a total of: 2016 blocks * 10 minutes per block = 20160 minutes = 336 hours = 14 days. When the network hashrate is increasing, a difficulty period (2016 blocks) should take less than 14 days. How much less can be estimated by comparing the network hashrate at the end of the period against what it was at the beginning. This is only an estimate, because you cannot account for "luck", but you can calculate reasonably well using explicitly delimited stochastic ranges. The easy way to think about this is to look at this graph and see how close to 0 the current data points are on its y axis: if the blue line is above 0, the difficulty period (2016 blocks) should take less than 14 days; if it is below, it should take more. http://bitcoin.sipa.be/growth-10k.png
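The retarget interval in that signature, worked out step by step:

```shell
# Bitcoin's difficulty retarget interval from first principles:
blocks=2016; mins_per_block=10
echo "$((blocks * mins_per_block)) minutes"          # 20160 minutes
echo "$((blocks * mins_per_block / 60)) hours"       # 336 hours
echo "$((blocks * mins_per_block / 60 / 24)) days"   # 14 days
```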
gs777
Member
**
Offline Offline

Activity: 118
Merit: 10


View Profile
November 05, 2017, 05:40:56 PM
 #5258

I installed nvOC 19-1.4 and it works fine except for the auto temp control.
I'm constantly getting this message:

sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.

Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.

All done.
GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50

I've set the PL to 150 W, but somehow it shows Power: 50.
Can you help me please?
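This looks like the triple-digit power-limit parsing issue discussed earlier in the thread: with a 150.00 W limit, the old byte counts drop the leading digit, which would explain the "Power: 50" readout. A quick check:

```shell
echo -n 150.00 | tail -c -5 | head -c -3   # -> "50"  (leading digit lost)
echo -n 150.00 | tail -c -6 | head -c -3   # -> "150" (with the -6 fix)
```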
 
WaveFront
Member
**
Offline Offline

Activity: 126
Merit: 10


View Profile
November 05, 2017, 05:55:27 PM
 #5259

Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs).
I would like to SSH on the rigs individually from a remote location from a different IP address.

I was thinking about setting SSH on a different port on each rig, for example:
rig1 SSH on port 1024
rig2 SSH on port 1025
rig3 SSH on port 1026
And so on...

On the router I would setup virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2 and so on

Do you think it's a good idea or are there better ways to do this?
Temporel
Full Member
***
Offline Offline

Activity: 224
Merit: 100


View Profile
November 05, 2017, 06:10:26 PM
 #5260

Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs).
I would like to SSH on the rigs individually from a remote location from a different IP address.

I was thinking about setting SSH on a different port on each rig, for example:
rig1 SSH on port 1024
rig2 SSH on port 1025
rig3 SSH on port 1026
And so on...

On the router I would setup virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2 and so on

Do you think it's a good idea or are there better ways to do this?

just redirect a different port for each rig so when you connect:

XXX.XXX.XXX.XXX port 10001 for rig1 redirect to 192.168.1.11 port 22 for rig1
XXX.XXX.XXX.XXX port 10002 for rig2 redirect to 192.168.1.12 port 22 for rig2
etc...

If you are using PuTTY, just create a new shortcut with -P 1000x for each rig.
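On the machine you connect from, per-rig aliases in ~/.ssh/config save remembering the port numbers; a sketch (hypothetical host aliases; m1 is nvOC's default user, and the XXX address is the router's public IP from the example):

```shell
# Append per-rig aliases so "ssh rig1" replaces "ssh -p 10001 m1@XXX...":
cat >> ~/.ssh/config <<'EOF'
Host rig1
    HostName XXX.XXX.XXX.XXX
    Port 10001
    User m1

Host rig2
    HostName XXX.XXX.XXX.XXX
    Port 10002
    User m1
EOF
```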
