fullzero (OP)
|
nvOC now falls under the mnh_license: https://github.com/hartmanm/mnh_license/blob/main/license.md

SOFTWARE WARNING

IMPORTANT: READ THIS WARNING CAREFULLY BEFORE USING OR VIEWING THE SOFTWARE.

This Software Warning ("Warning") is issued by Michael Neill Hartman ("Licensor") to you ("Potential Violator"). By installing, copying, or viewing the software, methods, scripts, or architecture ("Software") in whole or in part, you ("Potential Violator") acknowledge that you are liable for any damages resulting from such actions.

1. No Grant of License
You are not granted any rights to use, copy, modify, distribute, or view the Software in any form. All rights to the Software are fully retained by the Licensor. The Software may never be used for any purpose, including personal, commercial, educational, governmental, or organizational use. Any interaction with the Software is strictly prohibited. The Licensor retains all rights, title, and interest in the Software, including all intellectual property rights.

2. Previous Versions
Any previous version of the license is void and is replaced with this version. Any existing copies of the ("Software") must be destroyed.

3. Violation Reporting and Reward
Individuals who notify the Licensor in writing of a specific violation of this Agreement are eligible for a reward of 10% of any successful legal settlement resulting from that violation, calculated after taxes. The written notice must provide sufficient details about the violation, and the individual must be the first to provide this information. If multiple individuals submit information that collectively enables a successful legal settlement, the Licensor shall, at their sole discretion, determine the division of the 10% reward after a successful legal settlement.

4. Limitation of Liability
In no event shall the Licensor be liable for any damages arising from the illegal or unauthorized use or interaction with the Software, even if the Licensor has been advised of the possibility of such damages.
|
|
|
|
https://github.com/hartmanm

How difficulty adjustment works: Every 2016 blocks, the Network adjusts the current difficulty to the estimated difficulty in an attempt to keep the block generation time at 10 minutes (600 seconds). Thus the Network re-targets the difficulty over a nominal period of: 2016 blocks * 10 minutes per block = 20160 minutes = 336 hours = 14 days. When the Network hashrate is increasing, a difficulty period (2016 blocks) should take less than 14 days. How much less can be estimated by comparing the current Network hashrate (the hashrate at the start of the period plus the % Network hashrate growth since then) against what the Network hashrate was at the beginning of the difficulty period (2016 blocks). This is only an estimate because you cannot account for "luck", but you can calculate reasonably well using explicitly delimited stochastic ranges. The easy way to think about this is to look at this graph and see how close to 0 the current data points are on its y axis. If the blue line is above 0, the difficulty period (2016 blocks) should take less than 14 days; if it is below, it should take more. http://bitcoin.sipa.be/growth-10k.png
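As a rough worked example of the arithmetic above (the 10% growth figure is just an assumed number for illustration, and the linear-growth model is a simplification, not an exact formula):

#!/bin/bash
# Worked example of the retarget arithmetic above.
# Assumes hashrate grows roughly linearly by GROWTH percent over the period (illustrative only).
BLOCKS=2016
TARGET_SECONDS=600            # 10 minutes per block
GROWTH=10                     # assumed % network hashrate growth over the period

# Nominal retarget period: 2016 * 600 s = 1209600 s = 14 days
echo "Nominal period: $(( BLOCKS * TARGET_SECONDS / 86400 )) days"

# With linearly growing hashrate, the average hashrate is about (1 + GROWTH/200)
# times the starting value, so the period shrinks by roughly that factor.
awk -v b="$BLOCKS" -v t="$TARGET_SECONDS" -v g="$GROWTH" 'BEGIN {
  days = (b * t / 86400) / (1 + g/200);
  printf "Estimated period with %d%% growth: about %.1f days\n", g, days
}'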
|
|
|
fullzero (OP)
|
 |
April 04, 2017, 01:13:41 AM Last edit: December 24, 2024, 06:03:58 PM by fullzero |
|
Fellow miners, this is the build I use with my Nvidia 1000 series rigs. nvOC is a customized Ubuntu 16.04 build with Nvidia individual card OC and individual card powerlimit support, manual fan support, auto launching on boot, and a single easy-to-configure Bash script (1bash) which can be configured from a Windows PC using WordPad. nvOC is easy to use (with v0017). A generic sketch of the per-card OC and powerlimit mechanism appears after the motherboard notes below.

Partially supported motherboard links:
ASUS B250 MINING EXPERT (13x gpu) Link

Fully supported motherboard links:
ASRock H110 PRO BTC+ (13x gpu) Link
BIOSTAR TB250-BTC PRO (12x gpu) Link
ASRock H81 PRO BTC (6x gpu) Link
BIOSTAR TB85 (6x gpu) Link
MSI Z270-A PRO (6x gpu: 7x if you use 1x m2 adapter) Link
GIGABYTE GA-B250M-Gaming 3 (4x gpu) Link
BIOSTAR TB250-BTC (6x gpu) Link
ASUS Z270-F GAMING (7x gpu: 9x if you use 2x m2 adapters) Link
MSI Z170A GAMING M5 (7x gpu) Link
ASUS PRIME Z270-A (7x gpu: 9x if you use 2x m2 adapters) Link
GIGABYTE GA-Z270P-D3 (6x gpu) Link
ASUS PRIME H270-PLUS (6x gpu: 8x if you use 2x m2 adapters) Link

If you are using an ASUS B250 MINING EXPERT: ensure you enable the Launch CSM option in the BIOS before connecting the nvOC USB.

If you are using an ASRock H110 PRO BTC+, ASRock H81 PRO BTC, or BIOSTAR TB85: no changes to the BIOS settings are needed.

If you are using a BIOSTAR TB250-BTC PRO: ensure Mining Mode is enabled in the BIOS. Also ensure Max TOLUD is set to 3.5 GB in the BIOS. NOTE: you must first connect only 6x GPUs, boot, make the BIOS changes, save and reboot, shut down, add the other 6x GPUs, then attach the USB or SSD and boot.

If you are using an MSI Z270-A PRO: ensure you enable the Above 4G memory option in the BIOS before connecting the nvOC USB.

If you are using a GIGABYTE GA-B250M-Gaming 3: ensure the Audio Controller is disabled in the BIOS.

If you are using a BIOSTAR TB250-BTC: ensure Miner Mode is enabled in the BIOS. Also ensure Max TOLUD is set to 3.5 GB in the BIOS.

If you are using an ASUS Z270-F GAMING: ensure 'Above 4G Decoding' is enabled in the BIOS. Also ensure PTP aware OS is set to 'Not PTP Aware' in the BIOS. Finally, ensure you 'Clear Secure Boot Keys' in the BIOS.

If you are using an MSI Z170A GAMING M5: ensure 'Above 4G Decoding' is enabled in the BIOS. Also download, unzip, and copy to a USB key the 2016-12-19 Version 1.D BIOS and follow the instructions to flash the BIOS.

If you are using an ASUS PRIME Z270-A: ensure 'Above 4G Decoding' is enabled in the BIOS. Also ensure PTP aware OS is set to 'Not PTP Aware' in the BIOS. Finally, ensure you 'Clear Secure Boot Keys' in the BIOS.

If you are using a GIGABYTE GA-Z270P-D3: ensure the Audio Controller is disabled in the BIOS.

If you are using an ASUS PRIME H270-PLUS: you must update the BIOS; with this motherboard it can be done by connecting an ethernet cable and entering the EZ Flash 3 Utility. Select DHCP, then download and install the update. It should look like this. After updating, ensure 'Above 4G Decoding' is enabled in the BIOS.

If you are using a BIOSTAR RACING Z170GT7: ensure you are only using the first 6 PCIe slots closest to the CPU. Ensure you set Security Device Support to Disable. Finally, ensure you set Max TOLUD to 3.5 GB.
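For readers new to Linux overclocking, here is a generic, hypothetical sketch of the mechanism a per-card OC/powerlimit script builds on. This is not the actual 1bash contents; the performance-level index [3], the offsets, and the wattage are example values that vary by card and driver.

#!/bin/bash
# Generic per-card OC sketch (not the real 1bash): requires Coolbits enabled in
# xorg.conf and a running X server on display :0.
GPU=0            # card index as reported by nvidia-smi -L

# Core and memory clock offsets for performance level 3 (values are examples only)
DISPLAY=:0 nvidia-settings \
  -a "[gpu:${GPU}]/GPUGraphicsClockOffset[3]=100" \
  -a "[gpu:${GPU}]/GPUMemoryTransferRateOffset[3]=1200"

# Per-card power limit in watts (needs root)
sudo nvidia-smi -i "${GPU}" -pl 125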
|
|
|
|
|
|
|
kopija
|
 |
April 04, 2017, 05:41:23 AM Last edit: April 04, 2017, 06:04:46 AM by kopija |
|
Thank you for contributing! Regarding the need for dummy plugs, have you tried generating xorg.conf with --allow-empty-initial-configuration?
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
This allows me to run headless without the need for plugs of any kind.
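For anyone trying this, a minimal sketch of the full sequence, assuming the default lightdm display manager on Ubuntu 16.04 and the same cool-bits value as in the command above:

#!/bin/bash
# Regenerate xorg.conf for all GPUs with overclocking enabled and no display
# attached, then restart X so the new config takes effect (assumes lightdm).
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
sudo service lightdm restart   # or simply reboot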
|
we are nothing but smart contracts on a cosmic blockchain
|
|
|
kopija
|
 |
April 05, 2017, 04:00:42 AM |
|
The only downside is that you will be limited to VGA resolution when using a remote VNC connection. A dummy plug helps with that problem.
|
|
|
|
laik2
|
 |
April 06, 2017, 06:20:48 AM |
|
Hi, I haven't seen your modded 16.04 yet but I intend to; meanwhile, here is something to replace the stupid dummy-plug requirement (/etc/X11/xorg.conf):
Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" 0 0 Screen 2 "Screen2" 0 0 Screen 3 "Screen3" 0 0 Screen 4 "Screen4" 0 0 Screen 5 "Screen5" 0 0 InputDevice "Mouse0" "CorePointer" EndSection
Section "Files" EndSection
Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection
Section "InputDevice" Identifier "Keyboard0" Driver "kbd" EndSection
Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection
Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:1:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" Option "Coolbits" "31 EndSection
Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" Option "Coolbits" "31" BusID "PCI:2:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" EndSection
Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" Option "Coolbits" "31" BusID "PCI:3:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" EndSection
Section "Device" Identifier "Device3" Driver "nvidia" VendorName "NVIDIA Corporation" Option "Coolbits" "31" BusID "PCI:4:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" EndSection
Section "Device" Identifier "Device4" Driver "nvidia" VendorName "NVIDIA Corporation" Option "Coolbits" "31" BusID "PCI:5:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" EndSection
Section "Device" Identifier "Device5" Driver "nvidia" VendorName "NVIDIA Corporation" Option "Coolbits" "31" BusID "PCI:6:0:0" Option "ConnectedMonitor" "DFP-0" Option "CustomEDID" "DFP-0:/etc/X11/edid.bin" EndSection
Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Coolbits" "31" SubSection "Display" Depth 24 EndSubSection EndSection
Section "Screen" Identifier "Screen1" Device "Device1" Option "Coolbits" "31" Option "UseDisplayDevice" "none" EndSection
Section "Screen" Identifier "Screen2" Device "Device2" Option "Coolbits" "31" Option "UseDisplayDevice" "none" EndSection
Section "Screen" Identifier "Screen3" Device "Device3" Option "Coolbits" "31" Option "UseDisplayDevice" "none" EndSection
Section "Screen" Identifier "Screen4" Device "Device4" Option "Coolbits" "31" Option "UseDisplayDevice" "none" EndSection
Section "Screen" Identifier "Screen5" Device "Device5" Option "Coolbits" "31" Option "UseDisplayDevice" "none" EndSection
Search Google for edid.bin; I can't remember where I got mine or whether I generated it...
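If you need to make your own, here is a possible sketch of two standard ways to capture an EDID (not necessarily how the poster generated theirs; connector names and log contents vary per system):

#!/bin/bash
# Two common ways to capture an EDID into edid.bin (run with a real monitor attached).

# 1) Extract EDIDs that the NVIDIA driver dumped into the X log
#    (the log only contains a full EDID dump if X was started with -logverbose 6):
sudo nvidia-xconfig --extract-edids-from-file=/var/log/Xorg.0.log \
                    --extract-edids-output-file=/etc/X11/edid.bin

# 2) Or copy it from the kernel's DRM interface (connector name varies per system):
ls /sys/class/drm/                        # find the connected output, e.g. card0-DVI-D-0
sudo cp /sys/class/drm/card0-DVI-D-0/edid /etc/X11/edid.bin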
|
|
|
|
Tidsdilatation
|
 |
April 06, 2017, 04:16:46 PM |
|
Nice man!! I'm running an Nvidia mining rig and I will def test this!!
|
|
|
|
kopija
|
 |
April 08, 2017, 06:36:19 AM Last edit: April 08, 2017, 08:07:09 AM by kopija |
|
The edid.bin trick was required for headless setups before nvidia introduced --allow-empty-initial-configuration. I also tried playing with edid.bin to fix the VGA-resolution issue on remote VNC connections, with no luck; if anybody managed to fix it, please share. Also, how do I set the power limit on boot-up, so I do not have to do it manually after every restart? Thanks and have a nice day everybody.

edit: mr. fullzero, have you considered using Xubuntu in your next version? It is much less memory and CPU hungry than vanilla Ubuntu, which eats almost 500MB more memory compared to Xubuntu: https://www.reddit.com/r/linux/comments/5kdq92/linux_distros_ram_consumption_9_distros_compared/ I am using it right now on a machine with 1GB of memory and all the important stuff like the graphics-drivers PPA and Nvidia overclocking works like a charm.
|
|
|
|
philipma1957
Legendary
Offline
Activity: 4522
Merit: 9928
'The right to privacy matters'
|
 |
April 08, 2017, 06:49:09 AM |
|
linux nice.
I will test 1080 and 1080 ti cards on it.
I don't need the headless fix.
|
|
|
|
fullzero (OP)
|
 |
April 08, 2017, 02:10:29 PM |
|
The edid.bin trick was required for headless setups before nvidia introduced --allow-empty-initial-configuration. I also tried playing with edid.bin to fix the VGA-resolution issue on remote VNC connections, with no luck; if anybody managed to fix it, please share. Also, how do I set the power limit on boot-up, so I do not have to do it manually after every restart? Thanks and have a nice day everybody. edit: mr. fullzero, have you considered using Xubuntu in your next version? It is much less memory and CPU hungry than vanilla Ubuntu, which eats almost 500MB more memory compared to Xubuntu: https://www.reddit.com/r/linux/comments/5kdq92/linux_distros_ram_consumption_9_distros_compared/ I am using it right now on a machine with 1GB of memory and all the important stuff like the graphics-drivers PPA and Nvidia overclocking works like a charm.

I used expect; see http://expect.sourceforge.net/ and install it; then you can use the same method I did in oneBash to set the powerlimit automatically. I could make an Xubuntu version if there is enough interest.
|
|
|
|
|
|
|
kopija
|
 |
April 08, 2017, 02:41:12 PM |
|
What should I put in the .sh script for power limit adjustment so it sticks on reboot? When I set the PL in the CLI with nvidia-smi -pl it asks for sudo. Would an nvidia-settings -pl command work in a script? And which command should I put in the script? How did you formulate it in oneBash?
BTW: here goes my vote for xubuntu!
|
|
|
|
fullzero (OP)
|
 |
April 08, 2017, 05:33:04 PM |
|
What should I put in the .sh script for power limit adjustment so it sticks on reboot? When I set the PL in the CLI with nvidia-smi -pl it asks for sudo. Would an nvidia-settings -pl command work in a script? And which command should I put in the script? How did you formulate it in oneBash?
BTW: here goes my vote for xubuntu!
You have to use sudo; that is what I use expect for: to send the root password automatically in oneBash.sh. This is the relevant section of oneBash.sh:

# set wattage for powerlimit:
if [ $POWERLIMIT == "YES" ]
then
  sleep 4
  # change powerlimit by changing the number after -pl to the desired wattage
  expect -c 'spawn sudo nvidia-smi -pl 125
  expect "*password*:"
  send "miner1\r" '
  sleep 4
fi
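A possible alternative sketch that avoids keeping the password in the script, assuming the mining user is m1 (as on nvOC) and the standard Ubuntu sudoers.d location:

#!/bin/bash
# Alternative to the expect approach: let nvidia-smi run via sudo without a password,
# so no plaintext password needs to live in the mining script.
# One-time setup, run once (assumes the mining user is "m1"):
#   echo 'm1 ALL=(ALL) NOPASSWD: /usr/bin/nvidia-smi' | sudo tee /etc/sudoers.d/nvidia-smi
#   sudo chmod 0440 /etc/sudoers.d/nvidia-smi

POWERLIMIT="YES"
WATTS=125                     # desired power limit in watts

if [ "$POWERLIMIT" == "YES" ]
then
  sleep 4
  sudo /usr/bin/nvidia-smi -pl "$WATTS"   # no password prompt after the sudoers rule
  sleep 4
fi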
|
|
|
|
|
|
|
kopija
|
 |
April 13, 2017, 04:08:25 PM |
|
Version v0013 is up with all member-requested changes. If you only mine ZEC, this update is not necessary unless you want to use individual overclocks for each card.
Change Log: v0013 (current release)
dummy plug is no longer required
added ccminer (both tpruvot and sp-hash)
added CUDA 8.0
updated Claymore to 9.0
installed Ubuntu updates
oneBash changes:
moved pool addresses and ports to the top section
added individual card cc and mc OC
added 1050 switch (use if you have 1050's in your rig)
added LBC, DUAL_ETC_PASC, DUAL_ETC_LBC, DUAL_ETH_PASC, DUAL_ETH_LBC
Thanks for the update! Suggestion for a future version: http://glances.readthedocs.io/en/stable/aoa/gpu.html Currently the best monitoring tool for Linux, IMHO.
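For anyone who wants to try it, a rough install sketch; the package names are assumptions and the exact GPU-plugin dependency depends on your glances version, so check the linked docs page:

#!/bin/bash
# Rough sketch: install glances with GPU monitoring on Ubuntu 16.04.
# The NVML Python binding the GPU plugin needs may differ by glances version.
sudo apt-get install -y python3-pip
sudo pip3 install glances py3nvml
glances        # the GPU panel shows up when the NVML binding is importable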
|
|
|
|
newmz
Sr. Member
  
Offline
Activity: 372
Merit: 250
The road of excess leads to the palace of wisdom
|
 |
April 15, 2017, 11:21:27 AM |
|
This is all very interesting to me because I currently use Windows 8.1 for my Nvidia Rig but in the past I have mined with my AMD rigs using EthOS and found it much more stable and reliable, and using much lower resources, etc - so a Linux solution for an Nvidia rig sounds great. I was considering trying the PiMP Nvidia version but when I tried that for my AMD rigs it confused the hell out of me.
Anyway, I have one big question because it doesn't seem clear to me from the small amount of info in this thread whether it will suit my rig. Reason being, my rig is currently 2 x Gigabyte gtx1070 G1 Gaming cards and 2 x Galax gtx 1060 6GB cards.
It seems from what I read above that when you set cc & mc overclock and powerlimit - this is one setting to apply to all cards on the rig. This is obviously suitable for the typical situation where people commonly use multiple instances of 1 type of card on a rig, which I understand - people choose a GPU, buy however many of them and populate the rig with them.
What about a situation like mine though, where I have 2 of one GPU and 2 of another, so I need to be able to specify different OC and powerlimits for the different cards.
Currently in Windows 8.1 I just use MSI Afterburner and set each card individually. For example, I mine ZEC and the 1070s are powerlimited to around 68%, core OC to +70 and mem OC to +700, while the 1060s are powerlimited to about 75%, core OC to +50 and mem OC to +500. Using EWBF this is giving me approximately 1400 sol/s using 500W at the wall.
Since the powerlimit in Linux seems to be set in watts rather than percent, one setting won't do it for me: as a percentage, 70% would probably work for all cards, but in watts I would need to be able to set the 1070s to around 125W and the 1060s to around 85W. The CC could all conceivably be set to +60 and MC to +500 or +600, but is there a way to set powerlimits individually for each card?
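For what it's worth, a minimal sketch of how per-card limits can be set from the shell; the GPU indices below assume nvidia-smi enumerates the 1070s as 0 and 1 and the 1060s as 2 and 3, which may differ on your board:

#!/bin/bash
# Hypothetical per-card power limits for a mixed 2x 1070 + 2x 1060 rig.
# Check the actual ordering first with: nvidia-smi -L
sudo nvidia-smi -i 0 -pl 125   # GTX 1070 #1
sudo nvidia-smi -i 1 -pl 125   # GTX 1070 #2
sudo nvidia-smi -i 2 -pl 85    # GTX 1060 #1
sudo nvidia-smi -i 3 -pl 85    # GTX 1060 #2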
|
Crypto currency enthusiast and miner since 2015. Mined approx 200 ETH during 2016 and 2017 and sold it at approximately $US40 each. Then I watched it reach $1000+ each. If anyone bothers to read this stuff pay attention to this: HODL HODL HODL HODL HODL HODL
I started mining with 1 AMD 7950 and 1 R9-280X. Then I gradually built my AMD operation into 12 R9-290s. Awesome ETH hash but ridiculous power consumption and heat. Over the last year I defected to the Nvidia team. I now use GTX 1070s. They were expensive to buy (probably a bargain now) but awesome hash rate vs. power consumption. blah blah blah blah
|
|
|
DMQUALITY
Newbie
Offline
Activity: 1
Merit: 0
|
 |
April 25, 2017, 02:41:54 PM Last edit: April 25, 2017, 03:16:31 PM by DMQUALITY |
|
Hello, good OC. But could you integrate fan control in the system? Something like this:

#!/bin/sh
# cool_gpu   This script will enable or disable fixed gpu fan speed
#
# chkconfig: 345 95 5
# description: A script hack for GPU fan control on headless GPU nodes
#
# Copyright (c)2011, Axel Kohlmeyer <akohlmey@gmail.com>

# locations of all the magic
dir=/opt/set-gpu-fans
smi=/usr/bin/nvidia-smi
set=/usr/bin/nvidia-settings

# if we have a previous GPU logger, terminate it
nvlpid=`pgrep -P 1 nvidia-smi`
if [ "x${nvlpid}" != "x" ]
then
  kill -TERM ${nvlpid}
fi

# determine major driver version
ver=`awk '/NVIDIA/ {print $8}' /proc/driver/nvidia/version | cut -d . -f 1`

# drivers from 285.x.y on allow persistence mode setting
# so we should not need the logger hack anymore
if [ ${ver} -ge 285 ]
then
  ${smi} -pm 1
else
  # initialize GPU logger printing status once per hour
  # to keep an active handle on the GPUs and we don't
  # "lose" the settings applied in the following section.
  nohup ${smi} -d -l -i 900 < /dev/null &> /dev/null &

  # it always takes some time to get all the GPUs initialized,
  # (about 5 seconds) so we give nvidia-smi a little while until
  # we launch any X servers to avoid unwanted race conditions.
  sleep 10
fi

# for multiple tesla devices, only one display is supported.
# thus we need to launch the X server once for each display
# making each of the PCI IDs the primary device in turn.

# command to set fan speed on primary GPU.
nvscmd="${set} -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUCurrentFanSpeed=85"

# go back to automatic, if called with stop argument
if [ "x$1" == "xstop" ]
then
  # with the newer driver, going back to default
  # is simple. we just turn off persistence mode
  # and trigger one reset by showing the GPU status
  if [ ${ver} -ge 285 ]
  then
    ${smi} -pm 0
    ${smi}
    exit
  else
    nvscmd="${set} -a [gpu:0]/GPUFanControlState=0"
  fi
fi

# get PCI bus ids of Nvidia cards and convert from hexadecimal to decimal.
# watch out for the falling toothpicks.
pciid=`lspci | sed -n -e '/VGA compatib.*nVidia/s/^\(..\):\(..\).\(.\).*/printf "PCI:%d:%d:%d\\\\\\\\n" 0x\1 0x\2 0x\3;/p'`

for s in `eval ${pciid}`
do
  cfg=`mktemp /tmp/xorg-XXXXXXXX.conf`
  sed -e s,@GPU_BUS_ID@,${s}, \
      -e s,@SET_GPU_DIR@,${dir}, \
      ${dir}/xorg.conf >> ${cfg}
  xinit ${nvscmd} -- :0 -once -config ${cfg}
  rm -f ${cfg}
done

# no need to keep the logger around
if [ "x$1" == "xstop" ]
then
  nvlpid=`pgrep -P 1 nvidia-smi`
  if [ "x${nvlpid}" != "x" ]
  then
    kill -TERM ${nvlpid}
  fi
fi
I just found another way, but after this the overclock doesn't work:
sudo nvidia-xconfig
sudo nvidia-xconfig --cool-bits=4
Reboot your system and open NVIDIA X Server Settings from the Unity dash. You will find an extra Thermal Settings entry has been added where you can adjust the GPU fan speed using the slider.
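For a scripted (headless) version of the same thing, a minimal sketch, assuming Coolbits is already enabled and an X server is running on :0; older drivers expose GPUCurrentFanSpeed instead of GPUTargetFanSpeed, and the fan:i mapping assumes one fan per GPU:

#!/bin/bash
# Minimal per-card fan-speed sketch: needs Coolbits in xorg.conf and X running on :0.
SPEED=85                                  # fan speed in percent
NUM_GPUS=$(nvidia-smi -L | wc -l)

for ((i = 0; i < NUM_GPUS; i++)); do
  DISPLAY=:0 nvidia-settings \
    -a "[gpu:${i}]/GPUFanControlState=1" \
    -a "[fan:${i}]/GPUTargetFanSpeed=${SPEED}"
done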
|
|
|
|
machiavellious
Newbie
Offline
Activity: 8
Merit: 0
|
 |
April 28, 2017, 07:21:31 AM |
|
Thank you for this! I was pulling my hair out trying to OC on 16.04.
I'm having trouble getting my ethernet to work, but I'm sure I'll figure it out. 16.04 server also didn't connect via ethernet until I tweaked a few things, but unfortunately the same tweaks aren't working for nvOC, and my usbstick is too painfully slow to want to keep troubleshooting. Wifi works no problem though.
As a token of my appreciation, I'll be mining ZEC with the default oneBash until I get a faster usbstick.
|
|
|
|
zer0k
|
 |
April 28, 2017, 05:18:30 PM |
|
Pretty sure this is something really simple... but I can't get a login to the desktop. I get a prompt with either m1 or Guest session and the password miner1 doesn't work for me at all. It seems to take it but just jumps back to the login screen again.
|
|
|
|
zer0k
|
 |
April 28, 2017, 07:14:31 PM |
|
Doesn't help. I think the issue is with the fact I'm running headless over iKVM on a SuperMicro X9DR7-LN4F that is using the Matrox GPU built into the IPMI chipset.
|
|
|
|
zer0k
|
 |
April 28, 2017, 07:39:06 PM Last edit: April 28, 2017, 07:51:49 PM by zer0k |
|
Doesn't help. I think the issue is with the fact I'm running headless over iKVM on a SuperMicro X9DR7-LN4F that is using the Matrox GPU built into the IPMI chipset.
That's probably it. Might work with a monitor direct to the primary GPU. What cards are you running on this? What program or application layer protocol are you using for remote access? Do you mine XMR or another coin with the CPUs?

I'm running a couple of 1080 Ti cards, but need it to work without a monitor. Remote access is via the built-in IPMI on the Supermicro motherboard: https://www.servethehome.com/supermicro-ipmiview-review-remote-server-monitoring-management-ipmi-20-kvm-over-ip/ I might do some XMR mining as it's a dual E5 Xeon board. The same symptoms occur if I take a vanilla Ubuntu install and then load the nvidia drivers. The login prompt takes the password, but then just reloads to the login again.
|
|
|
|
machiavellious
Newbie
Offline
Activity: 8
Merit: 0
|
 |
April 29, 2017, 04:55:44 AM |
|
What motherboard are you using?
I'm using a Gigabyte 990FXA-UD3 R5. To get ethernet working on 16.04 server I just had to enable IOMMU in the BIOS and set DHCP in /etc/network/interfaces.
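In case it helps anyone, a hypothetical example of that change; the interface name enp3s0 is only a placeholder, so check yours first:

#!/bin/bash
# Hypothetical example: enable DHCP on the onboard NIC in /etc/network/interfaces
# on Ubuntu 16.04 (interface name is a placeholder; find yours with `ip link`).
sudo tee -a /etc/network/interfaces <<'EOF'
auto enp3s0
iface enp3s0 inet dhcp
EOF
sudo systemctl restart networking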
|
|
|
|
philipma1957
Legendary
Offline
Activity: 4522
Merit: 9928
'The right to privacy matters'
|
 |
May 07, 2017, 03:04:31 AM |
|
Very nice setup.
I can see this doing 1080 Ti with a gold EVGA 1600 G2.
It will be more costly, but if you only have room for 1 rig you could push 3840 hash.
My two MSI 1080 Ti do 640 and 642, which is 1282.
I posted in the thread that I think you could do the build with 1080 Ti and push 3800 h.
This would cost more, but if you could only have 1 unit it would be a monster.
|
|
|
|
|