Topic: CGMiner on Ubuntu Hangs on Startup
lsmith130 (OP) | Newbie
November 08, 2012, 12:49:14 AM (last edit: November 08, 2012, 01:10:20 AM) | #1

I've been going at this for two days now. When I start CGMiner it clears the screen and says
Code:
[2012-11-07 19:43:35] Started cgminer 2.9.1
and just sits like that. If I run it with the flags that don't launch it fully, like --help, it works fine, except with -n, which causes it to hang too. I've tried everything I can think of, including running as root and uninstalling and reinstalling the drivers and even Ubuntu from scratch. Any help would be very much appreciated.

-- Edit

Oh and my rig has one 5970 and is running Ubuntu 12.10
SAC | Sr. Member
November 08, 2012, 01:44:18 AM | #2

Quote from: lsmith130 (OP) on November 08, 2012, 12:49:14 AM
I've been going at this for two days now. When I start CGMiner it clears the screen and says
Code:
[2012-11-07 19:43:35] Started cgminer 2.9.1
and just sits like that. If I run it with the flags that don't launch it fully, like --help, it works fine, except with -n, which causes it to hang too. I've tried everything I can think of, including running as root and uninstalling and reinstalling the drivers and even Ubuntu from scratch. Any help would be very much appreciated.

-- Edit

Oh and my rig has one 5970 and is running Ubuntu 12.10

Try the command below to make sure the driver is correctly identifying the card(s).

Code:

miner4@miner4:~$ DISPLAY=:0 aticonfig --odgc --adapter=all

Adapter 0 - ATI Radeon HD 5800 Series 
                            Core (MHz)    Memory (MHz)
           Current Clocks :    835           1185
             Current Peak :    835           1185
  Configurable Peak Range : [600-875]     [900-1200]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    780           1185
             Current Peak :    780           1185
  Configurable Peak Range : [550-1000]     [1200-1500]
                 GPU load :    99%

Adapter 2 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    780           1185
             Current Peak :    780           1185
  Configurable Peak Range : [550-1000]     [1200-1500]
                 GPU load :    99%

In the snippet I posted, my 5970 is the second and third adapters in the system. You don't mention whether you're using a config file, so here is the one from that machine with the first card and the pools removed.

Code:

miner4@miner4:~$ cat .cgminer/cgminer.conf
{
"pools" : [
{
"url" : "....",
"user" : "....",
"pass" : "...."
},
{
"url" : ""....",
"user" :"....",
"pass" : "...."
},
{
"url" : "....",
"user" : "....",
"pass" : "...."
}
],

"intensity" : "8,8",
"gpu-engine" : "0-800,0-800",
"gpu-memclock" : "300,300",
"gpu-threads" : "2",
"auto-fan" : true,
"auto-gpu" : true,
"temp-target" : "76",
"gpu-fan" : "0-100",
"log" : "5",
"queue" : "1",
"retry-pause" : "5",
"scan-time" : "60",
"donation" : "0",
"shares" : "0",
"api-listen" : true,
"api-network" : true,
"api-port" : "4028",
"kernel-path" : "/usr/local/bin"
}

You want to change the engine and memclock settings to something your cards support. Also, sometimes when I start it and it hangs like you describe, I need to do rm *.bin in my home directory, where it puts the .bin kernel files it compiles for use with the program; then it starts normally. If you get it running and see no temperatures, quit the program and do export DISPLAY=:0 in the terminal window before starting it again.
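
Putting those two recovery steps together, a rough sketch (this assumes cgminer wrote its compiled kernel .bin files into your home directory, which is where it drops them on my setup):
Code:
# remove the cached OpenCL kernel binaries so cgminer rebuilds them
cd ~
rm -f *.bin

# make sure the display is set, then start cgminer again
export DISPLAY=:0
cgminer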

lsmith130 (OP) | Newbie
November 08, 2012, 02:01:43 AM | #3

Ahhh it gave me an error telling me I forgot to rerun
Code:
sudo aticonfig -f --initial --adapter=all
when I reinstalled. I've got a different error now but I think I can handle it from here. Thanks so much!
SAC | Sr. Member
November 08, 2012, 02:18:07 AM | #4

Quote from: lsmith130 (OP) on November 08, 2012, 02:01:43 AM
Ahhh it gave me an error telling me I forgot to rerun
Code:
sudo aticonfig -f --initial --adapter=all
when I reinstalled. I've got a different error now but I think I can handle it from here. Thanks so much!

You're welcome. Post again if you need any more help getting it going.
svirus | Member
November 08, 2012, 09:53:57 AM | #5

I sometimes get the same problem on a pure Debian OS: a 5850 with the latest SDK, ADL, and Catalyst.

Restarting X windows helps. Try:
Code:
sudo killall X

This will restart your graphics environment.
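
Note that killing X will also take down anything else using that display. As a gentler alternative on Ubuntu (assuming the stock lightdm display manager that 12.10 ships with), restarting the display manager service has the same effect:
Code:
sudo service lightdm restart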

lsmith130 (OP) | Newbie
November 08, 2012, 02:00:55 PM | #6

OK, I've almost got everything working. My last problem is that both GPUs are detected and it says they are both hashing at about half the normal rate, but the activity on the second GPU is 0% and its clock is running at idle speeds. If I set the intensity on the second GPU to 0, then it reports that GPU 0 is hashing at the normal rate and GPU 1 is barely hashing. I tried using the --gpu-map option to swap the GPUs; the hardware reports are switched appropriately, but the (already incorrect) hash rate reports are unchanged. If I could at least choose which GPU to run, I could run two instances of CGMiner, one for each GPU.
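
For reference, cgminer does have a -d/--device option to restrict an instance to particular devices, so the two-instance approach is possible in principle. A rough sketch, assuming the device numbers match the adapter order above (flag support may vary by version):
Code:
# one instance per GPU core of the 5970, each in its own screen session
screen -S gpu0 cgminer -d 0
screen -S gpu1 cgminer -d 1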
SAC | Sr. Member
November 08, 2012, 08:33:48 PM | #7

Quote from: lsmith130 (OP) on November 08, 2012, 02:00:55 PM
OK, I've almost got everything working. My last problem is that both GPUs are detected and it says they are both hashing at about half the normal rate, but the activity on the second GPU is 0% and its clock is running at idle speeds. If I set the intensity on the second GPU to 0, then it reports that GPU 0 is hashing at the normal rate and GPU 1 is barely hashing. I tried using the --gpu-map option to swap the GPUs; the hardware reports are switched appropriately, but the (already incorrect) hash rate reports are unchanged. If I could at least choose which GPU to run, I could run two instances of CGMiner, one for each GPU.

There should be no need for a second cgminer instance. Can you post the --odgc output while it is running, the cgminer.conf you are using, and the command you start it with? On litecoin mining, most of the time it will not set the clocks properly for me, or even when it shows the clock set properly it seems to get stuck at a lower-than-expected hash rate; that is when I need to set the clock manually, which seems to jump-start it to the proper hash rate (sometimes this has to be done a couple of times in a row for it to work). That may be worth trying here. An example of the clock-setting command is below: the engine speed comes first, followed by the memory speed, then the adapter (card) you want to set it on.

Code:

DISPLAY=:0 aticonfig --od-setclocks=770,1050 --adapter=0
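
To confirm the new clocks actually took effect, re-query right afterwards; a sketch, repeating the set command for whichever adapters need it:
Code:
DISPLAY=:0 aticonfig --od-setclocks=770,1050 --adapter=1
DISPLAY=:0 aticonfig --odgc --adapter=all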
Scopedude | Newbie
November 08, 2012, 08:50:14 PM | #8

Type
Code:
export DISPLAY=:0
before you start cgminer.
lsmith130 (OP) | Newbie
November 08, 2012, 09:52:35 PM | #9

Unfortunately I've already tried using
Code:
export DISPLAY=:0
to no avail.

I start it using
Code:
screen cgminer
my config file:
Code:
{
"pools" : [
        {
                "url" : "http://mint.bitminter.com",
                "user" : "lsmith130.worker1",
                "pass" : "pass"
        }
]
,
"intensity" : "14,0",
"vectors" : "2,2",
"worksize" : "256,128",
"kernel" : "phatk,phatk",
"lookup-gap" : "0,0",
"thread-concurrency" : "0,0",
"shaders" : "0,0",
"gpu-engine" : "0-0,0-0",
"gpu-fan" : "0-100,0-100",
"gpu-memclock" : "150,150",
"gpu-memdiff" : "0,0",
"gpu-powertune" : "0,0",
"gpu-vddc" : "0.000,0.000",
"temp-cutoff" : "95,95",
"temp-overheat" : "85,85",
"temp-target" : "75,75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "2",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}
and --odgc output:
Code:
Adapter 0 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    725           150
             Current Peak :    725           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    157           150
             Current Peak :    725           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    0%

Here is the displayed hash rate in cgminer
Code:
GPU 0:  51.0C 4551RPM | 166.3M/172.1Mh/s | A:15 R:0 HW:0 U:2.00/m I:14
 GPU 1:  34.0C 4545RPM | 199.2M/148.2Mh/s | A:14 R:0 HW:0 U:1.87/m I:14

These numbers always add to the expected hash rate for GPU 0 only.

After using aticonfig to overclock, --odgc reports:
Code:
Adapter 0 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    750           150
             Current Peak :    750           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    157           150
             Current Peak :    770           1050
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    0%

I'm completely stuck. Somehow cgminer is convinced that two of the threads are running on GPU 1 when really all four are on GPU 0.
SAC | Sr. Member
November 08, 2012, 10:41:25 PM | #10

Quote from: lsmith130 (OP) on November 08, 2012, 09:52:35 PM
Unfortunately I've already tried using
Code:
export DISPLAY=:0
to no avail.

I start it using
Code:
screen cgminer
my config file:
Code:
{
"pools" : [
        {
                "url" : "http://mint.bitminter.com",
                "user" : "lsmith130.worker1",
                "pass" : "pass"
        }
]
,
"intensity" : "14,0",
"vectors" : "2,2",
"worksize" : "256,128",
"kernel" : "phatk,phatk",
"lookup-gap" : "0,0",
"thread-concurrency" : "0,0",
"shaders" : "0,0",
"gpu-engine" : "0-0,0-0",
"gpu-fan" : "0-100,0-100",
"gpu-memclock" : "150,150",
"gpu-memdiff" : "0,0",
"gpu-powertune" : "0,0",
"gpu-vddc" : "0.000,0.000",
"temp-cutoff" : "95,95",
"temp-overheat" : "85,85",
"temp-target" : "75,75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "2",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}
and --odgc output:
Code:
Adapter 0 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    725           150
             Current Peak :    725           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    157           150
             Current Peak :    725           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    0%

Here is the displayed hash rate in cgminer
Code:
GPU 0:  51.0C 4551RPM | 166.3M/172.1Mh/s | A:15 R:0 HW:0 U:2.00/m I:14
 GPU 1:  34.0C 4545RPM | 199.2M/148.2Mh/s | A:14 R:0 HW:0 U:1.87/m I:14

These numbers always add to the expected hash rate for GPU 0 only.

After using aticonfig to overclock, --odgc reports:
Code:
Adapter 0 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    750           150
             Current Peak :    750           150
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    157           150
             Current Peak :    770           1050
  Configurable Peak Range : [550-1000]     [150-1500]
                 GPU load :    0%

I'm completely stuck. Somehow cgminer is convinced that two of the threads are running on GPU 1 when really all four are on GPU 0.

No, your config file tells it not to use the second GPU with this: "intensity" : "14,0". An intensity of 0 on the second GPU means off/don't use it. Also, as I understand it, nothing over 9 is meant for BTC mining, so that should be at most 9; I never found that anything above 8 gave me more hashes on BTC for my cards. You also have a lot of extra junk in that file, so try the one I posted earlier with your pool in it; it should just work, giving you roughly 365 MH/s per core. Oh, and your export command did work, since you can see the temperatures listed; without it you would not see any.
lsmith130 (OP) | Newbie
November 08, 2012, 11:17:18 PM | #11

Oh, sorry, I have intensity turned off on GPU 1 in the config so I can get an accurate reading of GPU 0's hash rate. The hash rate values I posted show I:14 for both GPUs.
SAC | Sr. Member
November 08, 2012, 11:30:27 PM | #12

Quote from: lsmith130 (OP) on November 08, 2012, 11:17:18 PM
Oh, sorry, I have intensity turned off on GPU 1 in the config so I can get an accurate reading of GPU 0's hash rate. The hash rate values I posted show I:14 for both GPUs.

Yes, I had seen that as well, but it still doesn't change the point that it is way too high for BTC mining; anything above 9 was added for LTC mining. Try the simplified .conf file, it should just work.
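
For convenience, here is a rough sketch of that simplified conf with your Bitminter pool dropped in; the engine and memclock values are just the ones from my earlier post, so adjust them to whatever your 5970 actually supports:
Code:
{
"pools" : [
{
"url" : "http://mint.bitminter.com",
"user" : "lsmith130.worker1",
"pass" : "pass"
}
],

"intensity" : "8,8",
"gpu-engine" : "0-800,0-800",
"gpu-memclock" : "300,300",
"gpu-threads" : "2",
"auto-fan" : true,
"auto-gpu" : true,
"temp-target" : "76",
"gpu-fan" : "0-100",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"api-listen" : true,
"api-port" : "4028",
"kernel-path" : "/usr/local/bin"
}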
lsmith130 (OP) | Newbie
November 08, 2012, 11:38:55 PM | #13

Yeah, I'm trying to find the peak under 9 right now. Thanks for the tip and the config file; I was just using the auto-generated one.

And yes, the export command did work for that; however, I still can only get GPU 0 to mine. :(
SAC | Sr. Member
November 09, 2012, 12:05:09 AM | #14

Quote from: lsmith130 (OP) on November 08, 2012, 11:38:55 PM
Yeah, I'm trying to find the peak under 9 right now. Thanks for the tip and the config file; I was just using the auto-generated one.

And yes, the export command did work for that; however, I still can only get GPU 0 to mine. :(

You can put (I think it is) -D -T on the end of your startup command to get the debug output; that may be worth a try.
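
A rough sketch of how that might look, run in the foreground so the output is easy to capture (the log filename is just an example, and this assumes your config is still picked up from the default ~/.cgminer/cgminer.conf location):
Code:
export DISPLAY=:0
cgminer -D -T 2>&1 | tee cgminer-debug.log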

Edit: Also, the official support thread is linked below; you may want to post the debug info there, since I don't really have a clue how to read it.

https://bitcointalk.org/index.php?topic=28402.0
Scopedude | Newbie
November 09, 2012, 12:41:34 AM | #15

When you start cgminer, do you pass it the config file? I start it like this:
Code:
./cgminer -c config.cfg
Cletus | Newbie
May 20, 2013, 02:18:48 AM (last edit: May 20, 2013, 02:55:43 AM) | #16

I am having the same issue as the original poster. I have no .bin files in my home directory, and I start cgminer with this script:

Code:
#!/bin/sh
export DISPLAY=:0
export GPU_USE_SYNC_OBJECTS=1
cd /home/worker1/cgminer
./cgminer -o http://pool:port -u username -p password --api-listen --api-network -I 5 --gpu-reorder --auto-fan --gpu-powertune 20

I have tried loading cgminer without the configuration script above and it still crashes right after I enter my password. I had cgminer running right after I installed it, but since rebooting I can't get it up and running.

Any thoughts would be appreciated.

P.S. Here is the point where my computer crashes when connecting to the Slush (stratum.bitcoin.cz:3333) mining pool:
Code:
Popping ping in miner thread
Popping work from get queue to get work
Viceroy | Hero Member
May 20, 2013, 02:21:46 AM | #17

My suggestion: dump Ubuntu and get a real OS. Follow the CentOS guide in my sig.