Scared
Member
Offline
Activity: 70
Merit: 10
|
|
March 16, 2012, 08:09:50 PM |
|
Found the bug that caused cgminer to fail under BAMT. If you find that cgminer is not launching correctly, edit /opt/bamt/common.pl: find the sub startCGMiner function and change the line $ENV{DISPLAY} = ":0.0"; to $ENV{DISPLAY} = ":0";
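For anyone scripting the same edit, here is a minimal sketch of the substitution. It is demonstrated on a temp file so it can be tried safely; the real target is /opt/bamt/common.pl, so back that file up before touching it.

```shell
# Demonstrate the one-line fix from sub startCGMiner on a scratch copy.
# The real target is /opt/bamt/common.pl -- back it up before editing.
f=$(mktemp)
printf '%s\n' '$ENV{DISPLAY} = ":0.0";' > "$f"
sed -i 's/\$ENV{DISPLAY} = ":0\.0";/$ENV{DISPLAY} = ":0";/' "$f"
cat "$f"   # -> $ENV{DISPLAY} = ":0";
rm -f "$f"
```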
|
|
|
|
lodcrappo (OP)
|
|
March 16, 2012, 08:36:54 PM |
|
Found the bug that caused cgminer to fail under BAMT. If you find that cgminer is not launching correctly, edit /opt/bamt/common.pl: find the sub startCGMiner function and change the line $ENV{DISPLAY} = ":0.0"; to $ENV{DISPLAY} = ":0";
What version/situation/etc does this apply to?
|
|
|
|
Scared
Member
Offline
Activity: 70
Merit: 10
|
|
March 16, 2012, 09:07:28 PM |
|
Installed BAMT 0.5c, then installed amd-driver-installer-12-2-x86.x86_64.run --force.
bamt.conf -> cgminer: 1, cgminer_opts: --api-listen -I 10 -u [whatever] -p [moreso]
gpu0-3: disabled: 0, cgminer: 1
The machine has 4 x 7970.
I changed your Perl script because it called with DISPLAY=:0.0 when it should be calling DISPLAY=:0.
|
|
|
|
lodcrappo (OP)
|
|
March 16, 2012, 09:10:30 PM |
|
Installed BAMT 0.5c, then installed amd-driver-installer-12-2-x86.x86_64.run --force.
bamt.conf -> cgminer: 1, cgminer_opts: --api-listen -I 10 -u [whatever] -p [moreso]
gpu0-3: disabled: 0, cgminer: 1
The machine has 4 x 7970.
I changed your Perl script because it called with DISPLAY=:0.0 when it should be calling DISPLAY=:0.
My point is that it works correctly with :0.0 for most people. I am trying to ascertain in what situation it does not. Is this something that applies to 7970s only, perhaps because of the different driver you are using in that case? And probably more importantly, if I change to :0, will it break things for everyone else? Always tricky.
|
|
|
|
|
stoppots
|
|
March 16, 2012, 11:49:41 PM |
|
I think I read somewhere that installing OpenCL after the 12.2 driver causes problems. I messed with it yesterday and I don't recall seeing the option to uncheck the APP SDK that is normally present.
|
|
|
|
lodcrappo (OP)
|
|
March 17, 2012, 01:29:18 AM |
|
Trying to research this... I'm not finding many references or examples of leaving off the screen number (using only :display instead of the full specifier :display.screen), or reasons for doing this. What made you think it should be done, and is there an explanation of why you would export only the partial setting? Mostly I am worried that this may cause problems with other cgminer versions or driver versions, or whatever it is that wants the strange setting in the first place. Any pointer to how you came up with this is appreciated. If I can be sure it's not going to be more trouble, I'll just change the script in a fix/next image.
As for 12.2, I haven't tried it (and don't have enough variations of hardware here to make a good judgment anyway). As of 12.1, it was often said that any driver past 11.6 caused significant slowdowns in mining performance, so we have stuck with 11.6 so far. Same with SDK 2.4: although there are a couple of cards that get better performance with 2.6, most do worse, so we stuck with 2.4. Like I said, I can't judge this myself, so I rely on what's said here on the forums and on feedback from people running large farms for advice on what versions to use in the stock image.
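For background on the specifier question (my addition, not from the thread): an X display name has the form host:display.screen, and the .screen part defaults to 0 when omitted, so :0 and :0.0 usually resolve to the same screen. A sketch of splitting the specifier the way client libraries do:

```shell
# Split a DISPLAY value into display and screen components; a missing
# ".screen" part defaults to screen 0, which is why ":0" and ":0.0"
# usually name the same place.
parse_display() {
    d=$1
    disp=${d%%.*}            # everything before the first "."
    scr=${d#*.}              # everything after the first "."
    [ "$scr" = "$d" ] && scr=0   # no ".screen" given -> default 0
    echo "$disp $scr"
}
parse_display ':0.0'   # -> :0 0
parse_display ':0'     # -> :0 0
```

Whether a particular tool treats the two forms identically is up to that tool's parser, which may be exactly where the 7970 setups diverge.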
|
|
|
|
boozer
|
|
March 17, 2012, 03:38:53 AM |
|
Trying to research this.. I'm not finding many references or examples of leaving off the screen number (using only :display instead of the full specifier :display.screen) or reasons for doing this. What made you think it should be done, and is there an explanation of why you would export only the partial setting? Mostly I am worried that this may cause problems with other cgminer versions or driver versions or whatever it is that wants the strange setting in the first place. Any pointer to how you came up with this is appreciated. If I can be sure it's not going to be more trouble, I'll just change the script in a fix/next image.
It's in the cgminer README... On Linux you virtually always need to export your display settings before starting to get all the cards recognised and/or temperature+clocking working:
export DISPLAY=:0
I didn't dig into your scripts to find out why it wasn't working on my 7970s, but I did create my own script to export that display variable to get it working on bamt for myself.
|
|
|
|
lodcrappo (OP)
|
|
March 17, 2012, 04:24:44 AM |
|
Trying to research this... I'm not finding many references or examples of leaving off the screen number (using only :display instead of the full specifier :display.screen), or reasons for doing this. What made you think it should be done, and is there an explanation of why you would export only the partial setting? Mostly I am worried that this may cause problems with other cgminer versions or driver versions, or whatever it is that wants the strange setting in the first place. Any pointer to how you came up with this is appreciated. If I can be sure it's not going to be more trouble, I'll just change the script in a fix/next image.
It's in the cgminer README... On Linux you virtually always need to export your display settings before starting to get all the cards recognised and/or temperature+clocking working:
export DISPLAY=:0
I didn't dig into your scripts to find out why it wasn't working on my 7970s, but I did create my own script to export that display variable to get it working on bamt for myself.
Ok, if the cgminer docs call for it, then that's good enough for me. I will update the script to give cgminer :0. I still don't see why cgminer needs a strange display export; I've been using X for years and always used :display.screen, not just :display, not to mention all the other tools (atitweak, phoenix, etc.) use the :0.0 form. And I don't see why this is only affecting 7970 people if the cgminer docs call for it on all versions, but I'm not much of an X person, so maybe it's normal or has a sensible explanation. Seems like everything that comes up in bamt lately never gets explained.
|
|
|
|
boozer
|
|
March 17, 2012, 04:40:18 AM |
|
..... seems like everything that comes up in bamt lately never gets explained
Lol, sometimes it just goes that way...
|
|
|
|
Splirow
|
|
March 17, 2012, 05:41:14 PM |
|
Does anyone know where the mgpumon.css file is? I can't find it.
I want to try something.
|
|
|
|
Red Emerald
|
|
March 17, 2012, 06:23:34 PM |
|
My rig was crashing when I changed some fan speeds. I put them back and it started working again just fine. So weird.
Best advice IMO: When you get it working, stop fuxing with it.
|
|
|
|
Isokivi
|
|
March 17, 2012, 08:21:19 PM |
|
This is probably more of a feature than a bug/problem, but I'm clueless as to where to start looking for a solution, so here goes (in before "RTFM" and "read the thread": done, multiple times). I'm running a rig with 3 GPUs, and GPU 0 (a 5830) has a monitor attached. Whenever I turn the monitor off, my hashrate drops about 100 Mh/s. The motherboard is an Asus P5Q-E, and bamt is at its most recent version (up to date as of 5 hours ago, at least). Any other info I should supply?
And I do know I shouldn't even have a monitor attached to the rig, but as it is very much a work in progress, I find it easier this way.
I am unsure what happens when the monitor powers itself down after X minutes; I am currently waiting for this to happen.
Any and all hints would be greatly appreciated.
|
Bitcoin trinkets now on my online store: btc trinkets.com <- Bitcoin Tiepins, cufflinks, lapel pins, keychains, card holders and challenge coins.
|
|
|
lodcrappo (OP)
|
|
March 17, 2012, 08:25:22 PM |
|
This is probably more of a feature than a bug/problem, but I'm clueless as to where to start looking for a solution, so here goes (in before "RTFM" and "read the thread": done, multiple times). I'm running a rig with 3 GPUs, and GPU 0 (a 5830) has a monitor attached. Whenever I turn the monitor off, my hashrate drops about 100 Mh/s. The motherboard is an Asus P5Q-E, and bamt is at its most recent version (up to date as of 5 hours ago, at least). Any other info I should supply?
And I do know I shouldn't even have a monitor attached to the rig, but as it is very much a work in progress, I find it easier this way.
I am unsure what happens when the monitor powers itself down after X minutes; I am currently waiting for this to happen.
Any and all hints would be greatly appreciated.
First time I've ever heard of monitor on/off affecting hash rate. Try swapping GPUs around so a different one is GPU 0; maybe it will go away.
|
|
|
|
malevolent
can into space
Legendary
Offline
Activity: 3472
Merit: 1724
|
|
March 17, 2012, 08:34:27 PM Last edit: March 17, 2012, 09:17:24 PM by malevolent |
|
I had a monitor hooked up to a rig for some time because I was testing the lowest-energy-consuming BIOS settings (no way to do that remotely, I think, at least not cheaply). I didn't get any problems after turning it off, but I did once I unplugged it (bamt crashed, but it was more driver-related as far as I remember). I remember another user here having the same problem, but the solution is to unplug the monitor after saving the BIOS settings (before booting bamt). But this is really not much of a problem. Once you have everything set up, boot the rig without the monitor and check through web gpumon or ssh that everything's correct. Does it happen for you with all GPUs, regardless of the slot used?
|
Signature space available for rent.
|
|
|
Isokivi
|
|
March 17, 2012, 08:44:07 PM |
|
I had a monitor hooked up to a rig for some time because I was testing the lowest-energy-consuming BIOS settings (no way to do that remotely, I think, at least not cheaply). I didn't get any problems after turning it off, but I did once I unplugged it (bamt crashed, but it was more driver-related as far as I remember). I remember another user here having the same problem, but the solution is to unplug the monitor after saving the BIOS settings (before booting bamt). But this is really not much of a problem. Once you have everything set up, boot the rig without the monitor and check through web gpumon or ssh that everything's correct. Does it happen for you with all GPUs, regardless of the slot used?
No, just GPU 0. I'm going to try lodcrappo's advice once I'm done swapping out the stock cooling on the next GPU I'm plugging in; as it's a lowly 5770, I figure having the lowest-hashing card as GPU 0 makes sense... guess that'll be Tuesday because of snail-mail latency.
|
Bitcoin trinkets now on my online store: btc trinkets.com <- Bitcoin Tiepins, cufflinks, lapel pins, keychains, card holders and challenge coins.
|
|
|
malevolent
can into space
Legendary
Offline
Activity: 3472
Merit: 1724
|
|
March 18, 2012, 12:35:53 AM Last edit: March 18, 2012, 11:23:08 AM by malevolent |
|
First time I've ever heard of monitor on/off affecting hash rate. Try swapping GPUs around so a different one is GPU 0; maybe it will go away.
Just had time to check bamt with FAT16; it boots and works flawlessly now. Will toss you a coin first thing in the morning. EDIT: Tossed.
|
Signature space available for rent.
|
|
|
xenon5
Newbie
Offline
Activity: 9
Merit: 0
|
|
March 18, 2012, 09:22:09 PM |
|
Thanks for this, I was looking for a good alternative to linuxcoin since it hasn't been maintained. I have a problem with overclocking, however. If I use the bamt config file to overclock, the system locks up and I have to force it to shut down, then boot bamt on a computer without ATI cards to fix the config file. However, if I use aticonfig to overclock the cards manually, everything works fine. At the moment I have 2 x 5770. When overclocking worked, I used these commands:
aticonfig --od-enable
aticonfig --odsc=960,1200 --adapter=all
When I used the bamt config file and it locked up, I used these settings:
card_0: ... core_0: 960 core_1: 960 core_2: 960 mem_0: 1200 mem_1: 1200 mem_2: 1200 ...
card_1: ... core_0: 960 core_1: 960 core_2: 960 mem_0: 1200 mem_1: 1200 mem_2: 1200 ...
I don't remember exactly what all the parameters are called, but you get the point. What is the difference between how aticonfig sets up overclocking and how bamt sets up overclocking that causes this problem?
|
|
|
|
lodcrappo (OP)
|
|
March 18, 2012, 10:49:39 PM |
|
Thanks for this, I was looking for a good alternative to linuxcoin since it hasn't been maintained. I have a problem with overclocking, however. If I use the bamt config file to overclock, the system locks up and I have to force it to shut down, then boot bamt on a computer without ATI cards to fix the config file. However, if I use aticonfig to overclock the cards manually, everything works fine. At the moment I have 2 x 5770. When overclocking worked, I used these commands:
aticonfig --od-enable
aticonfig --odsc=960,1200 --adapter=all
When I used the bamt config file and it locked up, I used these settings:
card_0: ... core_0: 960 core_1: 960 core_2: 960 mem_0: 1200 mem_1: 1200 mem_2: 1200 ...
card_1: ... core_0: 960 core_1: 960 core_2: 960 mem_0: 1200 mem_1: 1200 mem_2: 1200 ...
I don't remember exactly what all the parameters are called, but you get the point. What is the difference between how aticonfig sets up overclocking and how bamt sets up overclocking that causes this problem?
Why are you changing profiles 0 and 1?? Please review overclocking basics and how the performance profiles work on ATI cards. aticonfig changes clocks only in profile 2 by default; so should you.
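To make that concrete (a sketch only; the field names are approximated from the post above, not taken from bamt documentation): set clocks only in profile 2, the high-performance level that aticonfig --odsc touches by default, and leave profiles 0 and 1 at their stock values.

```yaml
# bamt.conf sketch -- overclock only performance profile 2, mirroring
# what `aticonfig --odsc=960,1200` does; profiles 0 and 1 keep their
# stock (idle/intermediate) clocks so the card can still downclock.
card_0:
  core_2: 960
  mem_2: 1200
card_1:
  core_2: 960
  mem_2: 1200
```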
|
|
|
|
|
|