KillrBee
Jr. Member
Offline
Activity: 139
Merit: 3
|
|
January 23, 2022, 03:51:23 PM Last edit: January 23, 2022, 04:03:18 PM by KillrBee |
|
Quote:
I am leaving the house now and can continue testing tomorrow. Thanks for the support!
TeamBlackMiner_1_44_cuda_11_6_beta6.7z
The DAG is taking 10 seconds on the 3070 Mobile in 1.44 beta 6 with --dagintensity 1. I am able to run stable at higher memclocks.

Quote:
My observations after almost a day of mining on a 12x3080 non-LHR rig:
1. If you are using Hive OS, use CUDA 11.4; 11.5 gives a slight drop in hashrate.
2. I tried all of the miner's kernels; the kernel is not set correctly automatically in Hive OS (1.42 picks kernel 1, 1.43 picks kernel 3). You must set the kernel manually: 6 to 8 if you use 1.42, and 12-15 if you use 1.43, on 20xx/30xx series cards.
3. It makes no sense to set the core clock above 1440: it gives no extra performance and only increases card temperature and power consumption. Raising the core clock only helps on older cards like the 1070, 1080, and 1080 Ti with GDDR5 memory. On cards with GDDR6/6X memory the core does not participate in ETH mining, so there is NO point in increasing it! I tested the range from 1200 to 1600; anything above 1440 gives a drop in hashrate.
4. The --xintensity option needs to be tuned individually; for example, 8100 works better for me than 4096.
Bugs that I found: on a rig with 13 video cards (2x3090, 7x3080, 4x3070, all non-LHR) I could not achieve stable operation: constant DAG generation errors and a lot of invalid shares. A 12x3070 non-LHR rig also gives errors during DAG generation, even with --dagintensity 1 always on.
Wishes: you should conduct detailed tests under Hive OS; mining on Windows is extremely inconvenient and unpopular, and big miners don't use Windows. Improve stability and fix the kernel-map detection. I will continue to test on different rigs to find bugs and errors under Hive, as well as to find the optimal settings.

I've extensively tested TBM with an RTX 3090 and can conclusively say that setting the core clock above 1440 does make a difference, especially in a Windows environment. Setting the core clock is the only way we have to set the voltage of the cards in Linux, and 1650 on the GPU sets the voltage to 0.78 V, which is the sweet spot for the 3090 with a high xintensity value. I've tested 1500, 1550, and 1600, and the only way I've been able to obtain 135+ MH/s at-pool hashrate is at 1650. One caveat on those numbers: the testing was done on versions 1.24 to 1.28; after that I stopped because I was happy with the performance, and the later versions focused on LHR, which for some reason had a negative effect on the non-LHR cards until this latest release. Version 1.27 was where I was able to achieve the highest rates on my 3090, and version 1.41 is where the AMD cards started to achieve 66+ MH/s under Windows and 65+ under HiveOS.

An xintensity of 8100 seems much too high, since it will reduce the number of jobs being requested and lower the time your GPU is in compute mode. There's a significant drop-off in at-pool performance when you get over a certain number, despite what the client reports. The client may very well be hashing at a greater speed, but for whatever reason the number of shares generated suffers at higher intensities, which may be due to how work threads are abandoned when new jobs come in per the stratum protocol: long-running jobs are being held when they should have been released for a new, potentially lower-difficulty job. Or that's my best guess.
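The stale-share mechanism I'm guessing at can be pictured with a tiny model (illustrative Python, not TBM's actual code; the class and method names are made up):

```python
# Illustrative sketch: why very long GPU batches can waste work under stratum.
# When a new job arrives (with clean_jobs set, the miner must drop in-flight
# work immediately), any share found for the old job is stale and rejected.
class JobTracker:
    def __init__(self):
        self.current_job = None

    def on_notify(self, job_id, clean_jobs):
        # clean_jobs=True tells the miner to abandon in-flight work at once
        self.current_job = job_id

    def share_is_stale(self, job_id):
        return job_id != self.current_job

tracker = JobTracker()
tracker.on_notify("job-1", clean_jobs=False)
inflight_job = tracker.current_job
# A huge xintensity means one kernel launch runs a long time, so a new job
# can arrive before the batch for job-1 completes...
tracker.on_notify("job-2", clean_jobs=True)
# ...and the share eventually found for job-1 is stale.
print(tracker.share_is_stale(inflight_job))  # True
```

This is only the general stratum behavior; whether TBM actually holds long-running work threads this way is the guess being made above.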
|
|
|
|
ekgd39
Newbie
Offline
Activity: 30
Merit: 0
|
|
January 23, 2022, 03:59:34 PM |
|
Quote from: KillrBee on January 23, 2022, 03:51:23 PM
[...]
https://i.ibb.co/KGmn76x/1-44.png
https://i.ibb.co/7QxhBbb/2022-01-23-151607.png

OK, I think you are using it now? Can you show your uptime in Hive OS?
|
|
|
|
KillrBee
Jr. Member
Offline
Activity: 139
Merit: 3
|
|
January 23, 2022, 04:11:24 PM Last edit: January 23, 2022, 04:22:11 PM by KillrBee |
|
Quote from: ekgd39 on January 23, 2022, 03:59:34 PM
OK, I think you use it now ? Can you show your up time in hive os ?

Here is my current HiveOS status. I have a Windows machine running an AMD 6900XT which I'm using to try to optimize the AMD cards in the Linux rig.
https://i.imgur.com/Y0wuSej.png
I just realized the GPU and memory stats are missing from the display (happens with HiveOS). Here is my HiveOS GUI for the reference values:
https://i.imgur.com/b1bjcil.png
|
|
|
|
ekgd39
Newbie
Offline
Activity: 30
Merit: 0
|
|
January 23, 2022, 04:30:01 PM |
|
Quote from: KillrBee on January 23, 2022, 04:11:24 PM
Here is my current HiveOS status.
https://i.imgur.com/Y0wuSej.png
https://i.imgur.com/b1bjcil.png

Not bad! What CUDA version and drivers do you use, 11.5 or 11.4?
|
|
|
|
KillrBee
Jr. Member
Offline
Activity: 139
Merit: 3
|
|
January 23, 2022, 04:33:59 PM |
|
Quote from: ekgd39 on January 23, 2022, 04:30:01 PM
Not bad , what cuda and drivers do you use ? 11.5 ? Or 11.4 ?

I'm using the 11.4 CUDA build right now, since 11.5 underperforms in Linux. But I'm switching back and forth between drivers 470.94 and 495.46 while staying on the 11.4 CUDA runtime to see if there's any driver benefit.
|
|
|
|
KillrBee
Jr. Member
Offline
Activity: 139
Merit: 3
|
|
January 23, 2022, 04:42:10 PM |
|
And here is my 6900XT on Windows with TBM 1.42, in case you're running any AMD 6000-series cards. (This card usually underperforms against the 6800/XT by about 1-2 MH/s, so I'm seriously contemplating switching back to Windows for my main rig and just using something like TeamViewer as a remote interface, but I like the efficiency I get with Linux...)
|
|
|
|
Alexstabilini
Jr. Member
Offline
Activity: 42
Merit: 2
|
|
January 23, 2022, 05:53:38 PM |
|
Hi sp_! What about the Vega 64? It's the last problematic card with your miner; the others are OK :-) I'm getting 33.5 MH/s on Ethereum vs 50.5 with TRM :-(
Please help me! Alex
|
|
|
|
wasarianta
Newbie
Offline
Activity: 40
Merit: 0
|
|
January 23, 2022, 06:16:01 PM |
|
Quote:
What is your uptime? With the tweak?

Around 8 hours: https://ibb.co/RTzqKwy
Sorry, that's the last screenshot I have; I didn't know I would need it, and I've already switched to GMiner for testing just now. For my LHR card, a core clock of 1485 is better in terms of efficiency than 1440. I can go up to 1600 and add more hashrate, but it also consumes more power and drops efficiency. In my opinion, invalid shares in other miners (not TBM) are related to the memclock being too high, or to needing more core clock (more power) to stabilize the memclock. If they still appear, try the P0 state with the same process: add core clock while starting the memclock 400 below your P2 value, then raise the memclock step by step. If they still appear, you should lower the memclock. If we drop the core clock from 1440 to 1380, hashrate and power will both drop. So yes, the core needs to support the memory.
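"Efficiency" here is just hashrate per watt, which is why a higher core clock can add hashrate and still be a net loss. A quick sketch with made-up numbers (not real readings from this card):

```python
# Efficiency = hashrate / power (MH/s per watt). Illustrative, made-up
# readings for one LHR card at three core clocks, showing how a higher
# clock can add hashrate while lowering MH/s-per-watt if power rises faster.
def efficiency(mh_per_s: float, watts: float) -> float:
    return mh_per_s / watts

readings = {1380: (60.0, 215.0), 1485: (63.5, 222.0), 1600: (66.0, 245.0)}
for core, (mh, w) in sorted(readings.items()):
    print(core, f"{efficiency(mh, w):.3f} MH/W")
```

With these hypothetical numbers the middle clock wins on MH/W even though the highest clock wins on raw hashrate.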
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 23, 2022, 06:40:37 PM Last edit: January 23, 2022, 06:59:48 PM by sp_ |
|
The next build is good. My 3070 LHR can run around 200-300 MHz higher memclock without DAG verification errors.

Quote from: Alexstabilini
Hi sp_! What about vega 64? This is the last problematic card with your miner, the others are ok :-) Getting 33.5 MH/s in ethereum vs 50.5 with trm :-(

The Vega needs a program called memtweak. It's built into HiveOS. I don't know the optimal parameters, because I have never owned a Vega.

Quote:
Bugs that I found, on a rig with 13 video cards 3090x2 3080x7 3070x4 NO LHR, I could not achieve stable operation, constant dag generation errors and a lot of invalid shares. Also, a 12x3070 NO lhr rig gives errors during dag generation, option --dagintensity 1 is always on.

Hopefully v1.44 will solve this bug, but it might not work at high intensities. To make it work at high intensities you need to lower the memclock.
|
|
|
|
Alexstabilini
Jr. Member
Offline
Activity: 42
Merit: 2
|
|
January 23, 2022, 07:32:33 PM Last edit: January 23, 2022, 08:58:16 PM by Alexstabilini |
|
Quote from: sp_
The Vega needs a program called memtweak. It's built into HiveOS. I don't know the optimal parameters, because I have never owned a Vega.

I use amdmemtweak under Windows, TBM 1.43, CUDA 11.4. It works with TRM, GMiner, and lolMiner, but not with TBM! In TRM/GMiner/lolMiner the Vega 64 gives me a similar result: 50.5 MH/s! Thanks, Alex
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 23, 2022, 09:16:46 PM |
|
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 23, 2022, 09:40:48 PM |
|
Quote:
2. I tried all the cores of the miner, the core does not automatically set correctly in Hive OS (1.42 core 1, 1.43 core 3). You must manually set the kernel from 6 to 8 if you use 1.42 and 12-15 if you use 1.43 on 20xx/30xx series cards.

The program will autotune to find the best kernel. In 1.44, kernel 1 or 12 seems to be best on RTX 3xxx cards.
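An autotune like this can be pictured as a benchmark-and-pick loop (a sketch of the general technique, not TBM's actual algorithm; the per-kernel rates below are made up):

```python
# Generic kernel autotuning sketch: run each candidate kernel for a short
# timed window, measure its hashrate, and keep the fastest one.
def autotune(kernels, benchmark):
    best_kernel, best_rate = None, float("-inf")
    for k in kernels:
        rate = benchmark(k)  # short timed run returning MH/s
        if rate > best_rate:
            best_kernel, best_rate = k, rate
    return best_kernel

# Made-up rates standing in for real timed runs on one card:
fake_rates = {1: 99.1, 3: 97.4, 12: 99.8, 15: 96.2}
print(autotune(fake_rates, fake_rates.get))  # picks kernel 12 here
```

The trade-off is that each candidate needs long enough a window for the measurement to be stable, which is why autotune can pick differently from run to run.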
|
|
|
|
Dojo76
Newbie
Offline
Activity: 30
Merit: 0
|
|
January 23, 2022, 11:04:36 PM |
|
I upgraded to 1.44 and now I'm unable to start mining (full AMD rig + Ubuntu):

00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params

1.42 is working; I had to downgrade.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 23, 2022, 11:13:34 PM |
|
Quote from: Dojo76
00:00:02 [2022-01-24 01:00:36.111] eth_submitHashrate: Pool reponded with error
00:00:02 [2022-01-24 01:00:36.111] Error(-32602): Invalid params
Which pool is this?
|
|
|
|
Dojo76
Newbie
Offline
Activity: 30
Merit: 0
|
|
January 23, 2022, 11:31:44 PM |
|
Quote from: sp_
Which pool is this?

Hiveon
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 24, 2022, 01:04:36 AM Last edit: January 24, 2022, 01:15:05 AM by sp_ |
|
Quote from: Dojo76
Hiveon

Do you get any other error messages? Mining should continue even if eth_submitHashrate fails. Is this the CUDA 11.4 or 11.5 build? This error code indicates a missing Ethereum address, so perhaps something is wrong in your setup or wallet address. (Invalid parameters: must provide an Ethereum address. Code -32602)
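For reference, this is the rough shape of the eth_submitHashrate request a miner sends (a sketch; the helper name and message id are illustrative, not TBM's code). The original Ethereum JSON-RPC spec defines both params as 32-byte hex strings, and a pool that can't parse what it receives may answer exactly as in the log, with Error(-32602): Invalid params.

```python
import json

# Sketch of the eth_submitHashrate JSON-RPC request used to report hashrate.
# params[0]: hashrate in hashes/second, zero-padded 32-byte hex string
# params[1]: an arbitrary 32-byte client identifier, 0x-prefixed
def submit_hashrate_payload(hashrate_hs: int, client_id_hex: str) -> str:
    hashrate_hex = "0x" + format(hashrate_hs, "064x")  # pad to 32 bytes
    payload = {
        "id": 6,  # illustrative message id
        "jsonrpc": "2.0",
        "method": "eth_submitHashrate",
        "params": [hashrate_hex, client_id_hex],
    }
    return json.dumps(payload)

# Example: report ~135 MH/s with a dummy all-zero client id
print(submit_hashrate_payload(135_000_000, "0x" + "00" * 32))
```

How strictly pools enforce the padding and param count varies, which is one plausible way a client/pool version mismatch produces -32602 even with a valid wallet.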
|
|
|
|
long2905
Newbie
Offline
Activity: 21
Merit: 0
|
|
January 24, 2022, 06:43:43 AM |
|
There is a new CUDA version, 11.6, released with the latest NVIDIA driver. Have you taken a look at it?
Also, with recent NVIDIA drivers, cards like the 3080 and 3090 can get stuck in the P3 state after a while and only produce about half of the expected hashrate. Can you confirm this and maybe create a workaround for the issue?
|
|
|
|
UniJo
Jr. Member
Offline
Activity: 60
Merit: 2
|
|
January 24, 2022, 07:29:11 AM |
|
Quote from: Alexstabilini
I use amdmemtweak under windows, tbm 1.43 cuda 11.4.. it works with Trm, Gminer and lolminer but not for Tbm! in Trm/gminer/lolminer vega64 gives me similar result..50.5mhs!

I can confirm this. TBM can't handle a Vega card. I tried playing with different timings and settings to make it work, but the best I got was around 70% of the max speed I get with TRM or Phoenix. I've since sold the Vega card, though, so I can't test it anymore.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 24, 2022, 12:28:22 PM Last edit: January 24, 2022, 01:21:09 PM by sp_ |
|
|
|
|
|
ekgd39
Newbie
Offline
Activity: 30
Merit: 0
|
|
January 24, 2022, 12:44:20 PM |
|
Quote from: KillrBee on January 23, 2022, 04:33:59 PM
I'm using the 11.4 CUDA right now since 11.5 in linux underperforms. But I'm switching back and forth between 470.94 and 495.46 while running the 11.4 CUDA runtime to see if there's any driver benefits.

How is your test going? What are the results?
|
|
|
|
|