...
Well, aside from advising you to try whether the same happens with another OS, there is nothing else I can say.
Yeah, I can try any other. I had thought of HiveOS, which is also very well known (although I don't know if it supports TRM). Nor would I mind using a live USB of any Linux distro, although the configuration may be a bit complicated. What I don't want is to use Windows (it gave me many problems with lockups and restarts). You know, any recommendation is welcome.

Uhh... if it locks up, that means a bad OC. If it restarts, that means a not-so-bad OC. I am using Windows on my rig, and uptime is like... 1 month on CN, and before that, 1 month on Lyra?

I guess I've found the correct OC/UV for the "damn" GPU:

FAN: 68 / 68 / 68 / 68 %
CORE: 1257 / 1257 / 1257 / 1237 MHz
MEM: 300 / 300 / 1000 / 1000 MHz
VDDC: 945 / 945 / 0 / 955 mV
VDDCI: 0 / 0 / 0 / 0 mV

Three stable days, without stoppages or restarts.

Dammit, I spoke too soon... GPU3 has failed me again, and this time GPU1 has joined it. Bad OC. It's driving me crazy. @ku4eto, do you also have an RX 580? Can you share your setup? I don't mind going back to Windows if I can find stability.

Windows 10 1703, an i5-3470 I think, 4GB RAM, 850W PSU, 19.2.1 drivers I think.
1x Gigabyte Aorus 570 4GB Elpida
1x PowerColor Red Dragon 580 4GB Samsung
2x MSI Gaming X 580 4GB Elpida
Running TRM 0.4.3. Clocks are 1240MHz core on all 4 GPUs. Voltages are in the 0.875V-0.891V range, using the same IMC voltage (the "Memory" voltage) as the core one. All Elpida cards run CN at 1850MHz memory; the Samsung one is at 2050MHz. Below those, I start to lose hashrate. Before that, I was running the memory at 1950/2100. On Lyra, I set the memory slider below 1750MHz, which forces DPM0 for some reason.

Hi @ku4eto! I have tried another miner (with permission from TRM) and, apart from getting less hashrate, I have not solved anything: GPUs still die after a few hours of use. This weekend I will try a clean Windows 10 to see if it improves. Two questions, friend: 1. I also use two CPU threads (of 4) with xmrig. Can that affect TRM? 2.
In Windows I use OverdriveNTool for OC/UV. Do these values look right to you for lyra2rev3?

[Profile_5]
Name = 1215-910_LUX
GPU_P0 = 300; 750
GPU_P1 = 600; 769
GPU_P2 = 900; 881
GPU_P3 = 1220; 910
GPU_P4 = 1220; 910
GPU_P5 = 1220; 910
GPU_P6 = 1220; 910
GPU_P7 = 1220; 910
Mem_P0 = 300; 750
Mem_P1 = 1000; 800; 0
Mem_P2 = 300; 800
Fan_Min = 800
Fan_Max = 2280
Fan_Target = 75
Fan_Acoustic = 1366
Power_Temp = 80
Power_Target = 0

It's the same profile I used for the PHI2 algo, which doesn't need the memory either.

Yes, Kerney and todd have stated that you may have issues with the miner if you mine with the CPU as well. Also, why use that other tool and not Wattman?
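Since profiles like this live in OverdriveNTool's ini file, they can also be re-applied unattended at rig startup before launching the miner. A sketch only: the `-p<gpu_index>"<profile_name>"` switch syntax is my assumption from memory, so verify it against your OverdriveNTool version's readme.

```shell
# Hypothetical startup step on the Windows rig: re-apply the saved profile
# to all four GPUs before launching the miner, so OC/UV survives reboots.
# Assumed syntax: OverdriveNTool.exe -p<gpu_index>"<profile_name>"
./OverdriveNTool.exe -p0"1215-910_LUX" -p1"1215-910_LUX" \
                     -p2"1215-910_LUX" -p3"1215-910_LUX"
```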
IMO, as it stands, this is probably more of a "reputation" thread than a "scam accusation". You might want to move it to the appropriate board, unless you have some specific evidence of a specific scam (or scams). You've made some claims regarding certain events, but I don't see anything that actually shows any of this. What you appear to have here is some "he said, she said" comments posted on Discord. Unless I'm missing something obvious (and no, I'm not going to go trawling through some random Discord server trying to piece this all together... that's your job as the "accuser").

The whole Mineority thing ended up as a scam. Lots of people bought the BCU1525 FPGAs, which were going to be hosted by Mineority. They weren't hosted; instead, SQRL took on the job. Around 60 people also bought hosted GPUs. Their "contracts" got terminated forcefully. They did not receive their GPUs as per the "contract" (no choice was given, despite this being promised at the beginning). Instead, they got their ETH back, which was worth 80% less at the time of the refund. Can you guess who led that project? OhGodAGirl. Can you guess who also worked there (under OhGodACompany and Mineority)? OhGodAPet. Pet also dropped sensitive customer information into a public Discord server without the clients' consent (email addresses, street addresses, that sort of thing). So yeah, it's not about reputation; it's actually about scams.
Lul, -8 trade rating. Whenever she posts with that acc, people will not take her seriously (as they should). But then again, the bot/alt army is not sleeping.
Unfortunately, people still see her as a "god" in this sphere. Same as Wolf0. While it's actually the exact opposite (at least for her, although Wolf0 is not a pink unicorn either).
I'm on Windows and far from these results! Is it possible to mod timings on Windows too?
The MMPOS Agent is also under testing for Windows, so you can try that as well.
...
50MH/s seems too high; are you running the GPUs at 1350MHz+?
You know you can post your voltage and clocks used, right?
I'm not sure if it's a lot or a little; I have nothing to compare it with. What I can do is share my Minerstat setup. They are 4x Sapphire Pulse RX 580 8GB, miner TRM 0.4.3, BIOS modified w/ Polaris. (Screenshots: GPUs, OC/UV, Summary.) As you can see, GPU core clock at 1257MHz...

Not sure how accurate the VDDC reading is, but 0.945V for ~1250MHz is actually within the OK range. What OS are you using?

I use a live USB that Minerstat calls msOS; basically, a Linux distribution. Surely there are many more options via the frontend or even the backend, but I have configured the minimum to start mining. If you use Minerstat, or if you have any interesting command-line parameters for TRM... any advice is welcome.

Well, aside from advising you to try whether the same happens with another OS, there is nothing else I can say.
Hi guys! I'm running TRM from the Minerstat software. Algo Lyra2REv3, pool NH, 4x RX 580 8GB. I must be doing something wrong in the UV, because after lowering the memory to 300 on all 4 GPUs, one of them ends up dying (always the same one).

[2019-04-03 12:26:27] Pool lyra2rev3.eu.nicehash.com share accepted. (GPU1) (a:7757 r:3) (66 ms)
[2019-04-03 12:26:30] Stats Uptime: 0 days, 17:28:01
[2019-04-03 12:26:30] GPU 0 [51C, fan 65%] lyra2rev3: 50.13Mh/s, avg 50.07Mh/s, pool 46.81Mh/s a:2297 r:1 hw:0
[2019-04-03 12:26:30] GPU 1 [47C, fan 65%] lyra2rev3: 50.12Mh/s, avg 50.06Mh/s, pool 46.26Mh/s a:2260 r:1 hw:0
[2019-04-03 12:26:30] GPU 2 [52C, fan 65%] lyra2rev3: 50.09Mh/s, avg 50.03Mh/s, pool 45.07Mh/s a:2220 r:1 hw:0
[2019-04-03 12:26:30] GPU 3 [27C, fan 65%] lyra2rev3: 0.0 h/s, avg 27.27Mh/s, pool 23.05Mh/s a:982 r:0 hw:0
[2019-04-03 12:27:01] Please use command line argument --watchdog_script to handle dead GPUs.
[2019-04-03 12:27:11] GPU 3: detected DEAD (05:00.0), no restart script configured, will continue mining.
[2019-04-03 12:27:11] Please use command line argument --watchdog_script to handle dead GPUs.

Can you think of something I can look at or try? Anyone?

Hm, I am not sure. How does it crash? Instantly, or after some period of time? If it's after some period, your core voltage is probably too low. Increase it by 12.5mV and try again.

Thanks @ku4eto! It is not immediate, nor do I know exactly when it happens. I look at the miner a couple of times a day, and sometimes I find that GPU3 (always #3) is dead. I already have a few UV/OC tests behind me, and it makes sense that raising the voltage could help. I have to find the key. Does anyone mine this algo with an RX 580? What setup do you have? I have also read in one of the threads that version 0.4.3 gives more problems in that sense (dead GPUs) than 0.4.2. Would I gain anything by going back to an earlier version? Regards!

Hi!
I don't believe you'll see any difference with earlier versions; things haven't really changed at all for the compute algos lately, and the last 4-5 updates have all been about updated CN algos/variants. Otherwise, I agree with ku4eto that you simply need to tweak clocks and voltages here to find a level where this problematic GPU survives over time.

Thanks @kerney666. I keep testing. I use the 0.4.3 miner from Minerstat, and it is not easy to set UV/OC values. I'll also mention that today, for the first time, a different GPU died: this time it was GPU2...

50MH/s seems too high; are you running the GPUs at 1350MHz+? You know you can post your voltages and clocks used, right?
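The `--watchdog_script` flag that the log above keeps suggesting can point at a small recovery script. A minimal sketch, assuming a Linux rig (like the msOS live USB discussed here) and that a plain reboot is acceptable recovery; the log path is made up for illustration.

```shell
#!/bin/sh
# Hypothetical hook for TRM's --watchdog_script: the miner runs this when
# it flags a GPU as DEAD. Log the event, then reboot (the reboot is left
# commented out here so the script is safe to dry-run first).
LOG=/tmp/trm_watchdog.log
echo "$(date '+%Y-%m-%d %H:%M:%S') dead GPU detected" >> "$LOG"
# sudo reboot
```

Pass it to the miner with `--watchdog_script=/full/path/to/this/script`.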
Gotta say, at first I did not like the Elpida 4032BABG, but now... they are pretty cool.
With the strap I am using, I can go as low as 1875MHz and lose only 5h/s per GPU compared to 1950MHz. That is at 1240MHz core.
That core frequency sounds a bit high, though. You forgot to say what hashrate you get, and at what voltages. It's the Graft algo, CN_RW. Hashrate is around 1350h/s. The core is OK; why should it go lower? At these clocks, it's already below 0.9V, around 0.875V-0.893V, depending on the card.
Please, do not call it a Memory Pill, or a pill at all. This has nothing to do with OGAC/OGAG. OGAG mentioned they are working on a Vega Pill, but DO NOT GIVE HER CREDIT for this; she does not deserve it. Also, why would they implement third-party software that currently works only on Linux, since it's not ported?
4085H/s with 7+7 for 4 GPUs (1x 570, 3x 580). Gotta check the power consumption, but the performance seems about the same as on CNv8. A job well done!
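For reference, the "7+7" above refers to TRM's two-thread CN configuration per GPU. A hypothetical launch line for a 4-GPU rig like this one; the pool URL and wallet are placeholders, and the exact `--cn_config` values are rig-specific, so treat this as a sketch rather than known-good settings.

```shell
# Sketch only: two CN threads per GPU at intensity 7 each, one entry per
# GPU in device order (1x RX 570 + 3x RX 580 here). Pool/wallet are fake.
./teamredminer -a cnv8 \
    -o stratum+tcp://pool.example.com:3333 \
    -u YOUR_WALLET -p x \
    --cn_config=7+7,7+7,7+7,7+7
```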
I am observing something weird.
With v0.3.10, I was getting 190.8Mh/s on Vertcoin (Lyra2rev3) for ~435-440W at the wall. With v0.4.0, I am getting 191.7Mh/s for 450-455W at the wall.
The claimed performance increase is 2-3%, but for me it is actually ~1%, while power consumption went up 2-4%. Windows 10, AMD 19.2.1, using the memory P0 state.
1x RX 570, 3x RX 580.
Hi! The changelog says +0.5%, which you're spot on with. Not sure where you saw 2-3%? This thread should also have been updated though, our bad. As for the power draw, that is strange. I can leak that the only thing that has changed is the last kernel, so small changes really. We do have some other changes host-side, but it feels very odd that they would add 15-20W!?! Will do some testing of my own shortly!

Okay, tested again. There is a difference for sure, but not as big as I initially reported: 447W on 3.10 vs 451-452W on 4.1. I guess the initial big difference was because of slightly different ambient temperatures...

Aight, so a +0.47% perf increase for +1% power, hmm. Well, no disaster really; I haven't had the time to dig into it myself though. Like I said, we've added a few small things host-side, and some smaller things gpu-side as well. Will try to run those checks myself shortly.

Well, I thought it was bigger. 1% for 1% shouldn't be something to bother with, if you ask me. It was a mistake on my side with bad measurements.
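The percentages in this exchange follow directly from the reported numbers. A quick sanity check, using the retested wall figure (447W on 3.10) against the midpoint of the 451-452W reading:

```shell
# Percent deltas between v0.3.10 and v0.4.x, from the figures posted above.
awk 'BEGIN {
    h0 = 190.8; h1 = 191.7      # Mh/s on Lyra2rev3
    w0 = 447.0; w1 = 451.5      # W at wall, midpoint of 451-452W retest
    printf "hashrate: %+.2f%%\n", (h1 / h0 - 1) * 100
    printf "power:    %+.2f%%\n", (w1 / w0 - 1) * 100
}'
# prints: hashrate: +0.47%  then  power: +1.01%
```

Which matches the "+0.47% perf for +1% power" summary in the reply.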
I will check again and test as well, to see if it's not some weird behavior. As for the exact figures: yeah, my bad, I did not read that post properly.
I am observing something weird.
With v0.3.10, i was getting on Vertcoin (Lyra2rev3), 190.8Mh/s for ~435-440W at wall. With v0.4.0, i am getting 191.7Mh/s for 450-455W at wall.
The claimed performance increase is 2-3%, but for me, in reality, is ~1%, while getting 2-4% power consumption increase. Windows 10, AMD 19.2.1, using Memory P0 state.
1x RX 570, 3x RX 580.
|
|
|
Does the new version support Graft?
No, I asked Kerney, and so far it is not supported. They MAY look at it, since it's really close to CNv8, but I would guess they will want to get CN-R fixed first.
Awesome! Can you disable or change the watchdog?
What are you looking to do? The watchdog checks a number of things: it shuts a GPU on/off if the temp goes out of range, executes your watchdog script if a GPU dies, and runs some internal checks as well.

Hello, the miner doesn't seem to be executing watchdog.bat when using --watchdog_test; it just notifies. Is this normal behavior? UPDATE: my watchdog.bat script works when executed by itself, but not via the miner's watchdog_test.

Use absolute paths instead of relative ones. Windows may not like relative paths.
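A minimal sketch of that fix on the shell side: resolve the script to an absolute path before handing it to the miner, so the miner's working directory no longer matters. On Windows the equivalent is simply passing the full `C:\...\watchdog.bat` path; `watchdog.sh` here is a hypothetical stand-in.

```shell
# Resolve the watchdog script to an absolute path and print the flag
# you would pass to the miner. readlink -f canonicalizes the relative
# path against the current directory.
touch watchdog.sh                       # stand-in for the real script
WATCHDOG="$(readlink -f ./watchdog.sh)"
echo "--watchdog_script=$WATCHDOG"      # prints an absolute path
```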