vmozara
Member
Offline
Activity: 190
Merit: 59
|
|
June 05, 2018, 10:35:04 PM |
|
Doktor, I also have an issue to report. It goes like this: I have 4 rigs, and all 4 of them went from xmr-stak to SRBMiner 1.5.8, each rig with 6x Vega on driver 18.5.2. All 4 rigs ran super stable with SRBMiner. One of the rigs I restarted for some reason after about 2000 minutes of uptime; the other 3 rigs ran for about 3000-3500 minutes of uptime and then all crashed at approximately the same time. The miner stopped, saying "GPU hashrate 0, restarting miner", but the miner could not restart, even by killing the process or trying to reset the PC; it hung in the reset process. Only a hard reset helped. Why would all miners stop at the same uptime? Anyway, I reset everything, so I can't really say what happened or give any more details, but I will monitor it this time and see if it happens again. Otherwise, I fell in love with your miner.
|
|
|
|
vmozara
Member
Offline
Activity: 190
Merit: 59
|
|
June 05, 2018, 10:41:44 PM |
|
The max I can get with 6 RX Vegas on the Cryptonight V7 algo is about 8900; on Cryptonight Heavy I can get 7900 h/s. Can anyone help me? I should be getting at least 10,900, maybe a bit more, on Cryptonight V7.
That is really bad; with SRBMiner you can get 12000 on V7 almost plug and play:
1. DDU (use the DDU program to remove all your GPU drivers cleanly) and reset the computer
2. Install Adrenalin 18.5.2, click custom install, and install ONLY the driver, none of the other extras. Do not reset the computer after installation
3. Apply the Vega power tables from the Vega mining guide
4. Disable CrossFire and ULPS in the registry on all 6 cards (a sketch for the ULPS part follows below)
5. Reset the computer
6. Start your new favorite miner with the OverdriveNTool settings applied (you can also get them from the Vega mining guide; I use 1408/800 and mem 1100/800, but the driver seems to keep them at 0.87V anyway) and enjoy a 12000 hashrate
If you are not familiar with any of these steps, I recommend you take a break and study them, because you probably can't reliably operate a Vega mining rig without knowing them.
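For step 4, ULPS is commonly disabled by setting the EnableUlps DWORD to 0 under each GPU's numbered subkey of the Windows display adapter class. Below is a minimal Python sketch of that approach, to be run as Administrator; the class GUID is the standard display adapter class, and whether this matches your exact driver layout is an assumption, so verify before running.
Code:
# Minimal sketch (assumption: standard AMD driver registry layout).
# Sets EnableUlps = 0 on every display adapter subkey that already has the value.
import winreg

CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
    index = 0
    while True:
        try:
            sub_name = winreg.EnumKey(class_key, index)
        except OSError:
            break  # no more subkeys
        index += 1
        if not sub_name.isdigit():
            continue  # skip non-GPU entries such as "Properties"
        sub_path = CLASS_KEY + "\\" + sub_name
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, sub_path, 0,
                                winreg.KEY_READ | winreg.KEY_SET_VALUE) as sub_key:
                winreg.QueryValueEx(sub_key, "EnableUlps")  # only touch keys that have it
                winreg.SetValueEx(sub_key, "EnableUlps", 0, winreg.REG_DWORD, 0)
                print("Disabled ULPS in", sub_path)
        except OSError:
            continue  # no EnableUlps value here, or access denied
After a change like this, a reboot (step 5) is still needed for the driver to pick it up.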
|
|
|
|
CryptoPeter
Newbie
Offline
Activity: 19
Merit: 0
|
|
June 05, 2018, 11:20:01 PM |
|
Not familiar with the power tables or CrossFire or ULPS, but thank you for writing a 6-step guide on exactly what needs to be done; it gives me something I can work with. Thank you.
|
|
|
|
hamesh
Newbie
Offline
Activity: 22
Merit: 0
|
|
June 06, 2018, 12:30:20 AM |
|
+ I would like to politely ask everyone, if they appreciate my work, to switch their miners to this new version.
All of my miners have been updated to 1.5.8. Great work - keep it up
|
|
|
|
MaxHa$h
Newbie
Offline
Activity: 37
Merit: 0
|
|
June 06, 2018, 02:28:24 AM |
|
V1.5.8
- Fixed a bug in the pool switching process
- Fixed a bug in the watchdog's "reboot_script"
- Changed the default devfee pool for the Heavy algo
+ I would like to politely ask everyone, if they appreciate my work, to switch their miners to this new version.
The default devfee pool for the Heavy algo was pool.sumokoin.com, which is under the control of the original Sumo team. They decided on their own, without any community vote, to switch their algo back to the now ASIC-friendly classic Cryptonight from block 137500. That will happen in about two days. What does this mean for users of SRBMiner, and for me? On previous versions, if you mine any coin on the 'heavy' algo from Monday on, the devfee will connect to pool.sumokoin.com using the 'heavy' algo, but the pool will be using classic CN, so ALL shares will be rejected, and I won't be getting the 0.85% fee.
Thank you in advance if you are going to support me and switch to this version.
Switched all my rigs; I just have a few more at the office and will change them too. Thanks again 👍
|
|
|
|
henri2018
Newbie
Offline
Activity: 46
Merit: 0
|
|
June 06, 2018, 04:41:03 AM |
|
Hi Dok, I moved to 1.5.8 for the last 2 days, but I got a problem. Please see the screenshot below. It happened on all my 8 rigs randomly, so I'm sorry to say I moved back to 1.5.6, and so far my rigs are stable. Not sure about other users, but maybe you can take a look at it and hopefully solve the issue as well. https://imgur.com/a/ldXrutG
Do you maybe have logging turned on, so I can get a log file? Does this happen only when reloading pools, or when switching pools? edit: I may have found something, we will see in the next version.
Hi dok, sorry for the late reply. Here is a piece of the log regarding the error in the screenshot. The relevant text is: socket_error: PARSE error: Job id, target or blob missing
----------------------------------------------
[2018-06-06 10:20:35] pool_have_job: Pool difficulty: 130884
[2018-06-06 10:20:35] pool_have_job: Pool sent a new job (ID: 4095)
[2018-06-06 10:37:36] json_receive: {"params": {"target": "2f800000", "id": "2447636967", "blob": "0303b7a7ddd80512a56f391f88d87cfb79a835377939f3567b9f0876a9486ef340c1a4d17c5d830000000076448d52b58b4907044205261d3aeb89f305991e295a53df6eefa3950e43fe3b04", "job_id": "4195"}, "jsonrpc": "2.0", "method": "job"}
[2018-06-06 10:37:36] json_receive: {"params": {"target": "9bc42000", "id": "2447636967", "blob": "0303b7a7ddd80512a56f391f88d87cfb79a835377939f3567b9f0876a9486ef340c1a4d17c5d8300000000520ba18539814d3206a82a8c5c4b6f6c52a594c3e938698a6b1f6b6d5639958304", "job_id": "4196"}, "jsonrpc": "2.0", "method": "job"}
[2018-06-06 10:37:36] socket_error: PARSE error: Job id, target or blob missing
[2018-06-06 10:37:36] Connection to pool lost. Reconnecting in 10 seconds.
[2018-06-06 10:38:24] Miner version: 1.5.8
[2018-06-06 10:38:25] AMD Platform ID: 1
[2018-06-06 10:38:25] AMD platform FOUND
[2018-06-06 10:38:25] Found 6 AMD devices
[2018-06-06 10:38:25] GPU0: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 3]
[2018-06-06 10:38:25] GPU1: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 6]
[2018-06-06 10:38:25] GPU2: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 9]
[2018-06-06 10:38:25] GPU3: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 13]
[2018-06-06 10:38:25] GPU4: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 16]
[2018-06-06 10:38:25] GPU5: Radeon RX Vega [gfx900] [8176 MB][Intensity 56.0][W: 8][T: 2][K: 1][BUS: 19]
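For context on the PARSE error above: pools on this stratum-like protocol deliver one JSON message per line, and the two job messages in the log arrive with the same timestamp. One common source of this kind of error in stratum clients is treating a whole receive buffer as a single JSON object instead of splitting it on newlines first; the sketch below shows the usual framing and is purely hypothetical, not an excerpt of SRBMiner's code.
Code:
# Hypothetical sketch of newline-delimited JSON framing for a pool connection.
# If one recv() returns two "job" messages, each line must be parsed separately.
import json
import socket

def read_jobs(sock: socket.socket):
    buffer = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break  # pool closed the connection
        buffer += chunk
        *lines, buffer = buffer.split(b"\n")  # last piece may be an incomplete message
        for line in lines:
            if not line.strip():
                continue
            msg = json.loads(line)
            if msg.get("method") == "job":
                params = msg["params"]
                # the fields the miner complains about in the log
                if all(k in params for k in ("job_id", "target", "blob")):
                    yield params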
|
|
|
|
Kgonla
Newbie
Offline
Activity: 129
Merit: 0
|
|
June 06, 2018, 09:10:02 AM |
|
Hi all. What is "invalid job ID" in the miner?
I have 18 of them now after 8 hours on the bitcedi network... Is it a pool problem? Memory errors are minimal (1-10), so that's good, and the pool is reporting 0 invalid shares.
Yes, it is a pool problem. If you send a result for a job that is already finished, you get this error. Usually it is because the pool is taking too long to send a new job.
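To illustrate the explanation above, this is roughly the kind of check a pool performs before answering with an "invalid job id" style rejection (a hypothetical sketch, not any specific pool's code):
Code:
# Hypothetical pool-side validation: a submitted share must reference a job
# the pool still considers active, otherwise it is rejected as stale.
def validate_share(share_job_id: str, active_job_ids: set) -> str:
    if share_job_id not in active_job_ids:
        return "invalid job id"  # the job was already replaced by a newer one
    return "ok"

# Example: a share found for job "4095" after the pool has moved on to newer jobs
print(validate_share("4095", {"4195", "4196"}))  # -> invalid job id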
|
|
|
|
livada
Newbie
Offline
Activity: 417
Merit: 0
|
|
June 06, 2018, 10:44:57 AM |
|
See my old post - memory Hynix or Samsung, Vega 56 with the 64 BIOS, BC driver, old intensity 56.
P7 = 1442/905, P3 = 1095/905 - for Samsung memory, reference GPU - 1560 H/s on heavy.
P7 = 1465/950, P3 = 965/930 - for Hynix, Nitro+ GPU - 1480 H/s on heavy.
Every card is different - one of my Nitros goes to 1465/965, but the other Nitro only goes to 1410/950 and won't take more power.
@DOCTOR - today it connects properly to the devfee pool server.
|
|
|
|
Kgonla
Newbie
Offline
Activity: 129
Merit: 0
|
|
June 06, 2018, 11:27:07 AM |
|
Does anyone know how to make the parameters that go in the .bat file work with Awesome Miner? I think the problem is that Awesome Miner is not using the start.bat file. I placed the commands everywhere in Awesome Miner, but they don't work. Any trick?
|
|
|
|
doktor83 (OP)
|
|
June 06, 2018, 12:16:15 PM |
|
V1.5.9
- Added "max_difficulty" parameter in pools; if reached, the miner will reconnect to the pool
- Better logging on miner crash
- Kernels are now built in the Cache directory
- Probably fixed a situation where the miner crashes on pool switch
- Fixed .srb file creation on every miner run
- Hopefully reduced nicehash duplicate share errors
- Changed the way devfee pools are used
+ Now you can define a "max_difficulty" for every pool you have in the pools config. Sometimes it happens that a pool sends an abnormally high difficulty and the miner can't find a share for a long time. In this case, by setting "max_difficulty", whenever the pool difficulty is higher than the value you set, the miner disconnects and reconnects to the pool (see the sketch below).
+ There is now a Cache directory where the miner creates cached versions of the OpenCL kernels.
+ Some reported miner crashes after multiple pool disconnects/reconnects, not being able to log in, etc. I hope I found the cause and fixed it.
+ I learned from the previous big mistake of having the devfee pools hardcoded in the miner; now the miner gets the list of devfee pools from http://srbminer.com, so please allow it / don't block it in your firewall.
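For reference, CryptoNote-style pools usually announce difficulty through a compact 4-byte little-endian "target" in each job, and that is the difficulty value a setting like "max_difficulty" would be compared against. Below is a minimal sketch of the usual conversion, using the target/difficulty pair from the log a few posts above; this is the common stratum convention, not a confirmed excerpt of SRBMiner's internals.
Code:
# Common CryptoNote stratum convention: "target" is a 4-byte little-endian hex
# string, and difficulty = 0xFFFFFFFF / target_value.
def target_to_difficulty(target_hex: str) -> int:
    target = int.from_bytes(bytes.fromhex(target_hex), "little")
    return 0xFFFFFFFF // target

print(target_to_difficulty("2f800000"))  # -> 130884, matching "Pool difficulty: 130884"
With that in mind, a max_difficulty of 150000 would let such a job through, while a pool pushing the difficulty far above it would trigger a reconnect.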
|
|
|
|
vmozara
Member
Offline
Activity: 190
Merit: 59
|
|
June 06, 2018, 12:18:52 PM |
|
Any hashrate increase? Like with 1.5.8, when you didn't say anything and my Vegas hashed about 200 more per rig; what a nice surprise.
|
|
|
|
doktor83 (OP)
|
|
June 06, 2018, 12:25:49 PM |
|
lol, no, not this time. I am surprised too, because I only had a 0.2-0.3% hash increase on my 580 8G test card; that's so low I did not want to mention it, but it looks like it's more than 0.2% on Vegas.
|
|
|
|
knittycatkitty
Newbie
Offline
Activity: 19
Merit: 0
|
|
June 06, 2018, 12:49:31 PM |
|
Much appreciated, gotta try this out. I just updated to 1.5.8 recently, but I noticed that after a few days the miner would just suddenly stop; hopefully it's fixed! Will update tonight.
|
|
|
|
doktor83 (OP)
|
|
June 06, 2018, 12:56:12 PM |
|
Much appreciated, gotta try this out. I just updated to 1.5.8 recently, but I noticed that after a few days the miner would just suddenly stop; hopefully it's fixed! Will update tonight.
Do you have these kinds of problems?
|
|
|
|
RuMiner
Member
Offline
Activity: 168
Merit: 15
|
|
June 06, 2018, 01:01:08 PM |
|
doktor83, Claymore's miner used to crash like this sometimes, so I coped with it using the Windows Task Scheduler: there should be an error in the Application log, so I just took that error's code and used it to trigger a miner restart. I'm sure this is not the correct way, but it works )
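As a simpler, process-level variant of that workaround, a small wrapper can just relaunch the miner whenever its process exits. This is only a sketch; the executable name and arguments are assumptions modelled on a typical start.bat, so substitute whatever your own start.bat runs.
Code:
# Hypothetical watchdog: restart the miner every time its process exits.
import subprocess
import time

# Assumption: adjust this to the exact command line from your start.bat
MINER_CMD = ["SRBMiner-CN.exe", "--config", "config.txt", "--pools", "pools.txt"]

while True:
    proc = subprocess.Popen(MINER_CMD)
    proc.wait()  # returns when the miner crashes or exits
    print("Miner exited with code", proc.returncode, "- restarting in 10 seconds")
    time.sleep(10)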
|
|
|
|
hesido
Jr. Member
Offline
Activity: 158
Merit: 5
|
|
June 06, 2018, 01:29:32 PM |
|
Great update, and good thinking on the devfee pool settings, although a hardcoded fallback in the code would still be beneficial. Doktor, how costly is an algo change? I'm working on a simple coin-switching proxy (forked from sebseb7's cryptonote-proxy repository) which works well for same-algo coins. The upside is that there's no hashrate drop between changes, as SRBMiner (or any other miner connected to it) just sees it as a new job. That said, just as GPUs can run several different shaders, how hard would it be to change the algo without a restart? Memory would be reserved for the algo with the largest memory requirement. It would be awesomely dandy if SRB could do this; it would only need to support an extra JSON property to change algos in the job request.
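Purely to illustrate that last idea, such a job message might carry one extra field on top of the usual fields seen in the logs earlier in the thread; the "algo" property and its value are hypothetical, not part of any existing pool protocol or of SRBMiner:
Code:
{"jsonrpc": "2.0", "method": "job", "params": {"job_id": "4197", "target": "2f800000", "blob": "<job blob hex>", "algo": "cryptonight_heavy"}}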
|
|
|
|
doktor83 (OP)
|
|
June 06, 2018, 01:38:36 PM |
|
There are hardcoded fallbacks; in case the site isn't reachable, those are used. I hope they will never be needed. I know about that proxy and I like the idea; it can be very useful for coin switching on the fly. Algo switching could be a problem, but a very easy and elegant way would be just to restart the miner with the new config. It shouldn't take more than 15-20 seconds if the user has cached kernel files. It's interesting that you ask this now, because I am in the phase of planning and working on built-in coin switching (for the same algo) in SRBMiner, based on profitability.
|
|
|
|
lebuawu2
Jr. Member
Offline
Activity: 176
Merit: 2
|
|
June 06, 2018, 02:01:57 PM |
|
Wow, many thanks dok, I will try mining on nicehash with max_difficulty now.
|
|
|
|
doktor83 (OP)
|
|
June 06, 2018, 02:06:39 PM |
|
Wow, many thanks dok, I will try mining on nicehash with max_difficulty now.
OK, and report whether it is working like it should. You can define a max_difficulty for every pool like this:
{"pool" : "lotsofcoinspool1", "wallet" : "", "password" : "x", "max_difficulty" : 150000},
{"pool" : "lotsofcoinspool2", "wallet" : "", "password" : "x", "max_difficulty" : 400000}
|
|
|
|
lebuawu2
Jr. Member
Offline
Activity: 176
Merit: 2
|
|
June 06, 2018, 02:53:31 PM |
|
Yes, I already defined max_difficulty and will monitor it for a while. Just to let you know, I can confirm the hashrate drop after reconnecting to the pool on RX Vega. SRBMiner 1.5.9, AMD driver 18.5.2.
First run: https://imgur.com/a/pVgDVlp
After a reconnect because of a network error: https://imgur.com/a/g4GC6xz
Edit: Reconnecting due to high difficulty works perfectly. https://imgur.com/a/8icbdsL
|
|
|
|
|