Bitcoin Forum
Author Topic: Claymore's Dual Ethereum AMD+NVIDIA GPU Miner v15.0 (Windows/Linux)  (Read 6590714 times)
Teress
Full Member
***
Offline Offline

Activity: 224
Merit: 102


View Profile
June 13, 2017, 07:15:44 PM
 #11801

Hi everybody:

Like many others posting here, I'll start by saying that I'm new to this forum and new to mining. I started mining on Friday; so far so good, and I have learned a lot.

Right now I am mining with only 3 ASUS STRIX RX580 8GB OC cards. I reduced the GPU clock by 5% and increased the memory clock to 2200 MHz, which took me from 24.5 MH/s to 27.4 MH/s, all in AMD's Wattman. I had problems with the ASUS Aura light effects; they seemed to make the hashrate fluctuate, so Aura is now off and uninstalled.

I have read about some widely known power adjustments. I understand the idea is to reduce voltage (and so power consumption) and increase the memory clock (and so MH/s). I have also read about doing this with AMD's Wattman or MSI's Afterburner, and about BIOS flashing.

Knowing that many have been able to get 29 MH/s at 135 W from these cards, my questions so far are:

- Is there a need to flash a new BIOS to get the best performance? If so, how many MH/s could I gain, or how much power reduction?
- If I can get similar improvements with AMD's Wattman, which parameters should I use?

Thank you very much for your support.

Carlos


If you're still using Windows (which is about as stable as a house of cards during a hurricane), then you can easily undervolt using MSI AB or whatever. Also, only 27.4 MH/s? Those can't be Samsung, can they?

How do you check for memory errors under Linux? I have asked this question a few times with no answer :(

Quite serious ones will appear in the kernel log - so you'd just check dmesg. I'm pretty sure there's a register with the count somewhere in the space, if I directly access the GPU (not bothering with the driver), but there's already SO MUCH awesome shit I can do with this access that I have yet to implement (and I've implemented plenty!) so it's not that high on my TODO list...
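To make the "just check dmesg" suggestion concrete, here is a minimal sketch of the kind of filter you'd run. The sample log lines below are hypothetical stand-ins, not real driver output; in practice you would pipe `dmesg` itself and adjust the pattern to whatever your amdgpu/radeon driver actually prints.

```shell
#!/bin/sh
# Hypothetical kernel-log sample (stand-in lines, not real driver output).
log='[  12.3] amdgpu 0000:01:00.0: GPU fault detected
[  12.4] amdgpu: ring gfx timeout
[  13.0] usb 1-1: new full-speed USB device'

# Keep only GPU-driver lines, then count the ones that look like errors.
# Real usage would be: dmesg | grep -i "amdgpu\|radeon" | grep -ic "fault\|timeout\|error"
printf '%s\n' "$log" | grep -i 'amdgpu' | grep -ic 'fault\|timeout\|error'
```

This only catches errors serious enough for the driver to log, which is exactly the limitation being discussed: it is not a per-error counter like HWiNFO64's.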

So that's it. There is no easy way for non-terminal gurus to check for memory errors. I have been a Mac user for about 25 years, and I also have some FreeBSD & Linux servers. I like Unixes and hate Windows, but... this little HWiNFO64, for example, is really handy and easy to use. I would like to try building a Linux miner system, but I don't want to ruin my cards by running them on the edge of millions of hardware errors without knowing it...
And the solution of first running them on Windows to find each card's limits, then flashing those values into their BIOSes, and only then running them under Linux, seems a bit uncomfortable, don't ya think? :)

ACTUALLY - even if you're an expert in bash, you still aren't able to see the memory errors - all of them. You would have to code - access the GPU directly, telling the driver to go fuck itself, and read them.

About your worry with memory errors - they have zero chance of harming the GPU. A memory error basically means the delay waited before a given memory command simply was not long enough, and so you got garbage back (most likely), assuming it was a read command. It's not going to hurt a thing, besides possibly your profits.

There is one other option, if you're a dev with a shitload of time... (or you just bribe me for a copy of mine) - write a tool to directly access the VRM controller(s) on the GPU and command them directly. This is fun & rewarding, because you find out that those nice Windows tools - MSI AB, Sapphire Trixx - hide SO much power from you, and are like safety scissors when you need a scalpel.

Wolf, maybe it sounds strange, but - are you 100% sure that running cards 24/7 for a year with millions of memory errors will have NO impact in terms of degradation or damage to those cards? I have seen a lot of cards, for example RX 480 8GB with Samsung memory, which in normal condition can easily do 30+ MH/s but were barely hitting 27 MH/s! Something had to have degraded them, and memory errors are my suspect no. 1.

I am absolutely certain. The reason I'm paid so well for custom performance timings is because not only do I not copy+paste timing sets, I also do not blindly change values - I understand how to use & interact with GDDR5, and how it functions, to an extent. I don't mean in code, storing & retrieving shit, but more on the level of how to operate it, and how the GPU's memory controller will drive it, the various delays required between different commands issued, and whatnot. Attempting to do something too quickly (like back-to-back ACTIVE commands to rows in different banks without waiting long enough) will simply end with incorrect data if you fuck up just a little, or a memcrash (identifiable by the GPU's core clock being normal, but the memclk dropping to 300 and it not hashing) if you fuck up a lot. You ain't gonna damage it, short of voltage modifications.

Now - I have seen this case of Samsung just not being... well, Samsung, in some cases. In all of them, the issue was heat. Now, I know what you're thinking. Something along the lines of the core temp being more than fine, right? This is due to the cooling being what I call a "show cooler." XFX RS XXX (470 or 480, 4G or 8G), as well as MSI's Armor coolers are ones I have personally bought and confirmed this behavior. They ensure the GPU's ASIC is connected *really* well to the heatsink - and that's about all they do. Most gamers/overclockers/miners don't even know there are other temp sensors, let alone check them... this means that while everything on the PCB besides the core is left to cook, most notably the VRM controller(s) and the GDDR5, all appears well! This has been the cause of my Samsung under-performance issues without fail.
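The "other temp sensors" mentioned above can be listed on Linux via the standard hwmon sysfs interface. A sketch follows; the directory layout is standard, but chip names and sensor labels vary by driver, so treat the output as driver-dependent (the base directory is a parameter only so the function can be exercised on fake data):

```shell
#!/bin/sh
# List every hwmon temperature sensor, not just the GPU core ("edge") temp.
# Values in temp*_input are millidegrees Celsius.
list_temps() {
  for f in "$1"/hwmon*/temp*_input; do
    [ -e "$f" ] || continue
    dir="${f%/*}"
    chip=$(cat "$dir/name" 2>/dev/null || echo unknown)
    # Prefer the driver-provided label; fall back to the file name.
    lab=$(cat "${f%_input}_label" 2>/dev/null || basename "${f%_input}")
    echo "$chip $lab: $(( $(cat "$f") / 1000 )) C"
  done
}

list_temps /sys/class/hwmon
```

On cards with a "show cooler", this is where a cooked VRM or memory sensor would show up while the core temp still looks fine.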

Well, a little example. I had 2 identical cards, Nitro+ RX480 4GB Samsung. Same settings, same core, mem, voltages etc... simply the exact same confirmed settings. But one card was running 13-15 °C hotter than the second one - and yes, they were far enough apart to rule out cooling, airflow etc. And guess what? The hotter card, even with the same mem straps, was hashing a lot lower. Why? Both cards had the same factory cooling solution and backplate (not sure about the Nitro+ VRM heatsink). My guess: one card was simply "a bit screwed" somehow - my first idea is that the previous owner ran it with mem errors, and the memory has since been more likely to produce further mem errors...
So what is the conclusion? Should I change settings on all my cards to run faster (more MH/s) but with mem errors, or have all cards run slower but clean, with zero mem errors? Where is the border where mem errors start affecting accepted shares due to incorrect shares?
Xevaria
Newbie
*
Offline Offline

Activity: 12
Merit: 0


View Profile
June 13, 2017, 07:35:15 PM
 #11802

Is anyone mining with both AMD and NVIDIA cards in a single rig? I'm having trouble getting Afterburner to work with the AMD cards when I mix them. Anyone got a solution to this? :D
weishengchow
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
June 13, 2017, 07:52:07 PM
 #11803

Hello,

newbie here. I am facing the following problem with the installation.

OpenCL doesn't seem to be working. I am using an MSI GTX 980 Ti. Can anyone come to my rescue?

The message:
SYSTEM32\OpenCL.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vendor for support. Error status 0xc000012f.

doktor83
Hero Member
*****
Offline Offline

Activity: 2688
Merit: 626


View Profile WWW
June 13, 2017, 08:06:52 PM
 #11804

I already asked about 20 pages ago, but here it goes again, because I did not get an answer:
can the pool job timeout be changed? It's 15 minutes now; can it be set lower?

SRBMiner-MULTI thread - HERE
http://www.srbminer.com
Calrornds
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
June 13, 2017, 08:36:42 PM
 #11805


LOL.... "...If you're still using Windows (which is about as stable as a house of cards during a hurricane)..." ~ Wolf0, 2017

thanks for the brief grin... needed one... after the BTC bloodbath last night

27.5 MH/s -- standard 1500 straps -- maybe Elpida memory

possibly SK hynix... a lucky card, this

Use GPU-Z to find out the memory type

It seems I am lucky - these are Samsung, I just checked.

What do I need to change? Which settings should I use?

Thanks very much,


Damn! I thought copy+paste hackjob from 1750 -> 2000 would do better than that on Samsung K4G80325FB. I figured at least 28.5MH/s (which still, for Samsung, is not good) but... wow.

Hi:

So far I have reduced the voltage by 96 mV, increased the GPU clock to 1400 MHz and the memory clock to 2200 MHz. I keep reading about that 1750 => 2000 "thing"; I understand it is done by flashing the BIOS. So if I understand correctly, there is nothing else I can do to improve my MH/s except flash the BIOS, and 27.5 MH/s is the top I can get with Afterburner?

Thx very much,

Carlos
Calrornds
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
June 13, 2017, 08:45:14 PM
 #11806

Hello everyone.

I'm new to this forum and I come to describe my big problem with Claymore.
I can't find a solution and nobody knows how to help me.

I mine with 6 RX 480 4GB Sapphire Nitro cards on Windows 10.

https://cdn.discordapp.com/attachments/317646633104048128/323717897786490892/claymore_s.jpg

As you can see in this picture, I get good results, BUT when it comes to fans, I can't see all of the GPUs - I only ever see the GPU0 fan.
I don't know why; before, I always saw all of the GPU fans.

I also use MSI Afterburner to OC my cards, but Claymore only OCs GPU0 and not all of them.
I think that's probably related to only seeing the GPU0 fan.

Do you have an explanation for my problem?
I tested Claymore 9.4 and 9.5, but this problem exists on every Claymore version.

Thank you.

Uff, still the same question for the millionth time. Disable CrossFire, don't use RDP...

Thanks, I was going to ask the same question. CrossFire is disabled. About RDP - this must be another question asked a million times, but what is the problem with it?

Thx

Carlos
Kartaba
Sr. Member
****
Offline Offline

Activity: 276
Merit: 250


View Profile
June 13, 2017, 08:55:14 PM
 #11807

My hashrate dropped from 30 to 29.7 MH/s a couple of days ago. The miner never went off or anything. Why do I get 0.3 MH/s less now? Anyone else have this issue?
I am/was using 9.3 and life was great. Now every Claymore version gives me 0.3 MH/s less (versions 8.1 to 9.5), all the same now. I don't understand what happened.
AlainC
Member
**
Offline Offline

Activity: 91
Merit: 10


View Profile
June 13, 2017, 09:24:49 PM
 #11808

I've got one GPU which makes one incorrect share per 2-3 thousand good shares, running at 30.7 MH/s. Zero memory errors in HWiNFO64. And you would trash its BIOS? Can't agree...
What you say makes sense... I have a tendency to be too cautious. I could tolerate 2 errors separated by enough of a delay (e.g. 60 minutes) :-\
cs2727
Newbie
*
Offline Offline

Activity: 15
Merit: 0


View Profile
June 13, 2017, 09:38:03 PM
Last edit: June 13, 2017, 10:57:25 PM by cs2727
 #11809

Can anyone help? I have one rig working perfectly; my other rig with the same config won't mine. I have 4 1060s running at 2-3 MH/s. Watching Afterburner, the cards won't go over 40% power... if I switch to Vertcoin with ccminer they work perfectly. I don't understand why the other rig won't mine if I'm using the exact same settings.
Teress
Full Member
***
Offline Offline

Activity: 224
Merit: 102


View Profile
June 13, 2017, 10:06:37 PM
 #11810


Well, a little example. I had 2 identical cards, Nitro+ RX480 4GB Samsung. Same settings, same core, mem, voltages etc... simply the exact same confirmed settings. But one card was running 13-15 °C hotter than the second one - and yes, they were far enough apart to rule out cooling, airflow etc. And guess what? The hotter card, even with the same mem straps, was hashing a lot lower. Why? Both cards had the same factory cooling solution and backplate (not sure about the Nitro+ VRM heatsink). My guess: one card was simply "a bit screwed" somehow - my first idea is that the previous owner ran it with mem errors, and the memory has since been more likely to produce further mem errors...
So what is the conclusion? Should I change settings on all my cards to run faster (more MH/s) but with mem errors, or have all cards run slower but clean, with zero mem errors? Where is the border where mem errors start affecting accepted shares due to incorrect shares?

Honestly, it's simpler than you think when it comes to memory errors... I think HWiNFO64 is giving you too much information. Basically, try checking your average pool hashrate (I like Nanopool for this, with their 1, 3, 6, 12, and 24 hour averages). You just worry about the valid shares adding up to the hashrate you want - because that is what determines what you get paid in the end.
I like Nanopool too, and I agree accepted shares are nice information, but Nanopool does not show stale or invalid shares. Ethermine will show you the exact number of stales.
But you didn't answer my "where is the border" question :)
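One rough way to put a number on that "border": discount the raw hashrate by the share rejection rate, since only accepted shares pay. A back-of-the-envelope sketch with hypothetical share counts (all numbers here are made up for illustration):

```shell
#!/bin/sh
# effective hashrate ≈ raw hashrate * accepted / (accepted + rejected)
# Hypothetical comparison: a clean card at 27.5 MH/s with no rejects vs. a
# dirty card at 30.0 MH/s rejecting 30 of every 1000 shares.
effective() { awk -v raw="$1" -v acc="$2" -v rej="$3" \
  'BEGIN { printf "%.2f", raw * acc / (acc + rej) }'; }

echo "clean: $(effective 27.5 1000 0) MH/s"
echo "dirty: $(effective 30.0 970 30) MH/s"
```

By this accounting, the border sits roughly where the rejected fraction of shares exceeds the fractional MH/s gained from the extra clock; below that, the dirty card still earns more.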
jenci
Newbie
*
Offline Offline

Activity: 17
Merit: 0


View Profile
June 13, 2017, 10:54:26 PM
 #11811

Windows Defender today quarantined ethdcrminer64.exe (Trojan:Win32/Skeeyah.A!rfn).
http://imgur.com/a/0te75
Anyone have the same problem?

I found it, sorry
TechPark
Full Member
***
Offline Offline

Activity: 238
Merit: 100


View Profile
June 13, 2017, 11:26:12 PM
 #11812

batch:-

setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x


but I get this error - where am I going wrong? I must be making a simple mistake, huh?



C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_FORCE_64BIT_PTR 0

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_MAX_HEAP_SIZE 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_USE_SYNC_OBJECTS 1

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_MAX_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>setx GPU_SINGLE_ALLOC_PERCENT 100

SUCCESS: Specified value was saved.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D856
4f1F3E4bF73a02a2Cd1.home1 -epsw
'EthDcrMiner64.exe' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\GADGET-ELECTRONICS\Desktop\Claymore CryptoNote GPU Miner v9.7 Beta - PO
OL>pause
Press any key to continue . . .

Cause: the script does not know where your EthDcrMiner64.exe is located.
Solution: put the full path in the command string, e.g. "C:\XXXX\XXXX\EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x"
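An alternative to hard-coding the full path is to have the batch file change into its own directory first, so the relative EthDcrMiner64.exe call resolves no matter where the script is launched from. A sketch using the same flags as the batch above (`%~dp0` expands to the drive and folder of the .bat file itself):

```bat
@echo off
rem Change into the folder this .bat lives in, so the exe is found.
cd /d "%~dp0"
setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_SINGLE_ALLOC_PERCENT 100
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal 0x9793F71eC2f913d01D8564f1F3E4bF73a02a2Cd1.123 -epsw x
pause
```

This assumes the .bat sits in the same folder as the miner executable, which matches the console output shown above.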
TechPark
Full Member
***
Offline Offline

Activity: 238
Merit: 100


View Profile
June 13, 2017, 11:37:51 PM
 #11813


Well, a little example. I had 2 identical cards, Nitro+ RX480 4GB Samsung. Same settings, same core, mem, voltages etc... simply the exact same confirmed settings. But one card was running 13-15 °C hotter than the second one - and yes, they were far enough apart to rule out cooling, airflow etc. And guess what? The hotter card, even with the same mem straps, was hashing a lot lower. Why? Both cards had the same factory cooling solution and backplate (not sure about the Nitro+ VRM heatsink). My guess: one card was simply "a bit screwed" somehow - my first idea is that the previous owner ran it with mem errors, and the memory has since been more likely to produce further mem errors...
So what is the conclusion? Should I change settings on all my cards to run faster (more MH/s) but with mem errors, or have all cards run slower but clean, with zero mem errors? Where is the border where mem errors start affecting accepted shares due to incorrect shares?

Cards may look identical but have different BIOS settings, especially since you mentioned a "previous owner"... BIOS settings affect voltages and frequencies, which affect the temperature. And if the cards are used, the thermal paste could have dried out and need replacing...
Brucelats
Sr. Member
****
Offline Offline

Activity: 326
Merit: 250



View Profile
June 14, 2017, 12:14:49 AM
 #11814

Greetings!

Does this miner support CUDA 8 - that is, using NVIDIA CUDA 8 for more effective mining, like the Genoil CUDA miner did?

I want to install the Claymore miner on Linux, Ubuntu 16.04 - does it work with CUDA?


Cheers!!

crazylizz
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
June 14, 2017, 02:26:40 AM
 #11815

I'm having some problems running the miner on my system. It gives me a CUDA error, saying that it can't allocate memory for the DAG.

https://puu.sh/wjonC/51837b2990.png

I've tried -lidag and setting -li to the lowest intensity, and got the latest drivers from NVIDIA, but it still gives the same error. Help?
Drawde
Member
**
Offline Offline

Activity: 87
Merit: 11


View Profile
June 14, 2017, 02:35:13 AM
 #11816

Your GPU has only 2GB; it needs at least 3GB to hold the DAG.
siampumpkin
Sr. Member
****
Offline Offline

Activity: 420
Merit: 260



View Profile
June 14, 2017, 02:41:16 AM
 #11817

I'm having some problems running the miner on my system. It gives me a CUDA error, saying that it can't allocate memory for the DAG.

I've tried -lidag and setting -li to the lowest intensity, and got the latest drivers from NVIDIA, but it still gives the same error. Help?

You can mine one of the other forks of ETH, like Musicoin or Expanse

Buy a Trezor and Protect all your Crypto Currencies from hackers.
If I was helpful please tip me BTC: 3Bt4E78XjcEhCLEQUB6R1ujiQG58DXaazg  ETH: 0xc6541E163A7C513580f4C1897297452c71b44909
gbux
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
June 14, 2017, 02:45:17 AM
 #11818

Trying to get going with mining for the first time. I've been trying to find my own way, but I'm stuck and could use a hand. I'm trying to set up geth, and I think I'm getting OK with understanding it. I run it with

 geth --rpc --fast --cache=1024

and it goes on its merry way.

But when I launch ethdcrminer64 with the command provided by the first post

./ethdcrminer64 -epool <ip address and port that geth provides>


i get:

ETH: 1 pool is specified
Main Ethereum pool is <ip address and port that geth provides>
DCR: 0 pool is specified
AMD OpenCL platform not found


then after a bunch of other stuff I get

No pool specified for Decred! Ethereum-only mining mode is enabled
ETHEREUM-ONLY MINING MODE ENABLED (-mode 1)

Probably you are trying to mine Ethereum fork. Please specify "-allcoins 1" or "-allpools 1" option. Check "Readme" file for details.
Pool sent wrong data, cannot set epoch, disconnectETH: Connection lost, retry in 20 sec...

I've tried doing just that, setting the --allcoins 1 or the --allpools 1 option,

but then it just sits there, occasionally telling me my GPU's temp and fan speed, until the watchdog restarts it - rinse and repeat. Any advice?
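One thing worth double-checking: the miner's own message spells its options with a single dash ("-allcoins 1"), so a double-dash "--allcoins 1" may not be recognized at all. A hypothetical invocation against a local geth (127.0.0.1:8545 is geth's default HTTP-RPC endpoint; adjust to whatever address geth actually reports):

```
./ethdcrminer64 -epool http://127.0.0.1:8545 -allcoins 1 -mode 1
```

Here `-mode 1` is the Ethereum-only mode the miner already announced in the log above; whether solo mining then proceeds still depends on geth having finished syncing.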
crazylizz
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
June 14, 2017, 02:46:36 AM
 #11819

Your GPU has only 2GB; it needs at least 3GB to hold the DAG.

Isn't the ETH miner supposed to work with 2GB cards though?
jackbox
Legendary
*
Offline Offline

Activity: 1246
Merit: 1024



View Profile
June 14, 2017, 02:51:39 AM
 #11820

Your GPU has only 2GB; it needs at least 3GB to hold the DAG.

Isn't the ETH miner supposed to work with 2GB cards though?

No, it is impossible: the DAG file must fit in the card's memory, and it is over 2GB in size. No way around that.
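For scale: the Ethash DAG starts around 1 GiB at epoch 0 and grows by roughly 8 MiB per epoch (one epoch = 30000 blocks), which is why by mid-2017 it no longer fit in 2GB cards. A rough estimate follows; this linear formula approximates the Ethash growth rule (the exact size is adjusted down to a prime-related boundary), and the block number is just an example:

```shell
#!/bin/sh
# Approximate Ethash DAG size in MiB for a given block height.
dag_mib() {
  epoch=$(( $1 / 30000 ))        # one epoch per 30000 blocks
  echo $(( 1024 + 8 * epoch ))   # ~1 GiB base + ~8 MiB per epoch
}

echo "block 3900000: ~$(dag_mib 3900000) MiB"
```

So around block 3.9M (epoch 130) the DAG is already past 2 GiB, and it only grows from there.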

Buy a Trezor and Protect your BTC, BCH, BTG, DASH, LTC, DGB, ZEC, ETH and ETC from hackers.
If I was helpful please buy me a coffee BTC: 1DWK7vBaxcTC5Wd2nQwLGEoy8xdFVzGKLK  BTG: AWvN1iBqCUqG2tEh3XoVvRbdcGrAzfBBpW
If I was helpful please buy me a burger DGB: DLASV6CUQpGtGSyaVz5FYuu5YxZ17MoGQz