Has anyone tried it on ![](https://ip.bitcointalk.org/?u=http%3A%2F%2Fwww.novimilenij.eu%2Fmedia%2Fk2%2Fitems%2Fcache%2Fce742d950ca63a98d59ecec5eba0da2e_L.jpg&t=663&c=2hKDtiaC33VC5g)
|
|
|
DEV! We want to mine as well!
Currently people with 7850/7870/7950/7970/270/270x/280/280x/370/370x/380/380x can't mine this coin; the miner only gives hardware errors and can't get shares.

Tried it on an R9 380: it works, but slowly (6+ MH/s), so I gave up.
|
|
|
I apologize in advance if this question has already been raised, but I cannot find an answer.
I cannot understand why Claymore does not control the fans of Nvidia cards. All control comes from the cards' BIOS settings only.
I installed the latest drivers and Claymore 9.8 - no effect.
Win 10 x64 and a GTX 1070 EVGA.

Yet if you'd just read the first post in the thread, or Claymore's readme.txt, you'd know the answer.
|
|
|
Also, profitability calculators only make sense if you immediately convert to BTC or $$$; otherwise, in the long run, some non-obvious choices could be way more profitable.
|
|
|
For the RX 560, Micron memory is way, way better than Hynix; I still haven't seen Samsung on them.
|
|
|
Definitely ask for permission first. I doubt they'll allow it. You'll have better luck at smaller companies than at big ones.

This is the only post in the whole thread that even attempted to give you good advice. Everything else just skirts around the core question: is your boss OK with it? Unless you own the company all by yourself, you will answer to someone else; even CEOs answer to their company's board and stockholders. If your boss does agree, it would be a good idea to get it in writing, such as an email, rather than just a verbal agreement. They may not fully understand the implications at the time and may go back on you if something comes up down the road.

I don't think using the office for mining some coins is that bad. Even the NSA supercomputers were used to mine bitcoins in the past. I'm just using my designated PC, and it's not my boss paying the electricity bill; the government pays it.

1. The government is your boss. 2. You're spending taxpayers' money that could feed some hungry child.
|
|
|
I think core speed isn't the only thing that matters. With Nvidia you have CUDA, a technology made by Nvidia for Nvidia, so it's well optimised. On AMD you have OpenCL, which was created by Apple in cooperation with AMD, Intel, Nvidia and Qualcomm; it's not optimised for AMD specifically, it's for everyone. If you compared OpenCL on AMD against OpenCL on Nvidia, AMD would win, but Zcash on CUDA is a different matter.

I thought Claymore on Nvidia used CUDA for ETH too (at least, I see two CUDA directories installed with it)... so what happened with the CUDA magic on ETH?
|
|
|
BTW, believe it or not, the R9 280X has faster memory than the RX 580.
No, slower memory... but a wider bus... 384-bit vs 256-bit, so better memory throughput. However, people tried to explain that Nvidia (with the same memory speed) is faster than the RX 580 because ZEC depends on core speed, not memory speed... contradictions on all sides of the story ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif)
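The bus-width point can be checked with quick back-of-envelope arithmetic (a hedged sketch; the GDDR5 data rates below are typical spec values assumed for illustration, not quoted from the posts):

```python
# Effective memory bandwidth = data rate (GT/s) * bus width (bits) / 8.
# Slower chips on a wider bus can still win on total throughput.

def bandwidth_gbs(data_rate_gtps: float, bus_width_bits: int) -> float:
    """GB/s of raw memory bandwidth for a GDDR interface."""
    return data_rate_gtps * bus_width_bits / 8

r9_280x = bandwidth_gbs(6.0, 384)   # assumed 6 Gbps GDDR5, 384-bit bus
rx_580  = bandwidth_gbs(8.0, 256)   # assumed 8 Gbps GDDR5, 256-bit bus

print(r9_280x, rx_580)  # 288.0 vs 256.0 GB/s: 280X wins despite slower chips
```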
|
|
|
Ohhhhh, nooooooo... I guess many people, like whoever started this thread, share similar thoughts: they still think each card is still giving the profit they first saw. In this case, on the 1st of June it was probably $5 per card, so I guess they think it will be $5 per card for life, hehe.

Yes, they don't see that it's actually $30 per card; they just have to hold the ETH and wait for it to hit $3000 like BTC.
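The "hold, don't sell" argument above is just arithmetic; here is a minimal sketch with illustrative numbers (the $5/day figure and the 6x multiplier are taken loosely from the posts, not market data):

```python
# Value of one day's mining output, priced either at today's rate or
# at a later price if the mined coins are held instead of sold.

def daily_value(coins_mined_per_day: float, price_usd: float,
                price_multiplier: float = 1.0) -> float:
    """USD value of a day's coins if held until price moves by price_multiplier."""
    return coins_mined_per_day * price_usd * price_multiplier

# Assume a card mines coins worth $5/day at today's price...
today = daily_value(0.025, 200.0)                       # 5.0
# ...the same day's coins are worth $30 in hindsight if ETH later 6x-es:
held = daily_value(0.025, 200.0, price_multiplier=6.0)  # 30.0
```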
|
|
|
I solved my problem with the ASUS mainboard. I just needed to update the BIOS and enable Above 4G Decoding.

Did you guys get anything higher than 15.7 MH/s? I was playing with my cards to get them over 14 MH/s. I've got a Sapphire RX 560 4GB with Micron. I can't remember what exactly I changed, but I got one card (the one I was playing with) to mine at 18.xxx MH/s. But the machine crashed after 2 minutes. I restarted it and played with the settings again (lower memory and core), but it didn't help: still at 18.xxx, but it crashes after a few seconds. So I let it rest and tried something else.
Now I am mining at 13.5x stable. I read someone got theirs to 15.xx and posted his ROM. Gonna check it out later.
I will try to find the ROM that got 18.xxx and post it here. Maybe someone can get it stable?
Just wanted to share my experience.

For 18 MH/s you'd have to overclock the memory to at least 2600 MHz... is that even possible?
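The "at least 2600 MHz" estimate follows from assuming Ethash hashrate scales roughly linearly with memory clock (a hedged back-of-envelope, using the 13.5 MH/s and 2000 MHz figures quoted in the posts above):

```python
# If hashrate ~ memory clock (Ethash is roughly bandwidth-bound),
# the clock needed for a target hashrate is a simple proportion.

def clock_needed(target_mhs: float, base_mhs: float,
                 base_clock_mhz: float) -> float:
    """Memory clock implied by linear scaling from a known baseline."""
    return base_clock_mhz * target_mhs / base_mhs

# From ~13.5 MH/s at the stock 2000 MHz:
print(clock_needed(18.0, 13.5, 2000))  # ~2667 MHz, matching the estimate
```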
|
|
|
Warning: with the latest Crimson ReLive Edition 17.7.2, you cannot OC with MSI Afterburner or Sapphire TriXX.
Use Radeon WattMan - AMD.
Sapphire Pulse Radeon RX 560 4GB OC GDDR5 (P/N: 11267-00-20G), 6-pin, Micron. BIOS mod: 1750 -> 1875, 2000; max memory clock 2000 -> 2200. AMD Radeon WattMan settings: GPU clock -> 1163 (-10.5%); memory clock -> 2140; fan -> auto; temp 69 °C; voltage: auto, 42 W.
ETH ~14.2 MH/s stable using Claymore miner 9.7.

Yes, it's already covered in the Claymore 9.8 main thread; only WattMan works for my Micron Gigabyte, which gives 15.7 MH/s stable. I usually OC directly from Claymore instead of using Afterburner/TriXX, but that totally doesn't work in Claymore 9.7 and works erratically in 9.8.
|
|
|
It is the end of AMD mining in the Ethereum world: only Nvidia GPUs will be profitable in a few days/weeks. And the problem is that if all the AMD GPUs move to Ubiq or another coin, difficulty will rise a lot and the profitability will never be good.

Please repeat that a few more times; somebody's gotta believe it if you keep repeating it often enough.
|
|
|
Even the old R9 280X does better in ZEC with 340... also, Nvidia drivers are shit; that's why a twice-as-expensive card barely does the same in ETH ![Cool](https://bitcointalk.org/Smileys/default/cool.gif)
|
|
|
The GTX 1070 is much more powerful on the compute side, and the cards access memory in a different way. Moreover, an unmodded 580 is much slower than a 1070... In Ethereum, a 1070 uses half the energy of a 580, or 2/3 depending on the OC... the GTX 1070 is overall a much better card.

Before the mining craze, a 1070 cost as much as two RX 580s, so the real question is whether a GTX 1070 is better than TWO RX 580s.
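The "one 1070 vs two RX 580s" framing is really a hash-per-dollar comparison; a hedged sketch with illustrative pre-craze prices and rough ETH hashrates (all of these numbers are assumptions for the example, not figures from the posts):

```python
# Hash per dollar: what the buyer actually pays for, at purchase time.

def mhs_per_dollar(hashrate_mhs: float, price_usd: float) -> float:
    """ETH hashrate bought per USD of card cost."""
    return hashrate_mhs / price_usd

one_1070   = mhs_per_dollar(31.0, 400.0)         # assumed ~31 MH/s, ~$400
two_rx580s = mhs_per_dollar(2 * 29.0, 2 * 200.0) # assumed ~29 MH/s each, ~$200 each

# At these assumed prices the pair of 580s buys nearly twice the MH/s
# per dollar, even if the single 1070 wins on power draw per card.
print(one_1070, two_rx580s)
```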
|
|
|
The GTX 1070 is a way better card than the RX 580. But in Ethereum, AMD cards are better, so that's why you don't see the difference. Nvidia cards are stronger in Zcash, though, so that's the reason why your RX 580 won't compete with a 1070.

You just repeated what he said... I think he wanted to know WHY ![Cheesy](https://bitcointalk.org/Smileys/default/cheesy.gif) One of the guys who write mining software could probably explain it best (Claymore, Wolf, etc.).
|
|
|
Could it be that he runs a 10,000-graphics-card farm, all with the same worker name?
|
|
|
I actually do watch memory errors; that's why you and I have worse results than some scammers ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) I have 13.8-13.9 with the same card as you.

So I'll have to fight with Hynix again to get close to your 13.8 MH/s. It was actually about 14.4 until recently; the only thing that changed in the meantime is the new epoch, but since my Micron-based Gigabyte took almost no drop, I'm a bit confused here. Can Claymore be to blame? 9.7 is better for Nvidia but worse for AMD, I suppose.

I could be wrong, but I did see that they will be releasing a new version of Claymore soon to go along with the new drivers being released by AMD. It will be interesting to see what it does to speeds.

Claymore got released, but we still have to wait for AMD... and probably an even newer Claymore too, as clock/fan/temperature settings seem to still be a problem.
|
|
|
I'd like to say a bit more about ADL (the library that is used to manage clocks/voltage/fans). I have been working with the ADL API for a couple of years, so I have some experience.

When you use ADL, you must ask what version of the API your GPU supports. For example, Tahiti reports v5 and Hawaii v6. Every API version uses its own set of functions; sample code can be found in the ADL SDK. I implemented v5 and v6 and it worked fine for some time. Then in some drivers AMD disabled underclocking via ADL for some reason: API calls report "OK" but the GPU still uses the old clocks.

Then AMD released the Polaris cards, which report API v7. No new ADL SDK was released, so there were no headers for the new API or any samples. We had to wait a couple of months before it was finally released. I added code that uses v7 and it worked fine, except for one thing: if I set the fan to 50%, the API sets it to 41% or 46%, and I have no idea why. I had to change the fan management logic to work around this issue.

Then in some drivers AMD changed something again and OC stopped working, partially or completely. Then AMD released new drivers and Hawaii cards started to report that they use API v7 instead of v6, but in this mode they support v7 a bit differently than Polaris, so I had to change my code again to support this new "feature" of the drivers.

Then AMD released 17.7.2 and API v7 started returning a "not supported" error code. After some research I found that AMD added new functions for API v7 and blocked the old ones. So this time they decided not to create an API v8 but changed functions for an already released API version. It's awesome. No new ADL SDK has been released for these changes, so again no headers or code samples for the "new" API v7. Finally I made it work, but sometimes it still misbehaves in strange ways; I'm currently trying to find the reason. Vega support in ADL has its own problems at the moment; for example, you can set the fan to 90%, but then there is no way to set it to any lower value.

For Linux, AMD decided to drop ADL support completely in the gpu-pro drivers and use a new mechanism that I have to support too. So ADL is the weirdest API I've ever seen, but when you see that some options don't work as expected, you will probably blame the miner ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) On the other hand, when I decided to add fan/OC management I found that Nvidia has no public API for it at all. From this point of view, I think a weird API is better than no API at all.

Still, that's a bad way to do things from AMD; breaking compatibility is always a bad idea, especially when you can fry a card doing it... who wants an API that OCs your card and stops the fan at the same time? ![Cheesy](https://bitcointalk.org/Smileys/default/cheesy.gif)
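The version-dispatch and fan read-back workaround described above can be sketched roughly like this (all functions here are hypothetical stand-ins against a stub driver, not real ADL bindings; the real API lives behind atiadlxx.dll and the ADL SDK headers):

```python
# Sketch of: (1) dispatch by reported API version, (2) compensate when
# the driver applies a lower fan speed than requested (e.g. 50% -> 41%).

class FakeDriver:
    """Stub mimicking a driver that rounds the requested fan speed down."""
    def __init__(self, rounding_loss: int = 9):
        self.fan = 0
        self.loss = rounding_loss

    def set_fan_v5(self, pct: int) -> None:
        self.fan = pct - self.loss   # driver "loses" some percent

    # In this stub v6/v7 behave the same; real ADL uses distinct calls.
    set_fan_v6 = set_fan_v5
    set_fan_v7 = set_fan_v5

    def get_fan(self) -> int:
        return self.fan

def set_fan_speed(api_version: int, percent: int, drv: FakeDriver) -> int:
    """Route to the per-version setter, then read back and retry once,
    overshooting by the observed shortfall."""
    setters = {5: drv.set_fan_v5, 6: drv.set_fan_v6, 7: drv.set_fan_v7}
    if api_version not in setters:
        raise RuntimeError(f"unsupported ADL API version {api_version}")
    setters[api_version](percent)
    actual = drv.get_fan()
    if actual < percent:
        setters[api_version](percent + (percent - actual))
    return drv.get_fan()
```

With the stub above, asking for 50% first lands at 41%, and the retry with 59% brings the applied value back to the requested 50%.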
|
|
|
There are bugs in ADL, as always. If you have problems with clock/voltage settings in 17.7.2, either don't use those options or use the previous drivers. I suspect it will take a lot of time to make it work properly... and then AMD will release another ADL update...
Watching WattMan, the memory speed tick-tocks from 625 to the max defined on the miner command line and back, every 2 seconds. The whole problem probably comes from here (driver release notes): "Radeon WattMan now supports memory underclocking. Now supports power state controls."
|
|
|
|