August 07, 2017, 04:30:21 AM |
First time poster, sorry if I'm not following any rules. I've googled and I've searched the forum before posting this.
I'm using a ccminer build that says it's 2.2:
*** ccminer 2.2 for nVidia GPUs by tpruvot@github ***
Built with VC++ 2013 and nVidia CUDA SDK 8.0 64-bits
Originally based on Christian Buchner and Christian H. project
Include some algos from alexis78, djm34, sp, tsiv and klausT.
BTC donation address: 1AJdfCpLWPNoAMDfHF1wD5y8VgKSSTHxPo (tpruvot)
I'm using it with algorithm skunk.
The machine has a single Gigabyte GTX 1080 and several EVGA GTX 1070s, each with 8 GB of memory. They all work great mining Equihash with EWBF.
However, when mining skunk with ccminer, I get these errors:
[2017-08-06 23:50:17] GPU #5: EVGA GTX 1070, 19.66 MH/s
[2017-08-06 23:50:17] GPU #0: scanhash_skunk out of memory
[2017-08-06 23:50:17] GPU #1: EVGA GTX 1070, 19.23 MH/s
[2017-08-06 23:50:18] GPU #0: out of memory
It doesn't happen on every block, but roughly every third block. OC settings don't seem to change the result. For the time being, I have GPU 0 working on another problem with a different piece of software, and that works fine. Any advice? Thanks in advance.
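For reference, I'm sidelining GPU 0 by restricting ccminer to the other cards with its `-d`/`--devices` flag, roughly like this (the pool URL and wallet below are placeholders, not my actual ones):

```shell
# Run the skunk algo on devices 1-5 only, skipping GPU 0
# (device IDs as reported in ccminer's own startup listing).
# pool.example.com and WALLET_ADDRESS are placeholders.
ccminer -a skunk -d 1,2,3,4,5 \
  -o stratum+tcp://pool.example.com:3333 \
  -u WALLET_ADDRESS -p x
```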