Bitcoin Forum
Author Topic: CCminer M7 (XCN) by djm34, fixed + optimized for cuda 8 and new cards by PALLAS  (Read 52575 times)
bensam1231
Legendary
*
Offline Offline

Activity: 1750
Merit: 1024


View Profile
October 12, 2016, 04:26:06 AM
 #381

Weird that there isn't anyone around willing to compile this. I even offered some devs BTC to compile it and fix the bugs, but no one has responded.

I buy private Nvidia miners. Send information and/or inquiries to my PM box.
sibisi666
Full Member
***
Offline Offline

Activity: 269
Merit: 100


View Profile
October 12, 2016, 08:52:59 AM
 #382

hm, is there source code? does it just need compiling?
pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 12, 2016, 09:54:34 AM
 #383

hm, is there source code? does it just need compiling?

yes it's on git, see first post.

Amph
Legendary
*
Offline Offline

Activity: 3206
Merit: 1069



View Profile
October 12, 2016, 10:17:44 AM
 #384

hm, is there source code? does it just need compiling?

yes it's on git, see first post.

which version is there on git, the one that does 20 MH/s?
pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 12, 2016, 11:16:31 AM
 #385

hm, is there source code? does it just need compiling?

yes it's on git, see first post.

which version is there on git, the one that does 20 MH/s?

the one that works on cuda8 and pascal and is about 10% faster than the code base (djm version).

krnlx
Full Member
***
Offline Offline

Activity: 243
Merit: 105


View Profile
October 12, 2016, 03:22:02 PM
 #386

Code:
m7_keccak512_cpu_hash(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);

m7_sha512_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], d_prod1[thr_id], order++);
cpu_mulT4(0, throughput*tp_coef_f[thr_id], 8, 8, d_prod1[thr_id], KeccakH[thr_id], d_prod0[thr_id], order); // 64
// MyStreamSynchronize(0, order++, thr_id);

m7_whirlpool512_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);
cpu_mulT4(0, throughput*tp_coef_f[thr_id], 8, 16, KeccakH[thr_id], d_prod0[thr_id], d_prod1[thr_id], order); // 128
// MyStreamSynchronize(0, order++, thr_id);

m7_sha256_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);
cpu_mulT4(0, throughput*tp_coef_f[thr_id], 4, 24, KeccakH[thr_id], d_prod1[thr_id], d_prod0[thr_id], order); // 96
// MyStreamSynchronize(0, order++, thr_id);

m7_haval256_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);
cpu_mulT4(0, throughput*tp_coef_f[thr_id], 4, 28, KeccakH[thr_id], d_prod0[thr_id], d_prod1[thr_id], order); // 112
// MyStreamSynchronize(0, order++, thr_id);

m7_tiger192_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);
m7_bigmul_unroll1_cpu(thr_id, throughput*tp_coef_f[thr_id], KeccakH[thr_id], d_prod1[thr_id], d_prod0[thr_id], order);
// MyStreamSynchronize(0, order++, thr_id);

m7_ripemd160_cpu_hash_120(thr_id, throughput*tp_coef_f[thr_id], pdata[29], KeccakH[thr_id], order++);
m7_bigmul_unroll2_cpu(thr_id, throughput*tp_coef_f[thr_id], KeccakH[thr_id], d_prod0[thr_id], d_prod1[thr_id], order);
// MyStreamSynchronize(0, order++, thr_id);

uint32_t foundNonce = m7_sha256_cpu_hash_300(thr_id, throughput*tp_coef_f[thr_id], pdata[29], NULL, d_prod1[thr_id], order);

someone must rewrite this (join it into one function) to prevent too many memory writes... like Tanguy Pruvot did with the lbry miner.
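For illustration only, here is the fusion idea in plain C++ (the stage functions below are made-up stand-ins, not the real M7 kernels): chaining the stages inside one loop keeps the intermediate value in a register instead of writing it back to memory between passes, which is what joining the CUDA kernels would aim for.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for two hash stages: names and operations are
// illustrative only, not the real M7 kernels.
static uint64_t stage_a(uint64_t x) { return x * 0x9e3779b97f4a7c15ULL; }
static uint64_t stage_b(uint64_t x) { return x ^ (x >> 31); }

// Unfused: each stage reads and rewrites the whole buffer (extra memory traffic).
void run_unfused(std::vector<uint64_t>& buf) {
    for (auto& v : buf) v = stage_a(v);   // full-buffer write #1
    for (auto& v : buf) v = stage_b(v);   // full-buffer write #2
}

// Fused: one pass; the intermediate stays in a register, one write per element.
void run_fused(std::vector<uint64_t>& buf) {
    for (auto& v : buf) v = stage_b(stage_a(v));
}
```

Whether this pays off on the GPU depends on kernel size and register pressure, so it is a sketch of the technique rather than a claim about M7 performance.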
pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 12, 2016, 03:37:30 PM
 #387

Quote from: krnlx
someone must rewrite this (join it into one function) to prevent too many memory writes... like Tanguy Pruvot did with the lbry miner.

no, it would be much slower.
you can do that with lbry because those are a few small hashes; these are 7 (+ all the muls) and some are much heavier (like whirlpool).

e.nexus
Member
**
Offline Offline

Activity: 96
Merit: 25


View Profile
October 13, 2016, 01:56:55 AM
 #388

Would there be much involved in changing the cuda toolkit used to 8.0 so it compiles with VS2015?
pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 13, 2016, 07:06:08 AM
 #389

Would there be much involved in changing the cuda toolkit used to 8.0 so it compiles with VS2015?

It does use cuda 8 already and compiles fine on linux and, I assume, Windows with mingw. For visual studio, I think you need to update the project file.

e.nexus
Member
**
Offline Offline

Activity: 96
Merit: 25


View Profile
October 13, 2016, 09:01:00 AM
 #390

Would there be much involved in changing the cuda toolkit used to 8.0 so it compiles with VS2015?

It does use cuda 8 already and compiles fine on linux and, I assume, Windows with mingw. For visual studio, I think you need to update the project file.

That's what I thought but when using the master branch from github it tries to use 6.5.

line 54: "The imported project "C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\BuildCustomizations\CUDA 6.5.props" was not found"  I only have 8.0 in that folder.
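One likely fix (the exact paths and project-file layout here are assumptions based on a default CUDA 8.0 install, not taken from this repo) is to open the `.vcxproj` in a text editor and repoint the CUDA build-customization imports from 6.5 to 8.0:

```xml
<!-- in the .vcxproj: change every "CUDA 6.5" import to "CUDA 8.0", e.g. -->
<ImportGroup Label="ExtensionSettings">
  <Import Project="$(VCTargetsPath)\BuildCustomizations\CUDA 8.0.props" />
</ImportGroup>
<ImportGroup Label="ExtensionTargets">
  <Import Project="$(VCTargetsPath)\BuildCustomizations\CUDA 8.0.targets" />
</ImportGroup>
```

After that, VS2015 should pick up the 8.0 props from the same BuildCustomizations folder the error message points at.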
mendoza1468
Newbie
*
Offline Offline

Activity: 54
Merit: 0


View Profile
October 15, 2016, 02:27:43 AM
 #391

Honestly this topic is ...
Still no windows compile after a few weeks.
We'd better donate to tpruvot, he's the man!!

Peace
pokeytex
Legendary
*
Offline Offline

Activity: 1504
Merit: 1002



View Profile
October 15, 2016, 06:31:53 PM
 #392

nvcc is not in the PATH.
find it and then:

export PATH=***nvcc directory***:$PATH

if you install cuda by .run file, it will be into a subdir of /usr/local, don't know about the deb method.
you can run "find / -xdev -name nvcc" if unsure.

OK, GOOD, NVCC IS THERE--

But then "make" failed on not finding "gcc-4.9".  An attempt to install v4.9 reported that gcc-4.9 was up-to-date.  I then created a symlink to the "/usr/bin/gcc-4.8" binary with "sudo ln -s /usr/bin/gcc-4.8 /usr/bin/gcc-4.9".

On returning to the "pallas" directory, I issued the command "make", and the compilation continued from where it had errored out.  The compilation is running while I type this.  

VOILA'  It compiled!!!! I  ran the "--help" command.  Right now, the machine is mining Ethereum (ETH), and I will check out your work sometime today.  Thank you, Pallas!

--scryptr

Evening scryptr - I tried your method above - in full disclosure, I am an Ubuntu noob.  I keep running into this error and have tried for 24 hours now with no luck getting past this point.  I am running Ubuntu 14.04 LTS.

The error:
nvcc -g -O2 -I . -Xptxas "-v" --compiler-bindir /usr/bin/gcc-4.9 -gencode=arch=compute_50,code=\"sm_50,compute_50\" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --maxrregcount=80 --ptxas-options=-v -I./compat/jansson -o heavy/cuda_blake512.o -c heavy/cuda_blake512.cu
heavy/cuda_blake512.cu(11): error: invalid redeclaration of type name "uint64_t"
/usr/include/stdint.h(55): here

1 error detected in the compilation of "/tmp/tmpxft_00003be0_00000000-9_cuda_blake512.compute_52.cpp1.ii".
make[2]: *** [heavy/cuda_blake512.o] Error 2
make[2]: Leaving directory `/home/pokeytex/Downloads/ccminer-m7-branch-master'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/pokeytex/Downloads/ccminer-m7-branch-master'
make: *** [all] Error 2

Any help is appreciated.

Thanks - pokeytex

pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 15, 2016, 06:38:04 PM
 #393

Quote from: pokeytex
heavy/cuda_blake512.cu(11): error: invalid redeclaration of type name "uint64_t"
/usr/include/stdint.h(55): here

your system headers already define those typedefs. probably gcc 5.4; the issue wasn't there with older compilers.
you need to remove those definitions at the beginning of the related files.
there are a bunch.
or use gcc 4.9 (the default for this fork).
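As a sketch of the kind of edit pallas describes (the guard macro and helper below are illustrative, not from the actual ccminer sources): instead of redeclaring the fixed-width types at the top of each .cu file, include <stdint.h>, or guard any local fallback typedef so it only takes effect when the system header has not already defined the types.

```c
#include <stdint.h>   /* preferred: take uint64_t etc. from the system header */

/* If a local fallback is really needed (very old toolchains), guard it so it
 * cannot collide with stdint.h. UINT64_MAX is defined by stdint.h, so this
 * branch is skipped on any modern system. */
#ifndef UINT64_MAX
typedef unsigned long long uint64_t;
#endif

/* tiny sanity check: uint64_t must be exactly 8 bytes either way */
int uint64_is_8_bytes(void) {
    return sizeof(uint64_t) == 8;
}
```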

pokeytex
Legendary
*
Offline Offline

Activity: 1504
Merit: 1002



View Profile
October 15, 2016, 09:15:01 PM
 #394

Quote from: pallas
your system headers have those typedefs already defined. probably gcc 5.4, the issue wasn't there with older compilers.
you need to remove those definitions at the beginning of the related files.
there are a bunch.
or use gcc 4.9 (the default for this fork).


@pallas  I am not sure what you mean by typedefs.  I am running Ubuntu 14.04, a clean install on an Intel box, I believe the 64-bit version.  I installed all the basic stuff and have followed along with all of the tutorials so far.  The error I am getting comes from GCC - I downloaded gcc 4.9 but still get the same error.  I refuse to give up! :-)  Any more advice please?  With a cherry on top!

pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 15, 2016, 09:18:01 PM
 #395

If you can wait a bit, I will commit the changes to github.

pokeytex
Legendary
*
Offline Offline

Activity: 1504
Merit: 1002



View Profile
October 15, 2016, 09:25:52 PM
 #396

If you can wait a bit, I will commit the changes to github.

Thank You!  I can wait.

pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 16, 2016, 08:15:15 AM
 #397

If you can wait a bit, I will commit the changes to github.

Thank You!  I can wait.

Here it is! :-)

New commit: "GCC 5.4 and cuda8 final build fix"

Enjoy!

ioglnx
Sr. Member
****
Offline Offline

Activity: 574
Merit: 250

Fighting mob law and inquisition in this forum


View Profile
October 16, 2016, 02:23:17 PM
 #398

Looks like XCN is getting delisted from Poloniex due to a weak network and the lack of a transaction explorer.
So it's getting pointless to mine this coin, right?

GTX 1080Ti rocks da house... seriously... this card is a beast³
Owning by now 18x GTX1080Ti :-D @serious love of efficiency
pallas (OP)
Legendary
*
Offline Offline

Activity: 2716
Merit: 1094


Black Belt Developer


View Profile
October 16, 2016, 02:27:17 PM
 #399

Looks like XCN is getting delisted from Poloniex due to a weak network and the lack of a transaction explorer.
So it's getting pointless to mine this coin, right?

I wouldn't say "weak network" but "weak exchange": they are unable to run the wallet. I offered to help, gave them the blockchain and a patched wallet, and still nothing. I think they just want to reduce supply in order to pump it.

ioglnx
Sr. Member
****
Offline Offline

Activity: 574
Merit: 250

Fighting mob law and inquisition in this forum


View Profile
October 16, 2016, 02:35:07 PM
 #400

Updated wallet? Huh, anything we can also use?
You have the current blockchain... maybe you can also offer it for download somewhere so syncing gets better for everyone.

Ah, what I wanted to ask... you compiled now on Ubuntu 16.04? Or still 14.04... stone age :-D
