Hi, we've added FTC to the miners multipool: www.zpool.ca Cheers! Thank you! What has happened to FTC? I was mining it yesterday on a multipool when the bottom fell out. Even my confirmed coins lost most of their value, but on the exchanges everything looks normal. I've dealt with scams, hard forks, excessive orphans, etc., but I've never seen confirmed coins lose nearly all their value while the trading price on the exchanges remained stable (rising, actually).
|
|
|
I'm noticing a big difference between the share rate under the miners section of the wallet page and the share rate on Last 50 Earnings, for example 55.1033% compared to 39.7007%. It has only been like this for the past 2 days; prior to that, both share rates matched perfectly every time a block was found.
Are you still seeing this? Those tables update at different frequencies so there will be some differences, but they should be somewhat close. I have seen it too and it doesn't appear to be a timing discrepancy. In the past the payout for any coin in a pool was the same and was pretty close to the share listed in the miners section, but I have noticed recently that the payout share from a pool varies depending on the coin. For example, I can be credited for a share of blocks of two different coins at the same time with different share rates, but if the two blocks are the same coin the share rates will be the same, or close. I assume I'm now getting credited based on my share of a specific coin rather than my share of the entire pool. So how does it really work? Do I get my average share in the pool or my specific share of the found coin? I should add that the current behaviour need not be changed if it is as I assumed. It is fairer to credit based on the coin actually mined than on some pool average.
|
|
|
FTC is spitting out mostly orphans.
|
|
|
I've found a solution to the neoscrypt problem: building a cuda 6.5/7.5 hybrid. Tested working on Linux. Here's the procedure:
- build ccminer with cuda 7.5 as usual
- remove all the object files in the neoscrypt folder: rm neoscrypt/*.o
- edit Makefile and replace all the instances of "7.5" with "6.5"
- run make again
- you just made a ccminer executable with all the algos on 7.5 except neoscrypt on 6.5 :-)
- revert the Makefile changes to build it again in the future
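The Makefile swap step above can be rehearsed on a throwaway file first; `Makefile.demo` and its contents here are illustrative stand-ins, not the real ccminer Makefile:

```shell
# Rehearse the version swap on a scratch copy so the real tree is untouched.
printf 'CUDA_PATH = /usr/local/cuda-7.5\nNVCC = $(CUDA_PATH)/bin/nvcc\n' > Makefile.demo
sed -i 's/7\.5/6.5/g' Makefile.demo   # the same substitution you would run on the real Makefile
grep '6.5' Makefile.demo              # the CUDA path now points at 6.5
```

On the real tree, follow the substitution with `rm neoscrypt/*.o && make` so only the neoscrypt objects are rebuilt (and the whole thing relinked) against 6.5.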
If you find this useful, please donate to the BTC address in my signature.
If I understand the result, neo is compiled with 6.5 and everything else with 7.5, then it is all linked with 6.5. I'm not sure linking object files from different compilers is safe. I prefer to use a script to select the preferred executable based on the algo. Less work, less risk, more flexible. Edit: but it's still a workaround. ![Wink](https://bitcointalk.org/Smileys/default/wink.gif) Linking object files from different compilers: I've often linked object files created with a C compiler with others created with an assembler. In ccminer, some objects are compiled with gcc, some others with nvcc... you get the picture. I get the picture. I got a Pascal program to call a Fortran subroutine several decades ago; no available linker could handle it, so I had to link it manually. The examples you gave are ones where it is explicitly supported by the respective linkers. I wouldn't expect that support between different major versions of the same compiler; ABI changes can be introduced. I'm not saying it can't be done, just that it's probably not supported and might not always work. I used for quite some time a C++ programme linked to old Fortran libraries; this was done automatically (well, in a script). Also, regarding nvcc, there are different ways to link and compile. You can very well compile one part with one CUDA version and another part with another and do the linking with gcc (I am not entirely sure, but I think for linking CUDA just calls the default compiler, gcc or the Visual Studio one), so it doesn't really matter that you compile with various CUDA versions... Just for kicks I googled it. nvcc does do a compatibility check on g++ (in the example below) versions, so they do explicitly support it and there are versions that don't work: http://stackoverflow.com/questions/9421108/how-can-i-compile-cuda-code-then-link-it-to-a-c-project I don't know for sure whether nvcc supports different versions of itself; it may have checked and passed in your case.
If I were a compiler developer it would be a very low priority, as it would complicate changing the ABI and is generally only useful when the original source code is not available.
|
|
|
Got the test results on an EVGA 980 reference card at standard clocks.
        76-7.5  76-6.5  74-6.5  74-7.5
quark     19.9    19.3    19.3    19.7
x11       9850    9920   10000    7680
lyra2v2   10.7    11.4    11.6    10.9
neo        220     635     640     220
Thanks for testing. Can you please try to compile release 74 with an x86 build and cuda 7.5? Done, results included above. The quark rate with 74-7.5 was unexpected; some of the previous changes must have provided a bigger improvement on cuda 7.5 than on 6.5.
|
|
|
Compiling on Windows is a pain. I have to rebuild my compile environment every month because VS shuts down unless I register. I had to create a virtual machine snapshot before installing VS the first time, otherwise the tombstone from the previous install would trigger the forced registration immediately.
Compile with Visual Studio Express 2013. It is free. VS Community is advertised as free but it only works for a month without registering. The registration might be free but I haven't tried it. I don't recall being able to download VS Express; I'll take another look, thanks. Got the test results on an EVGA 980 reference card at standard clocks.

        76-7.5  76-6.5  74-6.5
quark     19.9    19.3    19.3
x11       9850    9920   10000
lyra2v2   10.7    11.4    11.6
neo        220     635     640

These results confirm the increase in quark is purely due to cuda 7.5. They also show no degradation in cuda 6.5 performance, a win-win for quark. The neo degradation is also purely due to cuda 7.5, with no significant difference between r74 and r76 when compiled with cuda 6.5. X11 is interesting: the 76-6.5 performance is lower than 74-6.5, and the 76-7.5 performance is lower still, a lose-lose. Since quark was the focus of the most recent changes, it proves that cuda 7.5 can perform better than 6.5. I hope these results translate to the other algos.
|
|
|
Yep, 76 is a ~60% slowdown for neoscrypt, from 586 to 234, and quark is up ~4.5%, from 17206 to 17972. Time to run two folders.
Has anyone compiled r76 for sm5.2 using cuda 6.5 (Windows or Linux) to do a direct comparison with cuda 7.5? I could only do it with 750 Tis on Linux and there was virtually no difference. Edit: however, they were both slower than 1.5.74-cuda6.5. The numbers (gpu0/gpu1), both EVGA 750 Ti SC, no OC:

          1.5.74(6.5)  1.5.76(6.5)  1.5.76(7.5)
x11        3090/3145    2985/3050    2980/3045
quark      6360/6450    6335/6380    6340/6400
lyra2v2    4715/4755    4680/4715    4680/4715
What a pain getting this to display correctly, this ain't WYSIWYG. I'm compiling the latest as we speak... I was well enough to drive, so I'm here in the office at the moment... the only thing I can give a comparison rate on is quark (from 74 to 76) on c7.5... that doesn't exactly help with what you are asking, but it may be of interest to you regarding the last compiles I had with Fedora... btw, I have upgraded to fedora23x64 on the test machine, and am compiling with c7.5 and ccminer-spmod76... #crysx Unfortunately with Linux there are very few versions that support both cuda 6.5 & 7.5, so it's difficult to do direct comparisons. I'm compiling r76 for cuda 6.5 on Windows (had to fiddle with the project file) so I can directly compare r74-cuda6.5 vs r76-cuda6.5 and r76-cuda6.5 vs r76-cuda7.5. Compiling on Windows is a pain. I have to rebuild my compile environment every month because VS shuts down unless I register. I had to create a virtual machine snapshot before installing VS the first time, otherwise the tombstone from the previous install would trigger the forced registration immediately.
|
|
|
OK, I must have forgotten how to do this, but
how would I start, in Windows 7, a CMD (or a command used in a CMD script) with an affinity of 4?
I'm just trying to keep the workload off the first 2 cores for the OS. I have looked it up on Google but most had their heads in a dark place.
I'm not sure why you're asking, but --cpu-affinity can be set in ccminer. If you want to do it in Windows it's something like: start /affinity <n> <program>. I don't think you have anything to worry about with the CPU load of ccminer, but a CPU miner...
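For what it's worth, the <n> in start /affinity is a hex bitmask where bit n allows the process on CPU n. A quick sketch of building the mask that leaves CPUs 0 and 1 to the OS (the 4-core layout is an assumption about the poster's machine):

```shell
# Affinity masks are bitmasks: bit n set = the process may run on CPU n.
# Keeping CPUs 0 and 1 free for the OS means setting bits 2 and 3:
printf '%x\n' $(( (1 << 2) | (1 << 3) ))   # prints c, i.e. mask 0xC
```

So roughly `start /affinity C <program>` on Windows, or `--cpu-affinity 0xC` in ccminer.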
|
|
|
The code should also be updated with the cuda version number so it says cuda 7.5 when launched ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) not important though lol. That is just a hardcoded string and does not reflect how ccminer was actually compiled. A more appropriate message would be "optimized for cuda 7.5", unless a system variable is available to read the actual compile environment. Just replace this code in ccminer.cpp:

    #ifdef WIN32
    printf("\tBuilt with VC++ 2013 and nVidia CUDA SDK 6.5\n\n");
    #else
    printf("\tBuilt with the nVidia CUDA SDK 6.5\n\n");
    #endif
with this:

    #ifdef _MSC_VER
    printf("Compiled with Visual C++ %d ", _MSC_VER / 100);
    #else
    #ifdef __clang__
    printf("Compiled with Clang %s ", __clang_version__);
    #else
    #ifdef __GNUC__
    printf("Compiled with GCC %d.%d ", __GNUC__, __GNUC_MINOR__);
    #else
    printf("Compiled with an unknown compiler ");
    #endif
    #endif
    #endif
    printf("using Nvidia CUDA Toolkit %d.%d\n\n", CUDART_VERSION / 1000, (CUDART_VERSION % 1000) / 10);
I knew there had to be one, I just couldn't find it.
|
|
|
Unless someone is holding back, the mature algos are probably optimal by now. Most of the improvements now are device-specific tweaks. We'll probably have to wait for Pascal and sm5.3 (and cuda 8?) for any major improvement opportunities in software. Hopefully profits will be higher by then.
|
|
|
![](https://ip.bitcointalk.org/?u=https%3A%2F%2Fi.imgur.com%2FswnDqDn.png&t=663&c=VekImppzRCCcCA) Pending dropped ~0.05 BTC but the balance was not incremented. What's happening? Orphans?
|
|
|
It looks like your profit estimates are way off. They are approximately half of the other guy's. This problem started around midnight last night on all algos and is plainly visible in the pool estimate graphs.
Thanks, we'll look into it! Seems some algos are and some aren't... Estimates are still off. I compared some coins with the other clone and you're both reporting the same block size and difficulty, but your profit estimates are 60% to 75% of the competition's on most algos. I'm convinced the problem is with zpool because the profit estimates for all algos took a sudden drop at the same time while the other pool's estimates continued the same trend. Looks like they should be sorted out. Part of the problem was with the c-cex integration, so I've disabled that for the time being. I was just noticing the profit estimates are now in line, thanks.
|
|
|
Not sure if this multipool is still up and running, but whoever runs the pool should probably read this... Your Litecoin client's node is: 151.80.206.101:9333. Port 80 of that IP address redirects to a multipool which calls itself YAAMP, so I presume that this is the OP for that pool. You are currently using an old version of the Litecoin client which is unsupported: /Satoshi:0.8.7.5/. I'd recommend updating your Litecoin client to version /Satoshi:0.10.2.2/. No one runs YAAMP because it shut down. The yaamp.com domain has been bought by Nicehash and this thread is now used mostly by a clone of yaamp called YIIMP. Your best bet would be to PM the OP to get things cleaned up.
|
|
|
It looks like your profit estimates are way off. They are approximately half of the other guy's. This problem started around midnight last night on all algos and is plainly visible in the pool estimate graphs. Edit: There is also a problem with Digibyte. Apparently it has hard-forked and zpool is on the wrong branch.
Thanks, we'll look into it! Seems some algos are and some aren't... I really appreciate your quick response to issues and hands-on approach to managing the pool. It's a refreshing change from the other clones. The growth of the pool has also been impressive. When you get a chance, between adding coins and dealing with forks, it would be nice if you could restore the current estimate in the pool status display. It is another one of those things that was removed from the original yaamp code by yiimp.