I have a similar setup to yours, with rigs of 1, 2 and 3 cards, and I had the same issue with voltage settings on the cards after I upgraded to Xubuntu 14.04 (I needed to upgrade from 12.04 since I wanted to do Eth mining). The ethminers I have don't have clock and voltage controls, so the cards mine at full tilt unless I downclock the core with the aticonfig --odsc command. I was able to undervolt all my R9 280X and 7970 cards by altering the BIOS with the VBE7 tool, and they all run at 1.0 or 0.987 V @ 1000 MHz core (the exception being the XFX 280X DD card, which won't go lower than 1.1 V). Since my rigs are 3 GPUs tops, I get away with a single 1000 W PSU for each rig; the 3-GPU rig draws roughly 550 W, so there is no issue with power.
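For reference, the downclocking looks roughly like this (a minimal sketch; the 900/1500 clock pair is just an example value, and OverDrive has to be unlocked once with --od-enable):

export DISPLAY=:0                            # aticonfig needs a running X session
sudo aticonfig --od-enable                   # unlock OverDrive
sudo aticonfig --adapter=0 --odsc=900,1500   # set core,memory clocks on adapter 0
sudo aticonfig --adapter=0 --odgc            # read the clocks back to confirm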
|
|
|
If you don't have it and would like to install TeamViewer in (X)ubuntu, here is how:
1. Download it: wget http://www.teamviewer.com/download/teamviewer_linux.deb
2. Install gdebi (GDebi can install local .deb packages with automatic dependency resolution; it automatically downloads and installs the required packages): sudo apt-get install gdebi
3. In the same directory where you downloaded the .deb file, just run: sudo gdebi teamviewer_linux.deb
|
|
|
Hi all, does anyone know how to get the highest hashing speed out of a GTX 970? I have an ASUS Strix 970 and it won't do more than ~17 MH/s, yet I keep seeing people claim this card can do ~22 MH/s. I tried this:
sudo nvidia-smi -ac 3505,1202
This puts the memory clock at max and the core at 1202 MHz, which should be sufficient for Eth. Upping the core frequency does nothing. Output from nvidia-smi -q:
==============NVSMI LOG==============
Timestamp : Mon Mar 7 18:39:08 2016
Driver Version : 352.79

Attached GPUs : 1
GPU 0000:01:00.0
    Product Name : GeForce GTX 970
    Product Brand : GeForce
    Display Mode : Enabled
    Display Active : Enabled
    Persistence Mode : Disabled
    Accounting Mode : Disabled
    Accounting Mode Buffer Size : 1920
    Driver Model
        Current : N/A
        Pending : N/A
    Serial Number : N/A
    GPU UUID : GPU-4189f772-a656-8a5b-3aa4-00d7d0ece521
    Minor Number : 0
    VBIOS Version : 84.04.1F.00.2A
    MultiGPU Board : No
    Board ID : 0x100
    Inforom Version
        Image Version : G001.0000.00.01
        OEM Object : N/A
        ECC Object : N/A
        Power Management Object : N/A
    GPU Operation Mode
        Current : N/A
        Pending : N/A
    PCI
        Bus : 0x01
        Device : 0x00
        Domain : 0x0000
        Device Id : 0x13C210DE
        Bus Id : 0000:01:00.0
        Sub System Id : 0x85091043
        GPU Link Info
            PCIe Generation
                Max : 3
                Current : 3
            Link Width
                Max : 16x
                Current : 16x
        Bridge Chip
            Type : N/A
            Firmware : N/A
        Replays since reset : 0
        Tx Throughput : 5000 KB/s
        Rx Throughput : 5000 KB/s
    Fan Speed : 41 %
    Performance State : P0
    Clocks Throttle Reasons
        Idle : Not Active
        Applications Clocks Setting : Active
        SW Power Cap : Not Active
        HW Slowdown : Not Active
        Unknown : Not Active
    FB Memory Usage
        Total : 4095 MiB
        Used : 1473 MiB
        Free : 2622 MiB
    BAR1 Memory Usage
        Total : 256 MiB
        Used : 6 MiB
        Free : 250 MiB
    Compute Mode : Default
    Utilization
        Gpu : 99 %
        Memory : 95 %
        Encoder : 0 %
        Decoder : 0 %
    Ecc Mode
        Current : N/A
        Pending : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
        Aggregate
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
    Retired Pages
        Single Bit ECC : N/A
        Double Bit ECC : N/A
        Pending : N/A
    Temperature
        GPU Current Temp : 67 C
        GPU Shutdown Temp : 96 C
        GPU Slowdown Temp : 91 C
    Power Readings
        Power Management : Supported
        Power Draw : 136.23 W
        Power Limit : 163.46 W
        Default Power Limit : 163.46 W
        Enforced Power Limit : 163.46 W
        Min Power Limit : 100.00 W
        Max Power Limit : 196.15 W
    Clocks
        Graphics : 1202 MHz
        SM : 1202 MHz
        Memory : 3505 MHz
    Applications Clocks
        Graphics : 1202 MHz
        Memory : 3505 MHz
    Default Applications Clocks
        Graphics : 1050 MHz
        Memory : 3505 MHz
    Max Clocks
        Graphics : 1392 MHz
        SM : 1392 MHz
        Memory : 3505 MHz
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A
    Processes
        Process ID : 1208
            Type : G
            Name : /usr/bin/X
            Used GPU Memory : 66 MiB
        Process ID : 2247
            Type : C
            Name : /home/bobben/qtminer/./qtminer
            Used GPU Memory : 1385 MiB
        Process ID : 4566
            Type : G
            Name : nvidia-settings
            Used GPU Memory : 3 MiB
Any help on this is much appreciated.
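For reference, here are the related nvidia-smi commands (a sketch; valid clock pairs should be taken from the card's own supported list):

nvidia-smi -q -d SUPPORTED_CLOCKS      # list the valid memory,graphics pairs
sudo nvidia-smi -pm 1                  # persistence mode, so the settings stick
sudo nvidia-smi -ac 3505,1202          # application clocks: memory,graphics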
|
|
|
SOLVED. And the solution was really simple after all. Here is a description of what I did to get it to work. Sorry if it is a bit haphazard; most of these steps are copied and pasted from other people who have had similar problems.

I first removed the current driver (it had been installed via Settings - Additional Drivers):
sudo apt-get purge "fglrx.*"

I then downloaded the latest driver for Ubuntu 14.04 from the AMD website. This is a zip file; mine was radeon-crimson-15.12-15.302-151217a-297685e.zip. I unzipped it and ran the installer script, telling it to build packages for Ubuntu/Trusty:
sudo ./amd-driver-installer-15.302-x86.x86_64.run

After the driver was installed I ran the initialization commands:
sudo aticonfig --lsa
sudo aticonfig --adapter=all --initial -f

The gpu-manager was destroying my /etc/X11/xorg.conf file, so I edited /etc/init/gpu-manager.conf and commented out the first lines:
#start on (starting lightdm
#    or starting kdm
#    or starting xdm
#    or starting lxdm)
task
exec gpu-manager --log /var/log/gpu-manager.log
Then I rebooted. Then:
sudo aticonfig --adapter=all --odgt
And I was able to see the temp on all cards:
Adapter 0 - AMD Radeon R9 200 Series
  Sensor 0: Temperature - 29.00 C
Adapter 1 - AMD Radeon R9 200 Series
  Sensor 0: Temperature - 30.00 C
Adapter 2 - AMD Radeon R9 200 Series
  Sensor 0: Temperature - 24.00 C
So now I am able to mine and monitor the temps of my GPUs. I hope this can help somebody else having a similar problem.
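To keep an eye on the temps continuously while mining, something as simple as this does the job (a sketch using the same aticonfig query):

watch -n 10 'DISPLAY=:0 aticonfig --adapter=all --odgt'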
|
|
|
I agree with Bathrobehero. It is a waste trying to CPU-mine Ethereum now. To give you some perspective: I tested, and my Core 2 Duo processor @ 3 GHz does 87 kH/s while my 280X GPU does 20 MH/s. That is a ratio of 20,000 / 87 ≈ 230, so the amount of work the GPU does in one week would take the CPU about 230 weeks (roughly 4.5 years), going by these hashrates...
|
|
|
This is my build command:
./autogen.sh
make clean
./configure CFLAGS="-march=native -Ofast" CXXFLAGS=$CFLAGS --with-crypto --with-curl
make
and cat /proc/cpuinfo | grep model gives:
model name : Intel(R) Core(TM) i5-4570S CPU @ 2.90GHz
I tried both of these pools:
neoscrypt.eu.nicehash.com:3341
hashpower.co:4233
It works for me. What can I do? Whoops... I figured it out: I had an old version of cpuminer in a folder in my $PATH. Once I removed that, my script finally invoked the correct cpuminer version, and it works as I hoped! So the next beer is on me!
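If anyone else hits the same $PATH problem, here is a quick way to spot a stale binary shadowing the new one (sketch):

type -a cpuminer    # lists every cpuminer the shell can see, in PATH order
hash -r             # clears bash's cached command locations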
|
|
|
Hi joblo, thanks for your continuing efforts with the miner. It seems like neoscrypt mining is broken: the 3.1 version core dumps with -a neoscrypt.
4020 Segmentation fault (core dumped) cpuminer -t 1 -a neoscrypt --api-bind 127.0.0.1:4044 $SITE1
I tried to recompile the code with the -g switch so I could have gdb locate where it crashes, but with -g the compile fails as well (this is on an i5 Haswell with Ubuntu 14.04.3):
gcc -std=gnu99 -DHAVE_CONFIG_H -I. -Iyes/include -Iyes/include -fno-strict-aliasing -I. -Iyes/include -Iyes/include -Wno-pointer-sign -Wno-pointer-to-int-cast -march=native -g -Iyes/include -Iyes/include -MT algo/groestl/sse2/cpuminer-grso-asm.o -MD -MP -MF algo/groestl/sse2/.deps/cpuminer-grso-asm.Tpo -c -o algo/groestl/sse2/cpuminer-grso-asm.o `test -f 'algo/groestl/sse2/grso-asm.c' || echo './'`algo/groestl/sse2/grso-asm.c
algo/groestl/sse2/grso-asm.c: In function ‘grsoP1024ASM’:
algo/groestl/sse2/grso-asm.c:6:3: error: ‘asm’ operand has impossible constraints
   asm (
   ^
make[2]: *** [algo/groestl/sse2/cpuminer-grso-asm.o] Error 1
EDIT: I managed to eke out the location where it apparently crashes, without needing a -g build:
0x00000000004be530 in absorbBlockBlake2Safe ()
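For reference, one way to get a usable backtrace without tripping the 'impossible constraints' error might be to keep optimization on alongside -g (a sketch, untested on this exact tree; that asm block seems to need the registers the optimizer frees up):

./configure CFLAGS="-march=native -O1 -g" CXXFLAGS="-march=native -O1 -g" --with-crypto --with-curl
make clean && make
ulimit -c unlimited                                   # allow core files in this shell
./cpuminer -t 1 -a neoscrypt --api-bind 127.0.0.1:4044 $SITE1
gdb ./cpuminer core                                   # then 'bt' for the backtrace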
|
|
|
You can mine Ether with an AMD 280X or 380(X) with 4 GB of memory. I use AMD 280X and GTX 970 (Strix from ASUS) cards. The GTX cards are much noisier and the power draw is about the same as the 280X, but the Nvidia drivers are more stable than the AMD drivers.
But be prepared to babysit your rigs, as ethminers are unstable. If interested, look for a miner that has stratum built in; it improves hashrate and stability. I use Xubuntu 14.04 and it is crap compared to Xubuntu 12.04 since the AMD drivers are broken. I wish I could run eth* on 12.04 :-( but it doesn't compile out of the box... Windows is out as well due to its built-in instability, long install time and licence cost. At least if you have newer motherboards, Win 7 is out as it may not recognize newer NICs. I have heard Win 8.x is better at recognizing HW and installing the correct drivers for it, but YMMV.
I would buy 380s if I knew how to undervolt them. Currently I am running my Gigabytes, ASUSes and Sapphire Vapor-Xs at 0.987 V @ 1000 MHz core. They have been stable through more than two years of constant mining of various coins. The XFXs I cannot undervolt lower than 1.1 V, grrr.
Hashrates for ETH: 280X ~20 MH/s @ 1000/1500, GTX 970 ~16 MH/s @ stock clocks.
|
|
|
It will probably help temperatures if you reapply thermal paste. The Gigabyte WF cards are sympathetic in that you only have to remove 4 screws. Always be careful when you remove the cooler and fan assembly, and consider whether this operation voids any warranty. For thermal paste I have had good experience with GC-Extreme. The temperature difference before and after can be 10 degrees or more.
|
|
|
Hello, I am trying to connect to your pool, but get the following error message:
[OPENCL]:Found suitable OpenCL device [Tahiti] with 3106930688 bytes of GPU memory
✘ 15:09:51|ethminer Failed to submit hashrate.
✘ 15:09:51|ethminer Dynamic exception type: jsonrpc::JsonRpcException
std::exception::what: Exception -32700 : JSON_PARSE_ERROR: The JSON-Object is not JSON-Valid: Database Error
Command line:
ethminer -F http://pool.eth.pp.ua/?miner=20@0x2e13f3b4b2976087c7f58ada9af378c5bebed977@rig4 -G --opencl-device 0
ethminer --version gives:
ethminer version 1.0.0
Build: Linux/g++/int/Release
It works fine on other pools.
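A quick way to see the raw (non-JSON) reply the pool sends back is to issue the same getWork call ethminer uses (a sketch; eth_getWork is the standard Ethereum JSON-RPC method, and the URL is the one from my command line):

curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getWork","params":[],"id":1}' \
  'http://pool.eth.pp.ua/?miner=20@0x2e13f3b4b2976087c7f58ada9af378c5bebed977@rig4'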
|
|
|
Folks, this has probably been up for discussion several times before, as is shown by the search list returned by Google. I recently upgraded one of my rigs from Xubuntu 12.04 to 14.04. I installed the fglrx-updates driver from the Settings -> Additional Drivers menu, then ran the usual initialization to get the software to recognize my cards (one R9 280X, one 7970):
>sudo aticonfig --lsa
>sudo aticonfig --adapter=all --initial -f
* 0. 01:00.0 AMD Radeon HD 7900 Series
  1. 05:00.0 AMD Radeon HD 7900 Series
* - Default adapter
Uninitialised file found, configuring.
Using /etc/X11/xorg.conf
Saving back-up to /etc/X11/xorg.conf.fglrx-2
>sudo aticonfig --adapter=all --odgt
Adapter 0 - AMD Radeon HD 7900 Series
  Sensor 0: Temperature - 29.00 C
ERROR - Get temperature failed for Adapter 1 - AMD Radeon HD 7900 Series
This worked fine under Xubuntu 12.04; in fact, I reverted back to my 12.04 USB stick to verify it. That version even recognizes the rig as having one 280X card and one 7970. Any suggestions as to what to try to get 14.04 to recognize both my cards correctly?
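In case it helps to isolate things, the sensor query can be run against the failing adapter alone; the usual prerequisite for per-card reads is that each GPU has its own screen section in xorg.conf, which --initial -f is supposed to write (a sketch):

export DISPLAY=:0
sudo aticonfig --adapter=1 --odgt    # query just the card that errors out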
|
|
|
./configure CFLAGS="-march=native -Ofast" CXXFLAGS=$CFLAGS --with-crypto --with-curl ...
Checking CPU capatibility...
   Intel(R) Core(TM) i5-4570S CPU @ 2.90GHz
   AES_NI: Yes, start mining with AES_NI optimizations...
[2016-01-26 18:53:10] Starting Stratum on stratum+tcp://hashpower.co:3533
[2016-01-26 18:53:10] 4 miner threads started, using 'x11' algorithm.
[2016-01-26 18:53:10] Stratum difficulty set to 0.016
[2016-01-26 18:53:10] hashpower.co:3533 x11 block 108404
....
[2016-01-26 18:56:41] CPU #1: 120.05 kH/s
[2016-01-26 18:56:41] CPU #0: 108.78 kH/s
[2016-01-26 18:56:41] CPU #3: 122.82 kH/s
[2016-01-26 18:56:41] CPU #2: 124.79 kH/s
[2016-01-26 18:56:56] CPU #0: 111.53 kH/s
[2016-01-26 18:56:56] accepted: 2/2 (100.00%), 479.18 kH/s yes!
[2016-01-26 18:57:19] CPU #3: 124.94 kH/s
[2016-01-26 18:57:19] accepted: 3/3 (100.00%), 481.30 kH/s yes!
[2016-01-26 18:57:30] CPU #3: 126.10 kH/s
[2016-01-26 18:57:30] accepted: 4/4 (100.00%), 482.45 kH/s yes!
|
|
|
The cpu check fails on this computer and I suspect cpuid has been disabled in the BIOS. I will check when I get a chance. I downloaded the code you sent. I had to do the following mods to get it to compile:
1. Commented out #define AES_NI and added #ifdef AES_NI to effectively comment out the code in algo/cryptonight/cryptonight-aesni.c.
2. Added #undef AES_NI in algo/aesni/echo512/hash.c since miner.h is not included.
3. Added #ifdef AES_NI_ON in a lot of places in algo/aesni/groestl/groestl-intr-aes.h.
After these changes, I could leave both #define AES_NI and #define AES_NI_ON 1 in miner.h and it compiles/runs fine. Hashrates: with cpu_sse2 = false I get
[2016-01-24 13:05:53] CPU #0: 25.50 kH/s
[2016-01-24 13:05:53] CPU #1: 25.50 kH/s
Set to true:
[2016-01-24 13:06:36] CPU #0: 43.38 kH/s
[2016-01-24 13:06:36] CPU #1: 43.38 kH/s
EDIT: You might want to move the calls to has_aes_ni() and has_sse2() to the top of main() and make the boolean flags global, so that these functions are not called on every pass of the main loop. No need to call them more than once.

I've discovered there were already some cpuid checks in the code; they will all have to be consolidated. It's not a priority because the check is not expensive and is only really done at startup. Your x86_64/SSE2 performance ratio is 25.0 / 43.38 = .58; my 4790K is 266 / 472 = .56. That's pretty close. In effect your CPU's SSE2 performance is just as good as mine. My only advantage is the addition of AES_NI and possibly higher power efficiency. There's life in those old CPUs yet. I'll take some time to digest the rest of your report, I just woke up.

I think I have something that works. I just have to bundle it up and build another debug load for you. It should run straight out of the box on your CPU. But things aren't perfect; there are two problems I had to work around. The #define AES_NI in miner.h is not being seen in any of the files that reference AES_NI, so I had to add #define AES_NI in every file with any reference to AES_NI code. The other problem is that has_sse2() isn't working, so I hard-coded it. I tested it 4 ways:
1. AES_NI defined and march=native: my normal environment.
2. AES_NI defined and march=core2: this fails to compile, as expected.
3. AES_NI not defined and march=native: this compiles but performs at SSE2 rates, as expected.
4. AES_NI not defined and march=core2: this simulates your environment and works at SSE2 rates, as expected.
The only thing now is to confirm that it works with the default build instructions on your machine and performs at SSE2 levels. Follow-up items: investigate why the #define in miner.h is not seen; investigate the has_sse2 failure; investigate ways to define AES_NI from the configure command line. It's currently set up to simulate a CPU without AES_NI and with SSE2. Edit: PM sent.

Hi joblo, the package you sent almost built out of the box, but only after I modified build.sh: I removed the -p switch and added -march=native. With your original build.sh, the following two files threw compiler errors:
1. algo/sse2/groestl/grso-asm.c <- will not accept the -p switch. Perhaps the function call uses too many registers when in profiling mode.
2. algo/aes_ni/echo512/hash.c <- tons of errors when not building with -march=native.
After fixing build.sh, the produced executable ran fine. I also built on one of my Haswell i5s, but mining X11 shares never got accepted... EDIT: It does accept shares on my i5, but the hashrate is much lower than with the "standard" version (cpuminer-opt-3.0.2) - 84 vs 128 kH/s per thread.
|
|
|
update: the miner is now released and can be found here: https://github.com/djm34/ccminer-sp-neoscrypt/releases

Hello, I am about to release a new version of ccminer supporting neoscrypt. It is around 25% faster than the sp/pallas version compiled with CUDA 6.5, and around 100% faster when compiled with CUDA 7.5. This is a co-joint effort between myself and NiceHash, which will contribute to the pledge as well. This means that it will become publicly available once the donations reach an amount of 2.5 BTC. Donations have to be made exclusively to this BTC address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw (careful, it is a new address, not the old one; one should already consider that 0.1 BTC has been donated). The goal for this pledge is 2.5 BTC (NiceHash contribution included). The pledge starts now (obviously) and should end on January the 24th (i.e. in two and a half weeks). If the pledge is successful, both the Windows and Linux (the source) versions of ccminer will become available. If the 2.5 BTC target is met, the miners will be released through NiceHash.

Current speed of the miner: the hashrate given in the image is for ccminer compiled with CUDA 6.5. I am also planning to retune the miner with CUDA 7.5; at the moment the hashrate for the 750 Ti is mostly unchanged, while the 980s lose around 40 kH/s. Thanks in advance for your participation. Here is the link to the blockchain explorer where you can check the advancement of the pledge: https://blockchain.info/address/16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw (+0.1 BTC). PS: this thread is self-moderated (assuming I have time and am not in a troll mood), off-topic trolling won't be permitted.

Thanks djm34. Up from 500 to 590 on my 970 (from the sp-1.5.74 release), CUDA 6.5:
[2016-01-24 19:39:32] accepted: 70/70 (100.00%), 588.91 kH/s yes!
[2016-01-24 19:39:51] GPU #0: GeForce GTX 970, 594 (T= 71C F= 49% C=0/0)
[2016-01-24 19:39:51] accepted: 71/71 (100.00%), 589.15 kH/s yes!
[2016-01-24 19:40:03] GPU #0: GeForce GTX 970, 594 (T= 71C F= 49% C=0/0)
[2016-01-24 19:40:03] accepted: 72/72 (100.00%), 589.23 kH/s yes!
Also, both fan % and temp went up, as expected. Sent you some coin, txid 74887611da7963c0cbac2399531d2d8d41221fd009f4482c6a7d6c4f34db0378-000
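For anyone trying the release, a typical launch line for the neoscrypt build looks like this (a sketch; the pool URL and wallet are placeholders):

./ccminer -a neoscrypt -o stratum+tcp://pool.example.com:3341 -u <wallet.worker> -p x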
|
|
|
Hi Joblo. An update on cpuminer 3.0.2 on the Core 2 Duo. I managed to get it going - at least mining X11.
Checking CPU capatibility...
   Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
   AES_NI: No. SSE2: No, start mining without optimizations...
[2016-01-23 17:25:40] Starting Stratum on stratum+tcp://hashpower.co:3533
[2016-01-23 17:25:40] 2 miner threads started, using 'x11' algorithm.
[2016-01-23 17:25:41] Stratum difficulty set to 0.016
[2016-01-23 17:25:41] hashpower.co:3533 x11 block 652575
[2016-01-23 17:25:47] CPU #1: 43.13 kH/s
[2016-01-23 17:25:47] CPU #0: 43.13 kH/s
[2016-01-23 17:25:54] hashpower.co:3533 x11 block 652576
[2016-01-23 17:25:54] CPU #0: 43.07 kH/s
[2016-01-23 17:25:54] CPU #1: 43.07 kH/s
As you can see the hashrate is not great, and it is even worse (actually half) if I don't override the return value of the has_sse2() function, which doesn't work in this case. Here is a list of the changes I made to get it working:
../cpuminer-opt-3.0.2/algo/aes_ni/groestl/groestl-intr-aes.h <- added #ifdef HAVE_AESNI to exclude the aesni code from being compiled.
../cpuminer-opt-3.0.2/algo/aes_ni/echo512/hash.c <- removed the inline keyword. Also added #undef AES_NI.
../cpuminer-opt-3.0.2/algo/ <- removed the inline keyword in qubit-aes.c, quark-sse2.c and quark-aes.c.
../cpuminer-opt-3.0.2/cpu-miner.c line 1947 <- forced cpu_sse2 = true.
In my opinion it should be possible to add a switch when running configure to disable aesni?

That's great news after another difficult day trying to get Windows working. The generic kernel is pretty slow, as shown in the performance charts; support is more about compatibility than performance. The SSE2 kernels are looking pretty good, relatively speaking. It seems the SSE2 check fails on your Core 2, but when you force it the SSE2 kernel runs fine. Is that correct? I'll restore all the compiler directives; I had removed them, LOL, when I was hacking and slashing the code and focussed only on the top tier. I'll follow up on your findings, thanks a lot.

Edit: Another dramatic turn of events, this time a positive one! Reenabling the AES_NI defines (and the companion OPTIMIZE_SSE2) has unlocked a ton more performance that had been hidden by the chainsaw approach. X11 is up to 865 from 720 on my i7-4790K, but things aren't perfect: I see some rejects. Some of the exposed code may not be perfect and may require a scalpel to cut out the cancer. I've got a lot of work to do but I'm back on track. BTW thanks for the excellent report - very clear, complete, and precise. I followed it like a script.

Edit: It looks like the dramatic speed increase was due to an error on my part. It seems the #define AES_NI I put in miner.h isn't being seen; I had to #define it in every file that uses it. Clumsy, but it works. I'm building a debug tarball so you can do some better testing. It will require you to make some small code changes. I will send a PM with a link to the file. Here's how it works: the default is to enable and respect all AES_NI checks. Edit: changed the default configuration for your convenience; it will disable AES_NI because we already know your CPU can't handle it. To force-disable AES_NI you have to do two things before compiling:
- in cpu-miner.c:1945 hard-code cpu_aesni to false
- remove #define AES_NI from all algos (grep -r AES_NI to find them all, sorry)
To force-disable SSE2:
- in addition to the steps above, hard-code cpu_sse2 to false
These changes affect kernel selection only; the start-up capability check is still performed but not enforced. This should solve the compile problem you had and allow you to do more testing. The ugly workarounds are for the debug load only; I will investigate a more user-friendly implementation before release. Any suggestions are welcome, including why the algos don't pick up the #define in miner.h. The goals of the further testing:
1. confirm the capability of your Core 2 CPU
2. determine if cpuminer-opt can correctly identify your CPU's capability level and select the appropriate kernel
3. compare SSE2 vs x86_64 performance.
The first two are pretty obvious. The third will allow me to extrapolate an estimate of the hash deficit of SSE2 vs AES_NI and split it into a HW component and a software component: how much of the loss is due purely to the lack of AES_NI, and how much is due to other CPU optimizations in the latest generation.

Thanks for the great work. The cpu check fails on this computer and I suspect cpuid has been disabled in the BIOS. I will check when I get a chance. I downloaded the code you sent. I had to do the following mods to get it to compile:
1. Commented out #define AES_NI and added #ifdef AES_NI to effectively comment out the code in algo/cryptonight/cryptonight-aesni.c.
2. Added #undef AES_NI in algo/aesni/echo512/hash.c since miner.h is not included.
3. Added #ifdef AES_NI_ON in a lot of places in algo/aesni/groestl/groestl-intr-aes.h.
After these changes, I could leave both #define AES_NI and #define AES_NI_ON 1 in miner.h and it compiles/runs fine. Hashrates: with cpu_sse2 = false I get
[2016-01-24 13:05:53] CPU #0: 25.50 kH/s
[2016-01-24 13:05:53] CPU #1: 25.50 kH/s
Set to true:
[2016-01-24 13:06:36] CPU #0: 43.38 kH/s
[2016-01-24 13:06:36] CPU #1: 43.38 kH/s
EDIT: You might want to move the calls to has_aes_ni() and has_sse2() to the top of main() and make the boolean flags global, so that these functions are not called on every pass of the main loop. No need to call them more than once.
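On the configure-switch idea: until configure.ac grows a real --disable-aesni option, roughly the same effect could be had by keying all the guards off a single macro and setting it from the configure line. A sketch, assuming the sources were reworked to test one hypothetical NO_AES_NI guard:

./configure CFLAGS="-march=core2 -O2 -DNO_AES_NI" CXXFLAGS="-march=core2 -O2 -DNO_AES_NI" --with-crypto --with-curl   # NO_AES_NI is a hypothetical guard macro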
|
|
|
Hi Joblo, thank you for this initiative. I downloaded the 3.0.1 version. It compiles (and runs) on my machines with i5 processors (Ubuntu 12.04), but it fails to compile on my Intel Core 2 Duo machine (Ubuntu 14.04), as well as on my old Athlon single-core machine (Ubuntu 12.04). Here is the build log (from ./build.sh) from the Core 2 Duo if interested: https://www.dropbox.com/s/mbxje7fdntxgrkk/cpuminer_build.log?dl=0

Hi bobben, thank you for your interest. You are the first person to report with a CPU without AES_NI support, and I only have AES_NI; as a result cpuminer-opt on older CPUs is untested. I hope we can work together to get this working. I will download and look at your compile, but I'm busy right now preparing another release. Stay tuned.

I took a look at your build file and I think I know what the problem is. The package contains code for CPUs with AES_NI, but your CPU can't handle it. The miner can handle this at run time, but compiling is the issue: the compiler option -march=native means to build for your CPU, and since your CPU can't run AES_NI instructions the compiler refuses to compile them. If we can get it compiled, I think it will run fine, because cpuminer-opt checks the CPU architecture in order to select code for the correct CPU architecture. I have an idea for a workaround: we can fool the compiler into thinking it is building for an AES_NI CPU while in fact running it on your Core 2. Change the configure option "-march=native" to "-march=corei7-avx" and see if it compiles. If successful, try to run it and note the startup messages regarding the CPU capabilities. I hope this works. Edit: only 64-bit CPUs are supported.

I changed march as you suggested. I got a bit further, but still errors. I commented out the content of the functions in algo/sse2/groestl/grso-asm.c as it was throwing errors. Then I created a dummy .c file, which I compiled and added to the link statement, with the following content, as these functions were missing:
#include <stdio.h>
void quarkhash_aes()  { fprintf(stderr, "This should not happen\n"); }
void quarkhash_sse2() { fprintf(stderr, "This neither\n"); }
void qubithash_aes()  { fprintf(stderr, "Nor this\n"); }
I then got the program linked, but it core dumps when I start it:
Checking CPU capatibility...
   Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
   AES_NI: No. SSE2: No, start mining without optimizations...
[2016-01-22 17:41:29] Starting Stratum on stratum+tcp://hashpower.co:4733
[2016-01-22 17:41:29] 2 miner threads started, using 'qubit' algorithm.
/home/arve/miner_cpu_qubit: line 11: 3379 Illegal instruction (core dumped) cpuminer -t $thr -a qubit -o stratum+tcp://hashpower.co:4733 -u $ADDR_BTC
According to /proc/cpuinfo the Core 2 Duo has sse2, so I am surprised at the program's conclusion. Here is the info from gdb after I recompiled with -g:
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `cpuminer -t 2 -a qubit -o stratum+tcp://hashpower.co:4733 -u 19eSQxSAL9PuTF8gfW'.
Program terminated with signal SIGILL, Illegal instruction.
#0 0x000000000064595d in jsonp_strtod (strbuffer=0x7f69bae93ca8, out=0x7f69bae93a88) at strconv.c:69
69 value = strtod(strbuffer->value, &end);
(gdb)

Thanks a lot for the data. It's clear the support for older CPUs just isn't there yet. That's what happens when one can't do his own testing; I need to change that. I have a Core 2 Quad but it runs Windows, so I guess I'll have to get Windows working sooner rather than later. In the meantime, since you seem comfortable in the code, I have a request to gather more data with some instrumented code. In file cpu-miner.c, function miner_thread, there is a big switch statement casing on each algo. You will notice how I select the proper kernel. Simply comment out a few lines to force either the sse2 version or the x64 version to run regardless of the detected CPU technology. If you could do this for several algos, it would help identify any code contamination. The sausage factory in action - it ain't pretty and is better done behind closed doors. Edit: try forcing all three kernels to run to see at what level your Core 2 fails. Thanks.

I deleted all the source files, after suspecting that make clean doesn't clean out all the object files. Then I did a fresh untar of version 3.0.2. This time it compiled and linked cleanly, but it still core dumps when running. I tried changing miner_thread to force a particular CPU architecture, as you suggested:
bool cpu_aesni = has_aes_ni();
bool cpu_sse2 = has_sse2();
cpu_aesni = false; cpu_sse2 = false;
then
cpu_aesni = false; cpu_sse2 = true;
then
cpu_aesni = true; cpu_sse2 = true;
but none worked. I then tried to recompile with the -g switch, but I got new errors...
./algo/sse2/groestl/grso-asm.c: In function ‘grsoP1024ASM’:
./algo/sse2/groestl/grso-asm.c:6:3: error: ‘asm’ operand has impossible constraints
   asm (
   ^
Too many problems with the miner on this CPU... It might take some effort to fix this. You may want to put this lower on your priority list.
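Worth noting: the SIGILL lands in jsonp_strtod (jansson's number parsing), i.e. in ordinary C compiled with -march=corei7-avx, not in one of the selectable kernels, so the run-time kernel switch can't help. Rebuilding everything for the CPU actually present would at least rule that out (a sketch):

./configure CFLAGS="-march=core2 -O2" CXXFLAGS="-march=core2 -O2" --with-crypto --with-curl
make clean && make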
|
|
|
Just for fun I tried to compile miner version 3.0.2 on my old Athlon, setting march=corei7-avx in the configure. And it compiled without a hitch.
Then I tried to run it, but it core dumps:
Checking CPU capatibility...
   AMD Athlon(tm) 64 Processor 3200+
   AES_NI: No. SSE2: No, start mining without optimizations...
[2016-01-22 20:15:18] Starting Stratum on stratum+tcp://hashpower.co:4733
[2016-01-22 20:15:18] 1 miner threads started, using 'qubit' algorithm.
/home/bobben/miner_cpu_qubit: line 12: 5354 Illegal instruction (core dumped) cpuminer -t $thr -a qubit -o stratum+tcp://hashpower.co:4733 -u $ADDR_BTC
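Same pattern as on the Core 2: a binary built for corei7-avx contains AVX instructions the Athlon 64 can never execute, so it crashes in generic code before any kernel selection happens. The flags the CPU actually has can be checked before choosing a -march (sketch):

grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse2|ssse3|sse4_2|aes|avx)$'
# an Athlon 64 should show only sse2; -march=k8 (or plain -march=x86-64) is a safer target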
|
|
|