Stubo
Member
Offline
Activity: 224
Merit: 13
|
|
December 01, 2017, 10:06:20 AM |
|
Tried to run timetravel (BTX) and got an "Illegal instruction" error:
ccminer -a timetravel -o stratum+tcp://btx.suprnova.cc:3629 -u DangerD.Miner1 -p zzz
Was it compiled from git or from a downloaded release? (From git is needed because the latest changes are not in the latest release.) Yes, the source (included in the zip) was downloaded from git just a couple of days after the release. I never tried it with BTX; I was testing it with MONA and it worked fine. If it is not working on your rig, I would suggest that you just recompile it.
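For what it's worth, an "Illegal instruction" crash from a prebuilt ccminer usually means the binary was built with CPU-specific optimizations (tpruvot's tree uses -march=native) on a machine whose CPU supports instructions yours does not; rebuilding on the rig itself avoids that. A minimal rebuild sketch, assuming the ccminer source tree from the zip and an installed CUDA toolkit:
cd ccminer
make clean                   # drop objects built for the other CPU
./autogen.sh && ./configure  # the tree also ships a build.sh wrapper
make -j"$(nproc)"            # rebuild with flags matched to this CPU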
|
|
|
|
alexander1234
Newbie
Offline
Activity: 5
Merit: 0
|
|
December 01, 2017, 11:08:38 AM |
|
Thank you for the great product!
Could you please assist with this problem? I ran v0019-1.3 with no issues: 10x 1080, all cards stably giving 550 sol on ZEC with zm. Then I installed a fresh v0019-1.4 and applied my OC. Now the hashrate and power consumption bounce around on several of the 1080s, and the overall hashrate has decreased significantly. I have rechecked the configs, tried turning off temp control, and played with the PowerMizer setting, but with no result. Is this a known issue? Maybe it is related to the driver update from 384 to 387? Could you please point me to the resolution if there is one? Thank you very much!
|
|
|
|
ruepa
Newbie
Offline
Activity: 10
Merit: 0
|
|
December 01, 2017, 01:23:01 PM |
|
If somebody could help: I've built a mining rig with three Nvidia 1070 Ti AERO cards and an Asus Prime Z270-P motherboard. I've already enabled Above 4G Decoding and reset the security keys.
Initially the system was booting fine. I set up the cards with CC 175 and MC 1200, a power limit of 115W, and fan at 80. In 1bash I enabled the miner and it was mining around 480 sol/s per card at a temperature around 70°C. I left it running for about 5 hrs before turning off the monitor and going to work; at the time temperatures were stable at 68°C with occasional spikes to 70°C.
According to the pool I was connected to, the machine ran for 12 hrs, and just before it went offline there was a spike in the sol/s. When I got home the rig was sitting in the BIOS. Now when I boot into Linux, if I leave the terminal open it crashes and reboots. If I close the terminal I can use the system: I can go on YouTube and watch videos in 1080p, and I can see the GPUs detected in the Nvidia app. If I try to run the mining terminal it crashes straight away; a terminal left open without running any command also crashes it.
First of all, do you think it is possible to have burned out the GPUs with those settings after only around 20 hrs of work in total?
What could be causing this? I've already tried changing risers, and reformatting the USB and running it again.
|
|
|
|
Reinars
Newbie
Offline
Activity: 13
Merit: 0
|
|
December 01, 2017, 05:20:58 PM |
|
Can we get BTG built into the next version? I would like to try that out.
|
|
|
|
MentalNomad
Member
Offline
Activity: 83
Merit: 10
|
|
December 01, 2017, 05:58:11 PM |
|
Completely disagree. This is a dangerous way to overclock and could lead to catastrophic failure of your rig if a card dies on its own, and they do. It is discussed on the Nvidia dev forum, with some Python code that could be adapted to nvOC if anyone is interested: https://devtalk.nvidia.com/default/topic/769851/multi-nvidia-gpus-and-xorg-conf-how-to-account-for-pci-bus-busid-change-/
Hi Guys,
I discovered a serious and potentially dangerous flaw in the way nvOC handles overclocking and would like to make a suggestion for an improvement.
We really need overclocking tied to the specific PCIe slot (bus ID), not an index that changes every time your hardware changes.
For example, if you have a GTX 1080 Ti in slot 2 and a GTX 1060 in slot 3, and your 1080 Ti goes offline for some reason or you remove it, the 1080 Ti overclock is now applied to what the dumb index thinks is the next card, i.e. your GTX 1060, potentially sending it POOF.
We need to apply overclocking to the BUS ID:
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|  0  GeForce GTX 1070     Off  | 00000000:02:00.0 Off |                  N/A |
| 70%   56C    P2   152W / 151W |    652MiB /  8112MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
|  1  GeForce GTX 106...   Off  | 00000000:04:00.0 Off |                  N/A |
| 70%   61C    P2   120W / 120W |    592MiB /  6072MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
|  2  GeForce GTX 1070     Off  | 00000000:05:00.0 Off |                  N/A |
| 70%   52C    P2   118W / 120W |    614MiB /  8113MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
Nothing to fix at all oO ... You modified your RIG, you have to modify settings ... How is OC by slot going to fix the scenario where a person just moves cards around in a rig, as opposed to just removing one? Both scenarios are hardware changes, and common sense dictates that the user be aware of this potential because they went down the path of individual OC in the first place. It is not like they went there by mistake, right?
I think the concern is about when no changes are intentionally made. Example: I have 12 cards in a rig. One card dies completely, mining stops, WDOG restarts the rig. The rig comes back up, but the dead card is not recognized at all. GPU numbering is now different. Now some OC settings are wrong and may be applying power/fan/OC values inappropriately, perhaps making the rig unstable or putting more hardware at risk.
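One way to realize the bus-ID idea is to look up, at boot, which index each bus ID currently maps to and key the offsets on the bus ID. A rough sketch only, not nvOC code: the bus IDs and offsets are made-up examples, and it assumes nvidia-smi and nvidia-settings enumerate the GPUs in the same (PCI) order, as they do by default.
#!/bin/bash
# per-card OC keyed to the PCI bus id instead of the volatile GPU index
declare -A CORE_OFFSET=( ["00000000:02:00.0"]=175  ["00000000:04:00.0"]=100 )
declare -A MEM_OFFSET=(  ["00000000:02:00.0"]=1200 ["00000000:04:00.0"]=800 )
nvidia-smi --query-gpu=index,pci.bus_id --format=csv,noheader |
while IFS=', ' read -r idx bus; do
  core=${CORE_OFFSET[$bus]}; mem=${MEM_OFFSET[$bus]}
  [ -z "$core" ] && continue               # card not in the table: leave it at stock
  nvidia-settings -c :0 \
    -a "[gpu:$idx]/GPUGraphicsClockOffset[3]=$core" \
    -a "[gpu:$idx]/GPUMemoryTransferRateOffset[3]=$mem"
done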
|
|
|
|
joshuajones02
|
|
December 01, 2017, 06:32:46 PM |
|
Hello, I've been trying to search this thread but was unable to find a similar issue. Not sure if it's related, but when I add an additional GPU or remove one, my machine boots up to a blank screen with just a _ cursor. When I try to reboot after that, it says "cleared" (or "clear") with a number of files, but it will not boot into the Ubuntu GUI. I can access the terminals, but that doesn't help; I type startx in the terminal and it just errors. I should have taken photos of the screen; I can probably replicate the error if needed.
I have some extra cards for a new build that I wanted to test out, and I had some extra slots to run them in. I won't receive all the parts for a few more days, and I can't play with the cards without taking an entire system offline and having to clone the hard drive to get it to run again.
|
|
|
|
Temporel
|
|
December 01, 2017, 07:51:25 PM |
|
Those mining Neoscrypt with a GTX 1070: what miner are you using with nvOC?
TIA
|
|
|
|
Stubo
Member
Offline
Activity: 224
Merit: 13
|
|
December 01, 2017, 09:07:10 PM |
|
Hello, I've been trying to search this thread but was unable to find a similar issue. Not sure if it's related, but when I add an additional GPU or remove one, my machine boots up to a blank screen with just a _ cursor. When I try to reboot after that, it says "cleared" (or "clear") with a number of files, but it will not boot into the Ubuntu GUI. I can access the terminals, but that doesn't help; I type startx in the terminal and it just errors. I should have taken photos of the screen; I can probably replicate the error if needed.
I have some extra cards for a new build that I wanted to test out, and I had some extra slots to run them in. I won't receive all the parts for a few more days, and I can't play with the cards without taking an entire system offline and having to clone the hard drive to get it to run again.
Have you tried to access it remotely using SSH? I suspect that GPU0 is changing to a different GPU, and that is not the one you have your monitor connected to, so all you see is a blank screen. I have this same issue and I don't know if there is a way to force a particular GPU to be GPU0. Since I run headless anyway, it is really not much of a problem for me. Hope this helps.
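For reference, a minimal way in over the LAN (the IP address is just an example, and the openssh-server install is only needed if it is not already present in the image):
sudo apt-get install openssh-server    # on the rig, one time, if needed
ssh m1@192.168.1.50                    # from another machine; password is the usual miner1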
|
|
|
|
Stubo
Member
Offline
Activity: 224
Merit: 13
|
|
December 01, 2017, 09:41:19 PM |
|
Completely disagree. This is a dangerous way to overclock and could lead to catastrophic failure of your rig if a card dies on its own, and they do. It is discussed on the Nvidia dev forum, with some Python code that could be adapted to nvOC if anyone is interested: https://devtalk.nvidia.com/default/topic/769851/multi-nvidia-gpus-and-xorg-conf-how-to-account-for-pci-bus-busid-change-/
Hi Guys,
I discovered a serious and potentially dangerous flaw in the way nvOC handles overclocking and would like to make a suggestion for an improvement.
We really need overclocking tied to the specific PCIe slot (bus ID), not an index that changes every time your hardware changes.
For example, if you have a GTX 1080 Ti in slot 2 and a GTX 1060 in slot 3, and your 1080 Ti goes offline for some reason or you remove it, the 1080 Ti overclock is now applied to what the dumb index thinks is the next card, i.e. your GTX 1060, potentially sending it POOF.
We need to apply overclocking to the BUS ID:
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|  0  GeForce GTX 1070     Off  | 00000000:02:00.0 Off |                  N/A |
| 70%   56C    P2   152W / 151W |    652MiB /  8112MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
|  1  GeForce GTX 106...   Off  | 00000000:04:00.0 Off |                  N/A |
| 70%   61C    P2   120W / 120W |    592MiB /  6072MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
|  2  GeForce GTX 1070     Off  | 00000000:05:00.0 Off |                  N/A |
| 70%   52C    P2   118W / 120W |    614MiB /  8113MiB |      99%     Default |
+-------------------------------+----------------------+----------------------+
Nothing to fix at all oO ... You modified your RIG, you have to modify settings ... How is OC by slot going to fix the scenario where a person just moves cards around in a rig, as opposed to just removing one? Both scenarios are hardware changes, and common sense dictates that the user be aware of this potential because they went down the path of individual OC in the first place. It is not like they went there by mistake, right?
I think the concern is about when no changes are intentionally made. Example: I have 12 cards in a rig. One card dies completely, mining stops, WDOG restarts the rig. The rig comes back up, but the dead card is not recognized at all. GPU numbering is now different. Now some OC settings are wrong and may be applying power/fan/OC values inappropriately, perhaps making the rig unstable or putting more hardware at risk.
Well, I don't think you will put the HW at risk, but it could certainly screw up some OC settings. My understanding is that Nvidia builds their GPUs with several different fail-safe mechanisms to prevent this, as do the vendors who build and warranty them. Consider the scenario where we attempt to apply a PL that is too high. Here is an old 970 on my test rig whose limit is 220 watts, and I try to push it to 225:
m1@Testy:~$ # Overpower a GPU
m1@Testy:~$ nvidia-smi --query-gpu=name,pstate,temperature.gpu,fan.speed,utilization.gpu,power.draw,power.limit --format=csv
name, pstate, temperature.gpu, fan.speed [%], utilization.gpu [%], power.draw [W], power.limit [W]
GeForce GTX 970, P2, 49, 50 %, 100 %, 114.22 W, 115.00 W
m1@Testy:~$ # Find max power the card can handle
m1@Testy:~$ nvidia-smi -a | grep "Max Power"
        Max Power Limit             : 220.00 W
m1@Testy:~$ # Set the GPU power limit above this
m1@Testy:~$ sudo nvidia-smi -pl 225
Provided power limit 225.00 W is not a valid power limit which should be between 100.00 W and 220.00 W for GPU 00000000:01:00.0
Terminating early due to previous errors.
Has anybody seen a HW failure from this or is this just theoretical?
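Building on that, a small sketch (not from the thread) of how a script could clamp a requested power limit to what each card reports as valid before applying it; the requested value is an example:
REQ=225   # requested power limit in watts (example value)
nvidia-smi --query-gpu=index,power.min_limit,power.max_limit \
           --format=csv,noheader,nounits |
while IFS=', ' read -r idx pmin pmax; do
  pmin=${pmin%.*}; pmax=${pmax%.*}   # values come back as floats like 220.00
  pl=$REQ
  (( pl > pmax )) && pl=$pmax        # never ask for more than the card allows
  (( pl < pmin )) && pl=$pmin        # nor less than its minimum
  sudo nvidia-smi -i "$idx" -pl "$pl"
done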
|
|
|
|
poisonxa
|
|
December 02, 2017, 01:47:49 AM |
|
It's not going to happen. The GPU is designed to pull only the power it needs and nothing more, so there is no way to force-feed it more power; it gets to the point where it just won't accept it. Giving it a lot of power and letting it overheat is auto-kill, though (keep in mind: power doesn't kill HW, heat does). As for overclocking, you can't brick a card nowadays by over-overclocking it, because it will just crash the drivers, and internal failsafes will turn the card off or crash it if the clocks are too high.
|
|
|
|
infowire
Newbie
Offline
Activity: 96
Merit: 0
|
|
December 02, 2017, 07:21:07 PM |
|
Still having the last card at 26 MH/s; no matter what I do, it's always the last card. EVGA 1070. They are all the same cards and they do 31 MH/s, except the last one.
|
|
|
|
poisonxa
|
|
December 02, 2017, 07:27:19 PM |
|
Still having the last card at 26 MH/s; no matter what I do, it's always the last card. EVGA 1070. They are all the same cards and they do 31 MH/s, except the last one.
Did you try disabling mining mode and turning on Above 4G Decoding?
|
|
|
|
CryptAtomeTrader44
Full Member
Offline
Activity: 340
Merit: 103
It is easier to break an atom than partialities AE
|
|
December 02, 2017, 08:12:10 PM Last edit: December 03, 2017, 12:30:32 AM by CryptAtomeTrader44 |
|
This compiled miner don't mine on my rig and return an illegal instruction when a i mine neoscrypt. I tried to compile it but i have some errors on make make all-recursive make[1]: Entering directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot' Making all in compat make[2]: Entering directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot/compat' make[3]: Entering directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot/compat' make[3]: Nothing to be done for 'all-am'. make[3]: Leaving directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot/compat' make[2]: Leaving directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot/compat' make[2]: Entering directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot' g++ -DHAVE_CONFIG_H -I. -fopenmp -pthread -fno-strict-aliasing -I/usr/local/cuda/include -DUSE_WRAPNVML -O3 -march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16 -MT ccminer-bignum.o -MD -MP -MF .deps/ccminer-bignum.Tpo -c -o ccminer-bignum.o `test -f 'bignum.cpp' || echo './'`bignum.cpp In file included from bignum.cpp:8:0: bignum.hpp:63:24: error: invalid use of incomplete type ‘BIGNUM {aka struct bignum_st}’ class CBigNum : public BIGNUM ^ In file included from /usr/local/include/openssl/bn.h:32:0, from bignum.hpp:20, from bignum.cpp:8: /usr/local/include/openssl/ossl_typ.h:80:16: note: forward declaration of ‘BIGNUM {aka struct bignum_st}’ typedef struct bignum_st BIGNUM; ^ In file included from bignum.cpp:8:0: bignum.hpp: In constructor ‘CBigNum::CBigNum()’: bignum.hpp:68:21: error: ‘BN_init’ was not declared in this scope BN_init(this); ^ bignum.hpp: In copy constructor ‘CBigNum::CBigNum(const CBigNum&)’: bignum.hpp:73:21: error: ‘BN_init’ was not declared in this scope BN_init(this); ^ bignum.hpp:74:30: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘BIGNUM* BN_copy(BIGNUM*, const BIGNUM*)’ if (!BN_copy(this, &b)) ^ bignum.hpp:76:31: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘void BN_clear_free(BIGNUM*)’ BN_clear_free(this); ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator=(const CBigNum&)’: bignum.hpp:83:30: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘BIGNUM* BN_copy(BIGNUM*, const BIGNUM*)’ if (!BN_copy(this, &b)) ^ bignum.hpp: In destructor ‘CBigNum::~CBigNum()’: bignum.hpp:90:27: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘void BN_clear_free(BIGNUM*)’ BN_clear_free(this); ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(signed char)’: bignum.hpp:94:47: error: ‘BN_init’ was not declared in this scope CBigNum(signed char n) { BN_init(this); if (n >= 0) setulong(n); else setint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(short int)’: bignum.hpp:95:47: error: ‘BN_init’ was not declared in this scope CBigNum(short n) { BN_init(this); if (n >= 0) setulong(n); else setint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(int)’: bignum.hpp:96:47: error: ‘BN_init’ was not declared in this scope CBigNum(int n) { BN_init(this); if (n >= 0) setulong(n); else setint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(long int)’: bignum.hpp:97:47: error: ‘BN_init’ was not declared in this scope CBigNum(long n) { BN_init(this); if (n >= 0) setulong(n); else setint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(int64)’: bignum.hpp:98:47: error: ‘BN_init’ was not declared in this scope CBigNum(int64 n) { BN_init(this); setint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(unsigned char)’: 
bignum.hpp:99:47: error: ‘BN_init’ was not declared in this scope CBigNum(unsigned char n) { BN_init(this); setulong(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(short unsigned int)’: bignum.hpp:100:47: error: ‘BN_init’ was not declared in this scope CBigNum(unsigned short n) { BN_init(this); setulong(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(unsigned int)’: bignum.hpp:101:47: error: ‘BN_init’ was not declared in this scope CBigNum(unsigned int n) { BN_init(this); setulong(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(long unsigned int)’: bignum.hpp:102:47: error: ‘BN_init’ was not declared in this scope CBigNum(unsigned long n) { BN_init(this); setulong(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(uint64)’: bignum.hpp:103:47: error: ‘BN_init’ was not declared in this scope CBigNum(uint64 n) { BN_init(this); setuint64(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(uint256)’: bignum.hpp:104:47: error: ‘BN_init’ was not declared in this scope explicit CBigNum(uint256 n) { BN_init(this); setuint256(n); } ^ bignum.hpp: In constructor ‘CBigNum::CBigNum(const std::vector<unsigned char>&)’: bignum.hpp:108:21: error: ‘BN_init’ was not declared in this scope BN_init(this); ^ bignum.hpp: In member function ‘void CBigNum::setulong(long unsigned int)’: bignum.hpp:114:33: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_set_word(BIGNUM*, long unsigned int)’ if (!BN_set_word(this, n)) ^ bignum.hpp: In member function ‘long unsigned int CBigNum::getulong() const’: bignum.hpp:120:48: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘long unsigned int BN_get_word(const BIGNUM*)’ return (unsigned long) BN_get_word(this); ^ bignum.hpp: In member function ‘unsigned int CBigNum::getuint() const’: bignum.hpp:125:47: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘long unsigned int BN_get_word(const BIGNUM*)’ return (unsigned int) BN_get_word(this); ^ bignum.hpp: In member function ‘int CBigNum::getint() const’: bignum.hpp:130:59: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘long unsigned int BN_get_word(const BIGNUM*)’ unsigned long n = (unsigned long) BN_get_word(this); ^ bignum.hpp:131:33: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_is_negative(const BIGNUM*)’ if (!BN_is_negative(this)) ^ In file included from bignum.cpp:8:0: bignum.hpp: In member function ‘void CBigNum::setint64(int64)’: bignum.hpp:179:45: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘3’ to ‘BIGNUM* BN_mpi2bn(const unsigned char*, int, BIGNUM*)’ BN_mpi2bn(pch, (int) (p - pch), this); ^ bignum.hpp: In member function ‘void CBigNum::setuint64(uint64)’: bignum.hpp:206:45: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘3’ to ‘BIGNUM* BN_mpi2bn(const unsigned char*, int, BIGNUM*)’ BN_mpi2bn(pch, (int) (p - pch), this); ^ bignum.hpp: In member function ‘void CBigNum::setuint256(uint256)’: bignum.hpp:234:45: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘3’ to ‘BIGNUM* BN_mpi2bn(const unsigned char*, int, BIGNUM*)’ BN_mpi2bn(pch, (int) (p - pch), this); ^ bignum.hpp: In member function ‘uint256 CBigNum::getuint256() const’: bignum.hpp:239:50: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int 
BN_bn2mpi(const BIGNUM*, unsigned char*)’ unsigned int nSize = BN_bn2mpi(this, NULL); ^ bignum.hpp:243:32: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_bn2mpi(const BIGNUM*, unsigned char*)’ BN_bn2mpi(this, &vch[0]); ^ bignum.hpp: In member function ‘void CBigNum::setvch(const std::vector<unsigned char>&)’: bignum.hpp:264:52: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘3’ to ‘BIGNUM* BN_mpi2bn(const unsigned char*, int, BIGNUM*)’ BN_mpi2bn(&vch2[0], (int) vch2.size(), this); ^ bignum.hpp: In member function ‘std::vector<unsigned char> CBigNum::getvch() const’: bignum.hpp:269:50: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_bn2mpi(const BIGNUM*, unsigned char*)’ unsigned int nSize = BN_bn2mpi(this, NULL); ^ bignum.hpp:273:32: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_bn2mpi(const BIGNUM*, unsigned char*)’ BN_bn2mpi(this, &vch[0]); ^ bignum.hpp: In member function ‘CBigNum& CBigNum::SetCompact(unsigned int)’: bignum.hpp:309:36: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_set_word(BIGNUM*, long unsigned int)’ BN_set_word(this, nWord); ^ bignum.hpp:313:36: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_set_word(BIGNUM*, long unsigned int)’ BN_set_word(this, nWord); ^ bignum.hpp:314:46: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_lshift(BIGNUM*, const BIGNUM*, int)’ BN_lshift(this, this, 8*(nSize-3)); ^ bignum.hpp:316:40: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘void BN_set_negative(BIGNUM*, int)’ BN_set_negative(this, fNegative); ^ In file included from bignum.hpp:20:0, from bignum.cpp:8: bignum.hpp: In member function ‘unsigned int CBigNum::GetCompact() const’: bignum.hpp:322:30: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_num_bits(const BIGNUM*)’ unsigned int nSize = BN_num_bytes(this); ^ In file included from bignum.cpp:8:0: bignum.hpp:325:55: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘long unsigned int BN_get_word(const BIGNUM*)’ nCompact = (unsigned int) BN_get_word(this) << 8*(3-nSize); ^ bignum.hpp:329:45: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_rshift(BIGNUM*, const BIGNUM*, int)’ BN_rshift(&bn, this, 8*(nSize-3)); ^ bignum.hpp:330:54: error: cannot convert ‘CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘long unsigned int BN_get_word(const BIGNUM*)’ nCompact = (unsigned int) BN_get_word(&bn); ^ bignum.hpp:340:41: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_is_negative(const BIGNUM*)’ nCompact |= (BN_is_negative(this) ? 
0x00800000 : 0); ^ In file included from bignum.cpp:8:0: bignum.hpp: In member function ‘std::__cxx11::string CBigNum::ToString(int) const’: bignum.hpp:381:35: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘void BN_set_negative(BIGNUM*, int)’ BN_set_negative(&bn, false); ^ bignum.hpp:384:29: error: cannot convert ‘CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ if (BN_cmp(&bn, &bn0) == 0) ^ bignum.hpp:386:32: error: cannot convert ‘CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ while (BN_cmp(&bn, &bn0) > 0) ^ bignum.hpp:388:54: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_div(BIGNUM*, BIGNUM*, const BIGNUM*, const BIGNUM*, BN_CTX*)’ if (!BN_div(&dv, &rem, &bn, &bnBase, pctx)) ^ bignum.hpp:394:32: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_is_negative(const BIGNUM*)’ if (BN_is_negative(this)) ^ bignum.hpp: In member function ‘bool CBigNum::operator!() const’: bignum.hpp:427:31: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_is_zero(const BIGNUM*)’ return BN_is_zero(this); ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator+=(const CBigNum&)’: bignum.hpp:432:35: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_add(BIGNUM*, const BIGNUM*, const BIGNUM*)’ if (!BN_add(this, this, &b)) ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator*=(const CBigNum&)’: bignum.hpp:446:41: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_mul(BIGNUM*, const BIGNUM*, const BIGNUM*, BN_CTX*)’ if (!BN_mul(this, this, &b, pctx)) ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator<<=(unsigned int)’: bignum.hpp:465:41: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_lshift(BIGNUM*, const BIGNUM*, int)’ if (!BN_lshift(this, this, shift)) ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator>>=(unsigned int)’: bignum.hpp:476:28: error: cannot convert ‘CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ if (BN_cmp(&a, this) > 0) ^ bignum.hpp:482:41: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_rshift(BIGNUM*, const BIGNUM*, int)’ if (!BN_rshift(this, this, shift)) ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator++()’: bignum.hpp:491:47: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_add(BIGNUM*, const BIGNUM*, const BIGNUM*)’ if (!BN_add(this, this, BN_value_one())) ^ bignum.hpp: In member function ‘CBigNum& CBigNum::operator--()’: bignum.hpp:508:45: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_sub(BIGNUM*, const BIGNUM*, const BIGNUM*)’ if (!BN_sub(&r, this, BN_value_one())) ^ bignum.hpp: In function ‘const CBigNum operator+(const CBigNum&, const CBigNum&)’: bignum.hpp:533:27: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_add(BIGNUM*, const BIGNUM*, const BIGNUM*)’ if (!BN_add(&r, &a, &b)) ^ bignum.hpp: In function ‘const CBigNum operator-(const CBigNum&, const CBigNum&)’: bignum.hpp:541:27: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ 
to ‘int BN_sub(BIGNUM*, const BIGNUM*, const BIGNUM*)’ if (!BN_sub(&r, &a, &b)) ^ bignum.hpp: In function ‘const CBigNum operator-(const CBigNum&)’: bignum.hpp:549:43: error: cannot convert ‘CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_is_negative(const BIGNUM*)’ BN_set_negative(&r, !BN_is_negative(&r)); ^ bignum.hpp: In function ‘const CBigNum operator*(const CBigNum&, const CBigNum&)’: bignum.hpp:557:33: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_mul(BIGNUM*, const BIGNUM*, const BIGNUM*, BN_CTX*)’ if (!BN_mul(&r, &a, &b, pctx)) ^ bignum.hpp: In function ‘const CBigNum operator/(const CBigNum&, const CBigNum&)’: bignum.hpp:566:39: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_div(BIGNUM*, BIGNUM*, const BIGNUM*, const BIGNUM*, BN_CTX*)’ if (!BN_div(&r, NULL, &a, &b, pctx)) ^ In file included from bignum.hpp:20:0, from bignum.cpp:8: bignum.hpp: In function ‘const CBigNum operator%(const CBigNum&, const CBigNum&)’: bignum.hpp:575:10: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘2’ to ‘int BN_div(BIGNUM*, BIGNUM*, const BIGNUM*, const BIGNUM*, BN_CTX*)’ if (!BN_mod(&r, &a, &b, pctx)) ^ In file included from bignum.cpp:8:0: bignum.hpp: In function ‘const CBigNum operator<<(const CBigNum&, unsigned int)’: bignum.hpp:583:33: error: cannot convert ‘CBigNum*’ to ‘BIGNUM* {aka bignum_st*}’ for argument ‘1’ to ‘int BN_lshift(BIGNUM*, const BIGNUM*, int)’ if (!BN_lshift(&r, &a, shift)) ^ bignum.hpp: In function ‘bool operator==(const CBigNum&, const CBigNum&)’: bignum.hpp:595:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator==(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) == 0); } ^ bignum.hpp: In function ‘bool operator!=(const CBigNum&, const CBigNum&)’: bignum.hpp:596:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator!=(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) != 0); } ^ bignum.hpp: In function ‘bool operator<=(const CBigNum&, const CBigNum&)’: bignum.hpp:597:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator<=(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) <= 0); } ^ bignum.hpp: In function ‘bool operator>=(const CBigNum&, const CBigNum&)’: bignum.hpp:598:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator>=(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) >= 0); } ^ bignum.hpp: In function ‘bool operator<(const CBigNum&, const CBigNum&)’: bignum.hpp:599:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator<(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) < 0); } ^ bignum.hpp: In function ‘bool operator>(const CBigNum&, const CBigNum&)’: bignum.hpp:600:83: error: cannot convert ‘const CBigNum*’ to ‘const BIGNUM* {aka const bignum_st*}’ for argument ‘1’ to ‘int BN_cmp(const BIGNUM*, const BIGNUM*)’ inline bool operator>(const CBigNum& a, const CBigNum& b) { return (BN_cmp(&a, &b) > 0); } ^ 
Makefile:1828: recipe for target 'ccminer-bignum.o' failed make[2]: *** [ccminer-bignum.o] Error 1 make[2]: Leaving directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot' Makefile:2198: recipe for target 'all-recursive' failed make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory '/home/m1/Downloads/ccminer-2.2.2-tpruvot' Makefile:653: recipe for target 'all' failed make: *** [all] Error 2
|
|
|
|
CryptAtomeTrader44
Full Member
Offline
Activity: 340
Merit: 103
It is easier to break an atom than partialities AE
|
|
December 02, 2017, 08:17:21 PM |
|
Those mining Neoscrypt with a GTX 1070: what miner are you using with nvOC?
TIA
I am also interested in this question, because SPccminer crashes too often for my taste:
Cuda error in func 'neoscrypt_hash_k4' at line 1517: unspecified launch failure.
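Not an nvOC feature, just a bare-bones workaround sketch if the crashes persist: relaunch the miner in a loop until the root cause is found (the pool, port, and wallet below are placeholders, not real values):
while true; do
  ./ccminer -a neoscrypt -o stratum+tcp://POOL:PORT -u WALLET.WORKER -p x
  echo "miner exited with status $?, restarting in 10s" >&2
  sleep 10
done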
|
|
|
|
CryptAtomeTrader44
Full Member
Offline
Activity: 340
Merit: 103
It is easier to break an atom than partialities AE
|
|
December 03, 2017, 12:59:17 AM |
|
I did not think the problem came from there. Sorry, but I did not know how to decode this problem, since I had not hit it myself or read the log closely. I had indeed seen this compile problem come up before, but as that post was about the xevan algorithm, I did not think of it again. Thank you for the quick and accurate diagnosis. Problem solved, thank you very much mate.
|
|
|
|
maximussilin
Newbie
Offline
Activity: 1
Merit: 0
|
|
December 03, 2017, 06:45:26 AM Last edit: December 03, 2017, 07:44:09 AM by maximussilin |
|
Hello to everybody! Two fixes from me:
1. In 3main, fixed p106 overclocking when FULL_HEADLESS_MODE = YES. In the original, individual clocks and fan control were not working properly. Also changed the initial xorg configuration to
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
as was done in:
Thank you for contributing! Regarding the need for dummy plugs, have you tried using --allow-empty-initial-configuration in xorg.conf? sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
This allows me to run headless without the need for plugs of any kind.
https://drive.google.com/open?id=1vv8OXO4zzYvRb46496cmMRQpT9-qyq2F
2. Watchdog mod: added power-draw watching. In my case, if one (or more) GPUs crash, mining still works and GPU utilization stays at about 100% for all cards, but at less than half the normal speed. For me the indicator of a crash is an [Unknown Error] state in nvidia-smi power.draw (a rough sketch of this check follows below).
https://drive.google.com/open?id=1RLfLCta8GMX7Jk8d_BQuzG7eCnTXB2n2
UPD: another mod.
3. Added a minimal power setting. You need to add a line to the Maxximus007_AUTO_TEMPERATURE_CONTROL section in 1bash: MINIMAL_POWER=90  # 90 is a sample value; for p106 I found the minimal power is 85 W
https://drive.google.com/open?id=1ayE8YuDJZwSsL93dmaugMYzctGd_eOa5
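For readers who want the gist of fix 2 without downloading the file, a rough sketch of the power-draw check; this is not the actual mod, and the log path is a hypothetical example:
#!/bin/bash
# if any GPU reports "[Unknown Error]" for power.draw, assume it has crashed
if nvidia-smi --query-gpu=power.draw --format=csv,noheader | grep -qi "unknown error"; then
  echo "$(date)  unreadable power.draw on at least one GPU - rebooting" >> /home/m1/wdog_power.log   # hypothetical path
  sudo reboot
fi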
|
|
|
|
Stubo
Member
Offline
Activity: 224
Merit: 13
|
|
December 03, 2017, 10:16:58 AM |
|
I did not think the problem came from there. Sorry, but I did not know how to decode this problem, since I had not hit it myself or read the log closely. I had indeed seen this compile problem come up before, but as that post was about the xevan algorithm, I did not think of it again. Thank you for the quick and accurate diagnosis. Problem solved, thank you very much mate.
Glad I could help. When I first started compiling miners for nvOC a few months back, I ran into the same error, so I put in the time, searched, and read through the entire thread. Sure enough, the problem had already been identified and solved, so I posted on it again: different miner, but the same compile error (SSL compatibility with bignum / bn.h).
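For anyone else hitting it: the BN_init / bignum_st errors in that log come from building against OpenSSL 1.1 headers (the log shows them under /usr/local/include), which made BIGNUM opaque and dropped BN_init. A hedged sketch of the usual workaround, assuming an Ubuntu release that still packages the 1.0 headers; verify the package name for your release, and move any locally installed 1.1 headers out of the way first:
sudo apt-get install libssl1.0-dev     # 1.0-line headers, where BN_init still exists
cd ccminer-2.2.2-tpruvot
make clean && ./configure && make -j"$(nproc)"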
|
|
|
|
barista1992
|
|
December 03, 2017, 12:41:24 PM |
|
Hello, guys! I use nvOC 0019-1.4, and I noticed that the Memory parameter is constantly growing, so I have to restart the miner from time to time; the parameter then resets to 25% and starts to climb again. Has anyone else seen this? How do I fix it?
|
|
|
|
mrpickle89
Newbie
Offline
Activity: 1
Merit: 0
|
|
December 03, 2017, 01:58:11 PM Last edit: December 03, 2017, 02:19:16 PM by mrpickle89 |
|
Pretty sure this is something really simple... but I can't get a login to the desktop. I get a prompt with either m1 or Guest session, and the password miner1 doesn't work for me at all. It seems to take it but just jumps back to the login screen again. Can you log in in console mode? Press Ctrl + Alt + F1 to enter console mode; the login is m1 and the password is miner1. If you can log in in console mode, type reboot and press Enter; the miner should restart and auto-login with full X.
nvOC 19: did someone find the solution? I have two 1070 Minis on a Gigabyte GA-Z270P-D3 with the same problem. It mines, then during the night it stops mining. If I reboot, I can't log in with the password miner1, and even if I open another terminal and reboot, it just goes back to the login screen and I can't log back in. Thanks for your help.
|
|
|
|
|