Is there a way to statically build/link the miner (Linux)?
I wasn't able to get it running.
A native build is always preferable to ensure you're taking advantage of all the optimizations supported by your CPU. Describe the problem you're having: does it fail to compile, or fail to run after compiling? Post any error messages.
|
|
|
The problem with lbry (if it is a problem) is that currently we need a lot of Mhash to make it profitable. In a rolling scrypt batch file, you won't stay on the coin long enough to solve it when current blocks are found at 30 minute or longer intervals. To make it profitable we need to get down to about 5 minute intervals so the script will actually stay on it long enough to find a block. Otherwise it's manually selecting the coin to force it to pay. It's the same with sib at the moment. Both coins are incredibly profitable but we currently don't have the hash to hit it.
The probability of finding a block doesn't change with time; it's always the same given a certain difficulty, so "staying on a coin long enough" doesn't make sense. See "gambler's fallacy" for further explanation.

Yup. If you flip a coin 100 times and it comes up heads every time, what are the odds it will come up heads the 101st time? 50-50. Unless the coin is loaded.

I know what you guys mean, pool luck is simply another variable, but my point was that not many people want to mine a coin for 2 hours or more to get one block. They would rather mine a bunch of coins in that 2 hour window to show a steady gain instead of nothing followed by an abrupt adjustment in funds, even if it works out to be the same.

It's not like solo mining; you don't have to be mining when a block is found. If you stop mining before a block is found you will get credit for your share whenever the next block is found. However, your percentage share will drop over time as the total shares grow while yours remains static.

One concern is a multi-coin pool where you don't have control of which coin is mined. It would be nice if some of the more popular coins could have their own pool.

At the moment there are two levels of profit switching: among coins sharing the same algo, and among algos. Users have control over algo switching but not coin switching within an algo's pool. It would be nice if users could choose which coin in a specific algo's pool to mine. I'm not aware of any auto-exchange pools that offer this. The yaamp platform offers both the tools to let users do their own profit switching and the auto-exchange to avoid requiring a wallet for each coin. It would just be a matter of giving the more popular coins their own mining port.
|
|
|
That's a lowblow, joblow.
You're a HAter and you're not interested in discussion, you're here to advance an agenda for some minor advantage. I'm acting on behalf of HOdlers, and at their direction. HOdlers are happy they can mine again, that the price has risen, that volume is ready to return and that the botnets have been banished for now.
Now you're name calling; that's something haters do. You're still making unsubstantiated claims of botnets, speaking for other people, and now you claim I'm not interested in discussion. I have been very consistent in asking you to justify your claims with data and reasoned arguments to support your assertions. You are the one who didn't want to discuss it; you just keep repeating the same baseless claims over and over. I was happy the way things were, and assumed you had integrity given your strong language about the specifications being "cast in stone". That was your word, and you broke it. I am amused that everything you accuse me of reflects more on you. I am less amused that I can't get out of this mess for a year. Yes, I'm pissed and I show it, but I have valid reasons.
|
|
|
Now you're using lame cliches. Do you call everyone who disagrees with you a hater? That's pretty childish. I'm presenting facts; I don't know how you can interpret that as hate. What does the cost of CPUs have to do with the network hashrate? There were the equivalent of 1500 total before the fork. Do you really think that's indicative of a botnet? How few legitimate miners do you think you had? Hint: more than you do now. I've also noticed that TD lockup time has increased due to the reduced block generation rate. Some of my TDs now mature two days later than stated when I bought them. No more interest though, so a drop in effective interest rate. Are you willfully blind to what you've done? Everything I warned you about has come to pass and all you can do is call me a hater. Try looking in the mirror. You hate dumpers, pools and anyone who disagrees with you. For the record, I didn't sell a single hodl before the fork was announced; they were all locked up. A big mistake in hindsight.
|
|
|
quote quote quote... nice wall of text ![Grin](https://bitcointalk.org/Smileys/default/grin.gif)

LOL. Quote 100 lines and add 1. It reminds me of an accidental email bomb at my previous employer.

1. Someone inadvertently sent an email to everyone in the company directory. Bad: 40,000 emails with the entire company directory in every copy.
2. A few clueless do-gooders replied to all, copying the original email and its mailing list, telling the sender not to mail bomb the entire company. Worse: a few hundred thousand emails, each with two copies of the company directory.
3. A bunch of other clueless do-gooders replied to all complaining about the do-gooders copying the original email, not realizing the mailing list they were sending to was many times larger than the body they deleted. Another couple hundred thousand emails, each with a copy of the company directory.

The mail system became unresponsive in less than 5 minutes. By then I had more than 20 replies to the original email. The mail system had to be shut down and purged of the entire thread.
|
|
|
If you have questions about cpuminer-opt please post them in my thread.
|
|
|
The hashrate is 4x lower than it was with a pool. Finding blocks in solo is much better than having that botnet mine most of the coins.
That supposed botnet must have been buying a lot of TDs, because the locked-up supply has been shrinking since the fork even with forced lockup of all new coins.

This is not correct.

Yet another unsupported assertion on your part. The locked-up supply is lower both relatively (54% vs 65%) and absolutely (3.07 M vs 3.11 M). Here's an unsupported assertion: there was no botnet. It's just as valid as any claim there was.

Edit: On the other hand, I will support my assertion. The network hashrate pre-fork was around 400K. My i7 hashes at 275 H/s. That's the equivalent of fewer than 1500 i7s mining hodl for the entire network. If there was a botnet in there it would have to be pretty small and therefore insignificant. If there was a botnet it would be easy for pool operators to detect by looking for multiple connections from different IPs from the same user.
|
|
|
i heard skein2 too ![Wink](https://bitcointalk.org/Smileys/default/wink.gif) Yup, I broke that in 3.3.7. I knew that one of my optimizations would break it but forgot to follow up. I had removed the automatic ctx init in the close function of all sph algos because it was unnecessary in most cases, but it is needed in algos like skein2 that run the same algo multiple times. I simply forgot to add an explicit init before the second skein round.
|
|
|
I can confirm that mining decred at nicehash is broken, yet it works at zpool. I have also found some of the other blake-based algos are broken at nicehash.
I will have to do some more investigation.
|
|
|
I'm confused. How do you mine Europecoin with a hodlcoin wallet? Your web site has a link to infernopool but I can't find Europecoin there. Your mining guide shows the hodl pool at suprnova.
Is there a Europecoin wallet and pool?
Please clarify.
Hi, thank you for being interested. Let me help, because it's REALLY confusing.
1) Why a hodl wallet? It is not a hodl wallet, it's a Europecoin wallet. Apart from using the RAM-mining concept, inspired by HODL, it's a very different coin with different retargeting, difficulty, percentages, policy and additional features, like Bitbreak. Everything is written in the OP; after this answer, the OP can tell you more and you will find it less confusing.
2) Mining: We are not on Inferno pool for mining; we are a payout coin. And because we are switching our core right now and they can't buy until the new coin is online (Bittrex is switching right now), they suspended the ERC payout option. The pool has been paying out in ERC way back to the PoS wallet times; its existence is not connected to our new PoW capabilities. Suprnova: the screenshot was made when ERC was in development. Steve just wanted to show a glimpse of the described cpu-miner.
For pools in general: we don't have pools yet, and to give average users the chance to mine on their desktops, I prefer it to stay like this as long as possible. This is another difference to HODL: our self-developed DUAL-KGW-3 retargeting runs on an exponential curve and makes pool mining very, very unlikely to succeed.
This coin is not a clone; it was made by taking a fresh core and porting HODL mining, among other mostly self-developed innovations.
Does it have a wallet? Yes, it has a wallet, and this wallet is stable, has a network, and is in my opinion a technically advanced variant compared to HODL, because we have been able to learn from the mistakes we could see in his great invention.
Please, now, having this information, read the OP again, because with my answer in mind it will be less confusing and will give you deeper insights.
Hope that helps. If you still have questions after reading, ask at any time you want.
have fun Matthias ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif)
btw. did you do this cpu-miner? If you'd like deeper insights, I am preparing a datasheet with information for the more advanced users, like you. Give me a week to finish the current swap and to hook all our partners back onto our new chain.
Your mining guide says to use hodlminer-qt to mine europecoin; maybe you can fix that. Wolf0 wrote the AES-optimized hodlminer-wolf and I included his code in cpuminer-opt. Will the current hodlminer work with europecoin, or have you made changes to the algo? It should also be noted this miner only supports stratum, so it can't be used to mine directly to the wallet. A stratum pool is required.
|
|
|
Also who would want to run a pool when they can't distribute the coins or collect their fee for a year? Just like the devs wanted it.
|
|
|
hi joblo,
it seems decred (and maybe other algos?) generate 100% rejected on nicehash with the following error: "reject reason: Invalid extranonce2 size."
is this a bug?
br
Thanks for reporting this. It appears to be specific to decred and nicehash. I will look into this and do a round of testing on nicehash to confirm only decred is broken. I'm not very motivated to implement a workaround if it only applies to one algo in one pool. I should also point out that decred, and other blake algos, are the worst performing algos for CPU mining. In summary this is a very low priority issue.
|
|
|
Does anyone have any issues with their scrypt miners just stopping after like 10 minutes of mining? It's driving me crazy trying to get them to not drop out; the hashrate drops to 0 and no shares. I don't get this issue on any other pool.
Ongoing issue with the sha/scrypt ports. Shouldn't be every 10 mins though, that sounds like something else. There can be stretches of 6-8 hours without a stratum reset, but there can also be times of one or two an hour... Hope to have new stratum code some day soon. If anyone is handy with C++, this is the most common of a few segfaults I seem to get:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffcebff5700 (LWP 23435)]
0x00007ffff65ffc37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
#0 0x00007ffff65ffc37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6603028 in __GI_abort () at abort.c:89
#2 0x00007ffff663c2a4 in __libc_message (do_abort=do_abort@entry=1, fmt=fmt@entry=0x7ffff674a6b0 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff66479b2 in malloc_printerr (ptr=<optimised out>, str=0x7ffff67467e4 "corrupted double-linked list", action=1) at malloc.c:4996
#4 malloc_consolidate (av=av@entry=0x7ffd50000020) at malloc.c:4165
#5 0x00007ffff6648ce8 in _int_malloc (av=0x7ffd50000020, bytes=6016) at malloc.c:3423
#6 0x00007ffff664b6c0 in __GI___libc_malloc (bytes=6016) at malloc.c:2891
#7 0x00007ffff739cdad in operator new(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x0000000000412855 in client_thread (p=0x7a) at client.cpp:418
#9 0x00007ffff7bc4184 in start_thread (arg=0x7ffcebff5700) at pthread_create.c:312
#10 0x00007ffff66c337d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

https://github.com/tpruvot/yiimp/tree/next/stratum

Caveat: I am not strong in C++ but have lots of experience analysing system crashes. The traceback only contains system code; I don't know if you trimmed it, but if you have level #11 it might be where the application called clone. The crash itself occurred while allocating memory while cloning a process.
malloc detected memory corruption:

#3 0x00007ffff66479b2 in malloc_printerr (ptr=<optimised out>, str=0x7ffff67467e4 "corrupted double-linked list", action=1) at malloc.c:4996

This is not an error in the code in the traceback; it likely tripped over preexisting corruption. The data was probably corrupted the last time it was accessed prior to calling clone. If you are seeing different crashes it is quite possible they are all the result of the same bug. It may be the same data being corrupted every time, or possibly other data; the victim just happened to be the first process to access the data after it was corrupted. The application probably got hold of a bad pointer somehow and used it to write some data to the wrong place. Because the cause is disconnected from the crash, these kinds of bugs are hard to find. If the problem is recent, that narrows the scope to a recent change. I suggest you try to find the point of deviation, i.e. when the crashes first started, to see what was changed just prior. Since the crashes seem fairly consistent you could back out recent changes until the crashes stop.

Well, crashing has been an issue since the start, but not these ones, likely because the other bugs were occurring before these crashes were apparent. #8 makes reference to the stratum code, but I agree a lot has to do with system code.

#8 0x0000000000412855 in client_thread (p=0x7a) at client.cpp:418
https://github.com/tpruvot/yiimp/blob/next/stratum/client.cpp#L418

The client thread gets created at stratum.cpp:344 by the stratum_thread, which was created by main. A new one:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff66477e3 in malloc_consolidate (av=av@entry=0x7ffdb8000020) at malloc.c:4157
4157 malloc.c: No such file or directory.
#0 0x00007ffff66477e3 in malloc_consolidate (av=av@entry=0x7ffdb8000020) at malloc.c:4157
#1 0x00007ffff664845d in _int_free (av=0x7ffdb8000020, p=<optimised out>, have_lock=0) at malloc.c:4057
#2 0x0000000000404e6e in job_delete (object=0x7ffdb80aec70) at job.h:82
#3 0x0000000000416fbb in object_prune (list=0x83f7e0 <g_list_job>, deletefunc=0x404db0 <job_delete(YAAMP_OBJECT*)>) at object.cpp:57
#4 0x000000000040416c in main (argc=<optimised out>, argv=<optimised out>) at stratum.cpp:273
Looks the same to me. It's over my head now, you need someone who knows the application code. Could be a race condition between threads due to some missing mutex.
|
|
|
Edit: I tend to forget that C++ has no array bounds protection, so it is vulnerable to buffer overflows. That would be a more likely scenario than a random bad pointer.
|
|
|
Quark is broke, no hash reported at pool.
Just checked: mining OK, pool stats OK, website confirms. Are you in the right thread?

It's working now, but it was broken from 12:00 to around 12:30; look at the hashrate graph. I was mining during that time.
|
|
|
Not sure if this will help you at all, but I was having a similar issue. I had one 970, two 960s and one 750 Ti in a rig, and one card would always crash while mining certain algos. It was always the same 960, sometimes after a few minutes and sometimes after a few hours. I used Nvidia Inspector to determine this, as it always showed the same card with the fan down to 0% after the crash. I tried new risers (powered USB) and got the same thing. I was starting to think I had a bad card, so I swapped only the 960s between risers to narrow it down. Lo and behold, the card that had no issues started crashing and the card that was crashing no longer did. Same thing after switching, and the same algos. I was perplexed, ran out of ideas, and had one last thought: I swapped the motherboard riser connector that was crashing with my 750 Ti's PCI-e slot. I looked in Nvidia Inspector and noticed my crashing 960 had switched PCI interface numbers. Both 960 cards read 3.0@1.1x1 before the switch, and afterwards the card that was crashing read 3.0@2.0x1. Now it reads the same as my 750 Ti. So both 960s read different values and I have not had a single issue since. I am no expert and have no idea why it fixed it, but it did. Hope that might help and hope it made sense.

I'm just guessing, but it looks like the notation means <supported version>@<running version>x<lanes>. That would mean the slots were running at PCIe v1.1 even though the slots and cards support v3. I have no idea why swapping slots would change that. Glad you got it fixed.
|
|
|