Why do you need it to be in real time? Are you just looking for the most profitable coin to mine and want to know what is profitable this instant instead of in the last 5 minutes? Mostly the profitability from coin to coin won't change in 5 minutes. Even if it did, it wouldn't be worth the effort of switching.
Or do you need these stats to make some calculations? Basically, if you want the true network hashrate then you need to add up all the pools' total hashrates, and you would get a close estimate. It's very difficult to get the global net hashrate just by looking at the stats in the bitcoin core commands (or similar wallets for altcoins) because the hashrate there is just based on how long it took to find the blocks in the past few hours. It's never fully accurate since there is variance, and miners switch on and switch off.
The best source for real time data is a stratum connection to a pool or wallet. If you let the (yiimp) pool do the autoswitching it's done in real time, before the API or web page is updated. The stratum data is always for the current block; it has to be in order to submit valid shares. As far as calculations, there is much that can be calculated with data obtained through the stratum connection. For an example see the output of the latest version of cpuminer-opt. I don't calculate the network hash rate, but it's a function of network difficulty so it is possible. But it's awkward because you only get data for one coin at a time.
|
|
|
My point is don't engineer your cooling solution based on one algo that can't even use all threads.
sha256d is a good thermal stress test.
|
|
|
Of course it's not "needed". Even with air cooling it's running within spec.
It depends on the algo. Most algos favourable to CPUs run relatively cool because they often stall waiting for memory. If you want to see how hot it can get, try sha256d.
|
|
|
Really, I can't understand why the 1660 has only 6 GB.
It's just where it fits in with the rest of the lineup. If you want 8 GB you get a bigger card. It really depends on what you plan on mining, and whether you are going to be using it for mining only or plan on putting it into a gaming system in the future. The 1660 Super is a nice compromise with the GDDR6, but the 6 GB of memory is disappointing, especially when you look at a 1660 board teardown and see they left room for 8 GB in the board design.
A high end 16xx would be nice; no need for ray tracing for mining.
|
|
|
It looks like lyra2z330 has the stale share problem. There may still be others I haven't tried.
|
|
|
The issue here is the amount of hardware that needs to be bought for a CPU algo. For a GPU algo, all you need to increase hashrates is to buy the GPU and riser card. For CPUs, you have to get the whole shebang (PSU, memory, MB, etc.). How will this ever be profitable? Will there ever be a riser card/board equivalent for CPUs?
That's all part of the cost per hash. It's no different than an algo with a lower hashrate requiring more CPU (or GPU) power to produce the same hash. What matters is how much the hash is worth. Yes, Intel has Xeon Phi compute add-in boards with up to 72 cores with 4:1 hyperthreading for 288 threads. They're compute beasts, but very expensive and not suitable for mining due to the low mem/thread ratio. Memory maxes out at 384 GB, which works out to a mem/thread ratio similar to the cache/thread ratio of most desktop CPUs.
|
|
|
Definitely no profit. Pi is great at a lot of things that don't require large amounts of compute power but anything designed for low power isn't suitable for POW. Bigger is better, and more efficient.
|
|
|
A problem I see with CPU coins is that many use measures that limit their appeal, kind of self-defeating IMO. Whether intentional or not, they remain ASIC resistant by staying under the radar.
Changing algos frequently is not a good strategy. It creates more work for miners to keep up. Every fork is a race to get the new miner up and running before everyone else.
Some of the new CPU algos are very interesting as they use permutable algorithms. In effect, they don't hash data, they hash code. The block header data is treated as a program that must be compiled and executed to produce the hash. Every time the data changes it must be recompiled.
In effect this means an ASIC would have to be a form of CPU, a lot more complex to implement than a static algorithm.
|
|
|
Does anyone know if the Yobit exchange is a scam exchange? I can trade on this one. I have asked them to confirm if the Gulden wallet is working but no reply. I don't understand why this coin is not on any main Asian exchanges. Edit: Yobit looks to be a scam exchange: https://bitcointalk.org/index.php?topic=4327871.0

Considering how many exchanges have crashed and burned in scandal, Yobit has been around for several years; that says something. I haven't been burned by Yobit, but I wouldn't call it a well run exchange.
|
|
|
Hello folks! I'm trying to install the miner on my Pine Rock64 but have a problem with make. I did use sudo make and then I get this...

gcc -DHAVE_CONFIG_H -I. -Iyes/include -Iyes/include -fno-strict-aliasing -I. -Iyes/include -Iyes/include -Wno-pointer-sign -Wno-pointer-to-int-cast -DNOASM -D__arm__ -Ofast -march=native -Iyes/include -Iyes/include -MT algo/cpuminer-rainforest.o -MD -MP -MF algo/.deps/cpuminer-rainforest.Tpo -c -o algo/cpuminer-rainforest.o `test -f 'algo/rainforest.c' || echo './'`algo/rainforest.c
algo/rainforest.c: In function 'rf_crc32_32':
algo/rainforest.c:411:7: error: 'rf_crc32_table' undeclared (first use in this function); did you mean 'rf_crc32_32'?
    crc=rf_crc32_table[crc&0xff]^(crc>> ;
algo/rainforest.c:411:7: note: each undeclared identifier is reported only once for each function it appears in
algo/rainforest.c: In function 'rf_add64_crc32':
algo/rainforest.c:461:7: error: 'rf_crc32_table' undeclared (first use in this function); did you mean 'rf_crc32_32'?
    crc=rf_crc32_table[crc&0xff]^(crc>> ;
Makefile:2320: recipe for target 'algo/cpuminer-rainforest.o' failed
make[2]: *** [algo/cpuminer-rainforest.o] Error 1
make[2]: Leaving directory '/opt/crypto/cpuminer-multi'
Makefile:2805: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/opt/crypto/cpuminer-multi'
Makefile:590: recipe for target 'all' failed
make: *** [all] Error 2

Any ideas to fix this problem?! This might help... https://stackoverflow.com/questions/37066261/why-is-arm-feature-crc32-not-being-defined-by-the-compiler
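Judging by the linked Stack Overflow thread, rf_crc32_table is probably only defined when the compiler advertises hardware CRC32 support (the __ARM_FEATURE_CRC32 macro), which the Rock64's default -march doesn't enable. One workaround, if building with +crc isn't an option, is to supply a portable software table. The sketch below is a standard CRC-32 (polynomial 0xEDB88320), not the rainforest source's actual table layout, so treat it as an illustration of the fallback idea rather than a drop-in patch:

```c
#include <stdint.h>
#include <stddef.h>

/* Software CRC-32 table: a portable fallback for platforms where
   __ARM_FEATURE_CRC32 (and the hardware crc32 instructions) are absent. */
static uint32_t rf_crc32_table[256];

static void rf_crc32_init( void )
{
   for ( uint32_t i = 0; i < 256; i++ )
   {
      uint32_t c = i;
      for ( int k = 0; k < 8; k++ )
         c = ( c & 1 ) ? 0xEDB88320u ^ ( c >> 1 ) : c >> 1;
      rf_crc32_table[i] = c;
   }
}

static uint32_t rf_crc32( const uint8_t *buf, size_t len )
{
   uint32_t crc = 0xFFFFFFFFu;   // standard CRC-32 initial value
   for ( size_t i = 0; i < len; i++ )
      crc = rf_crc32_table[ ( crc ^ buf[i] ) & 0xFF ] ^ ( crc >> 8 );
   return crc ^ 0xFFFFFFFFu;     // final inversion
}
```

Call rf_crc32_init() once at startup, then rf_crc32() on any buffer. The other route is enabling the CRC extension at compile time (e.g. an -march value with +crc), which makes the compiler define __ARM_FEATURE_CRC32 as the thread describes.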
|
|
|
Edit: perhaps the following can be ignored for now as I was using v1.2.3. I'll upgrade to v1.3.1 and see how it goes.
---
I had 2 instances of ccminer-1.2.3 being killed by the kernel.
The only message to the console is "Killed".
The first time on Windows with a gtx 1070, the second was on Linux with a 1080ti.
There is a possibility that ccminer is a victim of the OOM killer; I have seen cases where the process denied memory was not the source of the leak. But the fact that the two incidents occurred on different OSes makes that unlikely. The only other common application is Firefox, so that is the only other plausible source of the leak, assuming ccminer was in fact a victim.
The only other common factor is both were using Pascal GPUs with sm6.1, but this doesn't look like a GPU issue to me (I'm certainly not a CUDA expert).
The problem is very intermittent (it has only happened to me twice) and isn't related to session length. I have had sessions run for over 24 hours on both systems, but one of the two incidents happened after only 14 minutes of run time. That is not a slow leak.
There appears to be a trigger that requires a rare set of circumstances, but once triggered memory is exhausted quickly.
It would be reasonable to conclude the memory issue is related to code specific to MTP. I haven't seen this with any other forks of ccminer I've used.
The infrequent nature of the problem, and the lack of a precise connection between the trigger event and the termination of the process, make it virtually impossible to troubleshoot.
Then again the infrequent nature makes the problem less serious. But I thought I'd document what I could just in case my analysis leads to some inspiration.
Edit Nov 7:
I have seen one instance of "killed" on v1.3.1 Linux. I now suspect the trigger may not be ccminer but another application that pushes memory over the edge. I also suspect the edge is close because of the 45-49 GB of VM used by ccminer. The amount of VM apparently used greatly exceeds the amount of VM in my system: 16 GB RAM + 2 GB swap file.
Edit Nov 11:
Another revision. I have seen the problem with v1.3.1 on Windows, no console error message but a pop-up saying essentially the same thing.
The high VM usage seems to be a non-issue, as it is also observed using the tpruvot fork with a different algo.
Current speculation is an intermittent leak, likely in conditional code related to MTP.
|
|
|
The 3900x runs a bit hot using the retail cooler. It goes up to 85°C; I guess it could be a lot better. That's with the case open and ~20°C ambient temperature.
Water cooling is almost a must if you intend to mine. I have a 1700 with the stock cooler and a box fan blowing through the open case, and it still gets over 80°C on some algos.

In other news... the market is about to be shaken up by Intel's Cascade Lake X with huge price drops. Way to go AMD for putting the screws to Intel! Some points of interest...
- AMD aggressively pushing 7nm has had some issues with yield and consistency.
- Intel sticking with 14nm limits improvements, but maturity should bring reliability.
- Cascade Lake X will be priced to compete with the Ryzen 3000 series.
- Ryzen, with the bigger cache, is better suited to mining CPU algos.
- Cascade Lake X with AVX512 will only improve algos that have already lost the race to GPUs or ASICs.
- Limited supplies may keep AMD from lowering the price of Ryzen 3, but they may be loading up for the next Threadripper.
What else might Intel be holding back to respond? This is exciting.
|
|
|
Unfortunately it looks like the project might be dead, judging by the price.
An inevitable fate for any coin that tries to be exclusive.
|
|
|
So a RandomX ASIC or FPGA miner is just a CPU, compiler and runtime system on a chip that serves a single purpose.
Not the only difference. While I agree that all CPUs and ASICs are made from logic gates, the difference in chip architecture can be significant. Take for example a CPU and a GPU: despite both being used in one PC, their architectures differ as much as their performance. A CPU has 4 huge cores while a GPU has 8000 small cores; this is just an example.

If you need a processor you NEED a PROCESSOR, the architecture is secondary. Edit: Existing algos don't need processors, the algorithms are hard coded.
|
|
|
The thing is there is no way to stop ASICs or FPGA in this case. Period.
Technically correct. It will always be possible to build an ASIC or FPGA to do whatever a CPU can do, because they are all built from logic gates. The most significant difference between an ASIC and a CPU is its purpose. RandomX is essentially a compiler, runtime system and processor all in one. It takes as input a program which it compiles into its own native machine code, then runs it to produce the hash. So a RandomX ASIC or FPGA miner is just a CPU, compiler and runtime system on a chip that serves a single purpose. Seems simple enough. Or is it?
|
|
|
argon2d-dyn
CCminer TCP API: server started on 0.0.0.0:4028
[12:06:33] INFO - Start user session
[12:06:34] INFO - Setting new difficulty: 64 (0.000976563)
[12:06:34] INFO - Block height 413100 : Network difficulty 0.389216
[12:06:34] INFO - Received new job #9e0
[12:06:35] INFO - 1/0 Accepted : diff=0 : 90,28KH/s
[12:06:38] INFO - 2/0 Accepted : diff=0 : 725,5KH/s
[12:06:39] INFO - 3/0 Accepted : diff=0 : 726,5KH/s
[12:06:50] INFO - 3/1 Rejected : diff=0, reason: Invalid job id
[12:06:58] INFO - 3/2 Rejected : diff=0, reason: Invalid job id
[12:06:59] INFO - 3/3 Rejected : diff=0, reason: Invalid job id
[12:07:02] INFO - 3/4 Rejected : diff=0, reason: Invalid job id
[12:07:14] INFO - 3/5 Rejected : diff=0, reason: Invalid job id
[12:07:16] INFO - 3/6 Rejected : diff=0, reason: Invalid job id
[12:07:18] INFO - 3/7 Rejected : diff=0, reason: Invalid job id
[12:07:21] INFO - 3/8 Rejected : diff=0, reason: Invalid job id
[12:07:29] INFO - 3/9 Rejected : diff=0, reason: Invalid job id
[12:07:31] INFO - 3/10 Rejected : diff=0, reason: Invalid job id
[12:07:33] INFO - 3/11 Rejected : diff=0, reason: Invalid job id
Confirmed, argon2d-dyn still has the stale share problem. Power2b is still looking good. No rejects, starting diff is a little higher, latency has come down a bit.
|
|
|
Network latency has shot up more on power2b, now rarely under 300 ms. It appears to correspond to an increase in hash rate at the pool.
Users might help by setting a higher diff using the password field.
8 threads or more should be -p d=0.1 minimum; 16 threads should be at least 0.2, and higher still for more threads.
On the encouraging side the stale share problem has not returned, very clean mining now and fewer stale shares than expected considering the network latency.
|
|
|
It's not a new idea and has a bad rep.
Some web sites were caught doing it without users' consent and the shit spread to everyone else. Even with consent there was user resistance, as users didn't realize the impact running a miner would have on their systems.
I'm talking about real mining with real CPUs, not the fake mining on mobile apps.
|
|
|
Rejects are back!
Edit: stratum is also dropping, maybe ongoing work.
Is that still recurring? What is your full command line? Things have been good for several hours; check your PM.

Edit: Network latency has increased from 150 ms to 190 ms on power2b. Haven't tested the other affected algos, but lyra2v3 is still 150 ms.

Edit2: The increased latency may be server overload due to the stratum diff being too low. The starting diff is way too low for a decent CPU and I don't see it ever increasing. An i7 submits around 1 share per second at the starting diff for power2b. If that's the case for everyone and vardiff isn't working, then it's going to cause problems.

Speaking of diff, many GPU algos have a minimum diff too high for CPUs, even though many can be mined successfully with a good CPU. Vardiff will only lower by one step, and setting the diff manually is also limited to one step below the default starting diff. Setting a high default is reasonable to deal with big GPU rigs, but it would be nice for it to step low enough to accommodate CPUs. Unless it's your intention to discourage CPU mining on those algos.
|
|
|
That was disappointing. After a proud announcement and promising results, the problem returned, and then silence.
|
|
|
|