Here's an updated formula for calculating blocks/day:
blocks/day = chains/day * (0.97 * (1 - fracDiff) + 0.03)
Here fracDiff is the fractional part of the difficulty, i.e. fracDiff = diff - floor(diff).
This simply assumes a 0.03 probability that the (k+1)'th number in a chain is also prime. Such chains are longer and therefore not subject to the fractional difficulty. This number was produced by the function EstimateNormalPrimeProbability() in my latest code. It's a bit smaller than my previous estimate of 0.035.
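For illustration (this is not code from the client; the numbers in main() are made up), the formula can be evaluated like this:

    #include <cmath>
    #include <cstdio>

    // Expected blocks/day from the miner's chains/day estimate and the current
    // difficulty, using the updated formula above.
    double BlocksPerDay(double chainsPerDay, double difficulty)
    {
        double fracDiff = difficulty - std::floor(difficulty);  // fractional part of the difficulty
        return chainsPerDay * (0.97 * (1.0 - fracDiff) + 0.03);
    }

    int main()
    {
        // Example only: 1.5 chains/day at difficulty 10.15 gives ~1.28 blocks/day.
        std::printf("%.3f blocks/day\n", BlocksPerDay(1.5, 10.15));
        return 0;
    }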
|
|
|
EDIT: We are also seeing a slight increase in speed (chains per day). Should we leave the new sieve options at their defaults, or is it OK to use modified options that worked well in the previous version?
The defaults should still be pretty good. I did get rid of -sievepercentage in favor of -sievefilterprimes, which specifies the exact number of primes that will be filtered out by the sieve (instead of some percentage of a number that depends on -sievesize). The default is 7849, based on the old defaults. There are also two new parameters you can try:
* -l1cachesize determines the sieve segment size (in bytes). This should be the size of the L1 or L2 cache (or some other number close to them). The default value is 28000. This value has a minor impact on mining performance.
* -primorial determines the primorial that is used as a multiplier in all candidate numbers. This should be a small prime number such as 47 or 53. Use the parameters -debug -printprimorial to see which primorials are used by default. This value has a minor impact on mining performance.
The rest of the documentation is here: https://github.com/mikaelh2/primecoin/blob/master/doc/README.md
Make sure the new official build is difficulty 11 ready; at the current rate we will be hitting difficulty 11 in early February.
HP11 should already work just fine with difficulty 11. And I doubt we'll be hitting difficulty 11 that soon, because it would require 30 times more mining power.
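For anyone who wants to pin these explicitly, here is a hedged example of what the corresponding primecoin.conf entries might look like, assuming the usual convention that command-line switches can also be given in the config file without the leading dash (the values are just the defaults quoted above):

    sievefilterprimes=7849
    l1cachesize=28000
    primorial=47

Running with -debug -printprimorial first shows which primorial the client would pick on its own before you override it.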
|
|
|
Has anyone compiled the latest git version for Windows x64?
I have (obviously) but I'm not quite ready to make a release yet. I've been working on some other improvements that I want to get done before I make a release. I pushed a whole bunch of commits to github if someone wants to have a look.
|
|
|
So I compiled this on Ubuntu 13.10 and I seem to get a segmentation fault / "couldn't connect to server" error as it approaches or reaches difficulty 10 (when downloading the block). Is anyone else seeing this? I have another server running the non-updated version for multi-wallet.
error: couldn't connect to server
[1]+  Segmentation fault      (core dumped) ./primecoind
This is in the debug.log:
ERROR: mempool transaction missing input
EDIT: I was able to start it up for a very short period of time before it failed again, but I noticed this occurs somewhere between blocks 279042 and 279060. I figured it was too much of a coincidence if two separate PCs experienced this at the same time.
I haven't seen this issue before, but similar issues have happened with Bitcoin. They seem to be related to the wallet mining code, so try starting the wallet without the miner.
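If mining was enabled in your config, one simple way to do that (assuming the standard -gen switch inherited from Bitcoin, which turns the built-in miner on or off) would be something like:

    ./primecoind -gen=0

If the crash goes away with mining disabled, that would point at the miner code.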
|
|
|
Here's my fix for the issue with shared wallets I discovered earlier: https://github.com/mikaelh2/primecoin/commit/a1c3f5854e9970d2f9f13ff75601dc7c87bf83c3
Basically this should help people who are running lots of miners using the same shared wallet. My fix is to initialize the extra nonce value using the current value of a nanosecond-precision clock. That should give a unique value on every machine. Boost.Chrono is now required for compiling and I have updated the makefiles to reflect that.
Thanks!! Someone please compile x86 and x64 builds.
Edit: unable to compile on CentOS 6.4 x64:
main.cpp: In function ‘void BitcoinMiner(CWallet*)’:
main.cpp:4578: error: ‘boost::chrono’ has not been declared
main.cpp:4578: error: expected ‘;’ before ‘time_now’
main.cpp:4579: error: ‘boost::chrono’ has not been declared
boost_1_55_0 compiled successfully.
You can get it to compile on CentOS but it's a bit trickier than before. You need to have a newer version of Boost installed. Then you need to tell the compiler where you installed the newer Boost; by default it goes into /usr/local. Then you need to modify the makefile like this:
cp makefile.unix makefile.my
sed -i -e 's/$(OPENSSL_INCLUDE_PATH))/$(OPENSSL_INCLUDE_PATH) \/usr\/local\/include)/' makefile.my
sed -i -e 's/$(OPENSSL_LIB_PATH))/$(OPENSSL_LIB_PATH) \/usr\/local\/lib)/' makefile.my
sed -i -e 's/$(LDHARDENING) $(LDFLAGS)/$(LDHARDENING) -Wl,-rpath,\/usr\/local\/lib $(LDFLAGS)/' makefile.my
And then you type 'make -f makefile.my' to compile. Official binaries will be coming soon. I want to see if I can downgrade my glibc version first somehow. If I can get the glibc version down to 2.12, the binaries should work on CentOS.
|
|
|
Here's my fix for the issue with shared wallets I discovered earlier: https://github.com/mikaelh2/primecoin/commit/a1c3f5854e9970d2f9f13ff75601dc7c87bf83c3
Basically this should help people who are running lots of miners using the same shared wallet. My fix is to initialize the extra nonce value using the current value of a nanosecond-precision clock. That should give a unique value on every machine. Boost.Chrono is now required for compiling and I have updated the makefiles to reflect that.
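For illustration only, here is a minimal sketch of what that kind of initialization can look like (this is not the actual commit; the function name is made up, and the real change lives in the miner code in main.cpp):

    #include <boost/chrono.hpp>

    // Seed the miner's extra nonce from a nanosecond-precision clock so that
    // machines sharing one wallet start from different values.
    // InitExtraNonceSketch is a made-up name for this illustration.
    static unsigned int InitExtraNonceSketch()
    {
        boost::chrono::nanoseconds ns =
            boost::chrono::duration_cast<boost::chrono::nanoseconds>(
                boost::chrono::high_resolution_clock::now().time_since_epoch());
        return static_cast<unsigned int>(ns.count());
    }

Linking against Boost.Chrono (e.g. -lboost_chrono, plus -lboost_system on most setups) is what the updated makefiles take care of.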
|
|
|
mikaelh, do you have any plans to build a standalone miner for solo mining (getwork or getblocktemplate)? Xolo promised to make one, but... so far it's only a promise...
I'm not sure if I'm going to have the time to develop a standalone miner. Right now I have no plans for it. Hopefully someone will step up and make one.
|
|
|
I looked at the code a bit and I spotted a potential issue with shared wallet mining. The issue is that N mining threads are probably using the first N keys available in the wallet. If multiple machines are using the same wallet, they may be trying to solve the same block if the timestamp and the extra nonce are the same. I can write a fix for that but it's unlikely to help you since you're using separate wallets.
Please fix this issue in the next version or provide a patch; I know of many miners using the same wallet + HP11. As a quick fix, would it help if the time were changed on servers mining with the same wallet, e.g. one set to UTC+1 and the other to UTC+2?
Playing around with the system time will likely cause more problems. There's even a warning message if it seems to be off by too much. So you should actually check that all your servers have the correct time.
|
|
|
Any plans for OS X and Android wallets?
The website is also in serious need of an update.
I don't have a Mac computer, so there will be no OS X builds in the near future. There are some other OS X builds in the wild: http://www.peercointalk.org/index.php?topic=343.0
No plans for an Android wallet currently.
|
|
|
We use separate wallets, and they were all generated using -keypool=5000, so each holds up to 5000 pre-generated keys. Now the important part: just before I switched back to XPM mining I came across a post about haveged: https://bitcointalk.org/index.php?topic=255782.msg2899987#msg2899987
I killed it on all servers, started mining, and in less than one hour a block was found. Now, how does that explain the blocks found before 22 Dec? Well, I do recall killing haveged manually on all servers on several occasions, so it most probably was not running on all of them. What I do know for sure is that all servers were rebooted on 22 Dec and several times after, and chkconfig shows haveged as on, so it started automatically.
I think the positive/negative experiences with haveged are merely coincidences. Haveged should only be generating some additional entropy, which isn't used by the mining process. Entropy is only used by the OpenSSL library for specific tasks (e.g. generating new keypairs).
Regarding connectivity, if the wallet loses all connections, wouldn't it keep trying to reconnect indefinitely, using the peers.dat file? I've had machines lose connectivity for hours and reconnect just fine.
Yes, the wallet will keep trying to reconnect to nodes stored in peers.dat. There should normally be at least hundreds of addresses in that file.
|
|
|
Hello mikaelh, I have checked debug.log on all servers, no errors. They are not using the same wallet.dat file. The haveged package is installed on all instances to ensure entropy / no duplicate work. They are all synced with the network. If the sieve options are not causing the issue, the only thing I have not done is recreate the peers.dat file to get a fresh list of nodes on every server. Would it be better to point all mining servers to one under my control, with a -connect= flag in the config, or leave node selection at the default?
I have switched them all to mining Litecoin since my last post but will soon switch them back. Let me know if there is anything else you'd like me to test.
If every miner is using a separate wallet, then you shouldn't have issues with running out of pre-generated keys. If your wallets are encrypted, then you may run out of keys because the wallet needs to be unlocked before new keys can be generated.
Connectivity is also important because mining will stop if the wallet loses all connections. Using the -connect parameter is probably a bad idea because it introduces a single point of failure. If your central node crashes, then all the slave nodes lose connectivity. You should use -addnode if you want to have a central node.
I looked at the code a bit and I spotted a potential issue with shared wallet mining. The issue is that N mining threads are probably using the first N keys available in the wallet. If multiple machines are using the same wallet, they may be trying to solve the same block if the timestamp and the extra nonce are the same. I can write a fix for that but it's unlikely to help you since you're using separate wallets.
That's all the ideas I have currently.
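As a concrete illustration of the -addnode suggestion (the host name is a placeholder), each mining server's primecoin.conf could simply contain:

    addnode=central-node.example.com

Unlike connect=, addnode keeps normal peer discovery going, so the miners stay connected even if the central node goes down.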
|
|
|
18 Dec - 8 blocks, 19 Dec - 9 blocks, 20 Dec - 6 blocks, 21 Dec - 8 blocks, 22 Dec - 2 blocks, 23 Dec - 0 blocks, 24 Dec - 0 blocks, 25 Dec - 0 blocks ...
It must be due to the -sieveextensions, -sievepercentage and -sievesize values, or the HP11 client itself?
Difficulty 10 started on the 15th of December. Your data shows that your mining results dropped to zero about one week after that. I don't really see why that would happen. It's probably a good idea to also check that your miners and machines haven't crashed for some other reason.
|
|
|
Why does the primecoin calculator say otherwise, then? It says 33 days for my setup; before that change, the estimate was 100+ days to find a block...
Many of the calculators are still using outdated formulas. They give bad estimates when fractional difficulty is high. So the 100+ days estimate you got earlier was way too high. It should have been less than 30 days in reality.
|
|
|
Yes, the previous blocks/day formula was written with difficulty 9.99 in mind. This is the slightly more complicated version which is accurate for lower fractional difficulty:
blocks/day = chains/day * (0.965 * (1 - fracDiff) + 0.035)
I haven't checked yet whether the constants (0.965 and 0.035) still apply for difficulty 10. Of course, this formula only applies for the new chains/day values.
Right now at difficulty 10.15, mining should be about 22% harder than it was at 9.996. Difficulty 10.0 is about 5% harder than 9.996 in theory. The block reward has also dropped by 3%. And about 15% of 10-chains are discarded due to fractional difficulty.
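To spell out that last figure: with fracDiff = 0.15, a found 10-chain only counts if its own fractional length is at least 0.15, so (assuming fractional lengths are roughly uniformly distributed) about 15% of 10-chains are discarded; chains of length 11 or more, the ~0.035 term in the formula, are unaffected.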
I haven't checked what the new optimal mining parameters are. The old ones should still be pretty good. It might be beneficial to increase -sieveextensions.
|
|
|
Yup, the Primecoin network has finally reached difficulty 10. Chains/day values should be about 27 times lower now. This is to be expected because chains/day is now trying to predict how many 10-chains will be found, and each additional prime in a chain only turns up with a probability of a few percent, so 10-chains are correspondingly rarer than 9-chains.
|
|
|
I updated the FAQ in my first post a bit. Most importantly I added the revised blocks/day formula there.
The revised formula is: blocks/day = chains/day * (1 - fracDiff + 0.035)
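As a quick sanity check with made-up numbers: at fracDiff = 0.15, a miner reporting 1.0 chains/day would expect about 1.0 * (1 - 0.15 + 0.035) ≈ 0.89 blocks/day.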
Note that the old formula is still used in many places like the bot on Freenode and Anty's website.
|
|
|
When I use the primecoin-hp11 binary (downloaded from SourceForge) everything runs fine. Let's call this one A. When I use the binary I get after compiling from source (either SourceForge, GitHub or Bitbucket) -- let's call this one B -- I get these kinds of messages in debug.log:
received block 1512dbec68b62a277efddb631a3de13fc6293f6906a2386bdedf7e7157c5218b
ERROR: CScriptCheck() : 0ef6ec05ffdc0a1fcc54efbe10ab2ba90359f0038c75278382e3358b8014602e VerifySignature failed
InvalidChainFound: invalid block=1512dbec68b62a277efddb631a3de13fc6293f6906a2386bdedf7e7157c5218b height=297777 log2_work=43.922259 date=2013-12-06 19:46:32
InvalidChainFound: current best=389d7b3c8f4b5ea6ff76504e837942f0e2d58a40d7bac74da4a6faba491ee8b9 height=297776 log2_work=43.92224 date=2013-12-06 19:44:57
InvalidChainFound: invalid block=1512dbec68b62a277efddb631a3de13fc6293f6906a2386bdedf7e7157c5218b height=297777 log2_work=43.922259 date=2013-12-06 19:46:32
InvalidChainFound: current best=389d7b3c8f4b5ea6ff76504e837942f0e2d58a40d7bac74da4a6faba491ee8b9 height=297776 log2_work=43.92224 date=2013-12-06 19:44:57
ERROR: SetBestBlock() : ConnectBlock 1512dbec68b62a277efddb631a3de13fc6293f6906a2386bdedf7e7157c5218b failed
ERROR: AcceptBlock() : AddToBlockIndex failed
ERROR: ProcessBlock() : AcceptBlock FAILED
In fact, no new "ProcessBlock: ACCEPTED" ever happens. Even if I return to A, the same error now shows (it looks like the chain was corrupted by B). I am unable to reindex the chain with B (InvalidChainFound appears around height 85431). Any idea what's wrong? I am on a Fedora 19 box.
Well, it could be a compiler issue in Fedora 19. Did you use any special compiler flags when compiling? Also, it could be an issue in your version of OpenSSL. In the past Red Hat has removed support for EC crypto from their version of OpenSSL. Did you compile your own version of OpenSSL, or did you use Fedora's version?
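One quick, generic way to check whether a distro's OpenSSL build includes EC support (this is a standard OpenSSL command, nothing primecoin-specific) is:

    openssl ecparam -list_curves

If that errors out or lists no curves, the library was built without EC and a self-compiled OpenSSL would be needed.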
|
|
|
@mikaelh Thank you for clarifying. Could you also briefly discuss the other enhancements pointed out by Supercomputing? Not necessarily definitively, just from the perspective of your experience. There are several enhancements which account for the drastic increase in performance over the current CPU implementation:
1) Montgomery reduction is used.
2) The size of the multiprecision arithmetic is fixed.
3) An optimized sieve is running on the GPU.
4) An optimized primorial search is running on the GPU (double SHA-256).
5) An exploitation of the difficulty (Sunny King knows what I am referring to here, just ask Sunny).
Thanks.
OK, here's my take on these claims.
1) There's no question about the fact that GPUs are faster at modular multiplication. Supercomputing has already linked plenty of papers about that. The missing piece of information about his implementation is how he goes from multiplication to exponentiation. The Fermat test in Primecoin is all about doing modular exponentiation. There's a well-known algorithm for that which uses modular squaring and multiplication. I think you are forced to do some branching on the GPU, which slightly slows it down.
2) Fixing the precision definitely gives a minor speedup and makes the implementation easier. The only caveat is that it may not be future-proof when we move to longer chains.
3) Yes, I think it's possible to implement a much more efficient sieve if you exploit the shared memory on the GPU.
4) I'm guessing that primorial search refers to finding header hashes divisible by a primorial. The CPU implementation searches for hashes that are divisible by 7# (= 2 * 3 * 5 * 7). On the CPU this takes only a tiny fraction of the time. If you have a fast GPU implementation of it, you can search for hashes divisible by much larger primorials. This might get him a minor speedup. Note that his faster primorial search will be obsolete once mining protocol v0.2 is enforced.
Link: http://www.peercointalk.org/index.php?topic=453.0
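For readers who haven't seen it, here is a minimal sketch of that square-and-multiply idea (plain 32-bit integers purely for illustration; the real miner does this with multiprecision integers, and the function names here are made up):

    #include <cstdint>

    // Compute base^exp mod m by repeated squaring, multiplying in the base
    // for every set bit of the exponent. Intermediate products fit in 64 bits
    // because the operands stay below 2^32.
    uint32_t ModPowSketch(uint32_t base, uint32_t exp, uint32_t m)
    {
        uint64_t result = 1;
        uint64_t b = base % m;
        while (exp > 0) {
            if (exp & 1)
                result = (result * b) % m;   // multiply step (data-dependent branch)
            b = (b * b) % m;                 // square step, once per exponent bit
            exp >>= 1;
        }
        return static_cast<uint32_t>(result);
    }

    // Fermat test with base 2: n is probably prime if 2^(n-1) mod n == 1.
    bool FermatProbablePrimeSketch(uint32_t n)
    {
        return n > 2 && ModPowSketch(2, n - 1, n) == 1;
    }

The data-dependent branch in the multiply step is exactly the kind of divergence mentioned above as a minor slowdown on GPUs.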
|
|
|
5) An exploitation of the difficulty (Sunny King knows what I am referring to here, just ask Sunny).
Sunny?
From Sunny King's design document: "Block hash, the value that is embedded in the child block, is derived from hashing the header together with the proof-of-work certificate. This not only prevents the proof-of-work certificate from being tampered with, but also defeats attempts at generating a single proof-of-work certificate usable on multiple blocks on the block chain, since the block header hash of a descendant block then depends on the certificate itself. Note that, if an attacker generates a different proof-of-work certificate for an existing block, the block would then have a different block hash even though the block content remains the same other than the certificate, and would be accepted to the block chain as a sibling block to the existing block."
Unless I completely misunderstood the meaning of that statement, why would generating a different proof-of-work certificate for an existing block be considered an attack? Which implies two things: 1) The difficulty of that block will be frozen, and you can add as many sibling blocks as you wish, as the probability of finding the next sibling block is not much lower than finding the next block. 2) There is no mechanism in place to prevent spending from the sibling block, i.e. a double-spending attack. Please let me know if I misunderstood something.
I think you have misunderstood the paragraph you quoted. That note is trying to say that the attack wouldn't work. The sibling blocks would all be orphans, which makes them useless. Only one of the blocks will be part of the blockchain because its block hash is referenced from the next block in the blockchain. The block cannot be replaced unless you are able to create another block with the same block hash. I think Sunny was trying to say that it's not possible to create such a block in Primecoin even if the attacker finds a different proof-of-work certificate. That's because the certificate (i.e. the prime chain multiplier) is hashed into the block hash.
Note that there are two hashes: the block header hash and the block hash. They are defined as follows:
blockHeaderHash = HASH(nVersion, hashPrevBlock, hashMerkleRoot, nTime, nBits, nNonce)
blockHash = HASH(nVersion, hashPrevBlock, hashMerkleRoot, nTime, nBits, nNonce, bnPrimeChainMultiplier)
The block hash is the "official" hash of the block which is referenced in the next block as hashPrevBlock.
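As a toy illustration of what the two hashes commit to (this is not the real serialization; std::hash stands in for the actual hash function and the field types are simplified):

    #include <cstdint>
    #include <functional>
    #include <sstream>
    #include <string>

    struct BlockSketch {
        int32_t     nVersion;
        std::string hashPrevBlock;
        std::string hashMerkleRoot;
        uint32_t    nTime, nBits, nNonce;
        std::string bnPrimeChainMultiplier;  // the proof-of-work certificate

        // Header hash: does NOT cover the certificate.
        std::size_t HeaderHash() const {
            std::ostringstream s;
            s << nVersion << hashPrevBlock << hashMerkleRoot << nTime << nBits << nNonce;
            return std::hash<std::string>()(s.str());
        }
        // Block hash: covers the certificate too, and is what the next
        // block references as hashPrevBlock.
        std::size_t BlockHash() const {
            std::ostringstream s;
            s << nVersion << hashPrevBlock << hashMerkleRoot << nTime << nBits << nNonce
              << bnPrimeChainMultiplier;
            return std::hash<std::string>()(s.str());
        }
    };

Swapping in a different certificate changes BlockHash() but not HeaderHash(), which is why a block with a different certificate can only ever end up as a sibling, not a replacement.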
|
|
|
|