Show Posts
Hivewallet: You might try contacting https://coinplorer.com at contact@coinplorer.uservoice.com. They run the most feature-rich block explorer for Primecoin that I'm aware of, although I don't know that they provide an API. You might be able to work with them to get the interface you need. If that isn't possible then the community might be able to throw together a server that can handle the API calls you need. I moderate the Primecoin subreddit and would be willing to help promote a search on that forum for a developer willing to put together such an interface. You would also be well served looking around on ppcointalk.org, which serves as the main forum for discussion of both Primecoin and Peercoin.
|
|
|
Could you explain what the Primecoin algorithm actually does?
Why doesn't Primecoin, for example, just read known prime numbers from some database, check whether any of them form a Cunningham chain candidate, and offer the result?
Does this algorithm have to find all these prime numbers by brute force first, and then find the chains?
If you could describe this in layman's terms, I'd be very happy.
The algorithm is kind of complicated to explain in lay terms. First, I'd like to point out that there is no database of primes of this size that would be even remotely useful. The numbers used are on the order of 10^77 (roughly 2^256). In that range about 1 in 177 numbers is prime. That means that in order to store all of the prime numbers of that size you would need more memory than could ever exist in the universe--more bits than there are atoms. No such database can ever exist within the current understanding of physics.

So, the algorithm is somewhat of a brute force. It has some elegance to it, though. The chains have a very specific form, which gives a lot of opportunity for speedups. For example, if your chain starts at N then 2N-1 is in the chain (for one of the chain types). It would be inefficient to check the primality of 2N-1 if N has already been shown to be composite--the chain is already proven invalid. Many optimizations take this general form.

The algorithm itself is broken into two main parts: sieving and primality tests. Sieving is a really fast way to show that lots of numbers are composite, but it is impractical to show that any number is prime by a sieve alone. The most famous (and oldest) sieve is the Sieve of Eratosthenes, which has a lovely GIF and explanation on its Wikipedia page. Sieves work by eliminating numbers that have small factors: it is easy and computationally cheap to carry out a single division to determine whether a big number, N, is divisible by a small number, k, and about 1 in k such divisions will show that N is divisible by k and therefore composite. This can be taken a step further by noting that if N is divisible by k then N + a*k is also divisible by k, allowing many candidates to be eliminated by a single division and a few additions. Thus, the mining algorithm first generates a set of candidate numbers based on a hash function and the data in the block, then runs that set of numbers through a sieve.
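The elimination step just described can be sketched in a few lines of Python. This is a toy illustration, not the reference miner's code; the function name and parameters are made up for the example. One modular reduction per small prime k locates the first divisible candidate, and every k-th candidate after it is struck out without any further division.

```python
# Toy sketch of the sieving idea: one division per small prime k
# eliminates a whole arithmetic progression of candidates at once.

def sieve_candidates(base, count, small_primes):
    """Mark candidates base, base+1, ..., base+count-1 that have a
    small factor; survivors go on to expensive primality testing."""
    is_candidate = [True] * count
    for k in small_primes:
        # One modular reduction finds the first offset divisible by k...
        first = (-base) % k
        # ...then every k-th candidate after it is divisible by k too.
        for i in range(first, count, k):
            is_candidate[i] = False
    return [base + i for i, ok in enumerate(is_candidate) if ok]

survivors = sieve_candidates(10**6 + 1, 100, [2, 3, 5, 7, 11, 13])
# Survivors have no factor <= 13; most are still composite, but far
# fewer expensive primality tests are needed than for all 100 candidates.
```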
The next step is to actually test the numbers that remain after the sieve takes out most of them. This is accomplished by the Fermat (probable) primality test, which relies on the fact that a^(P-1) ≡ 1 (mod P) for any prime P and any a not divisible by P. That is to say, you can take a number (a), raise it to the power of your candidate (P) minus one, then divide the result by the candidate; if you get anything other than 1 as the remainder of that division then P is composite, otherwise P is probably prime. For Primecoin a is always taken to be 2, and "probable" is taken to be good enough--the odds that 2 is a "Fermat liar" for P are much less than the odds that P is prime.

Thus, the general flow is: (build a block and find an appropriate hash value to start from) -> (form a set of numbers that could be valid as parts of prime chains for this block) -> (run the numbers through a sieve) -> (eliminate numbers that passed the sieve but aren't parts of chains of length 9 or longer (the current integer difficulty)) -> (for each chain of possibly prime numbers remaining, execute a Fermat primality test on each link until you either find a complete chain or find a composite number that proves no chain long enough will be found there).
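The base-2 Fermat test above fits in one line of Python, since the built-in three-argument pow performs fast modular exponentiation. This is a minimal sketch, not the client's implementation.

```python
# Minimal base-2 Fermat probable-prime test, as described above.
# pow(a, e, m) is Python's built-in fast modular exponentiation.

def fermat_probable_prime(p):
    """Return True if 2^(p-1) = 1 (mod p), i.e. p is a base-2
    probable prime. Composites that pass (Fermat pseudoprimes)
    exist but are rare at Primecoin's sizes."""
    return p > 2 and pow(2, p - 1, p) == 1
```

Note that the test is one-sided, as the post says: 341 = 11 * 31 is the smallest composite that passes with base 2, so a "pass" only means "probably prime."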
|
|
|
The miner for ypool can never match the efficiency of a solo miner, and it certainly can't compete with mikaelh's most recent builds. This is because in pooled mining (at least ypool's implementation of it) the miners are paid to produce blocks of a lower-than-network difficulty, as in Bitcoin. This is not a problem in Bitcoin since you can't look for low difficulty shares without looking for high difficulty shares at the same time.
However, in Primecoin it is very possible to tune one's search for shorter chains. For example if, after the sieve, you find a collection of 7 numbers that are in the form of a chain (e.g. H-1, 2H-1, 4H-1, 8H-1...) but the number on either side of the chain was proven to be composite, then you should not waste time with an expensive primality test on any of those numbers--it will never be a valid share when the network difficulty is 8 or higher. However, if miners are paid to produce shares of difficulty 7 then they should check this chain.
There are a few resolutions to this dilemma. One possibility is that everyone checks all chains that aren't long enough to be network shares but could still be pool shares. This is fair for everyone--nobody has an advantage over anyone else--but it means that fewer blocks are generated overall (everyone is wasting time that doesn't benefit the network). Another possibility is that some people ignore the shorter chains, while others check all chains. This gives an unfair advantage to the people ignoring the shorter chains--they will produce fewer valid blocks for the pool this way while producing more shares and taking a larger cut of the profits. The final option is for everyone to check only the longer chains while ignoring the shorter ones. This is the solution that gives the highest average payout and is the one that ypool is trying for, but it has the problem that if anyone wants to increase their payout they just have to change a couple lines of code and suddenly they can start taking a higher payout. This is a classic case of the Tragedy of the Commons.
I have explained this attack in detail to (who I think were) the operators of ypool and they have continued to operate. The only case where mining with them is a wise decision is if you are so averse to variance that you are willing to take an enormous cut to your profits (e.g. 50% or more) in exchange for a more regular payout.
Fascinating. So do you think Primecoin is going to end up fundamentally incompatible with pooled mining simply because of the nature of the computational work being done for the coin, or are the problems that you have outlined likely specific to ypool's implementation and solvable by a different and creative approach to parceling out work to pool participants? If Primecoin ultimately became a coin that was considered optimal to GPU mine, but significantly suboptimal to pool mine, that would make it quite the unusual coin, and would leave small miners with no escape from high variance.

I'd say from that explanation the only way one could effectively pool mine without this exploit would be to literally set up their own private pools to operate however many rigs they are mining Primecoin with. Thank you for the insight Koooooj, interesting.

I don't have a deep understanding at all here. But this is particularly interesting because so far we've all, as far as I know, only been using modified internal miners (like mikaelh's). No one has really even solved the problem of parceling out work to two separate CPUs using external miners. But at a basic level, I would think that if there someday exists code allowing 2+ CPUs (or 2+ GPUs) in the same system to interface with the same primecoind and mine via external miner, there would have to be a way to insert as an intermediary some pool management code to consolidate work and distribute rewards fairly.

Setting up a pool among your own miners gets you nothing. Parceling out work to two separate CPUs doesn't get you much, but it could allow a larger sieve to be more efficient. It all comes down to network bandwidth and latency, which would make a large distributed pool impractical. I got a reply from the ypool operators and they've made the best suggestion I've seen yet.
They would require miners to submit information about the sieve they used, so that the server could check that the shares being submitted are valid. Due to the computational demands of executing the sieve they could only check a small percentage of shares, but if a miner is found to be cheating then they could be evicted from the pool and their mining proceeds confiscated. Such a system would require miners to establish a certain amount of trust before being allowed to withdraw, but it could be made to work. Not nearly as efficient as Bitcoin pools, but still a viable option.
|
|
|
Just wanted to say: great miner! I feel like this is probably the most optimized miner on the market right now. Looking over the source code of the original miner and yours, I think I see a way to make the miner (potentially much) faster from a mathematical standpoint rather than a code-optimization standpoint. My optimization centers around the primorial and the loop that occurs in main.cpp on line 4622.

Before getting into the code, it is important to realize why a primorial is helpful (if you already understand this, skip this paragraph). With numbers on the order of 2^256 there is about a 1 in 177 chance that a random number is prime. If you select 8 random numbers, then, the odds of all of them being prime are about 1 in 1 quintillion--impractically low. If you limit your search to only odd numbers, though, the odds shoot up tremendously. Further limiting your search to numbers not divisible by 3, 5, 7, and so on makes the odds of finding a prime number much, much better. If the hash value is divisible by lots of these small numbers then multiples of that hash ± 1 will not be divisible by any of those numbers. Thus, it is convenient to use a hash that is already a multiple of 2 * 3 * 5 * 7 * ... as it will produce far more primes than another hash.

As I understand the aforementioned loop, the program is searching for a hash that is divisible by a primorial. Each iteration of this loop requires a hash to be generated as it increments the nonce value. In the present form the primorial is of degree 7 (line 4579: static const unsigned int nPrimorialHashFactor = 7). I suspect that this value is carefully chosen and is probably ideal as written. However, I think there's an additional step that can be added to make the process much faster.
Increasing the degree of the primorial is incredibly expensive as it gets larger: adding the 8th prime requires 19 times as many hashes to be checked, the 9th prime 23 times as many, and so on. There is another way, though. Prime origins are required to be in the form O = H * N, where O is the origin, H is the hash, and N is any integer; H is selected to be divisible by p#7 (the primorial described above). If we extend this to O = H * N * P2, where P2 is 19 * 23 * 29 * ... * 47--a product of primes starting after the last prime used in the existing search--then the checked origin is still valid (an integer times an integer is still an integer). This grants the benefits of searching with a higher-degree primorial while not requiring a longer search for a working hash.

Nothing is free, though, as this method inflates the size of the primes to be checked. If the fast modular exponentiation is implemented through the same method as is used on the Fermat primality test Wikipedia page then the algorithmic efficiency is O(log^2(n) * log(log(n)) * log(log(log(n)))). There should be some sweet spot for how large a primorial to use, where the increased frequency of primes more than offsets the extra time required for the fast modular exponentiation. It's possible that the sweet spot is 0 extra primes, but I think it's worth looking into.

Definitely a great suggestion. I haven't had the time to study the code from a mathematical perspective, so this is helping me understand the code better. Unfortunately it seems that Sunny King was ahead of you here: the code is already doing what you proposed. It uses something called a round primorial which is dynamically adjusted. You can see these round primorials in debug.log if you enable the debugging options -debug -printmining. A fixed multiplier is calculated through division by the first primorial used to choose the hash value. This fixed multiplier corresponds to your P2.

I think some improvements could be made to the dynamic adjustment system; it seems to be picking lots of different values.

Darn. I guess Sunny was on his game! At any rate, I hope this at least opens a new door for additional tuning of the miner in the wild--perhaps a system that does away with the dynamic tuning in favor of a conf file/command line parameter. I don't have the time to crawl through the source code right now--a friend is getting married--but I'll take a look when I get back.
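The fixed-multiplier idea discussed in this exchange can be sketched numerically. The names SMALL, EXTRA, and candidate below are illustrative only, and the "hash" is a made-up multiple of the small primorial, but the arithmetic shows why O = H * N * P2 keeps the divisibility guarantee: every prime in either primorial divides O, so O ± 1 is coprime to all of them.

```python
from math import gcd, prod

# Illustrative sketch (not the miner's code): extend a hash H that is
# divisible by the small primorial with a fixed multiplier P2 built
# from the next primes, so candidates around N*H*P2 avoid even more
# small factors.

SMALL = [2, 3, 5, 7, 11, 13, 17]          # primes in the hash primorial p#7
EXTRA = [19, 23, 29, 31, 37, 41, 43, 47]  # primes in the fixed multiplier

P1 = prod(SMALL)   # the hash must be divisible by this
P2 = prod(EXTRA)   # fixed multiplier applied afterwards

def candidate_origin(hash_value, n):
    """Origin O = H * n * P2; O +/- 1 then shares no factor with P1*P2."""
    assert hash_value % P1 == 0
    return hash_value * n * P2

H = 12345 * P1                 # made-up hash that is divisible by p#7
O = candidate_origin(H, 6)
assert gcd(O - 1, P1 * P2) == 1 and gcd(O + 1, P1 * P2) == 1
```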
|
|
|
For the process of submitting data to primecoind, I would reference the submitBlock function in the same file on line 346. Critical to that function are the parseHex function, which converts the parameter from a hex string to its numeric value (i.e. the string "10c8e" would become the number 0x10c8e, or 68750 in decimal), the constructor of the CDataStream class, found in serialize.h, and the ProcessBlock function found in main.cpp (line 2236).
From the looks of it the parseHex function is just a simple utility function, and the ProcessBlock function just goes through and makes all the appropriate checks to prevent DoS attacks and to make sure the block is valid. The real magic of submitting the block is in getting the right hex data, in the right order. For that, I would look at how CDataStream interacts with CBlock, as the former is just a holding cell for the latter (hex data is put into the CDataStream and then fed into the CBlock object, as is shown on line 359 of rpcmining.cpp). CBlock is defined on line 1340 of main.h.
The relevant function is defined on line 1362 (IMPLEMENT_SERIALIZE), although it is defined by macros. Looking at these macros (defined in serialize.h, lines 55-93) it would appear that you first must give the block header, followed by a vector of transactions. The block header is defined in main.h on line 1266 and should have the following fields in order: version, previous block hash, hash of the Merkle root, time, "nBits" (the target, defined in prime.cpp), the nonce, and finally the prime chain multiplier (i.e. hash * multiplier = origin of the prime chain). Each item in the vector of transactions should contain, in order: version, a vector of inputs (CTxIn), a vector of outputs (CTxOut), and finally "nLockTime", which has minimal documentation beyond a note that it was implemented in (presumably Bitcoin) version 0.1.6.
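The header field order read out above can be sketched as a packing routine. This is heavily hedged: the field widths (int32 version, 32-byte hashes, uint32 time/nBits/nonce, little-endian throughout) are assumed from Bitcoin conventions, and the prime chain multiplier is a variable-length CBigNum whose encoding is only gestured at with a one-byte length prefix. Treat this as a reading aid, not a working encoder.

```python
import struct

# Assumed layout based on the IMPLEMENT_SERIALIZE reading above; field
# widths follow Bitcoin conventions and the CBigNum encoding is only
# sketched. Illustrative, not a drop-in serializer.

def pack_header(version, prev_hash, merkle_root, time_, nbits, nonce,
                chain_multiplier_bytes):
    assert len(prev_hash) == 32 and len(merkle_root) == 32
    header = struct.pack('<i', version)                 # int32, little-endian
    header += prev_hash                                 # 32-byte hash
    header += merkle_root                               # 32-byte hash
    header += struct.pack('<III', time_, nbits, nonce)  # three uint32 fields
    # CBigNum serializes as a length-prefixed byte vector (sketched):
    header += bytes([len(chain_multiplier_bytes)]) + chain_multiplier_bytes
    return header

h = pack_header(2, b'\x00' * 32, b'\x11' * 32, 1376000000, 0x0a00b000, 42,
                b'\x07')
```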
This hierarchy continues, with each object serializing its own data members (which may themselves be objects) in turn, until you get down to primitives. CTxIn, for example, has COutPoint prevout (an object), then CScript scriptSig (an object), and finally nSequence (an integer--a primitive). The order of these is seen on line 347 of main.h as the order of the READWRITE lines. One can go deeper by finding where COutPoint and CScript are defined and looking at their READWRITE definitions in IMPLEMENT_SERIALIZE.
When you finish crawling this tree of IMPLEMENT_SERIALIZE functions you end up with a really long sequence of data that must be submitted. If you collect all of that data in the right order, represented as hex values, and you pass it to the submitblock function (e.g. primecoind submitblock <data>) then it will go about checking to make sure the prime chain is long enough and, if it is, it should broadcast the block through the network.
For the process of getting data from primecoind, I would reference the getwork function in rpcmining.cpp on line 86. Based on its documentation it should return formatted hash data to work on. However, every attempt I've made to run "getwork" on my Windows Qt machine has resulted in the client crashing. On Linux, on the other hand, it nicely returns midstate, data, hash1, and target as separate fields in a JSON format (similar to the output of getmininginfo, but with different fields). I believe that hash1 and midstate are not needed and are holdovers from the Bitcoin roots of the client. Target is the difficulty, although its format is a little odd (it is little-endian, meaning the least significant digit comes first). Data is a 128-byte array that contains the version, the hash of the previous block, the hash of the Merkle root, the time, the difficulty, the nonce, "block", and hash1. I believe this is the order in which the data is listed, and I believe this is sufficient data to find the hash needed for the proof of work (if one of the hashes already contained isn't the hash needed).
I've exhausted my skill (at least until I can get some sleep) as far as looking at the getwork and submitBlock commands is concerned. There's plenty of fun C/C++ trickery going on and I hardly know C/C++. I hope that this is useful towards the development of a GPU miner. If you're feeling generous and this helped, I can receive donations/bounties at (BTC)1FbWVR3LpGoWp5rc4SJc3cHo42ALrYhcxC or (XPM) AFozUk53j5uZCrKAhkcrM6SjnKtGRXCnov. I will, of course, attempt to answer any questions about this, although all I know is what I've read and this is a much larger project than I've ever worked with.
|
|
|
This shouldn't be taken as 100% correct, but I seem to remember it being mentioned that each step up in chain length is a 30-fold increase. So an 8-length chain would be 30x more difficult than a 7-length chain, and a 9-length chain would then be 900x more difficult than a 7-length chain.
So in other words, your average rate of blocks per hour would be the 5-chain rate / 30^(diff - 5). Time to write a script... Edit: updated the script.

I don't think that's quite it, but it's close. I don't think this is a case where we can use 30 as the ratio between each chain length; empirically it seems to be off, at the very least--I've estimated it as closer to 12 by looking at the gap between 1- and 5-chains. Also, that would not be the blocks per hour; it would be the number of 8-chains per hour. Primecoin has a fractional difficulty that is roughly linear, so if the difficulty is 8.5 then about half of all 8-chains are not valid work shares. If the difficulty is 8.99 then only 1 in 100 8-chains are valid. You need to account for that before you arrive at blocks per hour. Generally it is blocks per hour = 8-chains/h * (1 - (difficulty - floor(difficulty))), but when you get close to the next integer difficulty things get weird, since 9-chains count for any difficulty 8.xx and 9-chains are more common than 8-chains with fractional length above 8.99.
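The corrected relationship can be written as a tiny function. This is a sketch of the crude model in the post (fractional difficulty invalidating a matching fraction of integer-length chains), and ignores the near-integer edge case mentioned at the end.

```python
import math

# Crude model from the discussion above: an 8-chain only counts as a
# block if its fractional chain length clears the fractional part of
# the difficulty, so a fraction (d - floor(d)) of 8-chains is lost.

def blocks_per_hour(chains_per_hour, difficulty):
    """Estimate blocks/hour from the hourly rate of chains whose
    integer length equals floor(difficulty)."""
    frac = difficulty - math.floor(difficulty)
    return chains_per_hour * (1 - frac)
```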
|
|
|
restart... could anyone do the math about the ratio between 5-chains and 8-chains?

I found that the ratio between 1-chains and 5-chains was approximately 12^4, which would suggest that the ratio between N-chains and (N+1)-chains is about 12:1. That would suggest that there are about 12^3 5-chains per 8-chain, or about 1,728:1. If that ratio holds (there's no guarantee that the ratio of N-chains to (N+1)-chains is constant) then a miner with 1,728 5-chains/hour should find a block every 10 hours when the difficulty is 8.9, every 20 hours when difficulty is 8.95, or every 100 hours when difficulty is 8.99 (all of these are rough averages). This math should not be taken as anything other than a crude approximation, but it should give an idea of the order of magnitude. My experience has shown that this approximation is at least within a factor of 10.
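The back-of-the-envelope estimate above can be reproduced directly. The 12:1 ratio is the empirical guess from the post, not an exact constant.

```python
# Rough estimate from the discussion above: scale a miner's 5-chain
# rate down by the (empirical) 12:1 per-length ratio, then account
# for the fractional part of the difficulty.

RATIO = 12  # estimated N-chain to (N+1)-chain ratio

def hours_per_block(five_chains_per_hour, difficulty):
    eight_chains_per_hour = five_chains_per_hour / RATIO**3
    valid_fraction = 1 - (difficulty - int(difficulty))
    return 1 / (eight_chains_per_hour * valid_fraction)
```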
|
|
|
(long quote)
Awesome explanation, sending 0.25 BTC over. I think I have a good enough explanation of the math to go forward. However (and this is a really dumb question), for the block value that has to be a factor of the origin of the prime chain, do we just convert the hex of the previous block? (So, say we are trying to mine block 54320, we would convert the hex of block 54319 to decimal, correct?) Oh, and another dumb question... What does '≡' mean?

Thanks for the BTC! The explanations of ≡ given already are correct--it's just a fancy equals sign for modular arithmetic. As for the divisibility requirement, it is not just the block depth (not sure if that was what you were asking). If it were, then it would be trivial to start working on the prime chain for the 75,000th block far ahead of time, and everyone would be looking for the exact same primes at the same time, meaning the fastest computer would always win. It is also not just the hex of the previous block, I don't think, as that would open it up to someone trying to manipulate the block to have the correct hex to allow a precomputed chain of primes to be accepted.

As I understand it, you essentially perform a single Bitcoin-style proof-of-work hash on the block you're looking at, and then instead of checking whether that number is less than a difficulty target, as you would in Bitcoin, you search for numbers divisible by it. I'm not sure of the exact order that everything has to go into this hash, but I know that it includes the previous block's hash, the time, the Merkle root of the transactions, an arbitrary nonce, and likely other data. Looking at the data associated with a few blocks, it would appear that the specific hash used is not the current block's "hash" that is used to identify it; I think that hash includes data about the prime chain, which means it can't be used as a requirement for what the primes are.
Looking at block 54321, for example, the prime origin is 49817590324980501525602614022185091596872717137949689695556718316183864939183458194973000. The primorial used here appears to be 2*3*5*7*11*13*17*19*23 (since the origin is not divisible by the next prime number, 29). That means the hash was most likely 223304269316094689738863523617698277837712685026418323882590771799133988186997900, or 7887e497dbc303fd77e86f8c2ce7c73d96d039643ab695c311987fc2a16f6bfc48 in hex. If one were to take the (most likely SHA-256) hash of parts of the block in a prescribed order then they should get that number. Troublingly, however, that number is 67 hex digits long, which is 268 bits; it appears that I've either made an arithmetic error or misunderstood the procedure.

As an aside, note the security that this divisibility requirement brings: the hash value above was approximated by Wolfram Alpha as about 2.2 times the number of atoms in the visible universe. It is effectively impossible to work on a block out of sequence by guessing a valid origin, as you would be guessing until the heat death of the universe.
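The "peel off the primorial" step used in this reverse-engineering can be sketched generically. The function name and the small worked example are made up for illustration; the idea is just trial division by consecutive small primes until one fails.

```python
# Sketch of recovering the primorial factor of a prime origin by
# trial division with consecutive small primes, as done above.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def primorial_part(origin):
    """Return the largest initial primorial p1*p2*...*pk dividing origin."""
    primorial = 1
    for p in PRIMES:
        if origin % p != 0:
            break
        primorial *= p
    return primorial

# Made-up example: an "origin" built from the primorial through 23.
origin = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 1000003
assert primorial_part(origin) == 223092870   # 2*3*...*23
assert origin // primorial_part(origin) == 1000003
```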
|
|
|
I think I have a pretty good understanding of the algorithm from a mathematics standpoint. I don't know CUDA (or OpenCL, which I would suggest--it runs on both NVidia and AMD with reasonably comparable performance) and I don't know the getwork protocol, but I can give some insight into the mathematics, at least.
The first thing to understand is that if the block hash is H then H, 2H, 3H, 4H, etc. are all valid origins, and that H-1, 2H-1, 4H-1, 8H-1, etc. makes a valid Cunningham chain of the first kind (if all of those numbers are prime and the chain is long enough). For chains of the second kind, replace the "-" with "+". Most of the math here will assume Cunningham chains of the first kind, but it is identical for chains of the second kind. If H ± 1, 2H ± 1, ... are all prime (i.e. a chain of both the first and second kind) then the chain is a bi-twin chain. All of this means that when you are eliminating composite origins you are also eliminating numbers that could appear later in the chain. This will become useful later.
Another thing to notice is that H, 2H, 3H, 4H, etc. are very unlikely to be prime. The probability that a random number on the order of 2^256 is prime is about 1 in 177; that makes the probability that 8 randomly chosen numbers are all prime about 1 in 1 quintillion--impractically low. That probability is greatly improved by using a "primorial"--the product of many small primes. If you take R = H * 2 * 3 * 5 * 7 * ... * 47 (the first 15 prime numbers; quick tests on my own code suggest that this is about ideal, although some tweaking and testing is in order) then you increase the size of the numbers you are testing, but you also make them much more likely to be prime. If you look at R-1, 2R-1, etc. then you know that each of those numbers is not divisible by 2, 3, 5, ..., 47, because each is 1 less than (or greater than, if you are looking for Cunningham chains of the second kind) a multiple of each of those numbers. This means that the smallest factor NR ± 1 can have is 53, where N is any integer.
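The smallest-factor claim is easy to verify numerically. The sketch below takes H = 1 for simplicity (so R is just the primorial through 47); with any H the same argument holds, since N*R is still a multiple of each small prime.

```python
from math import prod

# Quick check of the claim above: with R a multiple of the primorial
# through 47, no N*R - 1 (or N*R + 1) has a factor below 53.
# H = 1 here for simplicity; any H gives the same divisibility.

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
R = prod(SMALL_PRIMES)   # primorial 2*3*...*47

for n in range(1, 200):
    for p in SMALL_PRIMES:
        # n*R is divisible by p, so n*R +/- 1 cannot be:
        assert (n * R - 1) % p != 0
        assert (n * R + 1) % p != 0
```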
From here you execute the sieve. The standard Sieve of Eratosthenes is very straightforward, but it is designed to be executed starting at 1 and then continuing over the small, consecutive integers, such as is shown in the GIF on the Wikipedia page. We are not interested in consecutive numbers, and they are far from small, so a change must be adopted. This change requires knowledge of modular arithmetic. To execute the sieve we first take a small prime number (starting with 53 if our primorial ended with 47) and compute R (mod 53). If (and only if) R ≡ 0 (mod 53) then R is divisible by 53, and so is every multiple NR. In that case, no NR-1 will be divisible by 53 and none should be eliminated.
In every other case, R ≡ r (mod 53), where 0 < r < 53. If you have found that r = 1 then that shows that R-1 ≡ 0 (mod 53) and is therefore not prime (and cannot be a member of a Cunningham Chain of the first kind). If r = 52 then R+1 ≡ 0 (mod 53) and is therefore not prime. If r is any other number then you have to search. Since r is relatively prime to 53 (i.e. gcd(r,53) = 1) it can be shown that N * r (mod 53) will cycle through every number less than 53. Thanks to modular arithmetic, N * r ≡ N * R (mod 53), so you can search for the N for which N1 * r ≡ 1 (mod 53) and for which N2 * r ≡ 52 (mod 53). You can eliminate N1R - 1 as a candidate for chains of the first kind, and you can eliminate N2R + 1 as a candidate for chains of the second kind.
The power of finding this N is that you then know that if N * r ≡ 1 (mod 53) then (53 + N) * r ≡ 1 (mod 53) since N * r is cyclic with a period of 53. Thus, you can quickly go through, just like in the classic form of the sieve of Eratosthenes, and eliminate every 53rd N. You would then repeat this process for subsequent prime numbers (59, 61, 67, ...), until sieving against another prime number is not worth it--it takes too long and removes too few numbers.
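A minimal Python sketch of this modular sieve (the sieve size, prime list, and R here are illustrative stand-ins, not the reference client's values; `pow(r, -1, p)` computes the modular inverse):

```python
# For each sieve prime p, find the one residue class of N for which
# N*R - 1 (first kind) or N*R + 1 (second kind) is divisible by p,
# then strike out every p-th N from that starting point.
SIEVE_SIZE = 1000
SIEVE_PRIMES = [53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

R = 223092870 * 1021  # stand-in for the primorial-multiplied hash

composite_1st = [False] * SIEVE_SIZE  # True => N*R - 1 has a small factor
composite_2nd = [False] * SIEVE_SIZE  # True => N*R + 1 has a small factor

for p in SIEVE_PRIMES:
    r = R % p
    if r == 0:
        continue  # p divides R itself; it can never divide N*R ± 1
    n1 = pow(r, -1, p)  # N1 * r ≡ 1 (mod p), so p divides N1*R - 1
    n2 = p - n1         # N2 * r ≡ -1 (mod p), so p divides N2*R + 1
    for n in range(n1, SIEVE_SIZE, p):
        composite_1st[n] = True
    for n in range(n2, SIEVE_SIZE, p):
        composite_2nd[n] = True
```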
The next thing you can do to eliminate candidates is to look for clusters (you can actually do this while sieving if you want, but I have separated it since it is a separate topic). Since the goal is to find chains of (currently) 8 primes, it makes no sense to waste time performing a long primality test on a run of sieve survivors that is only 7 or fewer long. Thus, you can look through the sieve for chains of the form N-1, 2N-1, 4N-1, ..., 128N-1 where all of those numbers have no small factors (which would have been found in the sieve). When you find these, they are passed to the formal (probable) primality tests.
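As a sketch of that cluster scan, with a hypothetical `survived` predicate standing in for the real sieve output:

```python
# Only origins whose entire ladder N-1, 2N-1, 4N-1, ..., 128N-1 survived
# the sieve are worth handing to the expensive primality tests.
CHAIN_LENGTH = 8

def survived(x, small_primes=(2, 3, 5, 7, 11, 13)):
    """Stand-in for a sieve lookup: True if x has no small factor."""
    return all(x % p != 0 for p in small_primes)

def candidate_origin(n, length=CHAIN_LENGTH):
    # First-kind ladder: n-1, 2n-1, 4n-1, ..., 2^(length-1)*n - 1
    return all(survived((n << i) - 1) for i in range(length))

candidates = [n for n in range(2, 100_000) if candidate_origin(n)]
# e.g. 30030 = 2*3*5*7*11*13 survives: every 2^i * 30030 - 1 is ≡ -1
# modulo each small prime.
```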
The main test used in this step is the Fermat primality test, based on Fermat's Little Theorem. It states that a^(P-1) ≡ 1 (mod P) for every prime P and any a not divisible by P. (It does not state that if that identity holds then P is prime, but in practice primes are far more likely than "Fermat liars.") For the reference client "a" is chosen to be 2, since 1 is trivially a Fermat liar for every composite P and since a larger "a" would take more time for no benefit. To perform the operation it is best to use "fast modular exponentiation," which is shown in pseudocode on the Modular Exponentiation Wikipedia page.
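In Python the whole test is one line, since the built-in three-argument `pow` already performs fast modular exponentiation:

```python
def fermat_test(p, base=2):
    """Probable-prime test: base^(p-1) ≡ 1 (mod p). Assumes p is odd and
    greater than 2, so the base 2 is never divisible by p."""
    return pow(base, p - 1, p) == 1

assert fermat_test(2 ** 89 - 1)       # a Mersenne prime passes
assert not fermat_test(2 ** 87 - 1)   # a composite of similar size fails
assert fermat_test(341)               # 341 = 11 * 31 is a rare "Fermat liar"
```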
The second test used is the Euler Lagrange Lifchitz test (a brief reading of the code suggests that it is not actually used, but it is implemented. You can skip this paragraph and not lose much). It states that if P is prime and P ≡ 1 (mod 4) then 2P + 1 is prime if and only if 2^P + 1 ≡ 0 (mod 2P + 1). There are other forms as well: if P ≡ 3 (mod 4) then 2P + 1 is prime if and only if 2^P - 1 ≡ 0 (mod 2P + 1). There are forms for 2P - 1 as well: if P ≡ 1 (mod 4) then you use 2^(P-1) - 1 ≡ 0 (mod 2P - 1); otherwise it's 2^(P-1) + 1 ≡ 0 (mod 2P - 1). These are not much, if any, faster than just using the Fermat primality test--they all rely on a single modular exponentiation, and in all cases the base is 2 and both the exponent and the modulus are on the order of P. Lifchitz mentions a way to look for Cunningham chains of the second kind "in cluster" by taking 2^(n(i-1) - 1) - 1 ≡ 0 (mod n * n1 * n2 * ... * ni), where ni = 2n(i-1) - 1. This test only looks for one kind of chain, though, and requires that n ≡ 1 (mod 4). Also, it is a much more expensive modular exponentiation since the modulus is the product of 8 or more large numbers.
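The four single-prime forms above translate directly into code; this sketch assumes P is already known to be prime, which is what the theorem requires:

```python
def next_first_kind_is_prime(P):
    """Given prime P, decide whether 2P + 1 is prime (Cunningham, 1st kind)."""
    m = 2 * P + 1
    if P % 4 == 1:
        return (pow(2, P, m) + 1) % m == 0  # 2^P + 1 ≡ 0 (mod 2P + 1)
    return (pow(2, P, m) - 1) % m == 0      # 2^P - 1 ≡ 0 (mod 2P + 1)

def next_second_kind_is_prime(P):
    """Given prime P, decide whether 2P - 1 is prime (Cunningham, 2nd kind)."""
    m = 2 * P - 1
    if P % 4 == 1:
        return (pow(2, P - 1, m) - 1) % m == 0
    return (pow(2, P - 1, m) + 1) % m == 0

# 2, 5, 11, 23, 47 is a Cunningham chain of the first kind:
assert next_first_kind_is_prime(5)        # 11 is prime
assert next_first_kind_is_prime(11)       # 23 is prime
assert not next_first_kind_is_prime(47)   # 95 = 5 * 19 ends the chain
```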
When a prime chain is found that is at least as long as floor(difficulty) (e.g. at least 8 when the difficulty is 8.9) it is necessary to calculate the fractional difficulty. I haven't verified this in the source code, but in the design paper he takes r = 2^(P-1) (mod P) as the Fermat remainder, then defines the fractional length as (P - r) / P. P, in all of this, is the number that would have been next in the chain. I'm not sure which half of a bi-twin chain has to be used (note that a bi-twin chain can have an odd length if the two Cunningham chains it is made up of differ in length by 1). If the integer chain length plus the fractional length is greater than the difficulty, it is a valid share.
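A sketch of that fractional-length rule, using exact fractions for clarity (the reference client uses fixed-point arithmetic, and the first-kind chain form is assumed here):

```python
from fractions import Fraction

def chain_difficulty(chain):
    """Integer chain length plus the fractional length (P - r) / P, where
    P is the number that would have come next and r its Fermat remainder."""
    P = 2 * chain[-1] + 1       # next element of a first-kind chain
    r = pow(2, P - 1, P)        # Fermat remainder
    return len(chain) + Fraction(P - r, P)

# 2, 5, 11, 23, 47 is a first-kind chain of length 5; 95 = 2*47 + 1 breaks it.
d = chain_difficulty([2, 5, 11, 23, 47])
assert 5 < d < 6   # integer part 5 plus a proper fraction
```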
I hope this has been easy enough to follow, while still being informative. If anyone has questions I'll try to answer them. Note that my understanding is based partially on looking at the code but also largely on my own insights; I've likely missed something major, but there is a slim chance that I've offered an improvement on what's already being done simply from coming at it from a different perspective. If you feel this satisfies part of your bounty, I can receive donations at 1FbWVR3LpGoWp5rc4SJc3cHo42ALrYhcxC. Not really doing this for the bounty, though. I just want to be as much help as possible towards making GPU mining a reality, to help neutralize the botnet and massive VPS farm threats.
|
|
|
The client seems to crash for me after a few minutes of mining running hp3-win64 on Win 7 x64, i7 930--any ideas? This is what I got, and only once, only after hours of mining, and never again since restart. EDIT: This was with hp3, i7 2660. Any news on this? I seem to get this error every few hours on my i7 3930k (Win 7 x64) but not on any of my other machines with various Intel processors. A simple restart and it goes away, but if I'm not at my computer then I lose mining time until I can get back. If I had to make a wild guess, it's asserting that the thread pool has nothing but idle threads, but I've not waded through the source to find what the problem is.
|
|
|
502896 already that many coins....!
999/dif^2
is going to run out of coins soon
Note that in Primecoin difficulty increases exponentially. If a difficulty of 8 is, say, 30 times the difficulty of 7, then a difficulty of 9 is 30 times the difficulty of 8. To get to a block reward of 10 we would need a difficulty of 10, which is hundreds of times the difficulty we have today. To get to 5 we would need a difficulty of 14, which would mean we were generating primes worthy of the current record books about every minute. If we got to a difficulty of 17--the longest known Cunningham chain right now--we would still have about 3.5 XPM per block. Rewards are certainly going down, but decreases in miners' profits will come more from increasing difficulty than from decreasing reward.

Yes, I see--the chance of finding goes down because hashpower goes up. Now, what if a lot of power is suddenly withdrawn? Can the difficulty readjust, or are we stuck at that prime-length difficulty?

In any proof of work coin with an automatically adjusting difficulty there is a danger that a huge portion of miners could suddenly go off the network, leaving the difficulty high. In Bitcoin the network is stuck waiting for the 2016th block before difficulty can start readjusting down. Primecoin and others use a continuously readjusting method that allows the difficulty to start dropping immediately. That, combined with the much faster block generation time, means that Primecoin could handle a sudden outflux of miners better than many other coins.
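The figures in that reply follow directly from the 999/dif^2 reward rule quoted above:

```python
# Block reward in XPM at a few difficulties (reward = 999 / difficulty^2).
for difficulty in (10, 14, 17):
    print(difficulty, round(999 / difficulty ** 2, 2))
# difficulty 10 -> ~9.99 XPM, 14 -> ~5.1 XPM, 17 -> ~3.46 XPM
```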
|
|
|
If I'm not mistaken, Block 28769 contains the new world record for BiTwin of four links (10 primes / TWN0a), 100 digits.
Also, the BiTwin primes listed on the records contain only even numbers of primes, yet our mining uncovers chains like TWN07, TWN09, etc. How are these defined? Do they count toward official records?
Odd numbered twin prime chains are defined as having only half of the final pair. I doubt the record books would care about the final pair, since it's a prime and a composite, so TWN07 is really a TWN06 and TWN09 is just TWN08. The odd numbered chains work just fine as a proof of work, though, and seem to behave nicely as far as difficulty is concerned.
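A small sketch makes the odd-length case concrete, using simple trial division (fine at this size) in place of the real tests:

```python
def is_prime(x):
    """Trial-division primality test; adequate for tiny demo numbers."""
    if x < 2:
        return False
    f = 2
    while f * f <= x:
        if x % f == 0:
            return False
        f += 1
    return True

def bitwin_length(n):
    """Count primes in the order n-1, n+1, 2n-1, 2n+1, ... until one fails."""
    length = 0
    while True:
        if not is_prime(n - 1):
            return length
        length += 1
        if not is_prime(n + 1):
            return length
        length += 1
        n *= 2

# Origin 6 gives 5, 7, 11, 13, 23 -- but 25 is composite, so the chain has
# odd length 5: only half of the final twin pair is prime.
assert bitwin_length(6) == 5
```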
|
|
|
Ok... Is this what I have to send? signmessage ANi7JgdfT2HSdTxB8T8sAnVwfDuRFxJeRg "Koooooj owns this address"
H1mJljsMaxVEXbfbnuNACzgVekgqYlEqJr2pljzJU+YfwnaKi35l4Q1+/2bh2iPNND1rA3ID3Jfkz+L5M0oiB70=
I would like to claim this record as well. As you can clearly see, I have signed the message stating my ownership. I... uh.... just logged into my other account, or something.  In all seriousness, though, congrats on the find!
|
|
|
Unless I'm mistaken, the source code links on that page are dead ends. Also, unless I'm mistaken, it is using a different algorithm and is running on smaller numbers.

A useful thing to find would be a CUDA (or, ideally, an OpenCL, which runs on both nVidia and AMD) program that performs fast modular exponentiation of large numbers (i.e. larger than 64 bits; the numbers needed are on the order of 256 bits or larger). Modular exponentiation is at the heart of both the Fermat primality test and the various forms of the Euler Lagrange Lifchitz test. It would also be good to find a sieving method implemented in a GPU-specific language, as that is the other math being done by the current miners.

It looks like the linked page implements (or would have implemented, if the links weren't broken) the Ulam Spiral, which is a method of finding lots of primes and is a strong proof of primality (as opposed to the Fermat test, which only shows probable primality). It, like the Sieve of Eratosthenes used in the generation algorithm right now, can be used to calculate every prime number up to a certain bound. If it can be stopped early and still give useful information about a set of numbers then this sieve is applicable to Primecoin (i.e. if you can run some number of iterations where you only care about a very sparse subset of numbers and, when you're done, it has eliminated a lot of the composite numbers). However, without source code it isn't really a big help to developers right now.
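For reference, here is the square-and-multiply loop such a GPU kernel would need to implement, written in Python for readability (a CUDA or OpenCL version would run the same loop over multi-word integers):

```python
def mod_exp(base, exponent, modulus):
    """Right-to-left square-and-multiply modular exponentiation."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                   # low bit set: fold in this square
            result = result * base % modulus
        base = base * base % modulus       # square for the next bit
        exponent >>= 1
    return result

# Matches Python's built-in and passes a Fermat check on a known prime:
assert mod_exp(3, 1_000_003, 65_537) == pow(3, 1_000_003, 65_537)
assert mod_exp(2, 2 ** 255 - 20, 2 ** 255 - 19) == 1   # 2^255 - 19 is prime
```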
|
|
|
Pool share can be implemented as lower difficulty prime chains, similar to hashcash proof-of-work I think.
I'm not sure this is the case, or at least it is not as simple. With hashcash proof-of-work it is impossible to look for lower difficulty shares without also looking for higher difficulty shares. In Primecoin, on the other hand, as I understand it, one could look for chains of length 7 and find them with much greater frequency than they would find chains of length 7 while looking for chains of length 8 (i.e. pool miners would maximize their share submission by hurting the pool; the tragedy of the commons ensues).

I assume that when the mining algorithm executes it first runs the Sieve of Eratosthenes to build a list of possible primes. If one finds that there is a list of 7 numbers that passed the sieve and form a chain then they could be checked to see if they form a valid share, even if the sieve eliminated the next value, proving that a block of difficulty 8 or higher is impossible from that start (I am assuming a share difficulty of 7 and a network difficulty of 8 or more). A miner optimized for finding valid blocks as fast as possible would save computational time by ignoring the chain of length 7 when the difficulty is 8 or higher, while a miner optimized for finding valid shares would check every chain that passes the sieve and is at least (share length) long.

This could be circumvented by requiring the numbers after the share's chain, up to the integral network difficulty, to all pass a sieve, but I believe that would break the requirement that shares be fast for the pool host to verify. Additionally, it would set stringent requirements on how the numbers have to be sieved, which would limit improvements in that area (which seems to be where most of the improvements are being made). You've made a really innovative coin, Sunny, and I trust you to come up with an innovative solution to this, but it isn't as simple as it may appear at first glance.
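The incentive mismatch can be seen with a toy sieve bitmap (purely illustrative):

```python
# Suppose positions 0..6 of a candidate ladder survived the sieve but
# position 7 was eliminated.  A block miner (needing length 8) skips this
# ladder entirely; a share miner (needing length 7) still tests it.
passed_sieve = [True, True, True, True, True, True, True, False]

worth_testing_for_block = all(passed_sieve[:8])   # needs all 8 survivors
worth_testing_for_share = all(passed_sieve[:7])   # only the first 7

assert worth_testing_for_share and not worth_testing_for_block
```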
|
|
|
Also, block 1509, which is a bi-twin chain of 3 links and has an origin of 106 digits: 21:44:59 signmessage ASNg61dJ8vJPUQ9NvFaajX4qSvShPQxKeg "Koooooj owns this address"
21:44:59 H138sg1WvaZridL9Mozc1fFS59fByVkR5EW8pcuBNFHb6MsjjyK0z3noqmAjiY1jOvCfwtTTZyjZtaJvNk80jHc=
|
|
|
For block 12344: 20:58:10 signmessage APLLrWbHapoRVxLAxBUkDU1hkx4VbCJHQX "Koooooj owns this address"
20:58:10 IBBD3qd/bbh5npcj7pEQCBqZUwiYaaE/y097+P0EBeB5fH4eJCqvNzrxzm20P6kWFq/0wmp0cZQTHk78cpfaYbM=
For block 17302: 21:00:27 signmessage AbrPM2J3WLvgz87t13wU46VwvVW5dM1Vdq "Koooooj owns this address"
21:00:27 IH8dQ+I2e4WWZIpXbqhXrquLXMnNdx847+HUeuadcqyeB77XPe3tmN2bG/FdOPZv7id9Q0eMuQ4UO7iBaq0wUf0=
I think I got the right blocks and signed correctly; my local QT client returns true upon verification requests.
|
|
|
|