Author Topic: New PoW method using factorization of large numbers.  (Read 876 times)
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 18, 2017, 04:53:11 AM
 #21

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

Once you prove that brute force is the only way to break SHA-256, I'll take that demand seriously.  Trying to draw this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

You're shifting the goalposts. Your opening claim was that factorization does not benefit from GPUs (by which, I assume you mean parallelization). I nitpicked by pointing out that there's no reason to believe that factorization can't be sped up by parallelization. Claiming that factorization cannot benefit from parallelization is tantamount to claiming that there is no linear speedup for factorization. If this were true, it could be proved. There is no other way to know that it is the case than to prove it.

Yes, you are nitpicking.  Like I said, if you try to prove that brute force is the only way to crack SHA-256, then I will try to prove that GPUs are inherently weaker than CPUs for the task of GNFS sieving.

I'm sorry to break it to you, but mathematical proofs can't be applied to everything in life.  Next time you take a shit, you'd better offer exact proof that it smells bad.

People like you nitpick and demand things from others because you want an excuse for not learning something new.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 18, 2017, 05:10:36 PM
 #22

It's really exciting to me that this method could very well be the holy-grail PoW method we have been looking for.  Not only is it inefficient on GPUs (the best you could hope for is GPUs approaching CPUs in speed), but the optimal setup would be one GPU and one CPU working together.  Not only does this make GPU farms and standard ASIC farms unproductive, it also makes the server farms and botnets that have plagued CPU-only coins unproductive.  The optimal setup would look like a desktop computer: one CPU and one GPU.

One CPU + one GPU = 1 vote

http://www.mersenneforum.org/showthread.php?t=16687

Witrebel
Member
**
Offline Offline

Activity: 116
Merit: 101


View Profile
December 19, 2017, 04:07:28 AM
 #23

You are essentially suggesting you have a solution that could/should be applied to a multi-billion-dollar industry, and you are unwilling to entertain the idea of a mathematical proof.  It's likely that you're right, but in this arena I think it's pretty much a requirement to at least ATTEMPT a rigorous proof.  Otherwise your idea will forever remain an idea.  Personally, I'd love to see the math that goes into such a proof for the sake of learning alone.

That said, it's a very interesting concept.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 19, 2017, 04:13:25 AM
 #24

My number one priority is to implement this idea, but I need help.  I am an inventor, engineer, and scientist, but unfortunately I fail when it comes to languages and programming.  That side of my brain is stunted.

What needs to be done: write a miner and modify the Bitcoin code to allow this PoW to be used.

I think at first we should use stock Bitcoin code and get this PoW working in that.  From there people can tweak it to make coins with modified block size, reward, maximum emission, block time, etc., tuned however they like.  The more the merrier, and I think the best way for a coin to scale is for child coins to be made.  But if we get it working in otherwise stock code, that would be a good foundation to build on.

We also need a miner.  This may be the hardest part.  But msieve, a C program, may be a great fit as the foundation of a fairly well-optimized miner: https://sourceforge.net/projects/msieve/
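To give a feel for what such a miner's outer loop could look like, here is a minimal sketch in Python.  sympy's factorint stands in for a real siever like msieve, and the candidate size and difficulty rule are illustrative assumptions, not a spec:

Code:
import hashlib
from sympy import factorint   # stand-in for a real siever such as msieve

DIGITS = 18                # toy candidate size; a real scheme would use ~100 digits
DIFFICULTY_DIGITS = 8      # toy difficulty: the winning factor must have >= 8 digits

def candidate(header: bytes, nonce: int) -> int:
    # Hash the block data + nonce and truncate to DIGITS decimal digits (forced odd).
    h = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
    return int(str(int(h, 16))[:DIGITS]) | 1

def mine(header: bytes, max_nonce: int = 50_000):
    for nonce in range(max_nonce):
        n = candidate(header, nonce)
        # A proper prime factor (not n itself) that is "big enough" wins the block.
        winners = [p for p in factorint(n) if p != n and len(str(p)) >= DIFFICULTY_DIGITS]
        if winners:
            return {"nonce": nonce, "n": n, "factor": winners[0]}
    return None

print(mine(b"prev hash | merkle root | timestamp"))

In a real implementation, the factorint call is what msieve/GNFS would replace, and verification is just checking that the published factor divides the candidate and meets the difficulty rule.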

Is there anything else I'm missing?

The PoW is the heart of a coin.  I think this new heart could give coins a much longer life.  If anyone else is excited about this, please let's figure out how to get this started.

Witrebel
Member
**
Offline Offline

Activity: 116
Merit: 101


View Profile
December 19, 2017, 04:21:35 AM
 #25

Think about your target audience.  Your number-one goal is proving this PoW algo is THE FUTURE OF BITCOIN.  This isn't a "let's bench-test it in the lab and see how it works."  You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion-dollar market.  I'd start with "who here is a math-minded person who can help me work out the proofs?"
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 19, 2017, 04:30:40 AM
 #26

You are essentially suggesting you have a solution that could/should be applied to a multi-billion-dollar industry, and you are unwilling to entertain the idea of a mathematical proof.  It's likely that you're right, but in this arena I think it's pretty much a requirement to at least ATTEMPT a rigorous proof.  Otherwise your idea will forever remain an idea.  Personally, I'd love to see the math that goes into such a proof for the sake of learning alone.

That said, it's a very interesting concept.

So would I. ;)  I would like to apologize to haltingprobability; I have been a bit harsh on him.  Some people like seeing proof; I, for one, am someone who will go out and search for myself.  But different strokes for different folks.

I am not the kind of meticulous person to work on proofs.  I don't have a degree in mathematics or computer science and I tend to get interested in developing concepts and ideas and implementing them.  I am more of an entrepreneur and engineer than a mathematician.  My brain works like a CPU: I take in lots of information from many sources and synthesize it into a product idea.  I don't need proof; I just need a general understanding of how something works, and if it makes sense and passes the smell test, I go for it.

Here is how I see it.  GPUs are like robots: you program them for a specific task and they can beat any human in that specialized task.  But let's say something unexpected happens, like the roof caving in.  The robots would get stuck; they wouldn't know how to move out of the way and bring their work elsewhere to continue.  The humans would then beat them because they are more adaptable.  CPUs are adaptable, GPUs are not.  CPUs are fast and think on their feet whereas GPUs are slow and methodical and share their work well.  So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU.  They are very different entities.  If GPUs are inherently better than CPUs at some tasks (and they certainly are, e.g. SHA-256 and scrypt), then it only makes sense that CPUs would be inherently better at other tasks.  These would be tasks where lots of different and unpredictable things are happening.  The problem is finding a stable, consistent task that also offers this unpredictability.

For me to attempt to prove that factoring large numbers offers this unpredictability would require that I know much more than I currently do about factoring.  But I take the experts' word that factoring (more specifically, sieving) large numbers is a very intensive task that requires lots of unpredictable memory access and varied sub-tasks.  It's about as random as a non-random task can be, if that makes any sense.  So that is why I feel it is the #1 contender for a CPU-tilted coin.  The cool thing is that certain tasks in the sieving are predictable, so the combination of a GPU and a CPU would be best, which just so happens to be the perfect thing if we want to favor PCs.  It is also good if we want to bring the GPU miners and CPU miners together to adopt this PoW.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 19, 2017, 04:34:46 AM
 #27

Think about your target audience.  Your number-one goal is proving this PoW algo is THE FUTURE OF BITCOIN.  This isn't a "let's bench-test it in the lab and see how it works."  You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion-dollar market.  I'd start with "who here is a math-minded person who can help me work out the proofs?"

Fair enough.  But the difference in perspective on how to pitch this comes from a difference in perspective on how to implement it.  I am of the mind that we create a Bitcoin-based altcoin with this PoW and test it out, and if it really is the future, then Bitcoin will have to implement it.

Most people want to see something working and succeeding before adopting it.  This is the reason people still aren't investing in bitcoin en masse: they want to see whether it is still around in 10 years first.

Anti-Cen
Member
**
Offline Offline

Activity: 210
Merit: 26

High fees = low BTC price


View Profile
December 19, 2017, 01:33:35 PM
 #28

Wow, I don't know how us programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from?  Yes, I remember this blockchain thing, as if we never had databases or spreadsheets before then, or weren't in a position to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and TPTB are none too happy about BitTorrent, but BitTorrent does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is also another nice tool (before someone got it to spread the WannaCry virus using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people who are smarter, need to go back to the drawing board if we are left with counting grains of sand and distributing 200 GB to each and every node that tries to run a system in near real time.

ETH, which allows us to deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion-type sites you get on the underground using Tor, and seems to be just offering a very expensive, if not easy to use, service that lets us rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, BAT and what has been learned from BTC, go back to the drawing board, and look at this again from scratch, methinks.

Each node in the network should also offer a VPN or proxy-type service and be paid in a type of gas, because the internet is being closed down by censorship, so we need to not only take the money back but also our freedom of speech while we are about it.

Mining is CPU wars, and Intel and AMD like it nearly as much as big oil likes miners wasting electricity. Is this what mankind has come to?
jenni161
Member
**
Offline Offline

Activity: 73
Merit: 10


View Profile WWW
December 19, 2017, 03:44:38 PM
 #29

Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 20, 2017, 05:48:48 PM
 #30

Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 20, 2017, 05:52:48 PM
 #31

Wow, I don't know how us programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from?  Yes, I remember this blockchain thing, as if we never had databases or spreadsheets before then, or weren't in a position to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and TPTB are none too happy about BitTorrent, but BitTorrent does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is also another nice tool (before someone got it to spread the WannaCry virus using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people who are smarter, need to go back to the drawing board if we are left with counting grains of sand and distributing 200 GB to each and every node that tries to run a system in near real time.

ETH, which allows us to deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion-type sites you get on the underground using Tor, and seems to be just offering a very expensive, if not easy to use, service that lets us rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, BAT and what has been learned from BTC, go back to the drawing board, and look at this again from scratch, methinks.

Each node in the network should also offer a VPN or proxy-type service and be paid in a type of gas, because the internet is being closed down by censorship, so we need to not only take the money back but also our freedom of speech while we are about it.

Good point, but even ETH uses proof of work.  The problem that is unique to blockchains is that they require the information to be correct.  How do we verify that Sally did or did not give Bob 100 bitcoins?  Who do we trust to tell us the truth?  That is where proof of work comes in: we let everyone vote on it.  If we let every IP address vote, then people would just make multiple IP addresses to vote more.  PoW makes you prove that you are doing work, so you can't just get a bunch of free votes.

That said, my PoW method here actually provides value to the world.  Prime numbers are valuable (they underpin hard-to-break public-key encryption, like the keys Tor relies on), and everyone factoring large numbers would give us hints that numbers which never get factored are possible primes, so those can be checked out later.

jenni161
Member
**
Offline Offline

Activity: 73
Merit: 10


View Profile WWW
December 20, 2017, 08:45:12 PM
 #32

Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.

Yes, I would love a more in-depth answer. I have read some descriptions of SHA-256 and I didn't see any reason it would produce large numbers that are either primes or semiprimes.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 20, 2017, 09:02:56 PM
 #33

Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.

Yes, I would love a more in-depth answer. I have read some descriptions of SHA-256 and I didn't see any reason it would produce large numbers that are either primes or semiprimes.

We don't want them to be prime.  We want them to be composite so that we can find the winning factor (the factor that meets the difficulty requirements).

Hashing algorithms take any input and produce a large, effectively random output.  This output can be read as an ordinary base-10 number.  Let's say it produces a 120-digit number and we want a 102-digit number: we can just truncate it to the number of digits we want.
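Concretely, the derivation could look something like the sketch below (the 102-digit target, the header fields, and the double hash are illustrative assumptions, not a fixed spec):

Code:
import hashlib

def candidate_number(header: bytes, nonce: int, digits: int = 102) -> int:
    """Turn block data plus a nonce into a `digits`-digit integer to be factored."""
    # One SHA-256 output is only ~77 decimal digits, so concatenate two
    # domain-separated hashes to get enough material, then truncate.
    material = header + nonce.to_bytes(8, "big")
    h = hashlib.sha256(material + b"\x00").digest() + hashlib.sha256(material + b"\x01").digest()
    n = int(str(int.from_bytes(h, "big"))[:digits])  # keep the first `digits` digits
    return n | 1                                     # force odd so 2 is never a trivial factor

print(candidate_number(b"prev hash + merkle root + timestamp", nonce=12345))

Because every miner hashes slightly different block data (their own public key and coinbase), each one ends up with a different number to factor, so the occasional prime landing on one miner doesn't stall the network.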

haltingprobability
Member
**
Offline Offline

Activity: 98
Merit: 26


View Profile
December 21, 2017, 12:34:09 AM
 #34

So would I. ;)  I would like to apologize to haltingprobability; I have been a bit harsh on him.  Some people like seeing proof; I, for one, am someone who will go out and search for myself.  But different strokes for different folks.

No hard feelings.

Quote
I am not the kind of meticulous person to work on proofs.  I don't have a degree in mathematics or computer science and I tend to get interested in developing concepts and ideas and implementing them.

I am a computer engineer. I have a degree in computer engineering. I'm trying to inform you that your intuitive notions about the difference between GPUs and CPUs - while good enough for basic comprehension - are not rigorous and that, in this case, that matters.

There is a theorem in computational complexity (the field of mathematics that studies how fast any given kind of formal problem can be solved), called the speed-up theorem. Basically, this theorem says that any particular solution to a problem (an "algorithm") can be sped up by a linear factor. So, suppose that you find an algorithm A that solves problem Z in time n^2, where n is the size of a problem in Z. This is called quadratic complexity - as n increases, the time required to solve the problem grows as the square of n. The speed-up theorem tells us that it is always possible to find some fixed factor, k, such that we can construct a new algorithm A' from A that solves a problem in Z in time n^2 / k. Note that this does not change the complexity class - A' still solves problems in Z in quadratic time. Intuitively, this makes sense - if I can solve a problem in time n^2 with one computer, then I should always be able to solve it in nearly n^2 / 2 with two computers. There are some very intractable problems where this becomes difficult (due to network complexity) but such problems tend to be academic, i.e. specially constructed to make parallelization difficult.

Quote
CPUs are adaptable, GPUs are not.  CPUs are fast and think on their feet whereas GPUs are slow and methodical and share their work well.

This is mostly not true anymore. GPGPUs (General-Purpose GPU) can, in principle, do anything that CPUs can do and the latest GPU cards are being released with GPGPU cores in them. GPU clock speeds are in the same class as CPUs, so they are not slow. GPUs are designed around "data parallelism" or "data flow architecture" and this is why they are well-suited to graphics applications or any application where you can easily parallelize the work by simply dividing up the work surface and handing out more or less identical instructions to each work unit (rendering pipeline).

Quote
So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU.

For any large search problem (and any non-trivial PoW must be a large search problem), GPUs will be superior. If I have to try 2^60 possibilities (about a billion billion possibilities) in order to solve a given PoW problem, I can try these one-by-one on a CPU or I can parallelize the problem and have multiple CPUs or GPUs work on it in parallel. Suppose I have 128 GPUs available to me. Instead of having to search 2^60 possibilities, each GPU now only has to search 2^53 possibilities, resulting in a 128x reduction in search time. Factoring is just a large search problem.
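A tiny sketch of the slicing described here (the 2^60 space and 128 devices are the figures from the paragraph above; how each device actually scans its slice is left abstract):

Code:
# Carve a nonce space into equal slices, one per device.
SEARCH_SPACE = 2 ** 60
DEVICES = 128

slice_size = SEARCH_SPACE // DEVICES          # = 2**53 nonces per device
ranges = [(i * slice_size, (i + 1) * slice_size) for i in range(DEVICES)]

print(f"each of {DEVICES} devices scans 2^{slice_size.bit_length() - 1} nonces")
print("device 0 range:  ", ranges[0])
print("device 127 range:", ranges[-1])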
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 21, 2017, 01:12:22 AM
 #35

Thanks, haltingprobability.  I think factoring large numbers may be a bit more complex than you realize.  Here is an excerpt from Wikipedia on the general number field sieve, the most efficient known method for factoring numbers over 100 digits.  I just want to get this posted so we can dissect it and discuss:

First off:
The general number field sieve, on the other hand, manages to search for smooth numbers that are subexponential in the size of n. Since these numbers are smaller, they are more likely to be smooth than the numbers inspected in previous algorithms. This is the key to the efficiency of the number field sieve. In order to achieve this speed-up, the number field sieve has to perform computations and factorizations in number fields. This results in many rather complicated aspects of the algorithm, as compared to the simpler rational sieve.

Method:
Two polynomials f(x) and g(x) of small degrees d and e are chosen, which have integer coefficients, which are irreducible over the rationals, and which, when interpreted mod n, have a common integer root m. An optimal strategy for choosing these polynomials is not known; one simple method is to pick a degree d for a polynomial, consider the expansion of n in base m (allowing digits between −m and m) for a number of different m of order n^(1/d), and pick f(x) as the polynomial with the smallest coefficients and g(x) as x − m.

Consider the number field rings Z[r1] and Z[r2], where r1 and r2 are roots of the polynomials f and g. Since f is of degree d with integer coefficients, if a and b are integers, then so will be b^d·f(a/b), which we call r. Similarly, s = b^e·g(a/b) is an integer. The goal is to find integer values of a and b that simultaneously make r and s smooth relative to the chosen basis of primes. If a and b are small, then r and s will be small too, about the size of m, and we have a better chance for them to be smooth at the same time. The current best-known approach for this search is lattice sieving; to get acceptable yields, it is necessary to use a large factor base.

Having enough such pairs, using Gaussian elimination, one can get products of certain r and of the corresponding s to be squares at the same time. A slightly stronger condition is needed—that they are norms of squares in our number fields, but that condition can be achieved by this method too. Each r is a norm of a − r1b and hence that the product of the corresponding factors a − r1b is a square in Z[r1], with a "square root" which can be determined (as a product of known factors in Z[r1])—it will typically be represented as an irrational algebraic number. Similarly, the product of the factors a − r2b is a square in Z[r2], with a "square root" which also can be computed. It should be remarked that the use of Gaussian elimination does not give the optimal run time of the algorithm. Instead, sparse matrix solving algorithms such as Block Lanczos or Block Wiedemann are used.

Since m is a root of both f and g mod n, there are homomorphisms from the rings Z[r1] and Z[r2] to the ring Z/nZ (the integers mod n), which map r1 and r2 to m, and these homomorphisms will map each "square root" (typically not represented as a rational number) into its integer representative. Now the product of the factors a − mb mod n can be obtained as a square in two ways—one for each homomorphism. Thus, one can find two numbers x and y, with x^2 − y^2 divisible by n and again with probability at least one half we get a factor of n by finding the greatest common divisor of n and x − y.

haltingprobability
Member
**
Offline Offline

Activity: 98
Merit: 26


View Profile
December 21, 2017, 04:27:02 AM
 #36

Thanks, haltingprobability.  I think factoring large numbers may be a bit more complex than you realize.  Here is an excerpt from Wikipedia on the general number field sieve, the most efficient known method for factoring numbers over 100 digits.  I just want to get this posted so we can dissect it and discuss:

Note: The results of complexity theory apply regardless of the "best known algorithm" - a problem can turn out to be much easier than the best known algorithm for solving it suggests.

The computational complexity of GNFS is given in the first paragraph on Wiki, as of this writing. It is an exponential time algorithm. As long as SHA-256 is actually one-way, solving the hashcash PoW function is also exponential in the number of zero-bits.

Let me try to explain what I'm saying. Even though solving Bitcoin's PoW (hashcash) is exponential time complexity, the pool of all Bitcoin mining (considered as an aggregate) is "parallelizing" this problem. That's why the hashrate is so high. For a given difficulty factor, the average number of attempts to find a solution is fixed. We can think of the entire mining pool as dividing up this solution time amongst themselves - parallelizing it. For the sake of discussion, let's say it takes 1 trillion hash attempts, on average, to mine a block. If there are 1,000 miners and each of them is operating 1,000 mining rigs and each mining rig can solve 1,000 hashes per second, the total hash rate of the network is 1 billion hash attempts per second. At this rate, it will take about 1,000 seconds (almost 17 minutes) to mine a block.

In fact, these numbers will hold no matter whether we change hashcash to some other exponential problem (including GNFS), so long as it takes about 1 trillion attempts (on average) to find a solution. There's nothing in the nature of the problem (factoring versus finding a preimage whose hash solves a pre-specified equation) that matters because we only care about the difficulty of the problem (proof of work). In the case of hashcash, we don't care what the preimage looks like (we are indifferent to it), so long as its hash satisfies the required property. Any random number will do just as well as any other random number. In the case of GNFS, we don't care which factors you find, just as long as you find the required number of factors. This indifference means that parallelization is trivial because we can just utilize random search. Proof-of-work, after all, is not really about solving a problem, it's just a race.
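The block-time arithmetic in that example, spelled out in a few lines (the figures are the illustrative ones from the paragraph above, not real network numbers):

Code:
EXPECTED_ATTEMPTS = 1_000_000_000_000     # ~1 trillion tries per block, on average
MINERS = 1_000
RIGS_PER_MINER = 1_000
ATTEMPTS_PER_RIG_PER_SEC = 1_000

network_rate = MINERS * RIGS_PER_MINER * ATTEMPTS_PER_RIG_PER_SEC   # 1e9 attempts/s
seconds = EXPECTED_ATTEMPTS / network_rate
print(f"network rate: {network_rate:,} attempts/s")
print(f"expected block time: {seconds:.0f} s (~{seconds / 60:.1f} minutes)")

Nothing in this calculation depends on whether an "attempt" is a hash or a factoring try; only the attempt rate and the expected number of attempts matter.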
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 21, 2017, 04:57:26 AM
 #37

Thanks, haltingprobability.  I think factoring large numbers may be a bit more complex than you realize.  Here is an excerpt from Wikipedia on the general number field sieve, the most efficient known method for factoring numbers over 100 digits.  I just want to get this posted so we can dissect it and discuss:

Note: The results of complexity theory apply regardless of the "best known algorithm" - a problem can turn out to be much easier than the best known algorithm for solving it suggests.

The computational complexity of GNFS is given in the first paragraph on Wiki, as of this writing. It is an exponential time algorithm. As long as SHA-256 is actually one-way, solving the hashcash PoW function is also exponential in the number of zero-bits.

Let me try to explain what I'm saying. Even though solving Bitcoin's PoW (hashcash) is exponential time complexity, the pool of all Bitcoin mining (considered as an aggregate) is "parallelizing" this problem. That's why the hashrate is so high. For a given difficulty factor, the average number of attempts to find a solution is fixed. We can think of the entire mining pool as dividing up this solution time amongst themselves - parallelizing it. For the sake of discussion, let's say it takes 1 trillion hash attempts, on average, to mine a block. If there are 1,000 miners and each of them is operating 1,000 mining rigs and each mining rig can solve 1,000 hashes per second, the total hash rate of the network is 1 billion hash attempts per second. At this rate, it will take about 1,000 seconds (almost 17 minutes) to mine a block.

In fact, these numbers will hold no matter whether we change hashcash to some other exponential problem (including GNFS), so long as it takes about 1 trillion attempts (on average) to find a solution. There's nothing in the nature of the problem (factoring versus finding a preimage whose hash solves a pre-specified equation) that matters because we only care about the difficulty of the problem (proof of work). In the case of hashcash, we don't care what the preimage looks like (we are indifferent to it), so long as its hash satisfies the required property. Any random number will do just as well as any other random number. In the case of GNFS, we don't care which factors you find, just as long as you find the required number of factors. This indifference means that parallelization is trivial because we can just utilize random search. Proof-of-work, after all, is not really about solving a problem, it's just a race.

What you have to understand is that the network as a whole is not "parallelizing" the way a GPU parallelizes.  The reason GPUs can beat CPUs at normal hashing is that the cores can all have different starting points for their nonce attempts: one core starts at 0, another at 1,000,000, another at 2,000,000, and so on.  This way they can split the workload into chunks.  The network, by contrast, is trying random attempts and not coordinating which computers have checked which numbers already.  So for tasks that cannot be split into chunks with one chunk given to each core, CPUs will benefit.  Another thing that benefits CPUs is their memory handling: working with number fields tends to involve frequent random memory accesses, which again favors CPUs.

jenni161
Member
**
Offline Offline

Activity: 73
Merit: 10


View Profile WWW
December 21, 2017, 08:28:31 AM
 #38

OK, so you just use that particular algorithm to generate random numbers.  Have you studied what percentage of large random numbers meets your expectations?  And what will happen when you hit an actual prime, or a composite that has only relatively small factors?  There will be a lot of them.
haltingprobability
Member
**
Offline Offline

Activity: 98
Merit: 26


View Profile
December 21, 2017, 02:46:01 PM
 #39

What you have to understand is that the network as a whole is not "parallelizing" ... the cores can all have different starting points for their nonce attempts: one core starts at 0, another at 1,000,000, another at 2,000,000, and so on.

I realize the difference. But it actually doesn't matter because the search space is so large (2^256) that the probability that any random slice will overlap another random slice is effectively zero. Mining pools would not be any more efficient if they coordinated the search space than they are by simply randomizing it. This is an artifact of the size of the search space.
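A quick birthday-bound estimate of how little uncoordinated random search actually wastes (a sketch; the per-block attempt count is the illustrative trillion used earlier in the thread):

Code:
# Expected duplicate attempts when A random draws are made from a space of size S
# is roughly A^2 / (2*S) (birthday estimate).
SPACE = 2 ** 256                 # size of the nonce/preimage space
ATTEMPTS = 10 ** 12              # total attempts per block, as in the earlier example

expected_duplicates = ATTEMPTS ** 2 / (2 * SPACE)
print(f"expected wasted (duplicate) attempts per block: {expected_duplicates:.3e}")
# ~4e-54, i.e. random slicing loses nothing measurable versus coordinated slicing.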

Quote
This way they can split up the workload into chunks.  The network is trying random attempts and not collaborating on which computers have checked which numbers already.  So for tasks that cannot be split up into chunks and one chunk given to each core, CPU's will benefit.  Another thing that makes CPU's benefit is their memory utilization, which in the case of working with number fields, these number fields tend to lead to randomly accessing the memory often, again benefiting CPU's.

Both the CPU and the GPU have access to RAM (random-access memory), so access time to any memory location is comparable, with maybe a slight advantage for the CPU, whose memory is on the motherboard while the GPU sits on an expansion card.

I encourage you to keep working on your idea - I'm not pouring cold water on it. But I am a computer engineer and computer architecture is where my career experience is, so I'm just giving you some additional facts to work with. Best of luck.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 21, 2017, 04:48:11 PM
 #40

OK, so you just use that particular algorithm to generate random numbers.  Have you studied what percentage of large random numbers meets your expectations?  And what will happen when you hit an actual prime, or a composite that has only relatively small factors?  There will be a lot of them.

Great questions, keep them coming!  Originally I was worried about primes, but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how Bitcoin works currently: everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two, or three, or 100 people do), the others will be the ones to win that block.  This is actually kind of useful, because we know the winning numbers are not prime, so if certain numbers never show up in our list of winning numbers, we can say they may be prime.

At this size, by the prime number theorem, only about 1 in 230 random 100-digit numbers is prime, so dealt primes are pretty much inconsequential for the network as a whole.  Let's say we are looking for a 20-digit factor of a 100-digit number.  Proportionally, that is like looking for a 2-digit factor (12, for example) of a 10-digit number (2,000,000,000, for example), so workable numbers should be very, very common.  Really, since everyone generates their own numbers, it doesn't even matter how common a workable number is; we just need one person to find a match.
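A quick check of that prime-density figure, using the prime number theorem (density near N is about 1/ln N) plus a small empirical sample; sympy's isprime is used only for the spot check:

Code:
import math, random
from sympy import isprime

DIGITS = 100
density = 1 / (DIGITS * math.log(10))          # ~1/230 for 100-digit numbers
print(f"PNT: about 1 in {1 / density:.0f} random {DIGITS}-digit numbers is prime")

# Empirical spot check on 1,000 random odd 100-digit numbers
# (odd numbers are prime about twice as often as numbers in general).
sample = [random.randrange(10 ** (DIGITS - 1), 10 ** DIGITS) | 1 for _ in range(1000)]
hits = sum(isprime(n) for n in sample)
print(f"empirical: {hits}/1000 odd samples are prime "
      f"(PNT predicts ~{1000 * 2 * density:.0f})")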
