Bitcoin Forum
Author Topic: New PoW method using factorization of large numbers.  (Read 818 times)
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 12, 2017, 11:03:16 PM (last edit: June 09, 2018, 11:44:42 PM by ir.hn) | #1

Here is a whitepaper describing a new proof-of-work system that, in theory, favors CPUs indefinitely and is immune to the vulnerabilities that encryption algorithms are prone to.
(the link is to the Wayback Machine, so you can trust visiting it)

https://web.archive.org/web/20180609233726/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take-home message: the work is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since the number is so large, GPUs become very slow at attempting it by themselves.

It appears this PoW would be the perfect marriage of GPU and CPU, as a GPU seems to speed up the first part (the easiest part) of the factorization:
http://www.mersenneforum.org/showthread.php?t=19312
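
To make the verification claim concrete, here is a minimal sketch (Python; the function and parameter names are hypothetical, not from the whitepaper) of how a node might check a submitted factor when difficulty is expressed as the required decimal length of the factor:
Code:
# Hypothetical sketch: verifying a factorization-based PoW submission.
# Assumes the block supplies a large composite target N and the miner
# submits one factor; the difficulty is the required number of decimal digits.

def verify_factor_pow(target: int, factor: int, difficulty_digits: int) -> bool:
    if factor <= 1 or factor >= target:
        return False                        # reject trivial divisors
    if len(str(factor)) != difficulty_digits:
        return False                        # wrong size for the current difficulty
    return target % factor == 0             # a single modular division: cheap to verify

# Toy example with a small composite built from two 7-digit primes.
N = 1000003 * 999983
print(verify_factor_pow(N, 1000003, 7))     # True
print(verify_factor_pow(N, 1000004, 7))     # False

Verification stays this cheap no matter how long the search for the factor took, which is the asymmetry a PoW needs.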

shensu | Member | Activity: 86 | Merit: 10
December 13, 2017, 01:13:29 AM | #2

Here is a whitepaper describing a new proof-of-work system that, in theory, favors CPUs indefinitely and is immune to the vulnerabilities that encryption algorithms are prone to.
(the link is to the Wayback Machine, so you can trust visiting it)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take-home message: the work is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since the number is so large, GPUs become very slow at attempting it.

Are you the author?
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 13, 2017, 01:29:01 AM (last edit: December 13, 2017, 05:15:49 AM by ir.hn) | #3

Yep. Let me know what you think. The first person to submit a factor that meets the requirements would win the block. It is very similar to how the system currently works, but a bit cleaner and slanted toward CPUs. Also, any factoring or sieving method can be used, so the design of the miner software will be very important. The network should be even faster than Bitcoin's, too, because verifying a factor is extremely easy and faster than verifying a nonce.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 16, 2017, 11:36:40 PM | #4

https://www.wired.com/story/bitcoin-global-warming/

Interesting article.

achow101 | Moderator | Legendary | Activity: 3388 | Merit: 6631 | "Just writing some code"
December 17, 2017, 12:59:29 AM | Merited by ABCbits (1) | #5

Here is a whitepaper
It's not a whitepaper, it's a blog post.

Since the number is so large, GPUs become very slow at attempting it.
What about ASICs? This does not look like a problem for which it would be very hard to design ASICs, skipping over GPUs entirely.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 02:39:29 AM | #6

Here is a whitepaper
It's not a whitepaper, it's a blog post.

Since the number is so large, GPUs become very slow at attempting it.
What about ASICs? This does not look like a problem for which it would be very hard to design ASICs, skipping over GPUs entirely.

Hehe, call it what you like, but I have published many open-source inventions as "blog posts" since blogs are a great place to self-publish.

But no, factoring large numbers via sieving is something that is done commonly, and there is plenty of literature on it. The most efficient hardware is CPUs. The best "ASIC" currently would probably be built from an Intel X-series i9 on a mini-ITX motherboard with very little RAM.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 02:48:44 AM (last edit: December 17, 2017, 03:19:25 AM by ir.hn) | #7

Interestingly, searching Google for 'number factoring ASICs' led me to the following post from 2014, which is basically my exact idea; I just take it a step further and say the number can be so big that only one factor of sufficient size needs to be found to win the block.

https://bitcointalk.org/index.php?topic=783110.0

But yeah, I have only seen papers on theoretical ASICs for large-number factoring, nothing that has been realized in real life. And even an ASIC for number-field sieving would be so complicated and big that it probably wouldn't be financially efficient to build; i.e., for the same price you could probably build a better CPU farm.

http://www.hyperelliptic.org/tanja/SHARCS/talks/shark_paper.pdf

More info about state-of-the-art NFS sieving:
http://gilchrist.ca/jeff/factoring/nfs_beginners_guide.html

It appears this PoW would be the perfect marriage of GPU and CPU, as a GPU seems to speed up the first part (the easiest part) of the factorization:
http://www.mersenneforum.org/showthread.php?t=19312

Taras | Legendary | Activity: 1386 | Merit: 1053 | "Please do not PM me loan requests!"
December 17, 2017, 08:16:50 AM | #8

Quote
If a factor is not found in, say, the block time plus 50% (15 minutes or whatever is desired), then a new number is generated (it could just be the last number + 1), because that previous number may be prime and not have any factors.
If you can generate a big number to work with, say from the block header, the miner can just regenerate the number by changing one byte of the generation transaction. If a miner can somehow prove that the number is prime, or cannot possibly have desirable factors, they can skip to the next one without waiting.

Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.

The other day I was thinking about the possibility of using a PoW coin to find Taxicab numbers, of which only the first 6 have been found in all of human history. You could theoretically make proof-of-work out of finding Ta(7), where a block is a near-miss (that is, a bigger version of Ta(6)), but it would be harder to turn into a difficulty-adjustable system than dividing huge numbers until you get a whole number of length n.
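
As an aside, a tiny sketch (Python, purely illustrative) of the taxicab idea: it finds Ta(2) = 1729, the smallest number expressible as a sum of two positive cubes in two ways, and shows why the property is a yes/no criterion rather than an easily tunable difficulty knob:
Code:
# Toy illustration of the taxicab idea: the smallest number that is a sum of
# two positive cubes in two different ways is Ta(2) = 1729.
from collections import defaultdict
from itertools import combinations_with_replacement

def smallest_taxicab(limit: int = 20):
    sums = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        sums[a ** 3 + b ** 3].append((a, b))
    best = min(n for n, pairs in sums.items() if len(pairs) >= 2)
    return best, sums[best]

print(smallest_taxicab())   # (1729, [(1, 12), (9, 10)])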
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 02:08:45 PM (last edit: December 17, 2017, 02:27:38 PM by ir.hn) | #9

Quote
If a factor is not found in, say, the block time plus 50% (15 minutes or whatever is desired), then a new number is generated (it could just be the last number + 1), because that previous number may be prime and not have any factors.
If you can generate a big number to work with, say from the block header, the miner can just regenerate the number by changing one byte of the generation transaction. If a miner can somehow prove that the number is prime, or cannot possibly have desirable factors, they can skip to the next one without waiting.

Yeah, sounds good. This incentivizes developing unique mining methods.

Quote
Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.


I'm not sure I am following you. The generated number should be the same for everyone; the only difference is that if someone can figure out the number is not going to work, he can skip ahead to the next one, because everyone else would eventually have to do the same. In this stalled-out-time-limit situation, the next number would also be the same for everyone, based on the last block number and the last block factor plus, say, 1. Someone could make their miner skip ahead and work on the next number automatically, wait 15 minutes, then propose it as a solution, and be almost guaranteed to win the block. The risk, though, is that if a solution for the last block is found within the time limit, his 'skip ahead' solution is not valid for the next block, so he wasted his time. This is good because it guarantees the block time is no longer than 15 minutes.
Quote
The other day I was thinking about the possibility of using a PoW coin to find Taxicab numbers, of which only the first 6 have been found in all of human history. You could theoretically make proof-of-work out of finding Ta(7), where a block is a near-miss (that is, a bigger version of Ta(6)), but it would be harder to turn into a difficulty-adjustable system than dividing huge numbers until you get a whole number of length n.

Right on. Well, a PoW method could perhaps be designed to yield candidate taxicab numbers that could be investigated later. Likewise, a side effect of my method would be a list of possible large primes. But I say we focus on my idea for now and get it working in a coin, then work on how to design a PoW that provides a list of candidate taxicab numbers or other types like Mersennes. Ultimately, though, it is all about finding prime numbers to use in cryptography; taxicab and Mersenne numbers are more of an intellectual curiosity, so I think my idea is more basic and useful.

Taras | Legendary | Activity: 1386 | Merit: 1053 | "Please do not PM me loan requests!"
December 17, 2017, 07:31:26 PM | #10

Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.


I'm not sure I am following you. The generated number should be the same for everyone; the only difference is that if someone can figure out the number is not going to work, he can skip ahead to the next one, because everyone else would eventually have to do the same. In this stalled-out-time-limit situation, the next number would also be the same for everyone, based on the last block number and the last block factor plus, say, 1. Someone could make their miner skip ahead and work on the next number automatically, wait 15 minutes, then propose it as a solution, and be almost guaranteed to win the block. The risk, though, is that if a solution for the last block is found within the time limit, his 'skip ahead' solution is not valid for the next block, so he wasted his time. This is good because it guarantees the block time is no longer than 15 minutes.
If the generated number is the same for everyone, then if miner A broadcasts a solution, what is stopping miner B from just stealing that solution and broadcasting it as their own, and stealing the block reward? What if every node on the network wants to steal solutions?
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 07:35:44 PM | #11

Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.


I'm not sure I am following you. The generated number should be the same for everyone; the only difference is that if someone can figure out the number is not going to work, he can skip ahead to the next one, because everyone else would eventually have to do the same. In this stalled-out-time-limit situation, the next number would also be the same for everyone, based on the last block number and the last block factor plus, say, 1. Someone could make their miner skip ahead and work on the next number automatically, wait 15 minutes, then propose it as a solution, and be almost guaranteed to win the block. The risk, though, is that if a solution for the last block is found within the time limit, his 'skip ahead' solution is not valid for the next block, so he wasted his time. This is good because it guarantees the block time is no longer than 15 minutes.
If the generated number is the same for everyone, then if miner A broadcasts a solution, what is stopping miner B from just stealing that solution and broadcasting it as their own, and stealing the block reward? What if every node on the network wants to steal solutions?

The generated number is the same for everyone now in the Bitcoin network. As far as I know, everyone is currently working on cracking the same hash. So if that were a problem, it would be a problem now. I agree with you, though, that it seems like that attack could happen.

Taras | Legendary | Activity: 1386 | Merit: 1053 | "Please do not PM me loan requests!"
December 17, 2017, 07:54:59 PM | #12

Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.


I'm not sure I am following you. The generated number should be the same for everyone; the only difference is that if someone can figure out the number is not going to work, he can skip ahead to the next one, because everyone else would eventually have to do the same. In this stalled-out-time-limit situation, the next number would also be the same for everyone, based on the last block number and the last block factor plus, say, 1. Someone could make their miner skip ahead and work on the next number automatically, wait 15 minutes, then propose it as a solution, and be almost guaranteed to win the block. The risk, though, is that if a solution for the last block is found within the time limit, his 'skip ahead' solution is not valid for the next block, so he wasted his time. This is good because it guarantees the block time is no longer than 15 minutes.
If the generated number is the same for everyone, then if miner A broadcasts a solution, what is stopping miner B from just stealing that solution and broadcasting it as their own, and stealing the block reward? What if every node on the network wants to steal solutions?

The generated number is the same for everyone now in the Bitcoin network. As far as I know, everyone is currently working on cracking the same hash. So if that were a problem, it would be a problem now. I agree with you, though, that it seems like that attack could happen.

No, it's not. Every miner is hashing a different block header, each containing a different merkle root that has the generation transaction paying that miner exclusively. This attack is therefore impossible on bitcoin, but there's still nothing stopping it from happening with this project.

If it is important that every miner works on the same number (and it may actually be a requirement for Ta(7)), then a miner could submit a hash of their solution along with their address for payment, wait for everyone to get a copy, and then release the solution, along with the block. It would be a lot more difficult to steal a solution in that case. But it would be impossible if every miner worked on a different number, and that number was dependent on the miner's payout address (on which the generation transaction depends, which the merkle root depends on, which the block header depends on).
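
Here is a rough sketch (Python; the message format and names are assumptions, just to make the commit-then-reveal idea concrete):
Code:
# Hypothetical sketch of commit-then-reveal for a shared target number:
# the miner first broadcasts a commitment binding the factor to their payout
# address, waits for it to propagate, and only then reveals the factor.
import hashlib

def commit(factor: int, payout_address: str, salt: str) -> str:
    return hashlib.sha256(f"{factor}|{payout_address}|{salt}".encode()).hexdigest()

def reveal_ok(commitment: str, factor: int, payout_address: str,
              salt: str, target: int) -> bool:
    # Nodes accept a reveal only if it matches the earlier commitment
    # and the factor actually divides the shared target.
    return (commit(factor, payout_address, salt) == commitment
            and target % factor == 0)

target = 1000003 * 999983
c = commit(1000003, "miner_A_address", "random_salt")
print(reveal_ok(c, 1000003, "miner_A_address", "random_salt", target))  # True
# A copycat who grabs the revealed factor has no earlier commitment that
# binds it to their own address, so their claim is rejected.
print(reveal_ok(c, 1000003, "thief_address", "random_salt", target))    # False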
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 08:04:31 PM | #13

Also, if the product is dependent on the block header, which contains the merkle root, you don't have to worry about other miners stealing your solution, because the resultant factor is for a product that is dependent on the generation transaction paying you.


I'm not sure I am following you. The generated number should be the same for everyone; the only difference is that if someone can figure out the number is not going to work, he can skip ahead to the next one, because everyone else would eventually have to do the same. In this stalled-out-time-limit situation, the next number would also be the same for everyone, based on the last block number and the last block factor plus, say, 1. Someone could make their miner skip ahead and work on the next number automatically, wait 15 minutes, then propose it as a solution, and be almost guaranteed to win the block. The risk, though, is that if a solution for the last block is found within the time limit, his 'skip ahead' solution is not valid for the next block, so he wasted his time. This is good because it guarantees the block time is no longer than 15 minutes.
If the generated number is the same for everyone, then if miner A broadcasts a solution, what is stopping miner B from just stealing that solution and broadcasting it as their own, and stealing the block reward? What if every node on the network wants to steal solutions?

The generated number is the same for everyone now in the Bitcoin network. As far as I know, everyone is currently working on cracking the same hash. So if that were a problem, it would be a problem now. I agree with you, though, that it seems like that attack could happen.

No, it's not. Every miner is hashing a different block header, each containing a different merkle root that has the generation transaction paying that miner exclusively. This attack is therefore impossible on bitcoin, but there's still nothing stopping it from happening with this project.

If it is important that every miner works on the same number (and it may actually be a requirement for Ta(7)), then a miner could submit a hash of their solution along with their address for payment, wait for everyone to get a copy, and then release the solution, along with the block. It would be a lot more difficult to steal a solution in that case. But it would be impossible if every miner worked on a different number, and that number was dependent on the miner's payout address (on which the generation transaction depends, which the merkle root depends on, which the block header depends on).

Thanks for explaining that; that was a gap in my understanding. I think I am with you now, and it makes sense that either of the two solutions you proposed should work: everyone could work on a different number that is determined in part by their public key, or everyone works on the same number, publishes a hash of their solution, and later broadcasts the solution itself once "everyone" has received the hash. I think working on different numbers would be more efficient - it sounds like the second solution would add extra lag.

So there would be no block timeout.  Since everyone is working on a different number, if one person happened to be working on a prime and is destined to never find a factor, it doesn't matter because someone else surely was working on a composite number and will find a factor.

haltingprobability | Member | Activity: 98 | Merit: 26
December 17, 2017, 08:15:44 PM (last edit: December 17, 2017, 08:48:27 PM by haltingprobability) | #14

Here is a whitepaper describing a new proof-of-work system that, in theory, favors CPUs indefinitely and is immune to the vulnerabilities that encryption algorithms are prone to.
(the link is to the Wayback Machine, so you can trust visiting it)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take-home message: the work is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since the number is so large, GPUs become very slow at attempting it.

I have a quibble with your "GPU-resistant" claims. We don't know that factorization cannot be efficiently randomized. If it can, it can also be trivially parallelized by randomly dividing the search space and sending each range to a different processing unit. In short, you can't claim an upper bound on parallelization without formal proof. Even if no one knows how to parallelize factorization (is this the case?), it doesn't prove that factorization can't be parallelized.

IMO, the effort expended on trying to democratize mining is wasted. Centralization of mining will always occur. Even factors like bandwidth and network latency are favored by centralized mining. Differences in electricity costs and so on only amplify this effect.

But there is no reason that the effort spent solving PoW puzzles has to be useless. Any problem in NP (including factoring) is a potential candidate and I recommend Circuit-SAT as an ideal candidate for PoW puzzles. Using this approach has the benefit that any real-world problem can be encoded as a circuit and then submitted for solution by the mining network.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 08:28:53 PM (last edit: December 17, 2017, 08:57:36 PM by ir.hn) | #15

Here is a whitepaper describing a new proof-of-work system that, in theory, favors CPUs indefinitely and is immune to the vulnerabilities that encryption algorithms are prone to.
(the link is to the Wayback Machine, so you can trust visiting it)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take-home message: the work is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since the number is so large, GPUs become very slow at attempting it.

I have a quibble with your "GPU-resistant" claims. We don't know that factorization cannot be efficiently randomized. If it can, it can also be trivially parallelized by randomly dividing the search space and sending each range to a different processing unit. In short, you can't claim an upper bound on parallelization without formal proof. Even if no one knows how to parallelize factorization (is this the case?), it doesn't prove that factorization can't be parallelized.

IMO, the effort expended on trying to democratize mining is wasted. Centralization of mining will always occur. Even factors like bandwidth and network latency are favored by centralized mining. Differences in electricity costs and so on only amplify this effect.

But there is no reason that the effort spent solving PoW puzzles has to be useless. Any problem in NP (including factoring) is a potential candidate and I recommend Circuit-SAT as an ideal candidate for PoW puzzles. Using this approach has the benefit that any real-world problem can be encoded as a circuit and then submitted for solution by the mining network.

When you learn more about factorization, you will discover that for numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with part of the process (step 1), but the longest part is step 2, so CPUs have had the overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets and server farms and give the advantage to personal computers.

This is a well-known state of affairs: these computations have been done for decades, and people have been working for years on making CUDA variants of the software to do them.

To give you a counterexample to your demand that my "GPU-resistant" claim be definitively proven: SHA-256 has never been definitively proven to be uncrackable except by brute force, yet we still use it. Experience has taught us that it is secure, and experience has shown us that factoring very large numbers cannot be efficiently randomized.

Mining will always tend toward centralization, but if we can strongly disincentivize GPUs, ASICs, server farms, and botnets, then it will stay decentralized for longer. I never claimed an infinite utopia; the American constitution only lasted 200 years before it broke down, and if this next coin can last 100 years before centralizing, I will consider it a success.

haltingprobability | Member | Activity: 98 | Merit: 26
December 17, 2017, 08:46:52 PM | #16

When you learn more about factorization, you will discover that for numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with part of the process (step 1), but the longest part is step 2, so CPUs have had the overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets and server farms and give the advantage to personal computers.

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 08:59:22 PM | #17

When you learn more about factorization, you will discover that for numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with part of the process (step 1), but the longest part is step 2, so CPUs have had the overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets and server farms and give the advantage to personal computers.

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

Only once you prove that SHA-256 can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

Taras | Legendary | Activity: 1386 | Merit: 1053 | "Please do not PM me loan requests!"
December 17, 2017, 09:45:42 PM | #18

So there would be no block timeout.  Since everyone is working on a different number, if one person happened to be working on a prime and is destined to never find a factor, it doesn't matter because someone else surely was working on a composite number and will find a factor.

If a miner suspects that their number is prime, or does not have many (or any) factors of the required size, they can just update the timestamp in the block header and get a new number to try to get factors from. They even get a new number by changing which transactions they want to include in their block. So there can be a block timeout of 15 minutes, where if it has been that long without a solution, the miner automatically updates the timestamp in the header. But that is probably something that was going to be done anyway, at a much faster interval than 15 minutes. However, you still cannot guarantee a block is found within 15 minutes.

Bitcoin testnet guarantees a block is found every hour by making the difficulty reset to 1 on any block whose predecessor is at least an hour older than itself. But that's probably not something we would want to do with a real currency. The low probability of happening to have long times without any blocks is just something we all have to deal with in bitcoin.
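
A sketch of the refresh idea from the first paragraph above (Python; the header derivation, digit count, and helper names are assumptions, not part of any spec): the miner pre-filters its candidate with a probabilistic primality test and bumps the timestamp whenever the candidate looks prime.
Code:
# Hypothetical sketch: skip candidates that look prime, since they cannot
# have a factor of the required size, by bumping the header timestamp.
import hashlib
import random

def probably_prime(n: int, rounds: int = 16) -> bool:
    # Miller-Rabin probable-prime test (probabilistic, not a proof).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def candidate(header_fields: str, timestamp: int, digits: int = 102) -> int:
    digest = hashlib.sha512(f"{header_fields}|{timestamp}".encode()).hexdigest()
    return int(str(int(digest, 16))[:digits])

timestamp = 1513540000
n = candidate("prev_hash|merkle_root", timestamp)
while probably_prime(n):          # looks prime: no usable factor can exist
    timestamp += 1                # a new timestamp yields a fresh candidate
    n = candidate("prev_hash|merkle_root", timestamp)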
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 17, 2017, 11:22:28 PM | #19

So there would be no block timeout.  Since everyone is working on a different number, if one person happened to be working on a prime and is destined to never find a factor, it doesn't matter because someone else surely was working on a composite number and will find a factor.

If a miner suspects that their number is prime, or does not have many (or any) factors of the required size, they can just update the timestamp in the block header and get a new number to try to get factors from. They even get a new number by changing which transactions they want to include in their block. So there can be a block timeout of 15 minutes, where if it has been that long without a solution, the miner automatically updates the timestamp in the header. But that is probably something that was going to be done anyway, at a much faster interval than 15 minutes. However, you still cannot guarantee a block is found within 15 minutes.

Bitcoin testnet guarantees a block is found every hour by making the difficulty reset to 1 on any block whose predecessor is at least an hour older than itself. But that's probably not something we would want to do with a real currency. The low probability of happening to have long times without any blocks is just something we all have to deal with in bitcoin.

That sounds good to me. Yeah, the probability (assuming a correctly set difficulty) of a block even being 5 minutes late is extremely low, given that everyone will be working on different numbers. I wonder if anyone has analyzed the data for Bitcoin and plotted the distribution of how long blocks actually take.

This new understanding of the method means there is less ability, as a side effect, to classify numbers as possible primes, but at least we could have a list of large composite numbers that can be used to narrow down the search for primes. Even at very large sizes primes are not all that uncommon - roughly one in a few hundred around 100 digits. It isn't much, but it's better than nothing, I guess.

Of course, the main point of a new algorithm is to resist centralization of mining power, but providing useful work is very attractive too.

haltingprobability | Member | Activity: 98 | Merit: 26
December 18, 2017, 12:03:13 AM | #20

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

Only once you prove that SHA-256 can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

You're shifting the goalposts. Your opening claim was that factorization does not benefit from GPUs (by which, I assume you mean parallelization). I nitpicked by pointing out that there's no reason to believe that factorization can't be sped up by parallelization. Claiming that factorization cannot benefit from parallelization is tantamount to claiming that there is no linear speedup for factorization. If this were true, it could be proved. There is no other way to know that it is the case than to prove it.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 18, 2017, 04:53:11 AM | #21

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

Only once you prove that SHA-256 can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

You're shifting the goalposts. Your opening claim was that factorization does not benefit from GPUs (by which, I assume you mean parallelization). I nitpicked by pointing out that there's no reason to believe that factorization can't be sped up by parallelization. Claiming that factorization cannot benefit from parallelization is tantamount to claiming that there is no linear speedup for factorization. If this were true, it could be proved. There is no other way to know that it is the case than to prove it.

Yes, you are nitpicking. Like I said, if you try to prove that brute force is the only way to crack SHA-256, then I will try to prove that GPUs are inherently weaker than CPUs for the task of GNFS sieving.

I'm sorry to break it to you, but mathematical proofs can't be applied to everything you do in life. Next time you take a shit, you'd better offer exact proof that it smells bad.

People like you nitpick and make demands because you want to give yourself an excuse for not wanting to learn something new.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 18, 2017, 05:10:36 PM | #22

It's really exciting to me that this method could very well be the holy-grail PoW method we have been looking for. Not only is it inefficient on GPUs (the best you could hope for is GPUs approaching CPUs in speed), but the optimal setup would be one GPU and one CPU working together. Not only does this make GPU farms and standard ASIC farms unproductive, it also makes the server farms and botnets that have plagued CPU-only coins unproductive as well. The optimum setup would resemble a desktop computer: one CPU and one GPU.

One CPU + One GPU = 1 vote

http://www.mersenneforum.org/showthread.php?t=16687

Witrebel | Member | Activity: 116 | Merit: 101
December 19, 2017, 04:07:28 AM | #23

You are essentially suggesting you have a solution that could and should be applied to a multi-billion-dollar industry, and you are unwilling to entertain the idea of a mathematical proof. It's likely that you're right, but in this arena I think it's pretty much a requirement to at least ATTEMPT a rigorous proof. Otherwise your idea will forever remain just an idea. Personally, I'd love to see the math that goes into such a proof, for the sake of learning alone.

That said, it's a very interesting concept.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 19, 2017, 04:13:25 AM | #24

My number-one priority is to implement this idea, but I need help. I am an inventor, engineer, and scientist, but unfortunately I fail when it comes to languages and programming. That side of my brain is stunted.

What needs to be done: a miner has to be written, and the Bitcoin code has to be modified to allow this PoW to be used.

I think at first we should use stock Bitcoin code and get this PoW working in that. From there, people can tweak it to make coins with modified block size, reward, maximum emission, block time, etc., tuned to how they like it. The more the merrier, and I know the best way for a coin to scale is for child coins to be made. But if we get it working in otherwise stock code, that would be a good foundation to build on.

We also need a miner. This may be the hardest part. But there is msieve, a C program that could be a great foundation for a pretty well-optimized miner: https://sourceforge.net/projects/msieve/

Is there anything else I'm missing?

The PoW is the heart of a coin. I think this new heart could give coins a much longer life. If anyone else is excited about this, let's figure out how to get this started.

Witrebel | Member | Activity: 116 | Merit: 101
December 19, 2017, 04:21:35 AM | #25

Think about your target audience. Your number-one goal is proving this PoW algo is THE FUTURE OF BITCOIN. This isn't a "let's bench-test it in the lab and see how it works" situation. You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion-dollar market. I'd start with "who here is a math-minded person who can help me work out the proofs?"
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 19, 2017, 04:30:40 AM | #26

You are essentially suggesting you have a solution that could and should be applied to a multi-billion-dollar industry, and you are unwilling to entertain the idea of a mathematical proof. It's likely that you're right, but in this arena I think it's pretty much a requirement to at least ATTEMPT a rigorous proof. Otherwise your idea will forever remain just an idea. Personally, I'd love to see the math that goes into such a proof, for the sake of learning alone.

That said, it's a very interesting concept.

So would I. ;) I would like to apologize to haltingprobability; I have been a bit harsh on him. Some people like seeing proof; I, for one, will go out and search for myself. But different strokes for different folks.

I am not the kind of meticulous person to work on proofs. I don't have a degree in mathematics or computer science, and I tend to get interested in developing concepts and ideas and implementing them. I am more of an entrepreneur and engineer than a mathematician. My brain works like a CPU: I take in lots of information from many sources and synthesize it into a product idea. I don't need proof; I just need a general understanding of how something works, and if it makes sense and passes the smell test, I go for it.

Here is how I see it. GPUs are like robots: you program them for a specific task and they can beat any human at that specialized task. But let's say something unexpected happens, like the roof caving in. The robots would get stuck; they wouldn't know how to move out of the way and carry their work elsewhere to continue. The humans would then beat them because they are more adaptable. CPUs are adaptable; GPUs are not. CPUs are fast and think on their feet, whereas GPUs are slow and methodical and share their work well. So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU. They are very different entities. If GPUs are inherently better than CPUs at some tasks (and they certainly are, for example SHA-256 and scrypt), then it only makes sense that CPUs would be inherently better at other tasks. These would be tasks where lots of different and unpredictable things are happening. The problem is finding a stable, consistent task that also offers this unpredictability.

For me to attempt to prove that large-number factoring would offer this unpredictability would require that I know much more than I currently do about large-number factoring. But I take the experts' word that factoring (more specifically, sieving) large numbers is a very intensive job that requires lots of unpredictable memory access and varied tasks. It's about as random a task as a non-random task can be, if that makes any sense. So that is why I feel it is the #1 contender for a CPU-tilted coin. The cool thing is that certain tasks in the sieving are predictable, and so the combination of a GPU and CPU would be best, which just so happens to be the perfect thing if we want to favor PCs. It is also good if we want to bring the GPU miners and CPU miners together to both adopt this PoW.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 19, 2017, 04:34:46 AM | #27

Think about your target audience. Your number-one goal is proving this PoW algo is THE FUTURE OF BITCOIN. This isn't a "let's bench-test it in the lab and see how it works" situation. You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion-dollar market. I'd start with "who here is a math-minded person who can help me work out the proofs?"

Fair enough. But the difference in perspective on how to pitch this comes from a difference in perspective on how to implement it. I am of the mind that we create a Bitcoin-based altcoin with this PoW and test it out, and if it really is the future, then Bitcoin will have to implement it.

Most people want to see something working and succeeding before adopting it. This is the reason people still aren't investing in Bitcoin en masse; they want to see if it is still around in 10 years first.

Anti-Cen | Member | Activity: 210 | Merit: 26 | "High fees = low BTC price"
December 19, 2017, 01:33:35 PM | #28

Wow, I don't know how we programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from? Ah yes, I remember, this blockchain thing, as if we never had databases or spreadsheets before, or the ability to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and the powers that be are none too happy about BitTorrent, but BitTorrent does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is another nice tool (before someone got it to spread the WannaCry virus using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people who are smarter, need to go back to the drawing board if we are left counting grains of sand and distributing a 200 GB chain to each and every node that tries to run a system in near real time.

ETH, which lets us deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion-type sites you get on the underground using Tor, and it seems to be just offering a very expensive, if easy-to-use, service that lets us rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, and BAT, plus what has been learned from BTC, go back to the drawing board, look at this again, and start from scratch, methinks.

Each node in the network should also offer a VPN or proxy-type service and be paid in a type of gas, because the internet is being closed down by censorship, so we need to take back not only the money but also our freedom of speech whilst we are about it.

Mining is CPU wars, and Intel and AMD like it nearly as much as big oil likes miners wasting electricity. Is this what mankind has come to?
jenni161 | Member | Activity: 73 | Merit: 10
December 19, 2017, 03:44:38 PM | #29

Where do you get those 100-digit numbers in the first place? Is there some database or some other computer (server) generating those tasks, or what? Sorry if this has already been mentioned, but I didn't see it.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 20, 2017, 05:48:48 PM | #30

Where do you get those 100-digit numbers in the first place? Is there some database or some other computer (server) generating those tasks, or what? Sorry if this has already been mentioned, but I didn't see it.

They are generated using a hashing algorithm like SHA-256, basically the same way it currently works in Bitcoin. Great question. Let me know if you want a more in-depth answer.

ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 20, 2017, 05:52:48 PM | #31

Wow, I don't know how we programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from? Ah yes, I remember, this blockchain thing, as if we never had databases or spreadsheets before, or the ability to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and the powers that be are none too happy about BitTorrent, but BitTorrent does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is another nice tool (before someone got it to spread the WannaCry virus using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people who are smarter, need to go back to the drawing board if we are left counting grains of sand and distributing a 200 GB chain to each and every node that tries to run a system in near real time.

ETH, which lets us deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion-type sites you get on the underground using Tor, and it seems to be just offering a very expensive, if easy-to-use, service that lets us rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, and BAT, plus what has been learned from BTC, go back to the drawing board, look at this again, and start from scratch, methinks.

Each node in the network should also offer a VPN or proxy-type service and be paid in a type of gas, because the internet is being closed down by censorship, so we need to take back not only the money but also our freedom of speech whilst we are about it.

Good point, but even ETH uses proof of work. The problem unique to blockchains is that they require the information to be correct. How do we verify that Sally did or did not give Bob 100 bitcoins? Who do we trust to tell us the truth? That is where proof of work comes in: we let everyone vote on it. If we let every IP address vote, people would just create multiple IP addresses to vote more. PoW makes you prove that you are doing work, so you can't just get a bunch of free votes.

That said, my PoW method here actually does provide value to the world. Prime numbers are valuable (they underpin hard-to-break public-key encryption, such as the RSA keys Tor uses), and everyone factoring large numbers would give us hints that some numbers that haven't been factored are possible primes, so those can be checked out later.
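
As a toy illustration of why large primes are valuable (not part of the proposal): an RSA-style keypair is built from two primes, and breaking it amounts to factoring their product.
Code:
# Toy RSA-style keypair from two (absurdly small) primes; real keys use
# primes hundreds of digits long, which is what makes factoring valuable.
p, q = 61, 53                 # toy primes
n = p * q                     # public modulus = 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent = 2753 (needs Python 3.8+)

message = 65
cipher = pow(message, e, n)   # 65^17 mod 3233 = 2790
plain = pow(cipher, d, n)     # recovers 65
print(cipher, plain)          # 2790 65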

jenni161 | Member | Activity: 73 | Merit: 10
December 20, 2017, 08:45:12 PM | #32

Where do you get those 100-digit numbers in the first place? Is there some database or some other computer (server) generating those tasks, or what? Sorry if this has already been mentioned, but I didn't see it.

They are generated using a hashing algorithm like SHA-256, basically the same way it currently works in Bitcoin. Great question. Let me know if you want a more in-depth answer.

Yes, I would love a more in-depth answer. I have read some descriptions of SHA-256, and I didn't see any reason it would produce large numbers that are either primes or semiprimes.
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 20, 2017, 09:02:56 PM | #33

Where do you get those 100-digit numbers in the first place? Is there some database or some other computer (server) generating those tasks, or what? Sorry if this has already been mentioned, but I didn't see it.

They are generated using a hashing algorithm like SHA-256, basically the same way it currently works in Bitcoin. Great question. Let me know if you want a more in-depth answer.

Yes, I would love a more in-depth answer. I have read some descriptions of SHA-256, and I didn't see any reason it would produce large numbers that are either primes or semiprimes.

We don't want them to be primes.  We want them to be composite so that we can find the winning factor (the factor that meets the difficulty requirements).

Hashing algorithms take any input and produce a large, effectively random output. This output can be converted into base 10, i.e., normal numbers. Let's say it produces a 120-digit number and we want a 102-digit number; we can just truncate it to the number of digits we want.
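
A sketch of that generation step (Python; note that a single SHA-256 digest is only about 77 decimal digits, so this example chains two digests before truncating; the exact construction here is an assumption, not something fixed by the proposal):
Code:
# Turn header data into a ~102-digit decimal target by hashing, converting
# the digest to base 10, and truncating to the desired number of digits.
import hashlib

def target_number(seed: bytes, digits: int = 102) -> int:
    d1 = hashlib.sha256(seed).digest()
    d2 = hashlib.sha256(d1).digest()
    as_int = int.from_bytes(d1 + d2, "big")   # 512 bits, about 154 decimal digits
    return int(str(as_int)[:digits])          # keep the first `digits` digits

n = target_number(b"previous block hash + merkle root + timestamp")
print(len(str(n)))   # 102 (with overwhelming probability)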

haltingprobability | Member | Activity: 98 | Merit: 26
December 21, 2017, 12:34:09 AM | #34

So would I. ;) I would like to apologize to haltingprobability; I have been a bit harsh on him. Some people like seeing proof; I, for one, will go out and search for myself. But different strokes for different folks.

No hard feelings.

Quote
I am not the kind of meticulous person to work on proofs.  I don't have a degree in mathematics or computer science and I tend to get interested in developing concepts and ideas and implementing them.

I am a computer engineer. I have a degree in computer engineering. I'm trying to inform you that your intuitive notions about the difference between GPUs and CPUs - while good enough for basic comprehension - are not rigorous and that, in this case, that matters.

There is a theorem in computational complexity (the field of mathematics that studies how fast any given kind of formal problem can be solved), called the speed-up theorem. Basically, this theorem says that any particular solution to a problem (an "algorithm") can be sped up by a linear factor. So, suppose that you find an algorithm A that solves problem Z in time n^2, where n is the size of a problem in Z. This is called quadratic complexity - as n increases, the time required to solve the problem grows as the square of n. The speed-up theorem tells us that it is always possible to find some fixed factor, k, such that we can construct a new algorithm A' from A that solves a problem in Z in time n^2 / k. Note that this does not change the complexity class - A' still solves problems in Z in quadratic time. Intuitively, this makes sense - if I can solve a problem in time n^2 with one computer, then I should always be able to solve it in nearly n^2 / 2 with two computers. There are some very intractable problems where this becomes difficult (due to network complexity) but such problems tend to be academic, i.e. specially constructed to make parallelization difficult.

Quote
CPUs are adaptable; GPUs are not. CPUs are fast and think on their feet, whereas GPUs are slow and methodical and share their work well.

This is mostly not true anymore. GPGPUs (general-purpose GPUs) can, in principle, do anything that CPUs can do, and the latest GPU cards are being released with GPGPU cores in them. GPU clock speeds are in the same class as those of CPUs, so they are not slow. GPUs are designed around "data parallelism" or "data flow architecture", and this is why they are well suited to graphics applications or any application where you can easily parallelize the work by simply dividing up the work surface and handing out more or less identical instructions to each work unit (rendering pipeline).

Quote
So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU.

For any large search problem (and any non-trivial PoW must be a large search problem), GPUs will be superior. If I have to try 2^60 possibilities (about a billion billion possibilities) in order to solve a given PoW problem, I can try these one-by-one on a CPU or I can parallelize the problem and have multiple CPUs or GPUs work on it in parallel. Suppose I have 128 GPUs available to me. Instead of having to search 2^60 possibilities, each GPU now only has to solve 2^53 possibilities, resulting in a 128x reduction in search time. Factoring is just a large search problem.
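
A minimal sketch of the point being made (Python; naive trial division stands in for the search purely to illustrate the linear speedup from splitting a search space across work units; whether GNFS sieving splits this cleanly is exactly the disputed question):
Code:
# Split a brute-force factor search into k disjoint ranges; k work units
# give roughly a k-fold speedup on this kind of embarrassingly parallel search.
from concurrent.futures import ProcessPoolExecutor

def search_range(args):
    target, start, stop = args
    for d in range(start, stop):
        if target % d == 0:
            return d              # found a factor in this slice
    return None

def parallel_trial_division(target, limit, workers=8):
    step = (limit - 2) // workers + 1
    slices = [(target, 2 + i * step, min(2 + (i + 1) * step, limit))
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(search_range, slices):
            if result is not None:
                return result
    return None

if __name__ == "__main__":
    print(parallel_trial_division(1000003 * 999983, 2_000_000))   # 999983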
ir.hn (OP) | Member | Activity: 322 | Merit: 54 | "Consensus is Constitution"
December 21, 2017, 01:12:22 AM | #35

Thanks, haltingprobability. I think factoring large numbers may be a bit more complex than you realize. Here is an excerpt from Wikipedia on the general number field sieve, the most efficient known way to factor large numbers of over 100 digits. I just want to get this posted so we can dissect and discuss it:

First off:
The general number field sieve, on the other hand, manages to search for smooth numbers that are subexponential in the size of n. Since these numbers are smaller, they are more likely to be smooth than the numbers inspected in previous algorithms. This is the key to the efficiency of the number field sieve. In order to achieve this speed-up, the number field sieve has to perform computations and factorizations in number fields. This results in many rather complicated aspects of the algorithm, as compared to the simpler rational sieve.

Method:
Two polynomials f(x) and g(x) of small degrees d and e are chosen, which have integer coefficients, which are irreducible over the rationals, and which, when interpreted mod n, have a common integer root m. An optimal strategy for choosing these polynomials is not known; one simple method is to pick a degree d for a polynomial, consider the expansion of n in base m (allowing digits between −m and m) for a number of different m of order n^(1/d), and pick f(x) as the polynomial with the smallest coefficients and g(x) as x − m.

Consider the number field rings Z[r1] and Z[r2], where r1 and r2 are roots of the polynomials f and g. Since f is of degree d with integer coefficients, if a and b are integers, then so will be b^d·f(a/b), which we call r. Similarly, s = b^e·g(a/b) is an integer. The goal is to find integer values of a and b that simultaneously make r and s smooth relative to the chosen basis of primes. If a and b are small, then r and s will be small too, about the size of m, and we have a better chance for them to be smooth at the same time. The current best-known approach for this search is lattice sieving; to get acceptable yields, it is necessary to use a large factor base.

Having enough such pairs, using Gaussian elimination, one can get products of certain r and of the corresponding s to be squares at the same time. A slightly stronger condition is needed—that they are norms of squares in our number fields, but that condition can be achieved by this method too. Each r is a norm of a − r1·b and hence that the product of the corresponding factors a − r1·b is a square in Z[r1], with a "square root" which can be determined (as a product of known factors in Z[r1])—it will typically be represented as an irrational algebraic number. Similarly, the product of the factors a − r2·b is a square in Z[r2], with a "square root" which also can be computed. It should be remarked that the use of Gaussian elimination does not give the optimal run time of the algorithm. Instead, sparse matrix solving algorithms such as Block Lanczos or Block Wiedemann are used.

Since m is a root of both f and g mod n, there are homomorphisms from the rings Z[r1] and Z[r2] to the ring Z/nZ (the integers mod n), which map r1 and r2 to m, and these homomorphisms will map each "square root" (typically not represented as a rational number) into its integer representative. Now the product of the factors a − m·b mod n can be obtained as a square in two ways—one for each homomorphism. Thus, one can find two numbers x and y, with x^2 − y^2 divisible by n and again with probability at least one half we get a factor of n by finding the greatest common divisor of n and x − y.
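
To make the sieving vocabulary concrete, here is a toy smoothness test (Python); this is only the innermost idea of the sieving stage, not GNFS itself:
Code:
# A value is B-smooth when it factors completely over the primes up to B.
# GNFS sieves for (a, b) pairs whose associated values are simultaneously
# smooth; this toy check just does trial division over a small factor base.
def small_primes(bound: int):
    sieve = bytearray([1]) * (bound + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(bound + 1) if sieve[p]]

def is_smooth(value: int, factor_base) -> bool:
    n = abs(value)
    for p in factor_base:
        while n % p == 0:
            n //= p
    return n == 1                      # smooth iff nothing is left over

base = small_primes(30)                # factor base {2, 3, 5, ..., 29}
print(is_smooth(2 * 3 * 5 * 7 * 11, base))   # True: 2310 is 30-smooth
print(is_smooth(2 * 31, base))               # False: 31 exceeds the bound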

haltingprobability | Member | Activity: 98 | Merit: 26
December 21, 2017, 04:27:02 AM | #36

Thanks, haltingprobability. I think factoring large numbers may be a bit more complex than you realize. Here is an excerpt from Wikipedia on the general number field sieve, the most efficient known way to factor large numbers of over 100 digits. I just want to get this posted so we can dissect and discuss it:

Note: The results of complexity theory apply regardless of the "best known algorithm" - a problem can be proved to be much easier than the best known algorithm for solving it would suggest.

The computational complexity of GNFS is given in the first paragraph on Wiki, as of this writing. It is a super-polynomial (though technically sub-exponential) time algorithm. As long as SHA-256 is actually one-way, solving the hashcash PoW function is also exponential in the number of zero bits. Let me try to explain what I'm saying.

Even though solving Bitcoin's PoW (hashcash) is exponential in time complexity, the pool of all Bitcoin mining (considered as an aggregate) is "parallelizing" this problem. That's why the hashrate is so high. For a given difficulty factor, the average number of attempts to find a solution is fixed. We can think of the entire mining pool as dividing up this solution time amongst themselves - parallelizing it. For the sake of discussion, let's say it takes 1 trillion hash attempts, on average, to mine a block. If there are 1,000 miners and each of them is operating 1,000 mining rigs and each mining rig can solve 1,000 hashes per second, the total hash rate of the network is 1 billion hash attempts per second. At this rate, it will take about 1,000 seconds (almost 17 minutes) to mine a block. In fact, these numbers will hold no matter whether we change hashcash to some other hard problem (including GNFS), so long as it takes about 1 trillion attempts (on average) to find a solution.

There's nothing in the nature of the problem (factoring versus finding a preimage whose hash solves a pre-specified equation) that matters, because we only care about the difficulty of the problem (proof of work). In the case of hashcash, we don't care what the preimage looks like (we are indifferent to it), so long as its hash satisfies the required property. Any random number will do just as well as any other random number. In the case of GNFS, we don't care which factors you find, just as long as you find the required number of factors. This indifference means that parallelization is trivial because we can just utilize random search. Proof-of-work, after all, is not really about solving a problem; it's just a race.
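
The arithmetic in that example, spelled out (Python, same assumed numbers):
Code:
# Expected block time = (average attempts per block) / (aggregate attempt rate),
# regardless of whether an "attempt" is a hash or a factoring trial.
avg_attempts_per_block = 1_000_000_000_000                   # 1 trillion
miners, rigs_per_miner, attempts_per_rig = 1_000, 1_000, 1_000
network_rate = miners * rigs_per_miner * attempts_per_rig    # 1e9 attempts/s
expected_seconds = avg_attempts_per_block / network_rate
print(expected_seconds, expected_seconds / 60)               # 1000.0 s, ~16.7 min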
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 21, 2017, 04:57:26 AM
 #37

Thanks haltingprobability.  I think factoring large numbers may be a bit more complex than you realize.  Here is an excerpt from wikipedia for general number field sieve, the most efficient way to factor large numbers over 100 digits.  I just want to get this posted so we can dissect it and discuss:

Note: The results of complexity theory apply regardless of the "best known algorithm" - a problem can turn out to be much easier than the best known algorithm for solving it would suggest.

The computational complexity of GNFS is given in the first paragraph on Wiki, as of this writing. It is a sub-exponential, but still super-polynomial, time algorithm. As long as SHA-256 is actually one-way, solving the hashcash PoW function is also exponential in the number of zero-bits. Let me try to explain what I'm saying. Even though solving Bitcoin's PoW (hashcash) has exponential time complexity, the pool of all Bitcoin mining (considered as an aggregate) is "parallelizing" this problem. That's why the hashrate is so high. For a given difficulty factor, the average number of attempts to find a solution is fixed. We can think of the entire mining pool as dividing up this solution time amongst themselves - parallelizing it.

For the sake of discussion, let's say it takes 1 trillion hash attempts, on average, to mine a block. If there are 1,000 miners and each of them is operating 1,000 mining rigs and each mining rig can compute 1,000 hashes per second, the total hash rate of the network is 1 billion hash attempts per second. At this rate, it will take about 1,000 seconds (almost 17 minutes) to mine a block. In fact, these numbers will hold no matter whether we change hashcash to some other super-polynomial problem (including GNFS), so long as it takes about 1 trillion attempts (on average) to find a solution.

There's nothing in the nature of the problem (factoring versus finding a preimage whose hash solves a pre-specified equation) that matters, because we only care about the difficulty of the problem (proof of work). In the case of hashcash, we don't care what the preimage looks like (we are indifferent to it), so long as its hash satisfies the required property. Any random number will do just as well as any other random number. In the case of GNFS, we don't care which factors you find, just as long as you find the required number of factors. This indifference means that parallelization is trivial because we can just utilize random search. Proof-of-work, after all, is not really about solving a problem, it's just a race.

What you have to understand is the network as a whole is not "parallelizing" the way a GPU parallelizes.  The reason GPUs can beat CPUs in normal hashing is that their cores can all have different start points for their nonce attempts.  They can have one core start at 0, another core start at 1,000,000, another core start at 2,000,000, etc.  This way they can split up the workload into chunks.  The network, by contrast, is trying random attempts and not collaborating on which computers have checked which numbers already.  So for tasks that cannot be split up into chunks with one chunk given to each core, CPUs will benefit.  Another thing that benefits CPUs is their memory behavior: working with number fields tends to involve frequent random memory accesses, which again favors CPUs.
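
To make the contrast concrete, here is a minimal sketch (my own illustration; the header format, nonce width, and difficulty target are simplified) of how GPU cores can carve the nonce space into disjoint chunks:

Code:
import hashlib

def search_chunk(header: bytes, start: int, count: int, zero_bytes: int):
    """One core's share of the nonce space: scan [start, start + count)."""
    for nonce in range(start, start + count):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        if digest.startswith(b"\x00" * zero_bytes):
            return nonce
    return None

header = b"example-block-header"
chunk = 1_000_000
for core in range(4):                      # four "cores", disjoint ranges
    hit = search_chunk(header, core * chunk, chunk, zero_bytes=2)
    if hit is not None:
        print("core", core, "found nonce", hit)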

jenni161
Member
**
Offline Offline

Activity: 73
Merit: 10


View Profile WWW
December 21, 2017, 08:28:31 AM
 #38

OK, so you just use that particular algorithm to generate random numbers. Have you studied what percentage of large random numbers meet your expectations? And what will happen when you hit an actual prime, or a composite that has only relatively small factors? There will be a lot of them.
haltingprobability
Member
**
Offline Offline

Activity: 98
Merit: 26


View Profile
December 21, 2017, 02:46:01 PM
 #39

What you have to understand is the network as a whole is not "parallelizing" ... they can all have different start points in their nonce attempts.  They can have 1 core start at 0, another core start at 1,000,000 another core start at 2,000,000 etc.  

I realize the difference. But it actually doesn't matter because the search space is so large (2^256) that the probability that any random slice will overlap another random slice is effectively zero. Mining pools would not be any more efficient if they coordinated the search space than they are by simply randomizing it. This is an artifact of the size of the search space.
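
A rough check of that claim, assuming each worker scans a single contiguous slice of roughly 2^40 nonces (about the trillion-attempt workload from the earlier example):

Code:
L = 2**40                 # length of one contiguous slice of attempts
N = 2**256                # size of the search space in the argument above
print(2 * L / N)          # probability two random slices overlap, ~1.9e-65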

Quote
This way they can split up the workload into chunks.  The network is trying random attempts and not collaborating on which computers have checked which numbers already.  So for tasks that cannot be split up into chunks and one chunk given to each core, CPU's will benefit.  Another thing that makes CPU's benefit is their memory utilization, which in the case of working with number fields, these number fields tend to lead to randomly accessing the memory often, again benefiting CPU's.

Both the CPU and GPUs have access to RAM (random-access memory), so access time to any memory location is equivalent, with maybe a slight advantage for the CPU, since its memory is onboard while the GPU sits on an expansion card.

I encourage you to keep working on your idea - I'm not pouring cold water on it. But I am a computer engineer and computer architecture is where my career experience is, so I'm just giving you some additional facts to work with. Best of luck.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
December 21, 2017, 04:48:11 PM
 #40

OK, so you just use that particular algorithm to generate random numbers. Have you studied what percentage of large random numbers meet your expectations? And what will happen when you hit an actual prime, or a composite that has only relatively small factors? There will be a lot of them.

Great questions, keep them coming!  Originally I was worried about primes, but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how Bitcoin works currently.  Everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two or three or 100 people do) then the others will be the ones to win that block.  This is kind of good because we know the winning numbers are not prime, so if certain numbers aren't showing up in our list of winning numbers, we can say they may be prime.

So at this size, by the prime number theorem roughly 1 in 230 numbers (about 1/ln(10^100)) will be prime.  So not bad at all, pretty much inconsequential for the network as a whole.  Let's say we are looking for a 20-digit factor of a 100-digit number.  This is like saying we are looking for a 2-digit factor (12, for example) of a 10-digit number (2,000,000,000, for example), so they should be very, very common.  Really, since everyone generates their own numbers it doesn't even matter how common a working number will be.  We just need one person to find a match.
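
For reference, the prime number theorem gives that density directly; a two-line check (assuming ~100-digit candidates):

Code:
import math
x = 10**100                      # a typical 100-digit target size
print(round(math.log(x)))        # ≈ 230 -> roughly 1 prime per 230 candidates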

Taras
Legendary
*
Offline Offline

Activity: 1386
Merit: 1053


Please do not PM me loan requests!


View Profile WWW
December 22, 2017, 05:58:35 PM
 #41

Originally I was worried about primes but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how bitcoin works currently.  Everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two or three or 100 people) then the others will be the ones to win that block.
If one miner gets stuck with a prime, then they're not "out" until the next block is solved - they can and will periodically generate a new number by changing some of the block details, and then hope this new number they're factorizing isn't prime.

On a side note it is possible that miners will be biased toward even numbers, since it is more likely that an even number will be composite than an odd number (marginally, but at this scale it could make a difference). As miners can change certain parts of the block details to whatever they want, they can keep trying until they get an even number, and then work on that one.
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
February 22, 2018, 04:00:27 AM
 #42

Originally I was worried about primes but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how bitcoin works currently.  Everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two or three or 100 people) then the others will be the ones to win that block.
If one miner gets stuck with a prime, then they're not "out" until the next block is solved - they can and will periodically generate a new number by changing some of the block details, and then hope this new number they're factorizing isn't prime.

On a side note it is possible that miners will be biased toward even numbers, since it is more likely that an even number will be composite than an odd number (marginally, but at this scale it could make a difference). As miners can change certain parts of the block details to whatever they want, they can keep trying until they get an even number, and then work on that one.

Good catch.  That is a glaring vulnerability since we want to ultimately test whether numbers are prime or not.  My intention is that people can datamine the blockchain to know what numbers to rule out in the search for new prime numbers.  If everyone is mining even numbers then this wouldn't work.  Honestly I think we would have to eliminate numbers that have any factors up to, say, 1000.  This way people can't just divide their number by everything from 1-100 to make sure it isn't prime.  The point is to rule out prime candidates by proving they are composite.
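
One way such a screen could look; a minimal sketch assuming the "no factors up to 1000" rule (the bound and names are illustrative):

Code:
def has_small_factor(n: int, bound: int = 1000) -> bool:
    """True if n is divisible by anything in [2, bound].  Candidate target
    numbers would be required to fail this test, so nobody can win by
    exhibiting a trivially small factor."""
    return any(n % d == 0 for d in range(2, bound + 1))

# Miner side: if has_small_factor(candidate), tweak the block details,
# re-hash, and derive a new candidate.  Verifiers re-run the same check.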

I don't know, any other ideas on this vulnerability?

Taras
Legendary
*
Offline Offline

Activity: 1386
Merit: 1053


Please do not PM me loan requests!


View Profile WWW
February 22, 2018, 04:00:56 PM
Last edit: March 03, 2018, 03:42:59 PM by Taras
Merited by DarkStar_ (1)
 #43

Originally I was worried about primes but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how bitcoin works currently.  Everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two or three or 100 people) then the others will be the ones to win that block.
If one miner gets stuck with a prime, then they're not "out" until the next block is solved - they can and will periodically generate a new number by changing some of the block details, and then hope this new number they're factorizing isn't prime.

On a side note it is possible that miners will be biased toward even numbers, since it is more likely that an even number will be composite than an odd number (marginally, but at this scale it could make a difference). As miners can change certain parts of the block details to whatever they want, they can keep trying until they get an even number, and then work on that one.

Good catch.  That is a glaring vulnerability since we want to ultimately test whether numbers are prime or not.  My intention is that people can datamine the blockchain to know what numbers to rule out in the search for new prime numbers.  If everyone is mining even numbers then this wouldn't work.  Honestly I think we would have to eliminate numbers that have any factors up to, say, 1000.  This way people can't just divide their number by everything from 1-100 to make sure it isn't prime.  The point is to rule out prime candidates by proving they are composite.

I don't know, any other ideas on this vulnerability?

Let's say that, to solve a block, you need to find a factor of at least 1000 digits of a 2000-digit number. We already know we can skip odd numbers, so keep re-hashing your block until you get an even number. This should happen nearly instantly. Now, to find a 1000+ digit factor of a 2000-digit even number... you just divide by 2. Even on a number this size, that kind of operation would take only milliseconds on a CPU.

More digits in the factor meaning higher difficulty is just not accurate. Finding a 50-digit factor would be harder than finding either a 1-digit or a 1999+-digit factor of a 2000-digit number. But ruling out prime candidates is not the hard part. You can already prove that a number is composite if it is both greater than 2 and even. We already know it's greater than 2, so you just have to look at the last bit - if 0 then composite, if 1 then unknown, try again.

Disallowing numbers with any factors (other than 1) up to 1000 is an interesting idea. They certainly exist (here's one, ~100 digits: 1332040162761665637631613428888885580516735653612226801746764441547065802389532208097762306977281), but I don't know if there's a way to look at a number and calculate whether it meets that criterion before you go brute-forcing factors. If there is (I imagine there is) then we're back to looking for a large enough factor anyways.

What you could do instead is have a window of factor length that has to be met. Rather than >= 1000 digits, say 100 to 110 digits, for a factor of a 2000-digit number.
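
A quick illustration of why the ">= 1000 digits" rule collapses for even targets (made-up numbers, just to show the shortcut):

Code:
# With a rule of "any factor of >= 1000 digits of a 2000-digit number",
# an even target is solved instantly, because n // 2 already qualifies.
n = 2 * (10**1999 + 3)                 # an even 2000-digit number
factor = n // 2                        # = 10**1999 + 3
print(len(str(n)), len(str(factor)))   # 2000 2000 -> trivial "solution"
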
c_demian
Newbie
*
Offline Offline

Activity: 17
Merit: 0


View Profile
March 27, 2018, 10:32:44 PM
 #44

It's a challenging proposal.
Prime numbers and numbers having small factors can be ruled out easily. Primality testing is much faster than factoring, and there are algorithms that are very fast at finding small factors (see elliptic curve factoring). So you could just pick numbers with specified factor sizes.
This could also lead to finding new factoring algorithms.
Kogs
Member
**
Offline Offline

Activity: 86
Merit: 26


View Profile
March 28, 2018, 04:50:44 AM
 #45

Here is a whitepaper about how to define a new proof of work system that gives preference to CPU's forever theoretically and is immune to vulnerabilities that encryption algorithms are prone to.
(link is to wayback machine so you can trust visiting it)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take home message is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since it is such a large number GPU's become very slow at attempting it.

Factoring big numbers can be highly accelerated with quantum computers and Shor's algorithm.

https://en.wikipedia.org/wiki/Shor%27s_algorithm

In my opinion new POW algorithms should not only be ASIC resistant, they should also be quantum computing resistant.

Bitcoin would already suffer if quantum computers with enough qubits some day become reality, as the private/public keys could be cracked with Shor's algorithm.
The SHA-256 hashing of the public key would prevent this attack, but there are already a lot of public keys revealed which still contain balances (very old transactions, reused Bitcoin addresses). And with those known public keys the private keys could be calculated.

Also RSA encryption (used for example in HTTPS connections) could be cracked quite easily with Shor's algorithm, which might be the best thing that could happen for our nice three-letter agencies wanting to globally spy on every online connection.

It might still take a long time until quantum computers with enough qubits are available, but we should try to avoid algorithms that are not quantum-resistant when creating new stuff.
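
For context, only the period-finding step of Shor's algorithm needs a quantum computer; the classical post-processing is tiny. A toy sketch with the textbook example n = 15:

Code:
from math import gcd

def shor_postprocess(n: int, a: int, r: int):
    """Given the period r of a^x mod n (the part a quantum computer would
    find), an even r with a^(r/2) not ≡ -1 (mod n) yields factors of n."""
    if r % 2:
        return None
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_postprocess(15, 7, 4))   # period of 7^x mod 15 is 4 -> (3, 5)
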
bob123
Legendary
*
Offline Offline

Activity: 1624
Merit: 2481



View Profile WWW
March 30, 2018, 08:32:46 PM
 #46

It's a challenging proposal.
Prime numbers and numbers having small factors can be ruled out easily. Primality testing is much faster than factoring, and there are algorithms that are very fast at finding small factors (see elliptic curve factoring).

Factorization of large numbers usually means 1024-bit (obsolete) or 2048-bit numbers (4096-bit would be recommended to be on the safe side).
Those numbers are incredibly big (600+ digits long).



This could also lead to finding new factoring algorithms.

Quite a few organisations have an incentive to find factoring algorithms with a better runtime.
Until now, no considerable progress has been made in solving this problem.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
May 06, 2018, 05:57:59 AM
 #47

Originally I was worried about primes but one of the other posters alerted me that everyone can be working on different numbers.  I didn't realize this is how bitcoin works currently.  Everyone's number is determined by the hash of their public key along with other block details.  So if one person gets dealt a prime (or two or three or 100 people) then the others will be the ones to win that block.
If one miner gets stuck with a prime, then they're not "out" until the next block is solved - they can and will periodically generate a new number by changing some of the block details, and then hope this new number they're factorizing isn't prime.

On a side note it is possible that miners will be biased toward even numbers, since it is more likely that an even number will be composite than an odd number (marginally, but at this scale it could make a difference). As miners can change certain parts of the block details to whatever they want, they can keep trying until they get an even number, and then work on that one.

Good catch.  That is a glaring vulnerability since we want to ultimately test whether numbers are prime or not.  My intention is that people can datamine the blockchain to know what numbers to rule out in the search for new prime numbers.  If everyone is mining even numbers then this wouldn't work.  Honestly I think we would have to eliminate numbers that have any factors up to, say, 1000.  This way people can't just divide their number by everything from 1-100 to make sure it isn't prime.  The point is to rule out prime candidates by proving they are composite.

I don't know, any other ideas on this vulnerability?

Let's say that, to solve a block, you need to find a factor of at least 1000 digits of a 2000-digit number. We already know we can skip odd numbers, so keep re-hashing your block until you get an even number. This should happen nearly instantly. Now, to find a 1000+ digit factor of a 2000-digit even number... you just divide by 2. Even on a number this size, that kind of operation would take only milliseconds on a CPU.

More digits in the factor meaning higher difficulty is just not accurate. Finding a 50-digit factor would be harder than finding either a 1-digit or a 1999+-digit factor of a 2000-digit number. But ruling out prime candidates is not the hard part. You can already prove that a number is composite if it is both greater than 2 and even. We already know it's greater than 2, so you just have to look at the last bit - if 0 then composite, if 1 then unknown, try again.

Disallowing numbers with any factors (other than 1) up to 1000 is an interesting idea. They certainly exist (here's one, ~100 digits: 1332040162761665637631613428888885580516735653612226801746764441547065802389532208097762306977281), but I don't know if there's a way to look at a number and calculate whether it meets that criterion before you go brute-forcing factors. If there is (I imagine there is) then we're back to looking for a large enough factor anyways.

What you could do instead is have a window of factor length that has to be met. Rather than >= 1000 digits, say 100 to 110 digits, for a factor of a 2000-digit number.

Thanks for your great posts.  But yes, my intention was that it would require a factor with an exact number of digits to win the block, not just a 70+ digit factor but an exactly 70-digit factor, for example.  If we did the "no numbers with any factors up to 1000" thing, then it would just require a teeny bit of brute forcing to verify the proposed target number is valid.  So you would broadcast to the network: here is my target number, notice that it isn't divisible by anything under 1000, and here is my proposed 70-digit factor, notice that it divides the target and is exactly 70 digits.
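
A sketch of what that verification might look like (the digit count, the 1000 bound, and all names are illustrative, not a spec):

Code:
def verify_solution(target: int, factor: int, required_digits: int = 70,
                    screen_bound: int = 1000) -> bool:
    """Cheap verification: exact factor length, factor divides the target,
    and the target survives the small-factor screen."""
    if len(str(factor)) != required_digits:        # exactly 70 digits
        return False
    if target % factor != 0:                       # factor must divide target
        return False
    return all(target % d != 0 for d in range(2, screen_bound + 1))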

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
May 06, 2018, 06:05:10 AM
 #48

Here is a whitepaper about how to define a new proof of work system that gives preference to CPU's forever theoretically and is immune to vulnerabilities that encryption algorithms are prone to.
(link is to wayback machine so you can trust visiting it)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take home message is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since it is such a large number GPU's become very slow at attempting it.

Factoring big numbers can be highly accelerated with quantum computers and Shor's algorithm.

https://en.wikipedia.org/wiki/Shor%27s_algorithm

In my opinion new POW algorithms should not only be ASIC resistant, they should also be quantum computing resistant.

Bitcoin would already suffer if quantum computers with enough qubits some day become reality, as the private/public keys could be cracked with Shor's algorithm.
The SHA-256 hashing of the public key would prevent this attack, but there are already a lot of public keys revealed which still contain balances (very old transactions, reused Bitcoin addresses). And with those known public keys the private keys could be calculated.

Also RSA encryption (used for example in HTTPS connections) could be cracked quite easily with Shor's algorithm, which might be the best thing that could happen for our nice three-letter agencies wanting to globally spy on every online connection.

It might still take a long time until quantum computers with enough qubits are available, but we should try to avoid algorithms that are not quantum-resistant when creating new stuff.

Well, since every other algorithm suffers massively RIGHT NOW from GPU and/or ASIC speedup, if all we have to worry about is quantum computing then I think we are way, way above par.  Right now we don't even know if quantum computing the way it is envisioned will even work, let alone what an actual application will be capable of.  So I say we cross that bridge when we get there.  For now we need GPU and ASIC and botnet and server-farm resistance.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
May 06, 2018, 06:12:06 AM
 #49

It's a challenging proposal.
Prime numbers and numbers having small factors can be ruled out easily. Primality testing is much faster than factoring, and there are algorithms that are very fast at finding small factors (see elliptic curve factoring).

Factorization of large numbers usually means 1024-bit (obsolete) or 2048-bit numbers (4096-bit would be recommended to be on the safe side).
Those numbers are incredibly big (600+ digits long).



This could also lead to finding new factoring algorithms.

Quite a few organisations have an incentive to find factoring algorithms with a better runtime.
Until now, no considerable progress has been made in solving this problem.


Yup, right on.  I think for our purposes we will be working primarily in the 100-200 digit range.  Still, GNFS is the best known way to either look for factors or to determine if these numbers are prime.  Finding one factor of a given length is much easier to do (or prove you can't do) than to prove a number is prime.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
June 09, 2018, 11:43:57 PM
 #50

Like I have been saying recently, this algorithm would require 1 CPU and 1 GPU for optimum mining.  This would be perfect for commodity devices like phones, laptops, and AI hardware that have "APU" (combined CPU/GPU) processors, and would help protect against server farms and IoT botnets.

http://www.mersenneforum.org/showthread.php?t=19312

andrew1carlssin
Jr. Member
*
Offline Offline

Activity: 168
Merit: 3

#Please, read:Daniel Ellsberg,-The Doomsday *wk


View Profile WWW
June 10, 2018, 12:30:34 AM
 #51

Recently I have seen more and more interest from the scientific community in this topic ... I have spent some of my spare time looking for papers about it ... I really find this one interesting ...

Quote
Proofs of Work (PoWs) were introduced [DN92] to enforce that a certain amount of energy was expended for doing some task in an easily verifiable way. In most applications, PoWs force malicious users to accrue a large workload, thus guarding against email spam, denial of service attacks, and, most recently, double-spending in cryptocurrencies such as Bitcoin [Nak08]. Unfortunately, existing PoW schemes are often disconnected from the task they are attached to, so that the work expended is not actually useful in accomplishing that task. More importantly, the work and energy expended is generally not useful for anything except proving that work had in fact been done.

To this end, PoWs are wasteful of real resources and energy and, in the massive use case of Bitcoin, have even been called an "environmental disaster" [And13]. Two early attempts to combat this are Primecoin [Kin13] and Permacoin [MJS+14]. The former suggests a Proof of Work system whose outcome enables the search for chains of prime numbers, whereas the latter repurposes Bitcoin mining resources to achieve distributed storage of archival data, based on Proofs of Retrievability (thus requiring clients to invest not just computational resources, but also storage).

Another line of work studies Proofs of Space [ABFG14, DFKP15, KK14], where a user must dedicate a significant amount of disk space, instead of computing power, to perform their task. This accomplishes a similar constraint on malicious activity to PoWs since a group of users cannot over-perform a task without having access to massive amounts of memory. However, as a typical user has free disk space, such schemes will not similarly waste a resource like energy and pollute the environment.

In this work we present an alternative to these: Proofs of Work whose work is actually useful to solving practical problems.

Paper source: https://eprint.iacr.org/2017/203.pdf

And if I am not mistaken, they talk about it in the cryptocurrency course created by Princeton University on Coursera (week 8, Alternative Mining Puzzles).

Satoshi's book editor; SCIpher - https://pdos.csail.mit.edu/archive/scigen/scipher.html
ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
June 10, 2018, 03:30:06 AM
 #52

Interesting paper, but it appears their chosen problem to solve is the orthogonal vectors problem.  While that is a vector problem that would take some memory, it is probably still very vulnerable to acceleration (GPU/ASIC).  Their main goal is to provide a useful outcome, whereas my goal is parallelization resistance, with useful work as a secondary goal.

ir.hn (OP)
Member
**
Offline Offline

Activity: 322
Merit: 54

Consensus is Constitution


View Profile
June 10, 2018, 06:28:06 AM
 #53

Thanks for pointing out those quantum algorithms for further research, but can we really say that Shor's will be faster than Grover's?  But yes, you seem to be right that Shor's algorithm could be an "ASIC" of sorts for GNFS.

But quantum computing is still hypothetical, and we have ASICs for hashing right now, which is a real, pressing, and proven threat.  So I would rather make my bank vault impervious to explosives and torches and not worry so much about people teleporting in at this point.  I think this new PoW at an absolute minimum would give us 50 years of real fairness.  Bitcoin had what, a couple years?  Litecoin a few years?  We are talking drastic improvement here.

Lots of universities and researchers use regular personal computers for doing GNFS and have been for decades.  There is nothing better.  They can use GPUs for some portions of the work, but the heavy lifting is done by basic commodity CPUs.  Sure, you can design a motherboard to be optimized, but you will only get incremental gains, nothing like an ASIC.  That paper is the only sub-billion-dollar proposed ASIC for large number factoring.  That just shows how impractical it is, and I would guess, reading between the lines of the paper, that it would give less than an order of magnitude speedup vs. the same price and power usage of commodity hardware, and they admit that is their competition.

Off topic, but using liquid nitrogen for cooling could give pretty substantial efficiency gains to any hardware for any algorithm...
