Bitcoin Forum
Author Topic: Memcoin Protocol (A CPU/GPU-oriented, very memory hard chain)  (Read 6755 times)
scrybe
Sr. Member | Activity: 350 | Merit: 250
November 12, 2012, 05:30:28 PM  #21


Quote
3.1) The scrypt parameter r is initialized as 128, so the initial memory required per scrypt process is 16 MB.  The value of r will be multiplied by 3.5 every 1050 days (604800), e.g., a little less than 3 years after chain creation r will be 448 and the required memory will be 56 MB per thread.  This is in keeping with Murphy's law, and should ensure that the chain remains CPU/GPU-minable for a long time.


I see 2 big problems here.
  • If it's Murphy's law you are working against, you are doomed to failure. Murphy always wins, and in this case I believe it means that ASICs running your coin will appear BEFORE the software implementation is done.
  • Your memory requirement grows over absolute time, so it will eventually outpace Moore's law, leading to an increasingly impractical cost of participation that eventually requires more RAM than is physically available.

The first one is a funny typo, but the second is a big issue.

At SOME point we are going to see Moore's law slow down and maybe even plateau as we reach atomic densities.  This protocol will just get more and more memory intensive until merely scanning the blockchain takes millions of dollars worth of hardware, let alone mining. Bitcoin solves this issue by NOT using time as an absolute factor, but hashrate/difficulty instead. This means that whether Moore's law stops working tomorrow or in 2050, the network will self-stabilize difficulty against the available computing power.
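Just to put numbers on it, here's a quick sketch (the 16 MB start and the x3.5-per-1050-days schedule come straight from your quoted spec; the script itself is only mine, for illustration):

Code:
# Memory per scrypt thread implied by the quoted schedule:
# 16 MB at launch, multiplied by 3.5 every 1050 days.
def mem_mb(days):
    return 16 * 3.5 ** (days / 1050.0)

for years in (3, 6, 9, 12):
    print(years, "years:", round(mem_mb(years * 365)), "MB/thread")
# 3 years: ~59 MB, 6 years: ~218 MB, 9 years: ~806 MB,
# 12 years: ~3 GB -- and nothing in the schedule ever pulls it back down.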

2 conclusions I'm drawing from this:
  • We need to prevent excessive computation from being required to validate the blockchain or operate as a client. The more asymmetric we can make the verification:generation ratio, the better the efficiency for clients compared to the mining effort.
  • Don't tie ANYTHING to an absolute growth direction except the number of objects. Assume that any adjustments that are made might need to be reversed in order to keep the network operating.

Suggestions:

Maybe your difficulty adjustment should be a composite metric that includes harder targets AND more RAM, instead of having them tied to 2 different events?

As far as increasing the ratio between computation and verification goes, would it be possible to sign each block twice? Sign it once with a simple algo, then sign the block, the simple signature, and the nonce with the complex algo, and retain both hashes. Mining could require full verification of the previous complex hashes, but that only needs to cover recent blocks.
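To make the idea concrete, something like this (a purely hypothetical sketch; the scrypt parameters and names are placeholders I picked, not anything from the spec):

Code:
import hashlib

def simple_hash(header: bytes) -> bytes:
    # cheap hash: any client can verify a block with one SHA-256 pass
    return hashlib.sha256(header).digest()

def complex_hash(header: bytes, simple_sig: bytes, nonce: int) -> bytes:
    # expensive memory-hard hash over the block plus the simple signature;
    # n/r/p are placeholder values (128*r*n bytes = 16 MB here)
    data = header + simple_sig + nonce.to_bytes(8, "little")
    return hashlib.scrypt(data, salt=simple_sig, n=1024, r=128, p=1,
                          maxmem=64 * 1024 * 1024, dklen=32)

# Miners store both digests; clients check only simple_hash, while full
# verification of complex_hash is only demanded for recent blocks.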

Those are my thoughts so far, hope they help.

"...as simple as possible, but no simpler" -AE
BTC/TRC/FRC: 1ScrybeSNcjqgpPeYNgvdxANArqoC6i5u Ripple:rf9gutfmGB8CH39W2PCeRbLWMKRauYyVfx LTC:LadmiD6tXq7gFZvMibhFUZegUHKXgbu1Gb
tacotime (OP)
Legendary | Activity: 1484 | Merit: 1005
November 12, 2012, 06:03:17 PM (last edit: November 12, 2012, 06:19:03 PM)  #22

Quote
I see 2 big problems here.
  • If it's Murphy's law you are working against, you are doomed to failure. Murphy always wins, and in this case I believe it means that ASICs running your coin will appear BEFORE the software implementation is done.
  • Your memory requirement grows over absolute time, so it will eventually outpace Moore's law, leading to an increasingly impractical cost of participation that eventually requires more RAM than is physically available.

The first one is a funny typo, but the second is a big issue.

At SOME point we are going to see Moore's law slow down and maybe even plateau as we reach atomic densities.  This protocol will just get more and more memory intensive until merely scanning the blockchain takes millions of dollars worth of hardware, let alone mining. Bitcoin solves this issue by NOT using time as an absolute factor, but hashrate/difficulty instead. This means that whether Moore's law stops working tomorrow or in 2050, the network will self-stabilize difficulty against the available computing power.

I fixed that, thank you. Cheesy  I have been wondering about this myself, as there are only 10 angstroms to a nanometer and we're moving to sub-10-nanometer designs in the next decade.

Quote
2 conclusions I'm drawing from this:
  • We need to prevent excessive computation from being required to validate the blockchain or operate as a client. The more asymmetric we can make the verification:generation ratio, the better the efficiency for clients compared to the mining effort.
  • Don't tie ANYTHING to an absolute growth direction except the number of objects. Assume that any adjustments that are made might need to be reversed in order to keep the network operating.

I suppose that's kind of the fail-safe of the Bitcoin network: difficulty is always reversible instead of always increasing.

Quote
Suggestions:

Maybe your difficulty adjustment should be a composite metric that includes harder targets AND more RAM, instead of having them tied to 2 different events?

As far as increasing the ratio between computation and verification goes, would it be possible to sign each block twice? Sign it once with a simple algo, then sign the block, the simple signature, and the nonce with the complex algo, and retain both hashes. Mining could require full verification of the previous complex hashes, but that only needs to cover recent blocks.

Those are my thoughts so far, hope they help.

Well, I think a possible composite algorithm for difficulty adjustment could be a long-term retarget for memory (35, 70, or 140 days) and a short-term retarget (3.5 days) for difficulty.  The problem with this approach is that if the network becomes inundated with miners, the memory retarget could become too large and destroy the infrastructure of the chain.  I think if we're using 35-day retargets for memory, the maximum increase should be 5%-10% while the maximum decrease should be 20%-50%.
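In rough pseudocode it might look like this (my sketch only; the window lengths and clamp percentages are the numbers above, everything else is invented for the example):

Code:
def retarget(value, actual_days, target_days, max_up, max_down):
    # generic clamped retarget: scale by how fast the window completed,
    # but never raise by more than max_up or cut by more than max_down
    ratio = target_days / actual_days
    ratio = max(1.0 - max_down, min(1.0 + max_up, ratio))
    return value * ratio

difficulty = 1000.0  # arbitrary demo values
memory_mb = 16.0

# short-term: difficulty on a 3.5-day window (bitcoin-style 4x/0.25x bounds)
difficulty = retarget(difficulty, actual_days=3.0, target_days=3.5,
                      max_up=3.0, max_down=0.75)
# long-term: memory on a 35-day window, deliberately asymmetric
# (raise at most 10%, cut as much as 50%)
memory_mb = retarget(memory_mb, actual_days=30.0, target_days=35.0,
                     max_up=0.10, max_down=0.50)
print(difficulty, memory_mb)  # ~1166.7, 17.6 -- both nudged upward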

The last point I'm not knowledgeable about.  I think you'd need two symmetric merkle trees, one with the simple and one with the complex hash.  The complex hash would need to be solved first, and then whoever solves it would have to solve the simple hash at approximately the same time (it'd have to be really easy, to make it near instantaneous) and so would sign for both.  The simple "dummy" tree nodes would just contain data about who solved the block, what the transactions were, and what the network settings were at the time.  This would then have to be constantly evaluated against the master tree by "master nodes" with full hashing capabilities, to ensure that both trees are congruent; this would expose the network to master-node sybil attacks, though.  Master nodes would then be the source of the dummy tree for clients.
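Roughly what I picture a dummy-tree record looking like (entirely hypothetical field names, just to make the congruence check concrete):

Code:
from dataclasses import dataclass

@dataclass
class DummyNode:
    solver: str          # who solved the block
    txids: tuple         # what the transactions were
    settings: dict       # network settings at the time
    complex_hash: bytes  # commitment to the matching master-tree node

def congruent(dummy: DummyNode, master_hash: bytes) -> bool:
    # a master node with full hashing capability recomputes the complex
    # hash; clients only check the dummy record commits to the same value
    return dummy.complex_hash == master_hash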

Probably any such network simplification algorithm is going to expose clients using "dummy trees" or other simplified merkle tree structures to this sort of attack.  As Bitcoin will eventually face a data storage problem stemming from the same issue, a solution will probably be found sometime soon; I'm just not sure what it is.

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
scrybe
Sr. Member | Activity: 350 | Merit: 250
November 12, 2012, 08:33:04 PM  #23


Quote
Well, I think a possible composite algorithm for difficulty adjustment could be a long-term retarget for memory (35, 70, or 140 days) and a short-term retarget (3.5 days) for difficulty.  The problem with this approach is that if the network becomes inundated with miners, the memory retarget could become too large and destroy the infrastructure of the chain.  I think if we're using 35-day retargets for memory, the maximum increase should be 5%-10% while the maximum decrease should be 20%-50%.


Hmmm, I had not thought about part of that. The hardware that supports the network is going to have to be flexible to a level we don't see today in order to deal with the memory growth. This means that in theory there will never be a point where you can buy hardware and know how long it will work for, unless we are using real time as an input. In fact, we are talking about miners being literally useless if the memory requirement gets too high, right? Not just slow, but unable to execute the functions?

Great, that introduces a potential new attack. If you can manufacture systems with significantly more RAM than your competition, then you could manipulate the difficulty to render their systems ACTUALLY useless once the memory difficulty gets too high. Battle of the supercomputers.

Am I interpreting this right?
Memory requirement based on time == growth of the requirement without relation to network realities
Memory requirement based on difficulty == potential attack vector

Limiting the percentage of change over time/blocks just slows down the attack, but still requires other miners to acquire systems with more RAM to continue to compete once the full correction is felt. It does not solve the root issue, just blunts the volatility.
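Back-of-the-envelope on what the clamp actually buys (my arithmetic only, assuming a 5% per-period cap and 35-day periods):

Code:
import math

def periods_to_exceed(start_mb, rival_budget_mb, max_up=0.05):
    # periods an attacker needs to push the memory requirement past a
    # rival's RAM budget, raising it at the clamp limit every period
    return math.ceil(math.log(rival_budget_mb / start_mb)
                     / math.log(1.0 + max_up))

p = periods_to_exceed(16, 64)          # 29 periods from 16 MB past 64 MB
print(p, "periods =", p * 35, "days")  # 1015 days, ~2.8 years of sustained push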

"...as simple as possible, but no simpler" -AE
BTC/TRC/FRC: 1ScrybeSNcjqgpPeYNgvdxANArqoC6i5u Ripple:rf9gutfmGB8CH39W2PCeRbLWMKRauYyVfx LTC:LadmiD6tXq7gFZvMibhFUZegUHKXgbu1Gb
tacotime (OP)
Legendary | Activity: 1484 | Merit: 1005
November 15, 2012, 04:58:23 AM  #24

Quote
Am I interpreting this right?
Memory requirement based on time == growth of the requirement without relation to network realities
Memory requirement based on difficulty == potential attack vector

Limiting the percentage of change over time/blocks just slows down the attack, but still requires other miners to acquire systems with more RAM to continue to compete once the full correction is felt. It does not solve the root issue, just blunts the volatility.

Well, not if the starting quantity of RAM is low enough and the adjustments are small enough.

For instance, if the starting quantity of RAM is 16 MB and adjustments are capped at 5-10% every 35 days, within a year we'll be at ~46 MB of RAM required per thread with 10% adjustments every 35 days.  Thus an attack on the network to drive RAM consumption to unsustainable levels would have to be maintained for years to really influence the mining market and monopolize the coin.

Limiting the adjustments to the 5-10% range per 35-day period should allow the market to be self-determining.  The important thing is starting with a flexible memory size and ensuring that, even with maximal scaling, it will not exceed a certain threshold for at least 4 years, say 512 MB/thread.  The memory difficulty settings in terms of time would have to be approached with caution to prevent attacks; perhaps a maximum of 5% in the upwards direction and 20% in the downwards direction would be ideal (over ~4 years, or 44 35-day periods, the maximum increase in memory usage would yield 137 MB).
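Checking those figures (straight compounding of the numbers above):

Code:
start = 16.0
# +10% every 35 days: eleven periods (385 days, i.e. about a year)
print(start * 1.10 ** 11)  # ~45.6 -> the ~46 MB figure
# +5% every 35 days for ~4 years (44 periods)
print(start * 1.05 ** 44)  # ~136.9 -> the 137 MB ceiling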

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
scrybe
Sr. Member | Activity: 350 | Merit: 250
November 15, 2012, 07:28:16 AM  #25

Quote
Am I interpreting this right?
Memory requirement based on time == growth of the requirement without relation to network realities
Memory requirement based on difficulty == potential attack vector

Limiting the percentage of change over time/blocks just slows down the attack, but still requires other miners to acquire systems with more RAM to continue to compete once the full correction is felt. It does not solve the root issue, just blunts the volatility.

Well, not if the starting quantity of RAM is low enough and the adjustments are small enough.

For instance, if the starting quantity of RAM is 16 MB and adjustments are capped at 5-10% every 35 days, within a year we'll be at ~46 MB of RAM required per thread with 10% adjustments every 35 days.  Thus an attack on the network to drive RAM consumption to unsustainable levels would have to be maintained for years to really influence the mining market and monopolize the coin.

Limiting the adjustments to the 5-10% range per 35-day period should allow the market to be self-determining.  The important thing is starting with a flexible memory size and ensuring that, even with maximal scaling, it will not exceed a certain threshold for at least 4 years, say 512 MB/thread.  The memory difficulty settings in terms of time would have to be approached with caution to prevent attacks; perhaps a maximum of 5% in the upwards direction and 20% in the downwards direction would be ideal (over ~4 years, or 44 35-day periods, the maximum increase in memory usage would yield 137 MB).

OK, so given these values, I can be certain that the memory will not grow beyond 137 MB in 4 years (and I know the maximum it could grow to in 1 year as well). That is plenty of lead time to design and ship an ASIC with a lot of RAM nearby. I think if your goal is to avoid ASIC competition with CPU and GPU mining, you are not going to find it down this path. I started out mostly playing devil's advocate on this one, but I'm now pretty convinced that this is not really going to work out.

Maybe I'm missing the point, but this coin does not appear to have an inherent advantage over Litecoin.

"...as simple as possible, but no simpler" -AE
BTC/TRC/FRC: 1ScrybeSNcjqgpPeYNgvdxANArqoC6i5u Ripple:rf9gutfmGB8CH39W2PCeRbLWMKRauYyVfx LTC:LadmiD6tXq7gFZvMibhFUZegUHKXgbu1Gb
tacotime (OP)
Legendary | Activity: 1484 | Merit: 1005
November 15, 2012, 03:38:04 PM  #26

Well, that's the point of talking about it.

It brings up a very important point about Litecoin's memory hardness: it seems to only partially protect the chain from ASIC mining. A truly GPU-only chain will probably need a hardware-specific hashing algorithm that takes advantage of all of the design features of AMD GPUs.

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns