Bitcoin Forum
Author Topic: Possible to make a coin that gets more "memory hard" over time?  (Read 2444 times)
caston (OP)
Hero Member
Activity: 756
Merit: 500
January 30, 2012, 12:43:50 PM
Last edit: January 30, 2012, 01:28:24 PM by caston
#1

Although scrypt coins like TBX/FBX/LTC are quite memory hard compared to bitcoin, there was some criticism at the TBX launch that it wasn't memory hard enough. Would it be possible to make a CPU coin that increases its level of "memory hard" over time?

This could offer protection against botnets because it would require mining rigs to be heavy in low-level CPU cache and light on operating system and userland.

Ideally the future cost of production would have more to do with the cost of L1 and L2 cache than with electricity.

This would be an alternate form of "difficulty". The hashrate could actually be going down while the interest and the number of people (and hardware) hashing the coin go up. The more people hashing, the more memory hard it gets to mine.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
fivebells
Sr. Member
Activity: 462
Merit: 250
January 30, 2012, 02:48:30 PM
#2

I believe the scrypt paper has estimates of how the runtime complexity of the algorithm varies as you change its parameters.  Perhaps you could change those parameters every two weeks instead of changing the hash image criterion.
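For reference, the scrypt cost parameters fivebells is talking about are exposed directly in, for example, Python's hashlib, so the "raise the memory hardness on a schedule" idea is easy to prototype. This is only an illustrative sketch; the parameter values are not a proposal for actual coin parameters, and memory use per call is roughly 128 * r * n bytes.

Code:
import hashlib, os, time

# Illustrative only: hash the same "block header" under increasingly
# memory-hard scrypt parameters.  Memory per call is roughly 128 * r * n bytes.
header = b"example block header"
salt = os.urandom(16)

for n in (2**10, 2**12, 2**14):           # ~128 KiB, ~512 KiB, ~2 MiB with r=1
    start = time.perf_counter()
    digest = hashlib.scrypt(header, salt=salt, n=n, r=1, p=1,
                            maxmem=2**27, dklen=32)
    print(f"n={n:6d}: {time.perf_counter() - start:.3f}s  {digest.hex()[:16]}...")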
tacotime
Legendary
Activity: 1484
Merit: 1005
January 30, 2012, 04:32:18 PM
#3

Not sure what you mean...  Vastly larger amounts of L1 and L2 cache will never happen because the speed at which a cache operates decreases with its size.  The point of multiple cache levels is to offer slower cache with better hit rates when larger amounts of memory are required, without having to offload onto the comparatively very slow RAM.  Hence, you will never see 256MB L1 caches, because the processor would become absurdly slow.

You should read this first: http://en.wikipedia.org/wiki/CPU_cache

I think it's more reasonable to assume that L1 caches will remain similarly sized but become faster in the future with lower hit rates.  So, the algorithm is fine as it is.  If you increase the amount of memory required, you end up with a GPU-favoured implementation of scrypt.

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 01, 2012, 10:09:45 AM
Last edit: February 01, 2012, 10:28:14 AM by caston
#4

Quote from: tacotime on January 30, 2012, 04:32:18 PM
So, the algorithm is fine as it is.  If you increase the amount of memory required, you end up with a GPU-favoured implementation of scrypt.

I don't understand this line, but the rest of your post is welcome commentary that I do intend to provide counter-arguments for.

I would assume that the more memory required, the *less* feasible GPU mining becomes. For instance you could (if ArtForz released the code) mine scrypt coins with a GPU, but it would be so inefficient that you might as well just mine them with the CPU. My understanding is that increasing the amount of memory required further would make GPUs even more pitiful. If you kept increasing the memory required, CPUs would decrease in hash power. Some CPUs with smaller and/or slower caches (or inefficient cache usage) would fail to keep up. This would push innovation to improve memory management in CPUs, as people try to design ways to make CPUs address large cache sizes faster or make more efficient use of L2 and L3 cache.

We would first see more efficient mining software, just as people keep improving the existing scrypt miners, but ultimately we would be pushing for CPUs that are continuously improving at memory-hard math.
Although you argue it is difficult to make large amounts of cache easy to address, there is room for competition and innovation in this area as people push the boundaries of what is possible with the CPU.

Yes, it sounds like a lot of very difficult work, I agree, but that's the whole idea. It is a speculation market for emerging CPU technology.
 

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
ArtForz
Sr. Member
Activity: 406
Merit: 257
February 05, 2012, 03:04:01 AM
#5

Quote from: caston on February 01, 2012, 10:09:45 AM
I would assume that the more memory required, the *less* feasible GPU mining becomes. [...] If you kept increasing the memory required, CPUs would decrease in hash power. Some CPUs with smaller and/or slower caches (or inefficient cache usage) would fail to keep up.
Short version: compared to (1024,1,1), increasing N and r actually helps GPUs and hurts CPUs.
Longer version:
While things are small enough to fit in L2, each CPU core can act mostly independently and has pretty large read/write bandwidth; make it big enough to hit external memory and you've got ~15GB/s shared between all cores.
Meanwhile, GPU caches are too small to be of much use, so... with random reads at 128B/item a 256-bit GDDR5 bus ends up well below 20% of peak bandwidth, while at 1024B/item that percentage increases very significantly.
End result: a 5870 ends up about 6 times as fast as a Phenom II for scrypt(8192,8,1) (without really trying to optimize either side, so YMMV).
The only way to make scrypt win on CPU-vs-GPU again would be to go WAAAY bigger, think a >128MB V array, so you don't have enough RAM on GPUs to run enough parallel instances to mask latencies... but that also means it's REALLY slow (hash/sec? sec/hash!) and you need the same amount of memory to check results... Now who wants a *coin where a normal node needs several seconds and 100s of megs to gigs of RAM just to check a block PoW for validity?
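For anyone checking ArtForz's numbers: scrypt's V array holds N items of 128·r bytes each, so the per-item read size and the total working set follow directly from (N, r). A quick back-of-envelope sketch (the third parameter set is just one way to hit the ">128MB V array" case he mentions):

Code:
# Back-of-envelope scrypt memory figures (V array only), assuming the
# standard layout: N items of 128*r bytes each, read back in random order.

def scrypt_footprint(n, r, p=1):
    item_bytes = 128 * r            # size of one V element (one random read)
    total_bytes = n * item_bytes    # working set per hash instance
    return item_bytes, total_bytes

for params in [(1024, 1, 1), (8192, 8, 1), (131072, 8, 1)]:
    item, total = scrypt_footprint(*params)
    print(f"scrypt{params}: {item} B/item, {total / 2**20:g} MiB per instance")

# scrypt(1024, 1, 1):    128 B/item,  0.125 MiB  (Litecoin-style, fits in L2)
# scrypt(8192, 8, 1):   1024 B/item,      8 MiB  (spills out of cache on most CPUs)
# scrypt(131072, 8, 1): 1024 B/item,    128 MiB  (the ">128MB V array" case)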

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 05, 2012, 08:27:36 AM
Last edit: February 05, 2012, 08:38:07 AM by caston
#6

Thanks ArtForz. I didn't think it would spill over into main or graphics-card RAM, but with such a coin it sounds like desktops would quickly become irrelevant compared to high-end servers with many more RAM slots. It would be interesting to have a coin that starts off CPU-bound, then goes GPU with heaps of video RAM, then back to CPU but requiring heaps of system RAM. Soon we could be looking at 8GB RAM modules. After a while, though, you'd have to mine on high-end server machines. The original idea, though, was to make CPUs better at memory-hard math without resorting to system RAM; I'm not knowledgeable enough to know how this could be possible.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 07, 2012, 08:44:44 AM
Last edit: February 07, 2012, 08:58:01 AM by caston
#7

Although, if someone were to do it, I'd suggest that the difficulty adjustment be tuned to happen slowly enough not to lose hashing power (contributed to the network) from existing CPUs, while still encouraging greater efficiency at memory-hard math, e.g. new state-of-the-art CPUs. The slow hash speed once video-card or main RAM is used would reduce the hash power of the network, and the difficulty would then adjust to be less memory hard, bringing CPUs back in until they are beaten (or rather, very gradually phased out) by CPUs more powerful and efficient at memory-hard math.

This alternate coin would encourage CPU innovation and give us a secure network with a much lower energy footprint.

Ideally it could just plod along for years, like bitcoin did from Jan 2009 to Jan 2011. If cutting-edge CPU ability at memory-hard math only increases very slowly, then the difficulty only goes up very slowly. If there is a sudden innovation rush, a technological breakthrough or an arms race, the difficulty adjustment would be able to cope with that as well.

So the memory-hard difficulty is dependent on the hash power of the network.
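A purely hypothetical sketch of what such a retarget rule could look like: instead of (or alongside) moving the hash target, the network moves scrypt's N parameter, in power-of-two steps (scrypt requires N to be a power of two) and only when block times drift far from the target, so existing CPUs are phased out gradually rather than cut off. The spacing and window constants are placeholders, not values from any real coin.

Code:
# Hypothetical memory-hardness retarget (placeholder constants, not a real coin).
TARGET_SPACING = 600        # assumed seconds per block
RETARGET_BLOCKS = 2016      # assumed blocks per adjustment window

def retarget_n(current_n, actual_window_seconds, min_n=1024):
    expected = TARGET_SPACING * RETARGET_BLOCKS
    if actual_window_seconds < expected / 2:
        return current_n * 2            # hashing got much faster: go more memory hard
    if actual_window_seconds > expected * 2 and current_n > min_n:
        return current_n // 2           # hash power fell away: relax memory hardness
    return current_n                    # within tolerance: leave N alone

# Example: blocks came in three times as fast as planned, so N doubles.
print(retarget_n(4096, TARGET_SPACING * RETARGET_BLOCKS // 3))   # -> 8192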

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 09, 2012, 10:40:32 AM
#8

I could also suggest that miners be allowed to set their own difficulty. For example, if my CPU doesn't have enough cache to mine at the current level of "memory hard", I would set it to a lower level. The block reward would be lower (and possibly slowly shrinking) at this level too. Finding the right balance between keeping lots of people hashing and encouraging the development and deployment of CPUs that are much better at memory-hard math would be ideal. The graduation to higher levels of "memory hard" would probably accelerate the decline of block rewards for those mining at easier levels.
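Again purely hypothetical, a sketch of the "pick a lower memory tier, accept a smaller reward" idea; the reward curve is arbitrary and only illustrates the mechanism, not a worked-out incentive design.

Code:
# Hypothetical tiered rewards: a miner opts into a smaller N than the
# network's current maximum and earns a proportionally smaller block reward.
BASE_REWARD = 50.0   # arbitrary full reward at full memory hardness

def tier_reward(miner_n, network_n):
    miner_n = min(miner_n, network_n)         # can't earn extra by overshooting
    return BASE_REWARD * (miner_n / network_n)

print(tier_reward(2048, 8192))   # 12.5: a quarter of the reward at a quarter of the hardness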

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
February 10, 2012, 03:18:59 PM
Last edit: February 13, 2012, 01:43:59 PM by DeathAndTaxes
#9

I think you are confusing a lot of concepts.  CPUs have horrible electrical efficiency when it comes to mathematical work.  They are a jack of all trades and an ace at none.  CPU is less efficient than GPU, which is less efficient than FPGA, which is less efficient than a sASIC, which is less efficient than an ASIC.

Scrypt doesn't magically make a network more efficient: it makes it less efficient and locks it into the least efficient method of computation possible by making more efficient processing platforms prohibitively expensive.

Bitcoin isn't "GPU friendly".  It is an open network.  The algorithm was chosen for security, not to favor one technology over another, so with Bitcoin you will see:

Unoptimized CPU miners -> OpenCL CPU miners -> GPU miners -> FPGA miners -> sASIC/ASIC miners

In essence the network will evolve to take advantage of more and more efficient technologies as they become competitive.

Scrypt was designed to make efficient parallel execution impossible.  That is its purpose.  It forces execution into a horribly inefficient sequential workload.  This makes it a technological dead end.  The network will remain inefficient and get continually more inefficient on a relative basis over time.  It was designed to make brute-force searches painfully inefficient.  And what is a proof of work?  A brute-force search.
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 13, 2012, 01:27:12 PM
#10

Quote from: DeathAndTaxes on February 10, 2012, 03:18:59 PM
I think you are confusing a lot of concepts.  CPUs have horrible electrical efficiency when it comes to mathematical work.  They are a jack of all trades and an ace at none.  CPU is less efficient than GPU, which is less efficient than FPGA, which is less efficient than a sASIC, which is less efficient than an ASIC.


And an ASIC is less efficient than DNA-based computing, but that's years away ;)

Anyway, as ignorant as I may be, that didn't stop Forrest Gump ;)

I may even pledge some kind of a bounty towards the development of this coin. It won't be a huge one though. Just for a bit of fun really.

Quote from: DeathAndTaxes on February 10, 2012, 03:18:59 PM
Scrypt doesn't magically make a network more efficient: it makes it less efficient and locks it into the least efficient method of computation possible by making more efficient processing platforms prohibitively expensive. [...] Scrypt was designed to make efficient parallel execution impossible.  That is its purpose.  It forces execution into a horribly inefficient sequential workload.  This makes it a technological dead end.


Yes, but I'm arguing that this is the point. We want to encourage CPUs to become more efficient at memory-hard math. Sure, if people can load lots of cache onto FPGAs or sASICs/ASICs and make them good at memory-hard math, then that's certainly no small feat either.

All we need to do is be a leap ahead of LTC and SolidCoin and we already have a market. If we can keep that going for a few years we are bound to see some exciting developments in that time.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
February 13, 2012, 01:43:27 PM
#11

Quote from: caston on February 13, 2012, 01:27:12 PM
Yes, but I'm arguing that this is the point. We want to encourage CPUs to become more efficient at memory-hard math. [...] All we need to do is be a leap ahead of LTC and SolidCoin and we already have a market.

You aren't going to encourage shit.  Do you understand the development and production cost of modern CPUs?  It is in the billions of dollars.  AMD is barely hanging on because the never-ending production cycle is so cash intensive.

Nobody, and I mean nobody, is going to waste critical resources making chips with larger L1 caches that perform worse at every application under the sun just so they can be better at mining.

Hell, Bitcoin's "hardware market" is thousands of times larger, and AMD won't even devote resources to fixing bugs in SOFTWARE (a couple of magnitudes cheaper) that would make Bitcoin mining more efficient.

I am not sure if you are being silly, confused, or just insanely naive.
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 18, 2012, 07:54:50 AM
#12

That sounds like a challenge. Remember, I said it would be a means of speculating on future CPU innovation. If you look at history there have been fundamental breakthroughs in CPU technology at various times. We are going through a period of steady innovation.

I would say something like this:

If you don't believe that we will make some incredible breakthroughs in CPU tech in the future, don't invest in this hypothetical coin.

If you do want to speculate that there will be incredible breakthroughs in CPU tech, then do invest in this coin.

There may be a faithful small number of believers who keep hashing away waiting for the breakthrough in CPU tech. It may never come, or it may be decades away, or it may happen in just a few years. People like you will keep saying it will never happen, and those people would never invest in this coin. If it does happen, those people will wish they had, and the people who did decide to speculate in it will win big. Their initial investment would really be more like taking up a hobby than taking a risk, with the upside of a huge potential payoff.

It is for speculation; i.e. no one knows what will happen in the future, but allowing the creation of markets for speculation on technological innovation would be a big step forward.




bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
February 19, 2012, 11:25:48 PM
#13

Quote from: caston on February 18, 2012, 07:54:50 AM
That sounds like a challenge. Remember, I said it would be a means of speculating on future CPU innovation. [...] It is for speculation; i.e. no one knows what will happen in the future, but allowing the creation of markets for speculation on technological innovation would be a big step forward.

It isn't an innovation.  You could make a CPU right now with current tech that had huge amounts of L1 cache.  It would do awesome against things like scrypt and completely suck at everything else (you know, the other 99.9999999999999999999999999999% of the computing world).

More L1 cache = slower L1 cache, thus the cache remains small to maximize performance and reduce latency when the cache hits.

Computer memory works on a pyramid model which maximizes performance per unit of cost.

L1 cache = smallest and lowest latency
L2 cache = larger, higher latency
L3 cache = usually shared between multiple cores, and even higher latency
main segment memory = massively larger in size and at least two orders of magnitude higher latency
off segment memory = in multi-socket (not multi-core) systems the memory of another CPU can be used, but there is an additional latency hit
virtual memory = nearly unlimited in size but 2 to 3 orders of magnitude slower than main memory
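A rough, order-of-magnitude illustration of why that pyramid matters for a memory-hard hash: each scrypt hash performs roughly N dependent random reads into V, so per-core throughput is bounded by about 1 / (N × access latency). The latency figures below are generic ballpark numbers, not measurements of any particular CPU.

Code:
# Ballpark access latencies; real numbers vary a lot by CPU generation.
LATENCY_NS = {"L1": 1, "L2": 4, "L3": 15, "DRAM": 80}

def max_hashes_per_sec(n_reads, level):
    # Upper bound only: one scrypt hash needs ~N dependent random reads into V,
    # so a single core can't exceed 1 / (N * access latency) hashes per second.
    return 1e9 / (n_reads * LATENCY_NS[level])

for level in LATENCY_NS:
    print(f"N=1024, V held in {level:4s}: <= {max_hashes_per_sec(1024, level):10,.0f} hash/s per core")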
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 20, 2012, 09:10:51 PM
#14

There is still potential to a) reduce the latency at each step of the pyramid or b) throw out the pyramid model and come up with something new entirely.

While you say the niche market for software that would make use of this hardware is very small, it would almost certainly be associated with very high profit.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
psiborg
Newbie
Activity: 25
Merit: 0
February 21, 2012, 05:12:59 PM
#15

Unless that niche market starts promising big wads of money to CPU manufacturers first, these designs probably won't see the light of day anytime soon.
caston (OP)
Hero Member
Activity: 756
Merit: 500
February 24, 2012, 12:31:29 PM
#16

Quote from: psiborg on February 21, 2012, 05:12:59 PM
Unless that niche market starts promising big wads of money to CPU manufacturers first, these designs probably won't see the light of day anytime soon.

That's not the coin's responsibility.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
tromp
Legendary
Activity: 976
Merit: 1076
February 11, 2014, 05:22:02 PM
#17

Quote from: ArtForz on February 05, 2012, 03:04:01 AM
The only way to make scrypt win on CPU-vs-GPU again would be to go WAAAY bigger, think a >128MB V array [...] but that also means it's REALLY slow (hash/sec? sec/hash!) and you need the same amount of memory to check results... Now who wants a *coin where a normal node needs several seconds and 100s of megs to gigs of RAM just to check a block PoW for validity?

A proof of work can both require tons of memory and be trivially verifiable.
See Cuckoo Cycle at https://github.com/tromp/cuckoo
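A much-simplified sketch of the verification side of that idea: the proof is a fixed set of 42 nonces whose derived edges must form a single cycle, so checking it takes a few dozen hashes and almost no memory, no matter how much memory was needed to find the cycle. The edge_endpoints function here is a stand-in; the actual design in the repo derives edges with siphash-2-4 and uses specific graph sizes.

Code:
import hashlib
from collections import defaultdict

PROOF_SIZE = 42   # Cuckoo Cycle proofs are 42-edge cycles

def edge_endpoints(header, nonce, num_nodes):
    # Stand-in edge generator: the real design uses siphash-2-4 keyed by the
    # header; blake2b here only illustrates "header + nonce -> one edge".
    h = hashlib.blake2b(header + nonce.to_bytes(8, "little"), digest_size=16).digest()
    u = int.from_bytes(h[:8], "little") % (num_nodes // 2)
    v = int.from_bytes(h[8:], "little") % (num_nodes // 2)
    return ("U", u), ("V", v)    # bipartite graph: U-side and V-side nodes

def verify(header, proof_nonces, num_nodes):
    # Cheap verification: recompute the 42 edges and check they form one cycle.
    if len(set(proof_nonces)) != PROOF_SIZE:
        return False
    adj = defaultdict(list)
    for nonce in proof_nonces:
        u, v = edge_endpoints(header, nonce, num_nodes)
        adj[u].append(v)
        adj[v].append(u)
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False                  # every node on a single cycle has degree 2
    start = next(iter(adj))           # walk the cycle; it must close after 42 steps
    prev, cur, steps = None, start, 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur, steps = cur, nxt, steps + 1
        if cur == start:
            return steps == PROOF_SIZE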
tromp
Legendary
Activity: 976
Merit: 1076
February 11, 2014, 05:24:14 PM
#18

Quote from: caston on February 13, 2012, 01:27:12 PM
I may even pledge some kind of a bounty towards development of this coin. It won't be a huge one though. Just for a bit of fun really.

The coin isn't there yet, but Cuckoo Cycle appears to be the proof of work that you want.
tromp
Legendary
Activity: 976
Merit: 1076
February 11, 2014, 07:40:48 PM
#19

Quote
Quote from: tromp on February 11, 2014, 05:24:14 PM
Quote from: caston on February 13, 2012, 01:27:12 PM
I may even pledge some kind of a bounty towards development of this coin. It won't be a huge one though. Just for a bit of fun really.
The coin isn't there yet, but Cuckoo Cycle appears to be the proof of work that you want.
My God, dude! This thread is from Feb 2012!

I know :-(

Anyway, the misconception that a memory-hard PoW must be slow to verify appears to be widespread, so I wanted to rectify it in case others stumble on this old thread.