Bitcoin Forum

Alternate cryptocurrencies => Altcoin Discussion => Topic started by: Frodek on May 13, 2012, 09:31:47 AM



Title: Scrypt GPU-hostile / CPU-friendly
Post by: Frodek on May 13, 2012, 09:31:47 AM
I'm looking for code that generates hashes in a GPU-hostile / CPU-friendly way.
I found two:
https://github.com/Lolcust/Tenebrix-miner
https://github.com/ArtForz/cpuminer
Are the generators the same, with ArtForz's just being faster? Or is ArtForz's for Bitcoin, and Lolcust's for SolidCoin/Litecoin?
Which is the main .c file containing the algorithm?
Is there a sample generated hash, for example the hash of "abcd", so I can check that my implementation is correct?
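One way to generate reference digests for checking an implementation is Python's built-in hashlib.scrypt (available in Python 3.6+ when compiled against OpenSSL 1.1+). The parameters below are the ones Tenebrix/Litecoin use for proof of work (N=1024, r=1, p=1, 32-byte output); in the coins themselves the password and salt are both the block header, so using "abcd" for both here is just an illustrative stand-in, not an official test vector:

```python
import hashlib

# Tenebrix/Litecoin-style scrypt parameters: N=1024, r=1, p=1, 32-byte digest.
# In the coins, password and salt are both the 80-byte block header;
# b"abcd" for both is only an example input for cross-checking.
digest = hashlib.scrypt(b"abcd", salt=b"abcd", n=1024, r=1, p=1, dklen=32)
print(digest.hex())
```

An independent implementation fed the same password, salt, and parameters should reproduce this hex string exactly.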


Title: Re: Scrypt GPU-hostile / CPU-friendly
Post by: gmaxwell on May 13, 2012, 05:36:21 PM
I'm looking for code that generates hashes in a GPU-hostile / CPU-friendly way.
I found two:
https://github.com/Lolcust/Tenebrix-miner
https://github.com/ArtForz/cpuminer
Are the generators the same, with ArtForz's just being faster? Or is ArtForz's for Bitcoin, and Lolcust's for SolidCoin/Litecoin?
Which is the main .c file containing the algorithm?
Is there a sample generated hash, for example the hash of "abcd", so I can check that my implementation is correct?

Scrypt is not really "GPU-hostile".  Please direct your attention to the scrypt paper.  Scrypt is designed to use more of the large, fast DRAM that desktop computers have, so that attackers with ASIC farms get less of a speedup over desktop computers built on the same ASIC process.  GPUs, however, also have lots of very fast DRAM, and there now exist fairly fast GPU implementations of scrypt.
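The memory cost the paper describes comes from scrypt's ROMix stage, which fills a table of N blocks of 128*r bytes each, so the working set is roughly 128 * N * r bytes. A back-of-envelope sketch (the parameter choices below are the commonly cited ones: the paper's interactive-login suggestion versus the Tenebrix/Litecoin mining settings):

```python
def scrypt_memory_bytes(n, r):
    # ROMix keeps a table of N blocks, each 128*r bytes,
    # so the dominant memory cost is ~128 * N * r bytes
    # (ignoring small per-call overheads).
    return 128 * n * r

# Paper-style interactive parameters (N=2**14, r=8): ~16 MiB per hash.
print(scrypt_memory_bytes(2**14, 8))  # 16777216
# Tenebrix/Litecoin mining parameters (N=1024, r=1): only 128 KiB,
# small enough to fit many instances in GPU memory.
print(scrypt_memory_bytes(1024, 1))   # 131072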


Title: Re: Scrypt GPU-hostile / CPU-friendly
Post by: etotheipi on May 13, 2012, 08:11:29 PM
... GPUs, however, also have lots of very fast DRAM, and there now exist fairly fast GPU implementations of scrypt.

Scrypt was designed so that the memory-per-thread requirement can be customized.  A quad-core CPU using its 8 GB of RAM has 2 GB/core.  A GPU with 2 GB of RAM for 1600 cores has only 1-2 MB/core.

The goal of scrypt was to let you tailor the computational problem so that massively parallel processors cannot execute too many threads simultaneously.  I don't know exactly what the designer had in mind, but I bet GPUs were considered.  And it will be a long time before that gap is closed, if ever.
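The per-core arithmetic above can be written out explicitly (the core counts and RAM sizes are the hypothetical hardware from the post, not measurements):

```python
# Back-of-envelope memory-per-core comparison from the post.
cpu_ram_mb = 8 * 1024   # 8 GB of system RAM
cpu_cores = 4           # quad-core CPU
gpu_ram_mb = 2 * 1024   # 2 GB of GPU RAM
gpu_cores = 1600        # shader cores on a midrange GPU

print(cpu_ram_mb / cpu_cores)  # 2048.0 MB per CPU core
print(gpu_ram_mb / gpu_cores)  # 1.28 MB per GPU core
```

So if the scrypt parameters are set high enough that each hash needs tens of megabytes, a GPU can only run a small fraction of its cores at once, which is the disarming effect described above.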