Bitcoin Forum
Author Topic: Hashing choices in litecoin  (Read 2605 times)
BeeCee1
Member
**
Offline Offline

Activity: 116


View Profile
October 19, 2011, 01:37:16 AM
 #1

scrypt is designed to scale in both processing difficulty and memory use; however, Litecoin doesn't seem to have any way to change either. This looks like a big oversight: in a few years it won't be so GPU-unfriendly anymore.

For the Salsa20 hash, Litecoin uses the 8-round version. Attacks have been published against 7 rounds, which doesn't leave much margin of error; the 12-round variant seems much safer.

Does anyone know why these choices were made? Were they just copied from Tenebrix?
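For reference, the tunability being discussed comes down to scrypt's (N, r, p) cost parameters, which Litecoin fixes at N=1024, r=1, p=1, requiring roughly 128·r·N bytes = 128 KiB per hash. A minimal sketch using Python's `hashlib.scrypt`; the 80-byte header here is a stand-in, not real block data:

```python
import hashlib

# Litecoin's scrypt cost parameters, fixed at launch:
#   n = 1024 (CPU/memory cost), r = 1 (block size), p = 1 (parallelism)
# Memory needed per hash is roughly 128 * r * n bytes = 128 KiB.
header = bytes(80)  # stand-in for an 80-byte block header
digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
print(digest.hex())
```

Raising N would raise both the compute and the memory cost per hash, which is exactly the knob the thread is asking about; with r=1 and N=1024 the working set fits comfortably in a modern cache.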
ElectricMucus
Legendary
*
Offline Offline

Activity: 1540


Drama Junkie


View Profile
October 19, 2011, 07:24:32 AM
 #2

I don't see a problem with GPU unfriendliness, since the amount of cache required would probably still be too large for future GPUs. It doesn't make much sense to attach a lot of memory to a GPU ALU, since there isn't any real use for it.

However, if there is ever a massively parallel processor array with enough capacity to buffer the values, traditional CPUs would become obsolete very quickly.

First they ignore you, then they laugh at you, then they keep laughing, then they start choking on their laughter, and then they go and catch their breath. Then they start laughing even more.
BeeCee1
Member
**
Offline Offline

Activity: 116


View Profile
October 20, 2011, 01:49:50 AM
 #3

Quote from: ElectricMucus on October 19, 2011, 07:24:32 AM
I don't see a problem with GPU unfriendliness, since the amount of cache required would probably still be too large for future GPUs. It doesn't make much sense to attach a lot of memory to a GPU ALU, since there isn't any real use for it.

GPUs already have 1 to 2 GB of memory on the card, and they are very good at streaming data. Yes, scrypt uses random access to limit the benefit of streaming memory, but that really only adds latency. Even if GPU ALUs don't get enough cache in the future, threading would let you hide the latency by running more processes.

Take the extreme case: let's say Litecoin is still around in 15 or 20 years. Are you really convinced that GPUs won't get cache or threading in that time?
tacotime
Legendary
*
Offline Offline

Activity: 1484



View Profile
October 20, 2011, 02:14:49 AM
 #4

Quote from: ElectricMucus on October 19, 2011, 07:24:32 AM
I don't see a problem with GPU unfriendliness, since the amount of cache required would probably still be too large for future GPUs. It doesn't make much sense to attach a lot of memory to a GPU ALU, since there isn't any real use for it.

Quote from: BeeCee1 on October 20, 2011, 01:49:50 AM
GPUs already have 1 to 2 GB of memory on the card, and they are very good at streaming data. Yes, scrypt uses random access to limit the benefit of streaming memory, but that really only adds latency. Even if GPU ALUs don't get enough cache in the future, threading would let you hide the latency by running more processes.

Take the extreme case: let's say Litecoin is still around in 15 or 20 years. Are you really convinced that GPUs won't get cache or threading in that time?

In 5 years the standard number of cores on an x86 CPU will probably be 32, and they will run at 6 GHz+ on stock clocks, so it's probably irrelevant.

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
BeeCee1
Member
**
Offline Offline

Activity: 116


View Profile
October 20, 2011, 02:27:23 AM
 #5

Quote from: tacotime on October 20, 2011, 02:14:49 AM
In 5 years the standard number of cores on an x86 CPU will probably be 32, and they will run at 6 GHz+ on stock clocks, so it's probably irrelevant.

Which brings us back to the original question: why doesn't Litecoin scale the processing power and memory usage over time?
coblee
Donator
Legendary
*
Offline Offline

Activity: 1078


firstbits.com/1ce5j


View Profile WWW
October 20, 2011, 02:41:31 AM
 #6

Quote from: tacotime on October 20, 2011, 02:14:49 AM
In 5 years the standard number of cores on an x86 CPU will probably be 32, and they will run at 6 GHz+ on stock clocks, so it's probably irrelevant.

Quote from: BeeCee1 on October 20, 2011, 02:27:23 AM
Which brings us back to the original question: why doesn't Litecoin scale the processing power and memory usage over time?

As CPUs get faster, they will be able to compute scrypt faster, so the difficulty will adjust accordingly. There's no need to scale the processing power and memory usage. The point of using scrypt is that anyone can mine with their CPU without being at an overwhelming disadvantage compared to people with GPU farms.

And yes, Litecoin copied Tenebrix's scrypt parameters so that miners stay compatible. If in the future there's a real need to update them to keep things fair, we will consider it then.
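The difficulty adjustment coblee describes can be sketched in a few lines. This is a simplified illustration of Bitcoin-style retargeting (which Litecoin inherits, retargeting every 2016 blocks against a 2.5-minute block interval); the function name and numbers are illustrative, not the client's actual code:

```python
def retarget(old_target: int, actual_timespan: int, expected_timespan: int) -> int:
    """Raise the target (lower the difficulty) if blocks came too slowly,
    lower it if they came too quickly; clamp to a 4x swing per period."""
    clamped = max(expected_timespan // 4, min(actual_timespan, expected_timespan * 4))
    return old_target * clamped // expected_timespan

# Blocks found twice as fast as intended -> target halves (difficulty doubles):
print(retarget(1_000_000, 50_000, 100_000))  # 500000
```

So as hardware speeds up, the measured timespan shrinks and the target drops automatically; the scrypt parameters themselves never need to change for the block rate to stay constant.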

ElectricMucus
Legendary
*
Offline Offline

Activity: 1540


Drama Junkie


View Profile
October 20, 2011, 11:33:19 AM
 #7

Quote from: BeeCee1 on October 20, 2011, 01:49:50 AM
Take the extreme case: let's say Litecoin is still around in 15 or 20 years. Are you really convinced that GPUs won't get cache or threading in that time?
I honestly don't know.
But that depends on what tasks a GPU is supposed to do in the future. If the basic rendering techniques don't change, I don't see a reason why they wouldn't just pack more and more ALUs onto the chip and leave the architecture as it is.
And if they do change... how are we supposed to tell what's coming?

There might be a future in voxel graphics and ray tracing, both of which would benefit from a different architecture, and it would make sense to optimize the hardware for them if they ever become popular.

The thing we ought to be watching for is sandwiched/3D chips, where a memory die could sit on top of an ALU die, eliminating the huge effort of producing cache memory. If that ever takes off, even small microcontroller-like devices with multiple ALUs could outperform today's CPUs, but I have no idea when they're coming.

tacotime
Legendary
*
Offline Offline

Activity: 1484



View Profile
October 20, 2011, 07:31:25 PM
 #8

Quote from: tacotime on October 20, 2011, 02:14:49 AM
In 5 years the standard number of cores on an x86 CPU will probably be 32, and they will run at 6 GHz+ on stock clocks, so it's probably irrelevant.

No they won't.

1) The number of cores will increase, but it's more like a 50% increase every 2 years (track the move from single core to dual to quad to hex core). The core count isn't going to increase 8x in the next 5 years.

2) You won't see a 6 GHz+ chip (maybe not ever). Dynamic power scales with voltage squared times frequency, and higher frequencies require higher voltages, so doubling the frequency costs far more than double the power draw (and heat). This is the entire reason for multi-core designs. Back in the Pentium III days Intel had a long-term roadmap promising 10 GHz by 2011. We didn't quite make it there.

http://www.geek.com/articles/chips/intel-predicts-10ghz-chips-by-2011-20000726/

Have you noticed that the frequency of a fast CPU today isn't much higher than 2 years ago, and not significantly higher than 6 years ago? If you could hypothetically build a 6 GHz+ chip consuming, say, 240 W, you could get the same computational power by redesigning the chip to be more efficient (more instructions per clock, as in Pentium 4 -> Core 2 -> i7), adding cores, and clocking it at ~3 GHz, likely ending up with a ~120 W TDP.


Intel was attempting to break the speed ceiling with the Prescott generation of CPUs by extending the pipeline. That didn't pan out, but AMD is taking the same approach again with Bulldozer. AMD wanted 4.5 GHz stock clocks on Bulldozer this round but didn't pull it off; however, with the relaxed latencies and long pipeline, future versions of the chip should clock around there. All of Intel's current line easily clocks above 4 GHz, with the mean being around 4.5 GHz. Further, frequency alone doesn't capture performance gains, since new process generations deliver improvements without huge numbers of new transistors. The point of the post, though, was that CPUs will soon be so much faster that even if GPUs improve, they will still be behind.
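The power argument above can be made concrete with the classic CMOS dynamic-power approximation P = C·V²·f: since hitting a higher frequency usually requires a higher supply voltage, frequency scaling compounds quickly. A back-of-the-envelope sketch; the capacitance and voltage figures are illustrative, not measured:

```python
def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    # Classic CMOS dynamic-power approximation: P = C * V^2 * f
    return capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 3e9)  # 3 GHz at 1.0 V -> 3 W of dynamic power
fast = dynamic_power(1e-9, 1.3, 6e9)  # 6 GHz, assuming 1.3 V is needed
print(fast / base)                    # ~3.38x the power for 2x the frequency
```

This is why adding cores at a moderate clock beats chasing raw frequency: two 3 GHz cores deliver the doubled throughput at roughly 2x the power rather than 3-4x.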
