Topic: Thread about GPU-mining and Litecoin
Schwede65 (Sr. Member, Activity: 309, Merit: 250)
March 06, 2012, 03:40:26 PM (last edit: 03:57:49 PM by Schwede65) | #141

I have heard on the btc-e chat that mtrlt made the reaper13 DEMO run at half speed...

the full version should work twice as fast...

So LTC GPU mining would be profitable for him down to ~0.0007 BTC/LTC, but not for the public at ~0.0014...

I think there must be a new LTC algorithm to save the public from the grifter(s)...

Edit: the half-speed reaper13 DEMO release seems to be a compromise between releasing it and not.
tacotime (Legendary, Activity: 1484, Merit: 1005)
March 06, 2012, 04:18:16 PM (last edit: 05:20:24 PM by tacotime) | #142

Quote from: Schwede65 on March 06, 2012, 03:40:26 PM
I have heard on the btc-e chat that mtrlt made the reaper13 DEMO run at half speed... the full version should work twice as fast... So LTC GPU mining would be profitable for him down to ~0.0007 BTC/LTC, but not for the public at ~0.0014... I think there must be a new LTC algorithm to save the public from the grifter(s)... Edit: the half-speed reaper13 DEMO release seems to be a compromise between releasing it and not.

The speeds mtrlt reported for the demo were the same as for the release; the rumor, as I understood it, was that his new SC2 GPU miner was supposed to be twice as fast.

I've wondered whether the reason mtrlt has not released the source code is that it may borrow heavily from ssvb's code, as ssvb was the one who solved the in-place hashing of LTC.

Anyway, this outlines the difficulty of actually creating an algorithm that cannot easily be parallelized, so that it will run faster on a CPU than on a GPU.  I've been thinking about it some, and I've wondered about using multiple algorithms and randomizing a number of the settings for each (and then possibly the order in which they are used) to generate tens or hundreds of thousands of possible algorithms, only one of which could validate the next block.  The difficulty in parallelizing the work would come from having to assemble a large number of algorithms sequentially and then assess them.  It's hard to think of something that a CPU can do better than a GPU when it's a repetitive task based on the same operations over a dataset...  Having to brute-force the actual construction of the algorithm may be something a GPU would struggle with, though.
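A rough sketch of that idea (purely illustrative; the function names and choice of primitives are hypothetical, not from any actual coin): derive the sequence of hash primitives for the next block from the previous block hash, so the pipeline must be assembled sequentially before any candidate can be evaluated.

Code:
import hashlib

# Hypothetical illustration: the previous block hash selects a pseudo-random
# pipeline of primitives, so the algorithm itself must be constructed per block.
PRIMITIVES = [hashlib.sha256, hashlib.sha512, hashlib.blake2b, hashlib.sha3_256]

def pipeline_for_block(prev_hash: bytes, rounds: int = 8):
    seed = hashlib.sha256(prev_hash).digest()
    return [PRIMITIVES[b % len(PRIMITIVES)] for b in seed[:rounds]]

def pow_hash(prev_hash: bytes, header: bytes) -> bytes:
    digest = header
    for primitive in pipeline_for_block(prev_hash):
        digest = primitive(digest).digest()  # each round feeds the next primitive
    return digest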

edit: There's a pretty neat paper comparing algorithms and their runtimes on GPUs and CPUs.  There are sorting and physics algorithms that perform significantly slower on a CPU than on a GPU.

www.cs.utexas.edu/users/ckkim/papers/isca10_ckkim.pdf

More info on the mentioned sorting algorithm:
Quote
Our 4-core implementation is competitive with the performance on any of the other modern architectures, even though Cell/GPU architectures have at least 2X more compute power and bandwidth. Our performance is 1.6X–4X faster than Cell architecture and 1.7X–2X faster than the latest Nvidia GPUs (8800 GTX and Quadro FX 5600).
Also note that as the set size becomes larger, GPUs run out of memory and are unable to even process the set; extrapolating from the data, even if they could, they would still be slower than the CPU.
http://pcl.intel-research.net/publications/sorting_vldb08.pdf

That algorithm was subsequently surpassed by this GPU one; however, for small data sets Intel's TBB parallel sort is still faster.
http://mgarland.org/files/papers/gpusort-ipdps09.pdf
Apparently over the past few years there has been a battle between Intel and nVidia to try to find things that CPUs and GPUs do better than one another, and there is a wealth of well-cited literature out there.

The algorithm was again overhauled, and CPU-based radix/merge sort still manages to beat out GPUs in a number of cases:
Quote
Comparing CPUs and GPUs: In terms of absolute performance of CPU versus GPU, we find that the best radix sort, the CPU radix sort, outperforms the best GPU sort by about 20%. The primary reason is that scalar buffer code performs badly on the GPU. This necessitates a move to the split code that has many more instructions than the buffer code. This is enough to overcome the 3X higher compute flops available on the GPU. On the other hand, the GPU merge sort does perform slightly better than the CPU merge sort, but the difference is still small. The difference is due to the absence of a single-instruction scatter, and the additional overheads, such as index computations, affecting GPU performance.
http://dl.acm.org/citation.cfm?id=1807207

So it seems reasonable that an algorithm heavily dependent on the radix sort implemented there would perform faster on CPUs than on GPUs.  It's important that only non-multifield data be used, because a very fast GPU algorithm for multifield data was implemented recently:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5713164

There's also a tree search algorithm here which runs significantly faster on CPUs/MICA than on GPUs for smaller data sets; if it could be integrated into an encryption algorithm it might destroy GPU performance.

www.webislands.net/pubs/FAST__SIGMOD10.pdf

throwaway (Newbie, Activity: 57, Merit: 0)
March 07, 2012, 01:34:30 PM | #143

Quote from: coblee
The original purpose of Litecoin is to be a CPU coin where anybody with their computer can mine litecoins. What happened with Bitcoin is that GPU mining was a lot more efficient, so a lot of people started mining bitcoins with GPUs. This pumped up the difficulty and made CPU mining unprofitable and therefore pointless. I don't want this to happen to Litecoin, and I think most people agree with me on this.

Recently, there have been rumors that mtrlt has modified his GPU miner to work with Litecoin. And he claims to have been able to create a GPU miner that outperforms CPU miners by a lot. Of course, all this could be FUD thrown at Litecoin by SolidCoin supporters. But I have talked to mtrlt about this and he seems genuine. So I'd like to get to the bottom of this.

Here's what I'd like to accomplish:
1) Figure out if GPU mining litecoins is indeed more efficient. And if so, how much better is it?
2) Decide whether we want to switch to a new hashing algorithm that is more GPU-hostile.
3) If we do want to switch, there are a ton of other questions. Can we modify the scrypt params, or do we need something totally different? How far in the future do we do the algorithm switch? How do we get miners/pools/clients ready for the switch so that there's no downtime?

Everyone, please refrain from SolidCoin bashing in this thread. And SolidCoin supporters, please refrain from posting unless you have something constructive to say. Thanks.

coblee,

Given that:

* in a couple of years every (consumer) CPU sold will have OpenCL-capable integrated graphics
* bitcoin mining will move more and more towards FPGAs/ASICs

I don't think any changes are necessary. Very soon anybody will have a computer capable of GPU-mining Litecoin. Also, since eventually people will stop using GPUs to mine bitcoin, the swings in difficulty from people switching between the two chains won't be a problem.
da2ce7 (Legendary, Activity: 1222, Merit: 1016) "Live and Let Live"
March 08, 2012, 07:33:08 AM | #144

Quote from: throwaway on March 07, 2012, 01:34:30 PM
I don't think any changes are necessary. Very soon anybody will have a computer capable of GPU-mining Litecoin. Also, since eventually people will stop using GPUs to mine bitcoin, the swings in difficulty from people switching between the two chains won't be a problem.

Yes, don’t fret.  GPU's are at most 10x faster than cpu's for litecoin mining.  For Bitcoin it is at least 100x faster on a GPU.

This means that Litecoin is at least 10x more efficient on a CPU than Bitcoin.
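Spelling out the arithmetic behind that claim: if the GPU/CPU speed ratio is ~100x for Bitcoin's SHA-256 but only ~10x for Litecoin's scrypt, then relative to the GPU competition a CPU is 100/10 = 10x better positioned mining Litecoin than Bitcoin.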

Over time, this gap will narrow as the in-built GPU's inside every computer get better.

m3ta (Sr. Member, Activity: 435, Merit: 250)
March 21, 2012, 08:39:39 PM | #145


Quote from: da2ce7 on March 08, 2012, 07:33:08 AM
Yes, don’t fret.  GPU's are at most 10x faster than cpu's for litecoin mining.  For Bitcoin it is at least 100x faster on a GPU.

Over time, this gap will narrow as the in-built GPU's inside every computer get better.

http://scottbush.net/language/apostrophes-have-no-place-in-plural-acronyms/

Just sayin'.

Cosbycoin (Hero Member, Activity: 980, Merit: 506)
March 22, 2012, 02:38:18 AM | #146

Quote
Wouldn't changing the algorithm force a new blockchain for Litecoin? That'd screw up every pool, exchange, client, etc., which is probably going to annoy a lot of the network.
And if that did cause some sort of "litecoin 2", that would send the value of Litecoin downhill, meaning anyone who currently has a lot of their money in litecoins ends up with nothing...

Coblee has multiple options.

1) Drop support for Litecoin1 and start Litecoin2 or whatever, and carry everyone's existing Litecoin1 coins into Litecoin2 (I did this with SolidCoin v2). Of course this also means the guys GPU mining still have their new coins in the new network, but it potentially means you can start fresh with a new algorithm. It's a lot of work, however.

2) Do a forking change in Litecoin itself. The problem with this is that the "old network" will continue in parallel with the "new Litecoin", which leads to a lot of support issues ("Which Litecoin are you on?").

3) Start an entirely new coin, like he did with Litecoin after Fairbrix. Probably the second easiest option to pull off, since you don't have to support multiple old things, and it allows him to do some things from scratch a bit better. However, the downside is that the original litecoins probably decrease in value due to having no more developer (like Tenebrix and Fairbrix).

4) Do nothing. The easiest option.

The hardest thing is that unless you are well versed in CPU and GPU architecture, making a CPU-hard coin is very difficult. Given Coblee's failure to know whether scrypt was GPU-hard, are we going to believe he can now make an algorithm that is? This will require a lot of work and a lot of testing to verify. And you're going to need talented C++ and OpenCL coders to help you out, I think.

So far it looks like LTC is doing just fine.  Grin
str4wm4n (Legendary, Activity: 1611, Merit: 1001)
March 22, 2012, 05:54:20 AM | #147


Quote from: m3ta on March 21, 2012, 08:39:39 PM
Quote from: da2ce7
Yes, don’t fret.  GPU's are at most 10x faster than cpu's for litecoin mining.  For Bitcoin it is at least 100x faster on a GPU.

Over time, this gap will narrow as the in-built GPU's inside every computer get better.

http://scottbush.net/language/apostrophes-have-no-place-in-plural-acronyms/

Just sayin'.

DISREGARD GRAMMERS

AWQUIRE LITECOINZ
vmarchuk (Newbie, Activity: 45, Merit: 0)
March 22, 2012, 06:45:32 PM | #148

Anyone know the proper settings for a 7xxx series card using reaper for Litecoin?

My 7xxx card gets half the speed of the 6xxx series cards. Using the 12.2 preview driver.

When I put aggression higher than 12, I get many GPU errors on the 7xxx card at half the 6xxx hash rate, but on the 6xxx card I can set aggression near 22 before it gives me GPU errors.

I get near 450 kH/s on a 6970 but only near 225 kH/s on a 7950 before GPU errors.
Bitinvestor (Sr. Member, Activity: 470, Merit: 250)
March 22, 2012, 08:24:49 PM | #149

Quote from: vmarchuk on March 22, 2012, 06:45:32 PM
Anyone know the proper settings for a 7xxx series card using reaper for Litecoin?

My 7xxx card gets half the speed of the 6xxx series cards. Using the 12.2 preview driver.

When I put aggression higher than 12, I get many GPU errors on the 7xxx card at half the 6xxx hash rate, but on the 6xxx card I can set aggression near 22 before it gives me GPU errors.

I get near 450 kH/s on a 6970 but only near 225 kH/s on a 7950 before GPU errors.

That's as good as it gets at the moment. Reaper is not yet optimized for the 7xxx series.

Cosbycoin (Hero Member, Activity: 980, Merit: 506)
March 22, 2012, 08:43:49 PM | #150

Quote from: vmarchuk on March 22, 2012, 06:45:32 PM
I get near 450 kH/s on a 6970 but only near 225 kH/s on a 7950 before GPU errors.

Quote from: Bitinvestor on March 22, 2012, 08:24:49 PM
That's as good as it gets at the moment. Reaper is not yet optimized for the 7xxx series.

This statement alone implies that the future hash rate of the LTC network will get larger.
AnonyMint (Hero Member, Activity: 518, Merit: 521)
August 16, 2013, 08:29:22 AM (last edit: 02:10:09 PM by AnonyMint) | #151

Click the following link for details:

https://bitcointalk.org/index.php?topic=45849.msg2940005#msg2940005


Quote
Something I've wondered:

Why are you using N=1024, r=1, and p=1 for scrypt?  Why didn't the recommended values from the paper, N=1024, r=8, p=1, get used?
Quote
If I remember correctly, ArtForz said that the parameters (1024, 1, 1) resulted in a lower GPU/CPU performance ratio.
Some analysis by him can be found here: https://bitcointalk.org/index.php?topic=45849.0

I have addressed this point in my link above.
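For reference, Litecoin's proof-of-work is scrypt with (N, r, p) = (1024, 1, 1) and a 32-byte output over the 80-byte block header, which serves as both password and salt. A minimal sketch with Python's hashlib (illustrative, not miner code):

Code:
import hashlib

def ltc_pow_hash(header80: bytes) -> bytes:
    # Litecoin's parameters: N=1024, r=1, p=1 -> a 128*r*N = 128 KB scratchpad.
    # The paper's recommended r=8 would need 1 MB instead, spilling out of L2 cache.
    return hashlib.scrypt(header80, salt=header80, n=1024, r=1, p=1, dklen=32)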


Quote
From what I know of the GPU miner, option 3 (modifying the scrypt parameters) will have minimal impact. The pad size did not seem to matter much, and can be compressed, for lack of a better word, with on-the-fly value reconstruction. So any increase in pad size will have a relatively equal impact on CPU miners until you exceed their cache size, at which point GPUs may become even more efficient.

I think you will be stuck with option 2, finding a completely different hashing algorithm.

Until you put a scrypt inside of a scrypt, such that the inner one stays in the cache.  See my link above.
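A minimal sketch of that nested-scrypt idea (all parameters hypothetical; an illustration of the concept, not a vetted proof-of-work): every step of a large, ROMix-style outer loop pays for a small inner scrypt sized to stay in CPU cache.

Code:
import hashlib

def inner(x: bytes) -> bytes:
    # Inner scrypt sized to live in L1/L2: N=64, r=1 -> 8 KB working set (hypothetical).
    return hashlib.scrypt(x, salt=x, n=64, r=1, p=1, dklen=32)

def nested_scrypt(header: bytes, slots: int = 4096, rounds: int = 1024) -> bytes:
    # Real parameters would make this table gigabytes; a few slots suffice for a demo.
    x = inner(hashlib.sha256(header).digest())
    table = []
    for _ in range(slots):          # sequential fill: each entry costs an inner scrypt
        x = inner(x)
        table.append(x)
    for _ in range(rounds):         # data-dependent reads over the outer table
        j = int.from_bytes(x[:4], "little") % slots
        x = inner(bytes(a ^ b for a, b in zip(x, table[j])))
    return x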


Quote
Are you saying he has disproved the sequential memory-hardness of the ROMix algorithm from the original scrypt paper?

No, apparently the issue is the relative memory bandwidths of the different kinds of hardware (and the ability to hide memory latency with multithreading), and the original scrypt sequential-memory-hardness proof doesn't factor that in. My link above proposes a way to elevate the CPU's cache to a large memory size to overcome the discrepancy.
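For context, here is the ROMix core from the scrypt paper as runnable pseudocode (H stands in for scrypt's BlockMix): each read index depends on the previous output, which is what the sequential-memory-hardness proof formalizes. The proof bounds time-memory trade-offs; it says nothing about relative bandwidth or latency hiding across different hardware, which is the issue here.

Code:
import hashlib

def romix(x: bytes, n: int = 1024) -> bytes:
    H = lambda b: hashlib.sha256(b).digest()   # stand-in for BlockMix
    V = []
    for _ in range(n):                         # fill N sequentially dependent entries
        V.append(x)
        x = H(x)
    for _ in range(n):                         # read back in a data-dependent order
        j = int.from_bytes(x[:4], "little") % n
        x = H(bytes(a ^ b for a, b in zip(x, V[j])))
    return x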


Quote from: tacotime on March 06, 2012, 04:18:16 PM
Anyway, this outlines the difficulty of actually creating an algorithm that cannot easily be parallelized, so that it will run faster on a CPU than on a GPU.

See my link above for an idea for the algorithm.


Quote
Well, without decent network hashrate it can be attacked easily by any botnet...

See my link above for another idea of how to eliminate botnets with a CPU-only coin.


Quote
Any further thoughts on this?
I think it would be good for it to be a CPU-only coin again, though the retards have already said they will enjoy the challenge of getting it working on GPU...

Quote
IMO Litecoin loses its point unless it's CPU-only Smiley
Quote
The speedup over a CPU is less than an order of magnitude.  It's not fatal.

It appears to be greater than an order of magnitude. See my link above.


Quote
2) I suppose that increasing the memory size parameter of scrypt to a very large amount (megabytes...), which doesn't fit in the cache, would mean that it'd be infeasible to do hash attempts in parallel with a GPU (and maybe even with several CPU cores), but it also most likely means that people couldn't use their computer to do other stuff while mining litecoins due to system responsiveness issues. Therefore it's possible that the current scrypt parameters as chosen by ArtForz and Lolcust are the best, especially if Bitcoin GPU mining remains more profitable than Litecoin GPU mining.

Either the CPU is compute-bound (in small cache memory) or memory-bound (in large memory). Either way, you can't use your computer for other work that stresses the same resource if you want the maximum hashing rate.


Quote
IMO Litecoin loses its point unless it's CPU-only Smiley
Quote
Clearly, yes. Now instead of being more accessible to everyone than BTC (because everyone has an okay CPU while not everyone has an okay GPU), LTC is accessible only to the few who are lucky enough to have the GPU miner working properly... Huge step backwards. The only positive effect is that it seems the BTC hashrate lowered a bit recently.
Quote
I became interested in mining a second cryptocurrency only because Tenebrix (and then others) came up with a way to make CPU mining viable again... I figured, why not put the two or three decent machines I run at home, as well as a couple of strays at my shop, onto a useful task instead of simply letting them sit around growing slowly more obsolete by the day?  I suspect I represent a fairly typical LTC/SC enthusiast in that regard.

Put bluntly, if GPU mining becomes viable for LTC and/or SC, their entire raison d'être vanishes.


Quote
Since I was asked to clarify "significantly more efficient", I guess I will post some hash-per-watt numbers.

According to the litecoin wiki mining hardware comparison, an AMD Phenom X4 955 at 3.6 GHz gets 24 kH/s @ 125 W. This translates to 0.192 kH/s per watt.
A GPU rig consisting of 69xx series GPUs can produce 998 kH/s @ 920 W at the wall. This translates to 1.08 kH/s per watt.

So does at least a 5.6x increase in *efficiency* qualify as "significantly more"?

Consider the litecoin wiki entry for the Intel Core i7 860, which produces 25 kH/s at 153 W (a believable wattage for the entire system). It gives a system score of only 0.163 kH/s per watt. The GPU example is now a factor of 6.6 times more efficient.

PS: mtrlt has gotten better kH/s-per-watt scores by playing with the clocks and voltages, but I figured I would give you an initial test result.
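Those quoted figures are internally consistent; a quick recomputation:

Code:
rigs = {
    "Phenom X4 955 @ 3.6 GHz": (24, 125),   # kH/s, watts, as quoted above
    "69xx GPU rig (at wall)":  (998, 920),
    "Core i7 860":             (25, 153),
}
for name, (khs, watts) in rigs.items():
    print(f"{name}: {khs / watts:.3f} kH/s per watt")
# (998/920) / (24/125) = 5.65x  -> the "factor of 5.6" vs the Phenom
# (998/920) / (25/153) = 6.64x  -> the "factor of 6.6" vs the i7 860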

More hardware comparisons for Litecoin:

http://litecoin.info/Mining_Hardware_Comparison
http://coinpolice.com/gpu/


Quote from: throwaway on March 07, 2012, 01:34:30 PM
coblee,

Given that:

* in a couple of years every (consumer) CPU sold will have OpenCL-capable integrated graphics
* bitcoin mining will move more and more towards FPGAs/ASICs

I don't think any changes are necessary. Very soon anybody will have a computer capable of GPU-mining Litecoin. Also, since eventually people will stop using GPUs to mine bitcoin, the swings in difficulty from people switching between the two chains won't be a problem.

As far as I can see, the consumer-grade GPU integrated with the CPU won't be the threat that the stand-alone cards are, since those will always have an order of magnitude greater main-memory bandwidth, unless the motherboard becomes something of an amalgamation with only GDDR5 memory, e.g. the Sony PS4 (see the link in the post that I linked at the top of this post). If and when that ever becomes ubiquitous, a CPU-only coin will still be mined competitively by such amalgamated systems.


Quote
One thing (somewhat theoretical) I would throw out there is that as GPUs become more "CPU-like" they will devote the necessary resources (transistors and chip yield) to increased L1 cache.  GPUs long since outstripped the growth in pixel counts, so they devoted more resources to improved image quality at a fixed number of pixels and/or polygons.  GPU resources are growing faster than developers' ability to use them, as devising more complex and realistic "effects" requires more human capital than simply doubling the polygon count or going from an 800x600 pixel count to a 1920x1200 pixel count.  So it will be increases in GPGPU workload that increasingly drive development of future GPUs.

Given my idea of nested scrypt, even if the GPU has adequate L1 cache per CU, the problem remains that if I set the parameters to, for example, 4 cores running a 32 KB inner scrypt with a 1.5 GB outer scrypt, then the GPU (with 6 GB of GDDR RAM) can only employ 4 of its CUs (cores). So it can only use a fraction of its hardware.

For example, the HD 7970 has 32 CUs (cores), each with 16 KB of L1 cache and 24 KB of L2 cache, running at 2 TB/s and 0.7 TB/s respectively. The Intel Haswell Core i7/i5 has 4 cores, each with 32 KB of L1 cache and 256 KB of L2 cache, running at 1 TB/s and 0.33 TB/s respectively.

So if the coin requires a 32 KB inner scrypt, the HD 7970 is going to be at the 0.25 to 0.5 TB/s of its GDDR RAM, but with much latency and only 4 threads, so much slower than the CPU. Even if the coin requires only a 16 KB inner scrypt, or a later version of the GPU has 32 KB of L1 cache, the GPU is still going to be employing only 4 threads, same as the CPU, but it may run at twice the speed of the CPU because of the doubled L1 cache speed.
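That thread-count argument can be made concrete (cache figures as given above; the 32 KB inner working set and 1.5 GB outer table are the hypothetical parameters from my example):

Code:
INNER_KB, OUTER_GB = 32, 1.5
# HD 7970: 32 CUs with 16 KB L1 each, 6 GB GDDR5; Haswell: 4 cores, 32 KB L1 each.
gpu_l1_fit = 32 * (16 // INNER_KB)    # 0 -> the inner scrypt overflows each CU's L1
cpu_l1_fit = 4 * (32 // INNER_KB)     # 4 -> one instance per core stays in L1
gpu_ram_fit = int(6 // OUTER_GB)      # 4 -> outer tables exhaust the card's 6 GB
print(gpu_l1_fit, cpu_l1_fit, gpu_ram_fit)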

Quote
GPUs traditionally had very little local cache because there is no need for it when performing traditional graphics work.  That dynamic likely won't hold true in the future.  NVidia Tesla cards, for example, can be configured to double the amount of L1 cache because it is so useful in boosting the performance of some GPGPU functionality.  Larger L1 caches will eventually trickle down into consumer-grade products too.

My other idea is to force the total memory requirement of the outer scrypt higher than any GPU's on-board memory, since I know of no GPU that allows add-on GDDR memory; there is no retail market for GDDR memory.

Quote
No coin today is "anti-GPU"; rather, they can be described as "large L1 cache dependent".

Not "today", but my idea is the nested Scrypt idea should make them more "anti-GPU" when coupled with a large L1 or L2 cache dependent.

Employing the 256 KB L2 cache of the Intel Core family would mean the HD 7970 can only run three threads and still stay in L2 cache (3 x 256 KB fits within its 768 KB total), but its L2 cache is 2X faster than the CPU's, so 2 x 3/4 = 3/2 the speed of the CPU. Or the HD 7970 could run 4 threads bound on main memory, which has comparable bandwidth, but the memory latency would accumulate, so it would be slower than the CPU.


Quote
SolidCoin's Hashing Algorithm

Actually, the SolidCoin hash was developed to be fairly equal in performance on GPUs and CPUs, watt for watt. It is currently delivering that and has been for some time (with a small favor to CPUs). SolidCoin targets all viable consumer hardware so the widest range of people can mine it fairly. Unlike Bitcoin, which is going to be FPGA soon, and Litecoin (which was supposed to be CPU-only and is now a GPU coin), we want everyone to be able to mine.

Quote
What looks interesting is that they still claim the SC2 algorithm to be GPU-resistant. I'm not at all convinced. Any technical opinion on this?
Quote
It's not GPU-resistant, but random reads on a constant 4 MB buffer make the CPU/GPU difference slightly lower, because the GPU lacks cache. But it's still 4-6 times faster on a GPU than on a CPU.

Quote
Check out SolidCoin's mining page for info on how a correct implementation of a CPU/GPU-hard algorithm should figure in performance.
http://wiki.solidcoin.info/wiki/Mining_Hardware_Performance
Quote
I see nothing correct in your hashing algorithm; I've implemented a 2.5x faster version of the miner for it (CPU), and despite my lack of time and profit from mining there is even a 3x-8x better implementation of a GPU miner for SolidCoin Smiley so it's nothing special Smiley

That SolidCoin link shows roughly the same advantage for the AMD HD 7970 GPU over the Intel Core CPUs as for Litecoin.

I didn't take the time to study the linked SolidCoin hashing algorithm, but if it is based on a claimed advantage from randomized memory latency, note the point I make in my link at the top: this latency can be hidden by multithreading many threads.
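To illustrate the pattern in question (a generic sketch of random reads over a constant 4 MB table, not SolidCoin's actual hash): each read is a likely cache miss on cache-poor hardware, but a GPU scheduler can interleave thousands of independent hash instances so each miss is overlapped with other work, which is exactly the latency hiding referred to above.

Code:
import hashlib

# Constant 4 MB lookup table (contents arbitrary but fixed).
TABLE = b"".join(hashlib.sha256(i.to_bytes(4, "little")).digest()
                 for i in range(4 * 1024 * 1024 // 32))

def cache_miss_hash(header: bytes, rounds: int = 64) -> bytes:
    x = hashlib.sha256(header).digest()
    for _ in range(rounds):
        # Data-dependent offset: per thread this stalls on memory,
        # but many concurrent threads can hide the latency.
        off = (int.from_bytes(x[:4], "little") % (len(TABLE) // 32)) * 32
        x = hashlib.sha256(x + TABLE[off:off + 32]).digest()
    return x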

AnonyMint (Hero Member, Activity: 518, Merit: 521)
August 18, 2013, 02:15:17 AM | #152

Elaboration:

https://bitcointalk.org/index.php?topic=267522.msg2955080#msg2955080
