Author Topic: A custom designed FPGA miner for LTC?  (Read 5742 times)
Viceroy
Hero Member
*****
Offline Offline

Activity: 924
Merit: 501


View Profile
May 26, 2013, 03:36:32 AM
 #61

I like you more and more all the time, nova.  You seem an upstanding guy.  Don't let these naysayers get you down.
It is a common myth that Bitcoin is ruled by a majority of miners. This is not true. Bitcoin miners "vote" on the ordering of transactions, but that's all they do. They can't vote to change the network rules.
Nova! (OP)
Full Member
***
Offline Offline

Activity: 140
Merit: 101


View Profile
May 26, 2013, 04:06:04 AM
 #62

I like you more and more all the time, nova.  You seem an upstanding guy.  Don't let these naysayers get you down.

Thanks, they're not getting me down.  I don't let others control my opinion of myself.  He had a fundamentally valid point.  I've worked as a coder, as a team leader, as a programming manager, and as a project lead in the real world.  I like to think that my mind is as clear and as sharp as it used to be.  However, if a programmer came to me with that level of mistake, it would have been a borderline HR issue in my book.

Coming clean and saying "oops, I screwed up" is one thing, but examining the fundamentals of how and why a mistake occurs is actually more important.
I post-mortem everything, whether it succeeded or failed, because the only thing that matters to me is the knowledge gained.  I view failure as a fundamental and necessary part of the learning process.  However, making the same mistake twice just calls one's own competency into question.

Ten years ago I made a similar mistake, and it cost me a company.  Literally: a company I founded failed because I looked at a vast chunk of un-commented code, thought I understood it, modified it, and a year later the company was gone.  I examined that whole scenario over and over again trying to find the root cause, and realized the cause had only been me.  I looked at something, believed I understood what it was doing, and arrogantly thought I could make it better, faster, stronger.

From that time on I was always careful to consult with the original developer to divine intent if intent was in any way unclear, and in general to exercise much more caution before believing that I understood.  In this case I did it all over again, and this time it wasn't a vast chunk of cryptic code with no explanation, it was a small chunk with an entire whitepaper backing it.  I did in fact read the whitepaper.  I'm still not sure what the missing thought process was here.  I freely admit this mistake was the root of this debacle.

It still catches my eye as not optimal.  That's not to say it's suboptimal in any way, and I concede that it's probably optimal for a GPU, but it doesn't feel right for what I'm trying to accomplish, so I'm struggling with a new challenge.

Which honestly is exactly where I like to be.  I just need to be more careful next time.  Smiley

Donate @ 1LE4D5ERPZ4tumNoYe5GMeB5p9CZ1xKb4V
mtrlt
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
May 26, 2013, 04:40:07 AM
 #63

Silly kitchen psychology coming right up..

I don't think your problem is your age. Then again, I don't know your age, and I'm not old enough to know the effects of aging for sure. (I'm 24.) I think your problem is overconfidence. When you generate a hypothesis out of thin air (which is a valid way of generating hypotheses, and without any information indeed the only way), you assume it must be close to the truth, instead of finding out whether it's even related. Example: You assumed RND meant random, because that was probably the first thing to pop into your head. (Incidentally, this is how I deduced you don't know much about hash algorithms in general, since all hash algorithms I know of have rounds, and if I see RND being used in a hash algorithm, I'm naturally going to assume it's referring to rounds.) Then you decided that this is obviously the place where scrypt does its memory-hard magic, and is thus the bottleneck of the algorithm, without even looking at surrounding code to see how it was being used. Am I even close to the truth? Just curious.
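[A minimal illustration of mtrlt's point: in optimized SHA-256 implementations of the kind mining kernels use, "RND" is conventionally a macro for one round of the compression function, not a random-number call.  The C sketch below is hypothetical and in that general style; the macro and names are illustrative, not quoted from any actual miner.]

Code:
/* One SHA-256 compression round in the rotated-macro style common in
 * optimized implementations.  "RND" = "round".  Working variables a..h,
 * round constant k, and message-schedule word w follow FIPS 180-4. */
#include <stdint.h>

#define ROTR(x,n)  (((x) >> (n)) | ((x) << (32 - (n))))
#define S0(x)      (ROTR(x, 2) ^ ROTR(x, 13) ^ ROTR(x, 22))
#define S1(x)      (ROTR(x, 6) ^ ROTR(x, 11) ^ ROTR(x, 25))
#define CH(x,y,z)  (((x) & (y)) ^ (~(x) & (z)))
#define MAJ(x,y,z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))

/* Mixes one round constant k and one message word w into the state;
 * callers rotate the argument order each round instead of shuffling
 * the variables themselves. */
#define RND(a,b,c,d,e,f,g,h,k,w) do {                        \
        uint32_t t1 = (h) + S1(e) + CH(e,f,g) + (k) + (w);   \
        uint32_t t2 = S0(a) + MAJ(a,b,c);                    \
        (d) += t1;                                           \
        (h)  = t1 + t2;                                      \
    } while (0)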
gica_contra
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250



View Profile
May 26, 2013, 05:46:50 AM
 #64

OP reminds me of a younger me, trying to build a digital scope as my final thesis without any knowledge of FPGAs.  It sampled up to 20kHz from a 130MHz clock.  The interface looked killer though, and the guys judging it knew even less about FPGAs than I did, so everything went better than expected.  God, that was an awful implementation  Grin
Nova! (OP)
Full Member
***
Offline Offline

Activity: 140
Merit: 101


View Profile
May 26, 2013, 06:32:44 AM
 #65

Silly kitchen psychology coming right up..

I don't think your problem is your age. Then again, I don't know your age, and I'm not old enough to know the effects of aging for sure. (I'm 24.) I think your problem is overconfidence. When you generate a hypothesis out of thin air (which is a valid way of generating hypotheses, and without any information indeed the only way), you assume it must be close to the truth, instead of finding out whether it's even related. Example: You assumed RND meant random, because that was probably the first thing to pop into your head. (Incidentally, this is how I deduced you don't know much about hash algorithms in general, since all hash algorithms I know of have rounds, and if I see RND being used in a hash algorithm, I'm naturally going to assume it's referring to rounds.) Then you decided that this is obviously the place where scrypt does its memory-hard magic, and is thus the bottleneck of the algorithm, without even looking at surrounding code to see how it was being used. Am I even close to the truth? Just curious.


Close, but in reverse.
I had read the whitepaper and got what I thought was a good understanding of the way scrypt was supposed to work.
Then I looked at the code and saw RND being called repeatedly.  Of course, to my mind it all made perfect sense at that point: you were clearly re-seeding a random number generator (rounds of SHA256 had dropped out of my head).  The reference to SHA256 in the function name didn't really register, and the couple of times I did notice it, my internal explanation was along the lines of "ok, so he took the framework from the SHA256 algo and modified it into the scrypt algo".  By that point, SHA256 rounds in scrypt had completely fallen out of my head.

Anyways, yeah, I saw RND, realized in my infinite wisdom that you would need a custom seedable random number generator, tracked down the RND at the top, and said to myself "ok, that could in fact work as a way of generating a random list".  It never crossed my mind to compare it to SHA256.  I get the concept of rounds, but at that point my mind saw them as living in a loop somewhere; I didn't need to account for them just then.

Once I saw the section, though, I realized that this could be optimized, and probably should be, by a custom core with only the logic to perform this function.  Putting it in the stack unrolled, or as a function call, would likely be much slower than a call out to a logic unit optimized for the specific task.  However, the reads and writes in memory would be problematic, hence the idea of sharing on-die memory.  I still hadn't quite worked out the nuts and bolts of how it would fit together.

I still feel that way.  I'm still studying it, but I do still feel that way.

 

Donate @ 1LE4D5ERPZ4tumNoYe5GMeB5p9CZ1xKb4V
mtrlt
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
May 26, 2013, 01:04:02 PM
 #66

Close, but in reverse.
I had read the whitepaper and got what I thought was a good understanding of the way scrypt was supposed to work.
Then I looked at the code and saw RND being called repeatedly.  Of course, to my mind it all made perfect sense at that point: you were clearly re-seeding a random number generator (rounds of SHA256 had dropped out of my head).  The reference to SHA256 in the function name didn't really register, and the couple of times I did notice it, my internal explanation was along the lines of "ok, so he took the framework from the SHA256 algo and modified it into the scrypt algo".  By that point, SHA256 rounds in scrypt had completely fallen out of my head.

Anyways, yeah, I saw RND, realized in my infinite wisdom that you would need a custom seedable random number generator, tracked down the RND at the top, and said to myself "ok, that could in fact work as a way of generating a random list".  It never crossed my mind to compare it to SHA256.  I get the concept of rounds, but at that point my mind saw them as living in a loop somewhere; I didn't need to account for them just then.

Whitepapers can leave one very confused; I for one have never read the scrypt whitepaper.  I've just taken a cursory glance at it and decided I can't possibly understand its complex language within a reasonable time.  Looking at code is far more productive.  And yes, the SHA-256 rounds can be in a loop, but usually it's unrolled for speed.
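[To make "in a loop" vs "unrolled" concrete: a hypothetical C fragment, reusing the illustrative RND macro sketched earlier and assuming the standard round constants K and message schedule W are in scope.  Neither version is quoted from a real kernel.]

Code:
/* Rolled: compact, but the a..h renaming must be done explicitly. */
for (int i = 0; i < 64; i++) {
    RND(a, b, c, d, e, f, g, h, K[i], W[i]);
    uint32_t t = h; h = g; g = f; f = e; e = d; d = c; c = b; b = a; a = t;
}

/* Unrolled: rotating the argument order bakes the renaming into each
 * call, so everything stays in registers; kernels repeat this to 64. */
RND(a, b, c, d, e, f, g, h, K[0], W[0]);
RND(h, a, b, c, d, e, f, g, K[1], W[1]);
RND(g, h, a, b, c, d, e, f, K[2], W[2]);
RND(f, g, h, a, b, c, d, e, K[3], W[3]);
/* ...the remaining rounds continue the same 8-step rotation... */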

Quote
Once I saw the section, though, I realized that this could be optimized, and probably should be, by a custom core with only the logic to perform this function.  Putting it in the stack unrolled, or as a function call, would likely be much slower than a call out to a logic unit optimized for the specific task.  However, the reads and writes in memory would be problematic, hence the idea of sharing on-die memory.  I still hadn't quite worked out the nuts and bolts of how it would fit together.

I still feel that way.  I'm still studying it, but I do still feel that way.
If you still feel that way, you have to strive to discard that feeling from your mind as quickly as possible, because that way is wrong. SHA-256 is neither the problem nor the hard part in LTC mining.
Nova! (OP)
Full Member
***
Offline Offline

Activity: 140
Merit: 101


View Profile
May 26, 2013, 06:42:46 PM
 #67

If you still feel that way, you have to strive to discard that feeling from your mind as quickly as possible, because that way is wrong. SHA-256 is neither the problem nor the hard part in LTC mining.

Agreed.  Wishing I had a proper profiler about now though Smiley

Donate @ 1LE4D5ERPZ4tumNoYe5GMeB5p9CZ1xKb4V
Luckybit
Hero Member
*****
Offline Offline

Activity: 714
Merit: 510



View Profile
May 26, 2013, 06:55:20 PM
 #68

I have found what I believe is a shortcut in scrypt that, if implemented correctly in hardware, could dramatically speed up the hashrate.
I believe it should work, and I know how I would implement it if I had the resources to acquire the FPGA and tools I need.

To show good faith, I will elaborate on the algo and how the shortcut would work.
This is really oversimplified, but you are free to take this idea and roll with it.

scrypt, the algo used by LTC, and in fact all hashing algos, are composed of 2 predominant steps:
#1 Generate a random list
#2 Hash across it.

To generate consistent results, the random algo is actually deterministic pseudo-random, and its setup is determined by a seed.
We will call this the prng.

The other step is hashing, which is pretty well understood: you take a value from list a and replace it with a value from list b.
When you are done iterating, you have a hash.

scrypt differs mostly because it uses an entirely new list so frequently.
The setup and teardown of this list requires quite a bit of CPU time, and a lot of time is wasted on the memory bus performing storage & retrieval operations.
It cannot be done concurrently because the list itself changes frequently.
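[For reference, the memory-hard loop being described is scrypt's ROMix.  Below is a heavily simplified C sketch, with block_mix() standing in for the real Salsa20/8-based BlockMix; the parameters match Litecoin's scrypt (N = 1024, r = 1), but the code is illustrative only, not the LTC kernel.]

Code:
#include <stdint.h>
#include <string.h>

#define N     1024   /* Litecoin's scrypt cost parameter */
#define WORDS 32     /* one 128-byte block = 32 x 32-bit words (r = 1) */

void block_mix(uint32_t x[WORDS]);  /* Salsa20/8 mixing, assumed given */

void romix(uint32_t x[WORDS], uint32_t v[N][WORDS])
{
    /* Fill pass: N sequential writes of the whole scratchpad. */
    for (uint32_t i = 0; i < N; i++) {
        memcpy(v[i], x, sizeof v[i]);
        block_mix(x);
    }
    /* Mix pass: N data-dependent reads.  The index j depends on the
     * evolving state x, which is why the accesses can't be precomputed
     * or streamed concurrently. */
    for (uint32_t i = 0; i < N; i++) {
        uint32_t j = x[WORDS - 16] % N;  /* Integerify(x) mod N */
        for (int k = 0; k < WORDS; k++)
            x[k] ^= v[j][k];
        block_mix(x);
    }
}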

The shortcut is to have a multicore setup and a ton of on-die RAM:
a dedicated prng core which does the setup and teardown for the second core.

The secondary core is the hashing core.  It would tell the prng core to set up a new list.
Then it would retrieve position x of the list from the shared memory space.
Other than that, it would also perform all the normal hashing functions in a dedicated memory space.

I believe the total I need to make this work is about $12k USD; the FPGA I'm targeting right now is $10k, and a license for the dev tools will be about $2k.
If I can find a less expensive option then I will go for that, but there aren't many FPGAs that meet the requirements right now.
The particular target FPGA also has a direct path to ASIC from the manufacturer.

If you're willing to donate to the effort, I will keep you in the loop with full disclosure, including build instructions and a copy of the sources and the firmware.
I haven't decided on a license for this if it works, but you will at least have a right to personal use.
Perhaps if enough people are interested in production-level manufacturing we could go a different route.  I'm not particularly interested in making this something I do for the rest of my life, but the contrarian in me is very excited by the potential here.

The LTC donation address is below.
LKfKkRMvMf2stQMNzQdKCvaf2YueAv1QSa

You can also donate BTC to the key in my sig.
There is no maximum, but if you do decide to donate, please send at least 0.5 LTC or the equivalent in BTC.
Then post just the address you donated from, and I'll PM you here with a bitmessage key to join the group.

Thanks in advance!


Go on Cryptostocks.com and list there.  There is another group offering FPGA shares.  You should do the same and try to pull an ASICMiner-type deal.
WindMaster
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 26, 2013, 10:42:05 PM
 #69

If you still feel that way, you have to strive to discard that feeling from your mind as quickly as possible, because that way is wrong. SHA-256 is neither the problem nor the hard part in LTC mining.

Agreed.  Wishing I had a proper profiler about now though Smiley

Let me save you some time and say the SHA256 overhead is around 0.1% of the overall processing time involved in scrypt, +/- a bit.  That's why mtrlt is saying not to bother with that, since there's an upper bound of about 0.1% you could gain even if you could optimize it down to zero overhead to calculate SHA256 hashes.
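[In Amdahl's-law terms, taking the ~0.1% figure at face value: if p = 0.001 of the total work is SHA256 and that part is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p/s), which tends to 1 / (1 - p) = 1/0.999 ≈ 1.001 as s grows.  So even an infinitely fast SHA256 unit buys at most about 0.1% more hashrate.]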
ReCat
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250



View Profile WWW
May 31, 2013, 02:10:29 AM
 #70

Great.  Now GPU miners will be rendered completely obsolete.  Way to go.  Tongue

BTC: 1recatirpHBjR9sxgabB3RDtM6TgntYUW
Hold onto what you love with all your might, Because you can never know when - Oh. What you love is now gone.
jackjack
Legendary
*
Offline Offline

Activity: 1176
Merit: 1233


May Bitcoin be touched by his Noodly Appendage


View Profile
June 14, 2013, 10:29:51 AM
 #71

Quote from: Nova!
If I have completely misunderstood hashing over a lifetime of programming, then I really have some long hard thinking to do.
Quote from: Nova!
Oh wait, RND meant 'round' and not 'random'?
Lol

Own address: 19QkqAza7BHFTuoz9N8UQkryP4E9jHo4N3 - Pywallet support: 1AQDfx22pKGgXnUZFL1e4UKos3QqvRzNh5 - Bitcointalk++ script support: 1Pxeccscj1ygseTdSV1qUqQCanp2B2NMM2
Pywallet: instructions. Encrypted wallet support, export/import keys/addresses, backup wallets, export/import CSV data from/into wallet, merge wallets, delete/import addresses and transactions, recover altcoins sent to bitcoin addresses, sign/verify messages and files with Bitcoin addresses, recover deleted wallets, etc.
mesquka
Member
**
Offline Offline

Activity: 70
Merit: 10


"Human equivalent of a typo."


View Profile WWW
August 03, 2013, 08:19:29 AM
 #72

I second WindMaster: SCAM!!!!  Anyone with technical knowledge can see this.  Remember, people: check things out before donating.
digitalindustry
Hero Member
*****
Offline Offline

Activity: 798
Merit: 1000


‘Try to be nice’


View Profile WWW
August 03, 2013, 08:45:44 AM
 #73

Just from an economic standpoint, I believe that with regard to scrypt one will see ASIC before FPGA in this iteration of the cycle.


Here is why I'm correct:


1. Because SHA256 was a "first of a first".

2. The ASIC market is generally much more developed now in the cycle.

3. The whole environment is more developed: the whole crypto market.

4. The high difficulty of SHA256 will drive the market's "desire", pushing ASIC makers to build it.

5. They (or some of them) will do what the market wants.

- Twitter @Kolin_Quark
MrHempstock
Full Member
***
Offline Offline

Activity: 140
Merit: 100


"Don't worry. My career died after Batman, too."


View Profile
August 03, 2013, 09:47:53 AM
 #74

I second WindMaster: SCAM!!!!  Anyone with technical knowledge can see this.  Remember, people: check things out before donating.

After a month and a half? Just curious.

BTCitcointalk 1%ers manipulate the currency and deceive its user community.