Topic: Bounty crowdsale to GPU-based miner for Zcash

nerdralph (Sr. Member)
September 27, 2016, 12:15:15 AM
#21

Quote
Where can we read more info about Zcash? Is there any BitcoinTalk ANN? Any links are welcome.

There's lots on GitHub. You can start here:
https://github.com/zcash/zcash/wiki/specification
mjosephs (Full Member)
September 29, 2016, 04:13:44 AM
Last edit: September 30, 2016, 12:10:25 AM by mjosephs
#22

Hello folks, this thread appears to be the place with the highest concentration of ZECquihash optimization gurus. May I trouble you with a stupid question?

Quote
You need a fair bit of space to hold the index-lists...

Indeed you do.

I'm trying to correlate that with the zcashd implementation, since it's the only available working cross-checker (too bad it is written for speed instead of readability).  The only part that has me confused is the need to eliminate entries whose index sets contain duplicate/repeated entries after EVERY one of the k collision steps.  There is no mention of this in the algorithm given in the Equihash paper.

I understand why these can't be part of the final solution (the X_i's must use distinct i's). But why not wait until the end to drop these superfluous solutions? Why check for duplicates k times? I did some quick back-of-the-envelope calculations and I'm having a hard time believing that leaving these entries in the table would significantly increase memory consumption. If you had a duplicate X_a in the second level of the collision, it would have to be the result of (X_a*X_b) colliding with (X_a*X_c) on the first 40 bits, where X_a collides with each of X_b and X_c on the first 20 bits (writing * for XOR, with each block of twenty 0s below standing for twenty zero bits). If this is true then

 ((X_a*X_b)*(X_a*X_c)) = 00000000000000000000 00000000000000000000

and therefore

 (X_a*X_a*X_b*X_c) = 00000000000000000000 00000000000000000000

so

 (X_b*X_c) = 00000000000000000000 00000000000000000000

and moreover

 (X_a*X_b) = 00000000000000000000 ....................

 (X_a*X_c) = 00000000000000000000 ....................

In general it seems like repeated indices arise in the Equihash-constrained variant of Wagner's algorithm as a result of an "overcollision" -- when two rows of the table collide not just on the current column but on the next one as well. When this happens, the members of the overcollided set will generate one spurious entry in the next round for every entry they collide with in the current round. It seems like that would happen on average twice per column. Is that really enough to bloat the memory usage in a noticeable way for k=9? Surely for something like k=100 we'd be talking about serious combinatorial explosion, but for k=9 there just aren't that many steps compared to the size of the table.

(edit: changed k=10 to k=9)
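
A quick way to sanity-check the XOR algebra above is to try it on random 200-bit values. Here is a minimal Python sketch, assuming the * above means XOR; the setup that forces the 40-bit overcollision (and all names in it) is mine, not from zcashd:

Code:
import random

# Claim: if (X_a ^ X_b) and (X_a ^ X_c) collide on their first 40 bits,
# then X_b ^ X_c is zero on those 40 bits, while X_a ^ X_b and X_a ^ X_c
# are each already zero on the first 20.
N_BITS = 200           # Equihash n = 200
TOP20 = N_BITS - 20    # shift exposing the first 20 bits
TOP40 = N_BITS - 40    # shift exposing the first 40 bits

def random_colliding_20(x_a):
    # Random 200-bit value sharing X_a's first 20 bits.
    x = random.getrandbits(N_BITS)
    return (x_a >> TOP20 << TOP20) | (x & ((1 << TOP20) - 1))

x_a = random.getrandbits(N_BITS)
x_b = random_colliding_20(x_a)
# Force the overcollision: X_c agrees with X_b on the first 40 bits
# (their first 20 bits already match X_a's).
x_c = (x_b >> TOP40 << TOP40) | (random.getrandbits(N_BITS) & ((1 << TOP40) - 1))

ab, ac = x_a ^ x_b, x_a ^ x_c
assert ab >> TOP20 == 0             # X_a and X_b collide on 20 bits
assert ac >> TOP20 == 0             # X_a and X_c collide on 20 bits
assert (ab ^ ac) >> TOP40 == 0      # the two pairs overcollide on 40 bits
assert (x_b ^ x_c) >> TOP40 == 0    # hence X_b ^ X_c starts with 40 zero bits
print("XOR identity checks out")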

tromp (Legendary)
September 29, 2016, 02:24:22 PM
#23

Quote from: mjosephs
Hello folks, this thread appears to be the place with the highest concentration of ZECquihash optimization gurus. May I trouble you with a stupid question?
[...]

Sorry if I overlooked it, but where is the stupid question?

All I see are some very astute observations. Welcome to club Guru!
nerdralph (Sr. Member)
September 29, 2016, 03:11:59 PM
#24

@mjosephs

I had previously thought about the need to keep all pairs ab, ac, and bc when X_a, X_b, and X_c all collide on n/(k+1) bits. Your question got me thinking more about the zcash isProbablyDuplicate code and possible ways of avoiding duplicates. When there is a collision on 40 bits, as in your example, I think it may be safe to discard all those tuples at the first stage. In other words, when a collision on 20 bits is found, check whether there is also a collision on 40 bits before deciding to keep the tuples. I think the cost of doing the check early is quite low, so I can think of little reason to postpone it to the end.
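
To make the idea concrete, here is a rough Python sketch of one collision round with the early check folded in. The row representation and names are mine, not zcashd's, and the last round would have to skip the check, since there a full 40-bit collision is exactly what defines a solution:

Code:
from collections import defaultdict

N_BITS = 200       # Equihash n
COLL = 20          # n/(k+1) collision bits per round for k = 9

def bits(value, offset, width):
    # Bits [offset, offset+width) of a 200-bit integer, MSB first.
    return (value >> (N_BITS - offset - width)) & ((1 << width) - 1)

def collide_round(rows, bit_offset):
    # One Wagner round. Each row is (value, index_tuple). Bucket rows on
    # COLL bits at bit_offset, XOR bucket members pairwise, and drop any
    # pair that ALSO collides on the next COLL bits (an "overcollision"),
    # since those pairs are the ones that breed duplicate-index entries.
    buckets = defaultdict(list)
    for value, indices in rows:
        buckets[bits(value, bit_offset, COLL)].append((value, indices))

    out = []
    for members in buckets.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                (va, ia), (vb, ib) = members[i], members[j]
                x = va ^ vb
                if bits(x, bit_offset + COLL, COLL) == 0:
                    continue            # overcollision: discard early
                out.append((x, tuple(sorted(ia + ib))))
    return out
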
mjosephs (Full Member)
September 30, 2016, 12:05:09 AM
Last edit: September 30, 2016, 12:21:25 AM by mjosephs
#25

Quote from: tromp
Sorry if I overlooked it, but where is the stupid question?

All I see are some very astute observations. Welcome to club Guru!

Thank you @tromp, that's quite flattering coming from you.

I'm guessing that with the sprint towards October 28th you don't have a whole lot of bandwidth to think hard about this, but if you do come across a reason why waiting until the end to prune duplicates doesn't work, or conclude that it does, a quick post here would be much appreciated.


Quote from: nerdralph
In other words, when a collision on 20 bits is found, check if there is a collision on 40 bits before deciding to keep the tuples.

Thanks @nerdralph, that was my plan too -- check for overcollisions while updating the rest of the row between sorting passes, instead of checking tuple-lists for duplicates.  This way the index tuples become sort of a "write-only/append-only" (or "read-infrequently") type of memory, which allows other much more valuable optimizations.  I just wanted to check if there was some obvious reason why it wouldn't work.

I will sketch out a Monte Carlo program to check the penalty (expected to be small) and to see whether there are any other situations leading to duplicate indices that aren't accounted for above.
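
For what it's worth, one possible shape for such an experiment, using toy parameters rather than ZEC's n=200/k=9 so it finishes quickly (every name and parameter here is my own guess at a reasonable setup, not anyone's published code):

Code:
import random
from collections import defaultdict

# Toy Monte Carlo: run unconstrained Wagner rounds on random values and
# count how many surviving entries carry a repeated index in each round.
N, K = 96, 5                  # toy n bits, k rounds (NOT ZEC's 200/9)
C = N // (K + 1)              # collision bits per round = 16
N_ROWS = 1 << (C + 1)         # list size 2^(n/(k+1)+1), as in the paper

def wagner_duplicate_counts(seed=0):
    rng = random.Random(seed)
    rows = [(rng.getrandbits(N), (i,)) for i in range(N_ROWS)]
    counts = []
    for rnd in range(K):
        shift = N - C * (rnd + 1)
        buckets = defaultdict(list)
        for v, idx in rows:
            buckets[v >> shift].append((v, idx))
        nxt, dups = [], 0
        for members in buckets.values():
            for i in range(len(members)):
                for j in range(i + 1, len(members)):
                    (va, ia), (vb, ib) = members[i], members[j]
                    merged = ia + ib
                    if len(set(merged)) < len(merged):
                        dups += 1     # entry with a repeated index
                    nxt.append((va ^ vb, merged))
        counts.append((rnd + 1, len(nxt), dups))
        rows = nxt
    return counts

for rnd, total, dups in wagner_duplicate_counts():
    print(f"round {rnd}: {total} entries, {dups} with repeated indices")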

I sure wish there were a simple ZECquihash solution checker (not solver) written for clarity rather than performance. The only one we've got right now is the one that comes with zcashd, and it is built out of subroutine calls shared with the miner, which is so heavily optimized that the (uncommented!) code is very hard to read. Maybe when @tromp becomes a ZEC-billionaire next month he'll publish one :)
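
In that spirit, here is my own attempt at a clarity-first sketch of such a checker. It is NOT derived from zcashd; details like the BLAKE2b personalization string and the 50-byte/two-X digest layout are my reading of the Equihash paper and the Zcash protocol spec, so treat every constant here as an assumption to verify:

Code:
from hashlib import blake2b

N, K = 200, 9
COLL = N // (K + 1)            # 20 collision bits per round
SOL_INDICES = 1 << K           # 512 indices per solution
XS_PER_HASH = 512 // N         # 2 x 25-byte X values per BLAKE2b call

def x_value(header_and_nonce, i):
    # X_i as a 200-bit int. Assumed layout: one 50-byte BLAKE2b digest
    # personalized with "ZcashPoW" || le32(n) || le32(k) yields both
    # X_{2j} and X_{2j+1}.
    person = b"ZcashPoW" + N.to_bytes(4, "little") + K.to_bytes(4, "little")
    h = blake2b(digest_size=N * XS_PER_HASH // 8, person=person)
    h.update(header_and_nonce)
    h.update((i // XS_PER_HASH).to_bytes(4, "little"))
    chunk = h.digest()[(i % XS_PER_HASH) * (N // 8):][: N // 8]
    return int.from_bytes(chunk, "big")

def check_solution(header_and_nonce, indices):
    if len(indices) != SOL_INDICES or len(set(indices)) != SOL_INDICES:
        return False                      # wrong count or repeated index
    xs = [x_value(header_and_nonce, i) for i in indices]
    firsts = list(indices)
    # Fold the tree level by level: after level r the pairwise XORs must
    # share r*20 leading zero bits, and each left subtree must carry the
    # smaller leading index (the ordering condition).
    for r in range(1, K + 1):
        next_xs, next_firsts = [], []
        for j in range(0, len(xs), 2):
            if firsts[j] >= firsts[j + 1]:
                return False              # subtrees out of order
            x = xs[j] ^ xs[j + 1]
            if x >> (N - r * COLL) != 0:
                return False              # missing r*20-bit collision
            next_xs.append(x)
            next_firsts.append(firsts[j])
        xs, firsts = next_xs, next_firsts
    return xs[0] == 0                     # final XOR must be all-zero

A real checker would additionally decode the canonical 21-bit packed index encoding and pin down the exact header serialization; this sketch takes the 512 indices pre-decoded.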

tromp (Legendary)
September 30, 2016, 01:07:42 AM
#26

Quote from: mjosephs
I'm guessing that with the sprint towards October 28th you don't have a whole lot of bandwidth to think hard about this, but if you do come across a reason why waiting until the end to prune duplicates doesn't work, or conclude that it does, a quick post here would be much appreciated.

Waiting until the end does in fact work.

Quote from: mjosephs
Maybe when @tromp becomes a ZEC-billionaire next month he'll publish one :)

I have no plans to mine ZEC myself :-(