Topic: Cuckoo Cycle Speed Challenge; $2500 in bounties  (Read 6520 times)
tromp (OP)
Legendary
Activity: 990 | Merit: 1110
July 31, 2014, 05:09:34 PM
#21

This thread in Bitcoin Development & Technical Discussion may also be of interest:

https://bitcointalk.org/index.php?topic=717267.0

For the record, I have no interest in developing my own coin. As a computer scientist, proof-of-work algorithms interest me, and Cuckoo Cycle has become one of my hobbies. Beyond the proof-of-work, crypto currencies have too many complexities (e.g. key management, networking issues) that I don't want to have to deal with.

Cuckoo Cycle will be adopted eventually. I suggested that the Ethereum guys take a serious look at it. Let's see what they come up with and how it compares...

Quote
Hi, John -

One more follow-up.  I posted in my public review that, beyond the speedups I suggested to you, I was somewhat nervous about algorithmic speedups to the problem still being possible, and I can't shake that nervousness yet.  (I haven't been able to exploit it either, but my failure to do so doesn't mean it's not possible, just that it's not immediately obvious.)

Please stay nervous:-) As you noted in the past, losing my nervousness is one of my problems...

Quote
One thing I'm curious about: your current union-find solution seems to echo that of Monien.  Before I waste more time thinking about it (in case you've already done the work): did you attempt to apply Yuster & Zwick's algorithm (in "Finding Even Cycles Even Faster")?  The only difference is an inverse-Ackermann factor in (V), but it's mostly intriguing because it's simpler from a data structure perspective, which might lead to a faster real-world implementation.  Or not. Smiley

I thought their approaches were quite different. Can you point to a specific paper and page of Monien describing a union-find-like algorithm? My paper has a paragraph on both the similarity to and the crucial difference from union-find, which explains why I can't use path compression (which leads to the Ackermann function). My algorithm also crucially depends on the edge-to-vertex ratio being at most one half.
But any speedup of this part is mostly irrelevant anyway, as the running time of Cuckoo Cycle is entirely dominated by the edge trimming phase.
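A minimal sketch of this union-find-like idea (illustrative names only, not the miner's actual code from cuckoo_miner.h): follow plain parent pointers to the root without path compression, reverse the shorter root path on a union, and report a cycle when an edge's endpoints already share a root.

Code:
# walk parent pointers from u up to its root, recording the whole path
function root_path(parent::Dict{Int,Int}, u::Int)
    p = [u]
    while haskey(parent, u)
        u = parent[u]
        push!(p, u)
    end
    return p
end

# add edge (u,v); return the length of the cycle it closes, or 0 if none
function insert_edge!(parent::Dict{Int,Int}, u::Int, v::Int)
    pu, pv = root_path(parent, u), root_path(parent, v)
    if pu[end] == pv[end]                  # same root: this edge closes a cycle
        i, j = length(pu), length(pv)
        while i > 1 && j > 1 && pu[i-1] == pv[j-1]
            i -= 1; j -= 1                 # strip the paths' shared suffix
        end
        return (i - 1) + (j - 1) + 1       # tree edges plus the new edge
    end
    # union: reverse the shorter path (no path compression), then attach it
    p, anchor = length(pu) <= length(pv) ? (pu, v) : (pv, u)
    for k in length(p):-1:2
        parent[p[k]] = p[k-1]              # flip pointers toward the endpoint
    end
    parent[p[1]] = anchor
    return 0
end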

Quote
My nervousness remains because of the unique structure of your graphs - as you note, they're like GNMp graphs - and it's not clear whether there are specific tricks that can be used here.  I'm a little less worried about this than I was, but I've still got this concern that the algorithm could be vulnerable to non-breakthrough-level theory developments (i.e., things easier than breaking discrete logs :-).  Still trying to wrap my head around a good formulation of it, and people shouldn't take me too seriously on this, because it's a tickling hunch, not a "gosh, I'm sure" kind of feeling.

I still compare it in my head to Momentum, where there's less structure sitting around, and therefore I'm more confident in the analysis, which I view as a good property in a PoW.  But we know that Momentum, even if it were modified to use SIPhash, basically turns into a DRAM bandwidth problem.  That's not bad from an ASIC standpoint, but it is comparatively GPU-friendly.

Momentum suffers from having a linear time-memory trade-off, though. It's essentially identical to Cuckoo Cycle with N=2^26 vertices, cycle length 2, and #edges = N rather than N/2. Note that the best known implementations (entirely or partly due to you) are also essentially identical. I believe that by generalizing cycle length, Cuckoo Cycle leaves less room for trade-offs, as witnessed by the fact that Momentum's linear TMTO does not carry over...
dga
Hero Member
Activity: 737 | Merit: 511
July 31, 2014, 06:44:59 PM
#22


Quote
One thing I'm curious about: your current union-find solution seems to echo that of Monien.  Before I waste more time thinking about it (in case you've already done the work): did you attempt to apply Yuster & Zwick's algorithm (in "Finding Even Cycles Even Faster")?  The only difference is an inverse-Ackermann factor in (V), but it's mostly intriguing because it's simpler from a data structure perspective, which might lead to a faster real-world implementation.  Or not. Smiley

I thought their approaches were quite different. Can you point to a specific paper and page of Monien describing a union-find-like algorithm? My paper has a paragraph on both the similarity to and the crucial difference from union-find, which explains why I can't use path compression (which leads to the Ackermann function). My algorithm also crucially depends on the edge-to-vertex ratio being at most one half.
But any speedup of this part is mostly irrelevant anyway, as the running time of Cuckoo Cycle is entirely dominated by the edge trimming phase.



"the complexity of determining a shortest cycle of even length" (I couldn't find a non-paywalled PDF, but could send you a copy if you want).  A key difference is the shortest aspect, however.  And asymptotically, the V^2 runtime is worse than the |V||E| runtime of the simpler BFS-based approach on the graph you choose for cuckoo.

(Note:  I wasn't suggesting it's better -- I mostly wanted to avoid chasing down the Yuster algorithm if you'd already evaluated it. ;-)

Quote

Quote
My nervousness remains because of the unique structure of your graphs - as you note, they're like GNMp graphs - and it's not clear whether there are specific tricks that can be used here.  I'm a little less worried about this than I was, but I've still got this concern that the algorithm could be vulnerable to non-breakthrough-level theory developments (i.e., things easier than breaking discrete logs :-).  Still trying to wrap my head around a good formulation of it, and people shouldn't take me too seriously on this, because it's a tickling hunch, not a "gosh, I'm sure" kind of feeling.

I still compare it in my head to Momentum, where there's less structure sitting around, and therefore I'm more confident in the analysis, which I view as a good property in a PoW.  But we know that Momentum, even if it were modified to use SIPhash, basically turns into a DRAM bandwidth problem.  That's not bad from an ASIC standpoint, but it is comparatively GPU-friendly.

Momentum suffers from having a linear time-memory trade-off, though. It's essentially identical to Cuckoo Cycle with N=2^26 vertices, cycle length 2, and #edges = N rather than N/2. Note that the best known implementations (entirely or partly due to you) are also essentially identical. I believe that by generalizing cycle length, Cuckoo Cycle leaves less room for trade-offs, as witnessed by the fact that Momentum's linear tmto does not carry over...

What's the linear TMTO in Momentum?  It seems a lot like the edge trimming step in Cuckoo (and you're right - I just repurposed my code from Momentum for that step in Cuckoo).  But isn't the TMTO quadratic in Momentum as well?  E.g., split the nonce set into two halves n_1, n_2, and then test n_1 x n_1, n_1 x n_2, n_2 x n_2?  Or is there something I missed?

Duh, nevermind.  Right - rip through and save all outputs with high order bit 0.  Repeat again for high order bit 1.  Straightforward.  Sorry, brain has been in too many meetings today.
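A hedged sketch of that trick (names illustrative; Julia's built-in hash(), truncated so collisions actually occur at toy sizes, stands in for Momentum's SHA-based hash): make k passes over all nonces, each keeping only hashes in the current slice of hash space, so the collision table holds about N/k entries at a time.

Code:
# toy stand-in for Momentum's hash, truncated to 26 bits
toyhash(n) = hash(UInt64(n)) & 0x3ffffff

function momentum_tmto(N, k)
    collisions = Tuple{Int,Int}[]
    for slice in 0:k-1                 # one full pass per slice of hash space
        seen = Dict{UInt64,Int}()      # hash => nonce, ~N/k entries at a time
        for n in 0:N-1
            h = toyhash(n)
            h % k == slice || continue # hash not in this pass's slice
            if haskey(seen, h)
                push!(collisions, (seen[h], n))   # birthday collision
            else
                seen[h] = n
            end
        end
    end
    return collisions
end

# e.g. momentum_tmto(1 << 20, 4): 4x the hashing work, ~1/4 the table memory
With k=2 this is exactly the high-order-bit scheme described above: k times the hashing work for 1/k of the memory, hence a linear trade-off.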

tromp (OP)
Legendary
Activity: 990 | Merit: 1110
July 31, 2014, 08:04:49 PM
#23

"the complexity of determining a shortest cycle of even length" (I couldn't find a non-paywalled PDF, but could send you a copy if you want).  A key difference is the shortest aspect, however.  And asymptotically, the V^2 runtime is worse than the |V||E| runtime of the simpler BFS-based approach on the graph you choose for cuckoo.
(Note:  I wasn't suggesting it's better -- I mostly wanted to avoid chasing down the Yuster algorithm if you'd already evaluated it. ;-)

Please email me a copy of that Monien paper. Btw, my runtime is O(|E|) = O(|V|), not |V||E|. I didn't check much graph theory literature because my setting is pretty specific:

  the graph is (pseudo-)random.
  it is very sparse; in fact a forest, apart from a constant expected number of additional cycle-inducing edges.
  it is only given implicitly, rather than as an adjacency matrix or set of adjacency lists.

That's why the cuckoo hashing literature seemed more relevant, since these graphs arise in that setting.
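A hedged sketch of that third point (hash((key, x)) is an illustrative stand-in for the siphash the real miner uses, and the indexing is simplified): nothing is stored per edge; an edge's endpoints are recomputed on demand from a keyed hash, so no adjacency structure ever exists.

Code:
# endpoints of edge n in an N-node bipartite graph, N/2 nodes per side
function edge_endpoints(key, n, N)
    half = N ÷ 2
    u = hash((key, 2n))     % half          # left endpoint
    v = hash((key, 2n + 1)) % half + half   # right endpoint
    return (u, v)
end

# N/2 edges on N vertices: the edge-to-vertex ratio of one half
edges(key, N) = (edge_endpoints(key, n, N) for n in 0:(N ÷ 2 - 1))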

dga
Hero Member
Activity: 737 | Merit: 511
July 31, 2014, 08:51:47 PM
#24

"the complexity of determining a shortest cycle of even length" (I couldn't find a non-paywalled PDF, but could send you a copy if you want).  A key difference is the shortest aspect, however.  And asymptotically, the V^2 runtime is worse than the |V||E| runtime of the simpler BFS-based approach on the graph you choose for cuckoo.
(Note:  I wasn't suggesting it's better -- I mostly wanted to avoid chasing down the Yuster algorithm if you'd already evaluated it. ;-)

Please email me a copy of that Monien paper. Btw, my runtime is O(|E|) = O(|V|), not |V| |E|. I didn't check much graph theory literature because my setting is pretty specific:

  the graph is (pseudo-)random. 
  it is very sparse; in fact a forest, apart from a constant expected number of additional cycle inducing edges.
  it is only given implicitly, rather than some adjacency matrix or set of adjacency lists.

That's why the cuckoo hashing literature seemed more relevant, since these graph arise in that setting.


Agreed, and emailed.  Just trying to figure out where you've gone looking for solutions. Smiley  The sparsity is very interesting (and is what makes edge pruning effective).  The nature of the oracle is also fun.  I tried yesterday to find a theory postdoc or someone to dangle the problem in front of, but didn't get any bites, just forward pointers.

dga
Hero Member
Activity: 737 | Merit: 511
August 01, 2014, 02:03:12 AM
Last edit: August 01, 2014, 02:13:31 AM by dga
#25

Would you consider extending the applicability of your time/memory tradeoff bounty to a theoretical improvement to the asymptotic bounds for the time-memory tradeoff?  The demonstrator would be low-speed rather than an insanely tuned implementation, but would prove the feasibility of a sub-quadratic (though superlinear; I'm guessing some extra log(n) factor) TMTO for the edge pruning component of Cuckoo Cycle.

tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 01, 2014, 02:46:57 AM
#26

Quote
Would you consider extending the applicability of your time/memory tradeoff bounty to a theoretical improvement to the asymptotic bounds for the time-memory tradeoff?  The demonstrator would be low-speed rather than an insanely tuned implementation, but would prove the feasibility of a sub-quadratic (though superlinear; I'm guessing some extra log(n) factor) TMTO for the edge pruning component of Cuckoo Cycle.

Are you asking for a bounty for using N/k bits and an o(k^2) slowdown?
The problem with asymptotic running times is that they're hard to verify:-(

I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?
dga
Hero Member
Activity: 737 | Merit: 511
August 01, 2014, 10:26:55 AM
#27

Quote
Would you consider extending the applicability of your time/memory tradeoff bounty to a theoretical improvement to the asymptotic bounds for the time-memory tradeoff?  The demonstrator would be low-speed rather than an insanely tuned implementation, but would prove the feasibility of a sub-quadratic (though superlinear; I'm guessing some extra log(n) factor) TMTO for the edge pruning component of Cuckoo Cycle.

Are you asking for a bounty for using N/k bits and an o(k^2) slowdown?
The problem with asymptotic running times is that they're hard to verify:-(

I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.


tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 01, 2014, 01:22:04 PM
#28

Quote
I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.

I see. That would indeed be very interesting, even if not a practical attack. But to be precise, is there a dependence on k in the hidden constant? Can you set k equal to log(N), or sqrt(N)?
dga
Hero Member
Activity: 737 | Merit: 511
August 01, 2014, 02:47:43 PM
#29

Quote
I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.

I see. That would indeed be very interesting, even if not a practical attack. But to be precise, is there a dependence on k in the hidden constant? Can you set k equal to log(N), or sqrt(N)?

k must be > log(n), or the constants lose out.  Anything >= log^2(n) is fine.

Obviously, because it's only acting on edge trimming, there's a lower bound on the size requirement determined by the cycles.  In essence, it boils down to #nodes-involved-in-paths-of-length-L * log(N), where L is the number of edge trimming steps you're willing to pay for.  From our prior discussion about that, that's in the tens of thousands of nodes for a N=2^26.

Here's a pretty concrete example:

- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025 N log N words to represent, of course...

The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

I'm writing the proof of concept in Julia because I wanted to learn a new language, and so it's glacially slow because I'm a newbie to it.  I can discuss high-performance implementation strategies for it, of course, and I believe it's a pretty ASIC-friendly algorithm.

I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

... wish there were some venue to publish this stuff in. Smiley

tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 01, 2014, 09:00:28 PM
#30

Quote
- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025 N log N words to represent, of course...

The current code already trims the edges down to about 1.6%, in order to allow the cycle finding to run in the same memory footprint as the edge trimming. To run the cycle finding in only 2^14 words, you'd have to trim WAY more?!

Quote
The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

That would make the situation very similar to that of Momentum, where the linear TMTO applies to the original implementation using N=2^26 words, but not to your trimming version using N+2N/k bits (assuming it recomputes SHA hashes instead of storing them).

Quote
I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

For a linear TMTO of my original non-trimming Cuckoo Cycle implementation I will offer a $666 bounty.

Quote
... wish there were some venue to publish this stuff in. Smiley

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure, but, lacking peer-review, not so good for academic merit:-(
fluffypony
Donator Legendary
Activity: 1274 | Merit: 1060
GetMonero.org / MyMonero.com
August 01, 2014, 09:06:17 PM
#31

Quote
... wish there were some venue to publish this stuff in. Smiley

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure, but, lacking peer-review, not so good for academic merit:-(

The problem is there's nothing like Nature when it comes to cryptography (and certainly nothing like Pubmed). Then again, compared to medical sciences, this is an industry that is still very much in its infancy.

dga
Hero Member
Activity: 737 | Merit: 511
August 01, 2014, 09:37:44 PM
#32

Quote
... wish there were some venue to publish this stuff in. Smiley

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure, but, lacking peer-review, not so good for academic merit:-(

The problem is there's nothing like Nature when it comes to cryptography (and certainly nothing like Pubmed). Then again, compared to medical sciences, this is an industry that is still very much in its infancy.

Well - there are lots of academic conferences.  I may toss it to one, but I suspect I'll throw it to arXiv or crypto ePrint.

The challenge that I see with it is that the CC paper wasn't published in a peer-reviewed venue, which makes it harder to publish something following on from it.  The reason I jumped back on CC is that people in the cryptocurrency space are expressing a lot of interest in it, which raises the importance of reviewing it; but not many academics follow bitcointalk, or consider something important just because major bitcoin devs are closely following it.  Ahh well.  It's fun stuff in any event. Smiley


dga
Hero Member
Activity: 737 | Merit: 511
August 02, 2014, 12:38:28 AM
Last edit: August 02, 2014, 05:39:10 PM by dga
#33

http://www.cs.cmu.edu/~dga/crypto/cuckoo/analysis.pdf

Very hackish proof-of-concept:

http://www.cs.cmu.edu/~dga/crypto/cuckoo/partitioned.jl

I'm releasing this well in advance of what I would normally do for academic work because I think it's worth pushing the discussion of proof-of-work functions forward fast -- crypto moves too fast these days -- but please be aware that it's not done yet and requires a lot more polish.  The bottom line is as I suggested above:  It successfully TMTO's but still requires something like 1-3% of the "full" graph space, because that's what gets passed to the solver after edge trimming.  I don't think this is the final word in optimizing for cuckoo cycle -- I believe there are some further nifty optimizations possible, though I think this gets the first primary attack vector down.

John, I'd love your feedback on whether this is clear and/or needs help, or if you find some of the handwavy O(N) parts too vague to constitute a real threat.  I plan to clean it up more, but, as noted, I figured that it's better to swallow my perfectionism and push the crappy version out there faster to get the dialogue rolling, since I don't have coauthors on it to push it in the right direction. Smiley

Feedback appreciated!

(The weakest part of the analysis, btw, is the growth rate of the dictionaries used to track the other live nodes.  With 7 iterations of edge trimming, for example, they actually grow slightly larger than the original node set in a partition, but less than 2x its size in some spot checks.  I need to think more carefully about how that affects some of the asymptotic factors.)

As an example of the latter, with a partition initially containing 16384 nodes:

Code:
609 live nodes at start of iteration 6
Size of hop 2 dictionary: 3140
Size of hop 3 dictionary: 4940
Size of hop 4 dictionary: 6841
Size of hop 5 dictionary: 8873

That's 23794 total dictionary entries, or 1.45x the initial partition size, and at iteration 7, it's grown to 26384, or 1.61x.  It's not an exponential explosion, so I'm not worried about it invalidating the major part of the result, but it's the place where I or someone else should focus some algorithmic attention to reduce.

Update 2:  To run the Julia code, install Julia and then type:
Code:
include("partitioned.jl")

It's glacially slow.  For the impatient, reduce the problem size from 2^27 to something smaller first, like 2^20, and scale the partition accordingly.  (Note:  I use "N" to mean the size of one half of the bipartite graph, whereas John's formulation uses it to include both, so n=2^27 is equivalent to John's 2^28.)

Update 3:  Herp derp.  That dictionary growth was my stupidity - I forgot to exclude the edge back to the node that caused the insertion in the first place, so it's got a little bit of accidental exponential growth.  I'll fix that tomorrow.  That should get it back closer to the linear scaling I'd expected.

Update 4:  Fixed that above silliness with adding back inbound edges.  Much improved dictionary size, now in line with what it should have been:

Code:
647 live nodes at start of iteration 6
Size of hop 2 dictionary: 2803
Size of hop 3 dictionary: 3554
Size of hop 4 dictionary: 4152
Size of hop 5 dictionary: 4404

By the end of iteration 7, the sum of all dictionaries (for that run) was 15797, slightly less than the number of nodes in the original partition, so at least through 7 iterations, the space for dictionaries remains O(|P| log N).  Empirically speaking only, of course, since I haven't yet done that analysis as thoroughly as it needs to be done.
Julia code file on the website updated.

tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 02, 2014, 12:57:00 AM
#34

Quote
http://www.cs.cmu.edu/~dga/crypto/cuckoo/analysis.pdf

Very hackish proof-of-concept:

http://www.cs.cmu.edu/~dga/crypto/cuckoo/partitioned.jl

I'm releasing this well in advance of what I would normally do for academic work because I think it's worth pushing the discussion of proof-of-work functions forward fast -- crypto moves too fast these days -- but please be aware that it's not done yet and requires a lot more polish.  The bottom line is as I suggested above:  It successfully TMTO's but still requires something like 1-3% of the "full" graph space, because that's what gets passed to the solver after edge trimming.  I don't think this is the final word in optimizing for cuckoo cycle -- I believe there are some further nifty optimizations possible, though I think this gets the first primary attack vector down.

John, I'd love your feedback on whether this is clear and/or needs help, or if you find some of the handwavy O(N) parts too vague to constitute a real threat.  I plan to clean it up more, but, as noted, I figured that it's better to swallow my perfectionism and push the crappy version out there faster to get the dialogue rolling, since I don't have coauthors on it to push it in the right direction. Smiley

Feedback appreciated!

Thanks, Dave. I appreciate the quick release. Will start going over it tonight.
tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 02, 2014, 02:37:55 AM
Last edit: August 02, 2014, 03:06:16 AM by tromp
#35

Quote
Thanks, Dave. I appreciate the quick release. Will start going over it tonight.

You're on to something here!

In fact, I think you may be able to avoid the original union-find-like algorithm, and instead try to recognize 42-cycles starting from any D21 set. If a node in D21 has 2 or more neighbours in D20, then you can work your way back and try to find disjoint paths back to the same starting node in P. (It suffices to check disjointness within each Di.)

This approach sounds pretty similar to what Monien and Yuster/Zwick were doing, so I'm going to have to go back and study those papers in detail.

Some more notes:

If you have for instance |P| = N/1000, then it's not wise to try all 1000 subsets P. If there is a 42-cycle then you have a good chance of having one of its nodes in one of the first 100 subsets.

It's going to be interesting to analyze whether a bunch of ASICs implementing this approach would outperform the reference implementation on an FPGA plus a few hundred DRAM chips, taking into account both fabrication costs and power usage of all components involved.

I remain hopeful that the constant factor overhead of this TMTO may preserve CC's ASIC resistance.
dga
Hero Member
Activity: 737 | Merit: 511
August 02, 2014, 02:51:57 AM
Last edit: August 02, 2014, 06:36:22 PM by dga
#36

Quote
Thanks, Dave. I appreciate the quick release. Will start going over it tonight.

You're on to something here!

In fact, I think you may be able to avoid the original union-find-like algorithm, and instead try to recognize 42-cycles starting from any D21 set. If a node in D21 has 2 or more neighbours in D20, then you can work your way back and try to find disjoint paths back to the same starting node in P. (It suffices to check disjointness within each Di.)


Must zzz, but yeah - that's kind of the approach I was trying to figure out how to express/implement when I talked about a sampling-based approach working for this problem.  (See the last paragraph of my initial review post.)  It took me a long time to wrap my head around it, and I think you just put it better than I was able to, but it's that core idea of being able to start from a subset, prune it down, and then expand out and try to find cycles that participate in that subset.

If your subset is 20% of the nodes, for example, you're pretty likely to find the 42-cycle if it exists.

dga
Hero Member
Activity: 737 | Merit: 511
August 02, 2014, 06:34:17 PM
#37

Now that I've had a bit of time to digest it, it strikes me that a better way of phrasing my algorithm might be this:

Select |P| initial nodes.

Begin a breadth-first-search from each node n in P.

Upon reaching a terminal node (one with only one incident edge), remove the edge to that node, recursing as needed if that removal causes another node to become terminal, and so on.  If a node in P itself becomes terminal, remove it from the set and remove the outbound BFS chain from it.

For each level of the breadth, use one O(N) pass through the entire edge set by generating the edges and matching them against the leading edge nodes in the BFS tree.

This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.
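A hedged sketch of those level-by-level passes (terminal-node pruning and cycle bookkeeping elided; the edge oracle is the illustrative stand-in sketched earlier in the thread, not the real siphash): no adjacency structure is stored, and each BFS level costs one linear sweep in which every edge is regenerated and matched against the current frontier.

Code:
# illustrative edge oracle: endpoints of edge n, recomputed on demand
edge_endpoints(key, n, N) =
    (hash((key, 2n)) % (N ÷ 2), hash((key, 2n + 1)) % (N ÷ 2) + N ÷ 2)

function bfs_levels(key, N, P, levels)
    frontier = Set(P)                      # the |P| selected start nodes
    hops = Vector{typeof(frontier)}()      # D_1 .. D_levels
    for _ in 1:levels
        next = Set{eltype(frontier)}()
        for n in 0:(N ÷ 2 - 1)             # one O(N) sweep, edges regenerated
            u, v = edge_endpoints(key, n, N)
            u in frontier && push!(next, v)
            v in frontier && push!(next, u)
        end
        push!(hops, next)
        frontier = next                    # expand one hop per sweep
    end
    return hops
end

# e.g. bfs_levels(0x1234, 1 << 20, collect(0:1023), 7)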

btw - I wasn't actually clear on the definition of the CC PoW in one way:  Is a valid proof any sequence of unique edges that form a cycle, or must the cycle be completely node-disjoint as well?

tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 02, 2014, 07:21:13 PM
#38

Quote
This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.

Indeed. I checked that the Yuster/Zwick paper doesn't add anything relevant for us over the Monien paper (it extends results for dense graphs), so I'm studying Monien's "how to find long paths efficiently" paper now...

Quote
btw - I wasn't actually clear on the definition of the CC PoW in one way:  Is a valid proof any sequence of unique edges that form a cycle, or must the cycle be completely node-disjoint as well?

As the OP mentions, the verification checks that each node is incident to exactly two edges, so yes, the cycle must be node-disjoint.
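A hedged sketch of that degree check (illustrative names; a necessary condition only, since the full verifier also traces the cycle itself): given the 42 endpoint pairs recovered from the proof nonces, every node must appear on exactly two of them.

Code:
function node_disjoint_cycle(edges::Vector{Tuple{Int,Int}})
    deg = Dict{Int,Int}()
    for (u, v) in edges                  # count edges incident to each node
        deg[u] = get(deg, u, 0) + 1
        deg[v] = get(deg, v, 0) + 1
    end
    return all(==(2), values(deg))       # each node on exactly two edges
end

# e.g. node_disjoint_cycle([(1,2), (2,3), (3,1)]) == true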
tromp (OP)
Legendary
Activity: 990 | Merit: 1110
August 02, 2014, 11:05:20 PM
Last edit: August 02, 2014, 11:26:42 PM by tromp
#39

Quote
For each level of the breadth, use one O(N) pass through the entire edge set by generating the edges and matching them against the leading edge nodes in the BFS tree.

This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.

Hmm, seems we were overlooking the obvious.

You can just feed all the edges generated in each pass (having one endpoint already present in subset P or as a key in cuckoo_hash) to my cycle finder (lines 357-390 of cuckoo_miner.h, tweaked to ignore duplicates/2-cycles). It doesn't care where the edges come from or how they are located between successive D_i layers. It will even happily report cycles that bounce back and forth multiple times between layers.

This should be relatively easy to code...
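A hedged sketch of that (reusing the insert_edge! sketch from earlier in the thread; matched_edges is an assumed iterator over the (u,v) pairs found during the sweeps, not part of any existing code):

Code:
# feed every matched edge straight into the cycle finder, skipping
# duplicates; insert_edge! reports the length of any cycle an edge closes
function find_42_cycles(matched_edges)
    seen = Set{Tuple{Int,Int}}()
    parent = Dict{Int,Int}()
    for (u, v) in matched_edges
        (u, v) in seen && continue       # ignore duplicates / 2-cycles
        push!(seen, (u, v))
        insert_edge!(parent, u, v) == 42 && println("42-cycle found")
    end
end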
dga
Hero Member
Activity: 737 | Merit: 511
August 03, 2014, 12:15:11 AM
Last edit: August 03, 2014, 02:04:45 AM by dga
#40

Quote
- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025 N log N words to represent, of course...

The current code already trims the edges down to about 1.6%, in order to allow the cycle finding to run in the same memory footprint as the edge trimming. To run the cycle finding in only 2^14 words, you'd have to trim WAY more?!

Quote
The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

That would make the situation very similar to that of Momentum, where the linear TMTO applies to the original implementation using N=2^26 words, but not to your trimming version using N+2N/k bits (assuming it recomputes SHA hashes instead of storing them).

Quote
I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

For a linear TMTO of my original non-trimming Cuckoo Cycle implementation I will offer a $666 bounty.


I'll take you up on that. Smiley  (BTC address is in my signature).  It seemed worth doing the theory bits more solidly before trying to optimize a potentially asymptotically-wrong, or even big-constants-wrong, algorithm.  And I appreciate you extending the bounty to include that.

While nothing's perfect, there are a lot of coins that could learn from the way you're approaching the development of your PoW ideas.  Or maybe they shouldn't -- there are a few CPU/GPU/etc. hackers who might go out of business. ;-)

(btw, I re-tested with 8 iterations, and it, like basic edge trimming, reduces the set to about 1.8%.  I'm on battery, so I didn't want to test 9.  Dunno what I was thinking coding this in Julia.  So this is an effective TMTO down to 1.8% compared to the original.  Not perfect, but not too bad, either -- certainly enough to let it run in SRAM.)
