Author Topic: Layman's Journey to Understanding Zerocash  (Read 3493 times)
sulfurtank, Hero Member
December 14, 2015, 11:53:28 AM  #21


Wright explicitly stated that he wasn't claiming the script can loop, but that factors orthogonal to the scripting can implement the paradigm of looping. His point is the Bitcoin block chain stack is already Turing-complete. And on that point he is correct, because the entropy of the block chain state is unbounded. Anything that can be done with Ethereum due to Turing-completeness which is not an attack on scripting node resources, can also theoretically (in terms of asymptotic computational complexity theory) be done on the Bitcoin block chain.

I agree with the last sentence quoted here. What the Ethereum VM provides is a succinct and unified way of writing smart contracts, but these can be implemented on the Bitcoin blockchain as well. It's just more laborious and leads to a proliferation of standards (colored coins, Mastercoin, etc.).

But my point was that even then, the contractual logic, the reading/writing of data on the blockchain, and the Turing-completeness happen on a higher level than that of scripting.

Based on the transcription snippet, Craig Wright seems to envision a separate control stack that is used by the script, and I doubt he had the blockchain in mind.

Solutions are already present which build on top of Bitcoin or other such coins to provide smart contracts, m-of-n transactions, programmable features and more, for those who want to use more advanced scripting features which Bitcoin would be unable to support. The most fully featured example of this at present is Open Transactions. It is also token-less in the sense that no "coins" are minted in the use of the application, keeping it pure.
TPTB_need_war (OP), Sr. Member
December 14, 2015, 11:56:30 AM  #22

The reason I was forceful on this point is that I think aggressive territorialness is the antithesis of collaboration and reason. Szabo told him to go away and write a paper. Maxwell has treated me the same, both in cases where I had an error (Winternitz sigs) and in cases where I was correct (e.g. that CoinJoin can't be protected with blacklisting, and also my posts about how to cure selfish mining). I understand these guys can get annoyed with those who are learning and don't have their thoughts well organized, or who commit Dunning-Kruger ad nauseam. Wright didn't seem to be abusive, at least in the portion I viewed.

So I have a personal distaste for the "get your toe off my lawn" attitude. I need to remember to contain this urge, as any emotional motivation is going to drive irrationality.

Anyway, I want to get back to the topic of this thread, so I hope to make this my last post on this diversion.

P.S. I was so sleepless when I wrote the post that pointed out the error that I was afraid I had screwed up my thinking. I was rushing and didn't even have enough time to review all the evidence before posting. And I almost messed up the articulation and started to conflate issues myself. I got lucky. Whew. I am trying to do too much. I will eventually make a big mistake too.

TPTB_need_war (OP), Sr. Member
December 15, 2015, 10:02:38 AM (last edited 11:18:24 AM)  #23

I thought maybe I was going to set this aside and go back to programming, but something that might be important has caught my attention and I want to settle this issue before putting this thread on my back burner.

There is an incongruence between my explanation and conceptualization in the NP section of my incomplete second installment (at least what is there as of now, until I edit it again to finish it), and it was making me confused. This incongruence might be an insight into solving the $1 million bounty for proving the relationship between P and NP. Probably not (given that many people smarter than me have thought about this problem extensively), but I want to correct either my understanding or complexity theory, whichever is wrong.

The NP class includes problems such as asking whether there is a path shorter than a specific input threshold distance in the "traveling salesman problem". Computing just the NP-hard (not NP) "traveling salesman problem" with an exhaustively precise, globally optimum search is deterministic and in EXP (i.e. intractable exponential time; actually factorial N! for a naive brute-force search, but improved algorithms are EXP, currently at best O(2^N), where N is the number of cities input to the computation). Deterministic because it is guaranteed to find the optimum and terminate; the problem is that the time it takes is asymptotically intractable and can't realistically be computed for very large N. There are other estimation algorithms for this problem that can produce less than perfectly optimum answers in P (i.e. tractable polynomial time). These estimation algorithms are nondeterministic because they can continue searching forever, looking for an improvement over the result computed so far. The nondeterminism is whether these estimation algorithms will ever find an improved result or not (relative to the current best result found), so it is undecidable when to terminate the algorithm.

Converting the NP-hard problem to an NP problem by asking whether the result is more optimum than some input distance threshold is analogous to asking for a consecutive number of 0 digits in the expansion of π, in that it is undecidable (i.e. not knowable a priori) how many steps the algorithm needs to execute to find the requested threshold, or whether the threshold will ever be found. Both problems are in the NP complexity class because, if a threshold result is found, it can be verified in P time (I need to explain more clearly that NP only requires verification in P).

My conceptualization is that the global search of the perfect deterministic algorithm has been converted to a localized perspective that is sampled over time by the nondeterministic estimation algorithm, and thus aliasing error results. By localization I mean observe how the various estimation algorithms sample some local heuristic, such as the number of times a city has already been visited, instead of performing a deterministic globalized search. Thus the conversion from the perfectly optimized total/global containment rule (c.f. my upthread discussion about that), which is finite in N and thus possesses a bounded, finite entropy (yet requires intractable EXP time), to a localized containment rule opens the entropy to unbounded recursion (analogously to how the expansion of the digits of π is unbounded recursion) and is thus nondeterministic (yet requires only tractable P time). I should explain in the future how integer factorization is congruent to this model of explanation/conceptualization I have employed.
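To make that threshold framing concrete, here is a minimal sketch (in Python; the function names are my own, purely illustrative) of the decision form of the traveling salesman problem: answering whether a tour at or below a threshold exists takes exponential time by brute force, while verifying a proposed tour certificate against the threshold is a simple polynomial-time check.

Code:
from itertools import permutations

def tour_length(dist, tour):
    # Sum the closed-tour length over a symmetric distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def exists_tour_below(dist, threshold):
    # Decision version (in NP): brute-force search over all (n-1)! tours.
    n = len(dist)
    return any(tour_length(dist, (0,) + perm) <= threshold
               for perm in permutations(range(1, n)))

def verify_tour(dist, tour, threshold):
    # Certificate check (in P): given a candidate tour, verify it in O(n) steps.
    return sorted(tour) == list(range(len(dist))) and tour_length(dist, tour) <= threshold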

So conceptually I do understand the NP computational complexity class. It is proven that global optimization NP problems are NP-complete because the verification of such a nondeterministic global optimization problem is always reducible to a deterministic polynomial computation. Afaics an information-theoretic Pedersen commitment appears to satisfy the requirement of NP in that it can be verified with an algorithm in P. And information-theoretic security is predicated on the entropy of the solution space being unbounded (i.e. every inversion of the commitment is equiprobable), so the valid solution is the one that is revealed for verification.
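For readers who haven't seen one, here is a minimal sketch of a Pedersen commitment (Python; toy parameters far too small to be secure, and only for illustration): C = g^m · h^r mod p hides m information-theoretically, and verification of an opening (m, r) is a trivial polynomial-time recomputation.

Code:
import secrets

# Toy group parameters (illustration only; real use needs a large prime-order group
# and an h whose discrete log with respect to g is unknown to everyone).
p = 1019                  # prime modulus, p = 2q + 1
q = 509                   # prime order of the subgroup
g = 4                     # generator of the order-q subgroup
h = pow(g, 77, p)         # second generator (its dlog is known here, so this toy is NOT binding)

def commit(m, r=None):
    # Pedersen commitment C = g^m * h^r mod p with blinding factor r.
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m, p) * pow(h, r, p)) % p, r

def open_check(C, m, r):
    # Opening verification runs in polynomial time: recompute and compare.
    return C == (pow(g, m, p) * pow(h, r, p)) % p

C, r = commit(42)
assert open_check(C, 42, r)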

Even one of the prominent figures in the field pointed out that any meaningful advance would provide paradigm-shifting insight into complexity theory; otherwise it wouldn't be interesting and useful.

So where I am driving with this line of conceptualization is that the "traveling salesman problem" is not NP-hard as claimed, because its perfectly optimum solution is computable in EXP but the perfectly optimum solution for information-theoretic inversion is undecidable!! The definition of NP-hard is that these problems should be at least as hard to compute as every NP-complete problem. The error in complexity theory seems to be a conflation between the hardness of finding a result and the hardness of verifying a result. In other words, there is no relationship between P and NP! Because I first conceptualized nondeterminism as unbounded entropy, I was able to see this incongruence (that is, assuming I am not missing something and my conceptualization is not incorrect).



The statements "verifiable in polynomial time by a deterministic Turing machine" and "solvable in polynomial time by a non-deterministic Turing machine" are totally equivalent, and the proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3.

7.3 The Class NP

---8<---

Theorem 7.17

---8<---

We show how to convert a polynomial time verifier to an equivalent polynomial time NTM [nondeterministic Turing machine] and vice versa. The NTM simulates the verifier by guessing the certificate.
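To illustrate the quoted equivalence concretely, here is a minimal sketch (Python; my own toy example using SUBSET-SUM, not taken from Sipser): the verifier checks a certificate in polynomial time, while a deterministic machine can only simulate the NTM's "guess" by enumerating all exponentially many certificates.

Code:
from itertools import combinations

def verify(nums, target, certificate):
    # Polynomial-time verifier: does the certificate (a set of indices) sum to target?
    return all(0 <= i < len(nums) for i in certificate) and \
           sum(nums[i] for i in certificate) == target

def solve_by_enumeration(nums, target):
    # Deterministic simulation of the guessing NTM: try every certificate (2^n of them).
    for k in range(len(nums) + 1):
        for certificate in combinations(range(len(nums)), k):
            if verify(nums, target, certificate):
                return certificate
    return None

assert solve_by_enumeration([3, 34, 4, 12, 5, 2], 9) is not None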

Thus we can see that what is proved is insufficient—if it is correct that verifying a Pedersen commitment is in NP—because an information-theoretic Pedersen commitment is always undecidable: every guess is equiprobable to be correct, so there is no unique solution, and thus the verification can't be converted to a guess on an NTM.

So let's assume that Pedersen commitments are not supposed to be in NP. How could we change the definition of NP so that it would exclude Pedersen commitments? We would have to require that every verification can be converted to a guess on an NTM in order to exclude Pedersen commitments. In that case, NP is not complete for polynomial-time, deterministic verification. So what is this pointing to as a deeper insight into computational complexity theory?

The unifying concept (generative essence) appears to be that NP arises not from problems being hard, but from their being undecidable due to unbounded entropy. This leads to a conjecture that intractable problems (at least given the machine model available) are equivalent to unbounded entropy and thus can't be in P. Thus Donald Knuth's intuition expecting a bounded solution would be incorrect.

This would tell us something very profound about our universe. I speculate rather brashly that it ties into the Second Law of Thermodynamics that the entropy of the universe is always trending to maximum. So the edge of our universe need only be intractable to be unbounded. The implication is we can't contain infinity within itself and be total. Just as infinite speed-of-light would collapse the past, present, and future into an infinitesimal point in spacetime, if intractable problems were not unbounded entropy then we could compute the edge of the universe and thus compute nothing.

Now to turn that into some kind of defensible proof, even if it is correct which is not certain yet. I look forward to replies educating me on my myopia.

Edit: I realized my mistake. The Pedersen commitment is not binding. So this means the computation of a guess is deterministic, because any guess always succeeds. And thus Pedersen commitments are actually in P, not NP. I still believe there is some insight to glean from the observation (conjecture or proven?) that converting deterministic intractable problems into tractable problems makes them nondeterministic and unbounded in entropy, but I am not yet seeing how to frame it in a way that proves anything. Perhaps proving that the conversion must be unbounded would prove P ≠ NP. I need to contemplate that.

P.S. I think I roughly conceptualize how Zerocash works. The program to prove and verify is converted to a boolean circuit and encoded into a quadratic span program, and the pairings (which originate from the insight of the ⊕P complexity class, due to the completeness of the problem of computing the permanent in the binary case) are obfuscated in ECC fields, so that the computation is homomorphic and the proof only validates if the computation matches the public key. In other words, it is impossible to factor the encoded pairings such that one could create a public key for a program and then fake having run the program when one really didn't. I will explain this more clearly in the future when I have more time to study. The ECC math is beyond what I was educated to know, so it will take me some time to digest it.
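For reference, the "quadratic" encoding mentioned above (in the Pinocchio/Zerocash line of work it is usually called a quadratic arithmetic/span program) reduces circuit satisfiability to a single polynomial divisibility check. This is my own summary of the standard construction, not text from the Zerocash paper; in LaTeX notation:

\Big(\sum_i a_i A_i(x)\Big)\cdot\Big(\sum_i a_i B_i(x)\Big) \;-\; \sum_i a_i C_i(x) \;=\; H(x)\, Z(x)

where the a_i are the circuit's wire values, the polynomials A_i, B_i, C_i encode the multiplication gates, Z(x) is the target polynomial vanishing on the gate evaluation points, and the pairing-based proof checks this identity at a secret evaluation point hidden inside the elliptic-curve groups.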

TPTB_need_war (OP), Sr. Member
December 15, 2015, 05:07:51 PM (last edited 07:41:16 PM)  #24

I still believe there is some insight to glean from the observation (conjecture or proven?) that converting deterministic intractable problems into tractable problems makes them nondeterministic and unbounded in entropy, but I am not yet seeing how to frame it in a way that proves anything. Perhaps proving that the conversion must be unbounded would prove P ≠ NP. I need to contemplate that.

On proving P ≠ NP, I do not understand why it can't be proven that the algorithmic Kolmogorov complexity of the "traveling salesman problem" is at least O(n²(2^n − n − 1)), i.e. an exponential lower bound, which is just below the O(n² · 2^n) of the Held-Karp algorithm, the best known exact algorithm.  Huh

My logic is so simple that I must be missing something.

At best the algorithm can cache, so that the optimal computation for each grouping of cities will only be computed once (i.e. the number of unique subsets); by Reed's law we get 2^n − n − 1 groupings of cities. And for the first leg of the path possibilities, Metcalfe's law tells us there are n² possible unique links between cities.

Afaics, it is definitionally impossible to compute in fewer algorithmic steps than the Kolmogorov complexity, because that is the minimum information content of the algorithm, i.e. the entropy.

Even if we applied some heuristic such as grouping cities in clusters, this would not be an absolute assurance of the optimum path. There is no way to get the assurance while reducing the deterministic algorithm complexity below the Kolmogorov complexity.
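For comparison with the counting argument above, here is a minimal sketch of the Held-Karp dynamic program (Python; my own compact version): it stores one optimal value per (subset of cities, end city) pair, which is exactly where the O(n² · 2^n) time and O(n · 2^n) space bounds come from.

Code:
from itertools import combinations

def held_karp(dist):
    # Exact TSP by dynamic programming over subsets; O(n^2 * 2^n) time.
    n = len(dist)
    # best[(S, j)] = shortest path from city 0 visiting exactly subset S and ending at j in S.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j] for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in full)

# Tiny example: 4 cities on a square (optimal tour length is 4).
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
assert held_karp(d) == 4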

Since this algorithm is NP-hard and the related NP-complete problem can be reduced to it, every problem in NP has the same lower bound. Independent Set on a planar graph is listed as an exception to NP-complete problems being EXP, but that is for graphs whose edges can't intersect, which obviously the "traveling salesman problem" can't be reduced to without some exponential blowup in the remapping, because the links between cities can overlap.

All the NP-hard problems (and the NP-complete problems reduced to them) revolve around exhaustively searching an exponential set of unique possibilities for an optimum. The Kolmogorov complexity can't be reduced to polynomial, because there can't exist a shortcut that destroys information and is still optimum. Once the computation is below the threshold of the Kolmogorov complexity, the result can't be deterministically optimum.

Here is a nearly duplicate proof, but he fails to multiply by n².

One of the similar attempts at a proof I've seen is this one, which also argues for a lower bound on the algorithmic complexity. And in his 2014 update, he mentions Kolmogorov complexity as the justification.

Another paper, on a quick glance, appears to try to prove that all NP-complete problems reduce to a clustering problem that can't be solved in polynomial time.

This is very well written.

P.S. I was amazed to stumble onto an attempted proof by our very own cohort MPex.


Edit: A possible reason my idea above doesn't work as a proof, from the researcher who discovered the NP complexity class:

However, all attempts to find even super-linear lower bounds for unrestricted Boolean circuits for "explicitly given" Boolean functions have met with total failure; the best such lower bound proved so far is about 4n. Razborov and Rudich [27] explain this failure by pointing out that all methods used so far can be classified as "natural proofs", and natural proofs for general circuit lower bounds are doomed to failure, assuming a certain complexity-theoretic conjecture asserting that strong pseudorandom number generators exist. Since such generators have been constructed assuming the hardness of integer factorization, we can infer the surprising result that a natural proof for a general circuit lower bound would give rise to a more efficient factoring algorithm than is currently known.

The failure of complexity theory to prove interesting lower bounds on a general model of computation is much more pervasive than the failure to prove P ≠ NP. It is consistent with present knowledge that not only could Satisfiability have a polynomial-time algorithm, it could have a linear time algorithm on a multitape Turing machine. The same applies for all 21 problems mentioned in Karp's original paper [21].

More on that point:

Now the thought that underlies the work of Razborov and Rudich is one that you have probably had yourself: it seems to be extraordinarily hard to tell the difference between a random Boolean function of low complexity and a purely random Boolean function. In fact, so much does this seem to be the case, that one might even hazard a guess that some theorem along the following lines is true: no polynomial-time algorithm can tell the difference between a random Boolean function of low complexity and a purely random Boolean function.

Except my logic wasn't employing any probabilistic argument, as apparently required by natural proofs, nor was I stating that NP problems lack some simple feature that P problems have:

[...] consider a commonly envisioned proof strategy for proving P ≠ NP:
Formulate some mathematical notion of "discrepancy" or "scatter" or "variation" of the values of a Boolean function, or of an associated polytope or other structure. [...]
Show by an inductive argument that polynomial-sized circuits can only compute functions of "low" discrepancy. [...]
Then show that SAT, or some other function in NP, has "high" discrepancy.
Our main theorem in Section 4 gives evidence that no proof strategy along these lines can ever succeed.

This is a more comprehensible explanation:

http://www.ams.org/journals/notices/201111/rtx111101586p.pdf

TPTB_need_war (OP), Sr. Member
December 16, 2015, 09:23:51 AM (last edited 06:28:57 PM)  #25

Sharing one more mental note on P vs. NP complexity before putting this thread on my back burner. When I come back to this in the future, I will return to the attempt at a layman's explanation. Einstein said (paraphrased) that if one can't explain their science to a layman, then they don't thoroughly understand their science (I still remember the public library in Aptos, CA, U.S.A. where I read that in 1995, when I was living in my car and sleeping in the library's parking lot). I am just rushing at the moment to write down (upload) my thoughts, so I can download my brain dump easily in the future.

(Frankly, I am mentally worn out and physically ill from the past few days of doing this, most certainly due to my illness, which prevents me from going full bore as I used to. I can't sustain this high level of hours upon hours of research. It seems the light from the monitor hurts not only my eyes; I feel it penetrating my brain. My brain and stomach hurt. I need a break, or to pace myself.)

Factorization is inherently less hard than NP-hard problems, and developing some generative-essence conceptual insight into why may be helpful. (Note I made two minor edits yesterday to Wikipedia on this issue.)

Note that the security of all non-quantum-resistant public-key cryptography, i.e. RSA (based on factoring) and elliptic curve cryptography (based on discrete logarithms; this underlies Cryptonote, RingCT, CT, and Zerocash), falls to the same style of quantum attack. Quantum-secure public-key cryptography includes, for example, the Winternitz signatures that I recently programmed and other candidates, which are covered in Bernstein's book and the post-quantum cryptography website.
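Since hash-based signatures keep coming up in this thread, here is a minimal sketch of the Winternitz one-time-signature idea (Python; a toy illustration of the hash-chain construction with the standard checksum, not the author's implementation and not production-ready):

Code:
import hashlib, os

W = 16                      # Winternitz parameter: each chain covers one base-16 digit
L1 = 64                     # 64 base-16 digits of a SHA-256 message digest
L2 = 3                      # checksum digits: max checksum 64 * 15 = 960 < 16^3
L = L1 + L2

def H(x, times):
    for _ in range(times):
        x = hashlib.sha256(x).digest()
    return x

def digits(msg):
    # Base-w digits of the message digest, followed by the checksum digits.
    d = []
    for byte in hashlib.sha256(msg).digest():
        d += [byte >> 4, byte & 0x0F]
    checksum = sum(W - 1 - x for x in d)
    for _ in range(L2):
        d.append(checksum % W)
        checksum //= W
    return d

def keygen():
    sk = [os.urandom(32) for _ in range(L)]
    pk = [H(s, W - 1) for s in sk]          # each secret chained w-1 times
    return sk, pk

def sign(sk, msg):
    return [H(sk[i], d) for i, d in enumerate(digits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i], W - 1 - d) == pk[i] for i, d in enumerate(digits(msg)))

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))

A key pair like this may sign only one message; reusing it exposes enough chain values to enable forgeries, which is why such schemes are wrapped in Merkle-tree constructions for many-time use.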

I believe the following explanation is driving towards a proof that P ≠ NP. I may or may not be very close and may just need to catch up on sleep and free time so I could formulate the necessary holistic understanding of what needs to be proved.

I described in my prior post that NP-hard problems are global-walk optimizations that require enumerating every combination of nodes. The only optimization is to eliminate redundant walks that are not unique in the Kolmogorov complexity. Just adding one city in the "traveling salesman problem" can totally change the result and all the computations. There is no periodic structure nor structural localization that isn't a heuristic. I defined precisely what heuristic means: it means the Kolmogorov complexity considered is less than that of the optimum search, so the result (which has discarded information content necessary for an assured optimization) can't be deterministic (and such heuristic estimations are how we convert NP-hard problems into NP-complete problems, in the sense of finding a candidate in polynomial time, and also apparently how the human brain makes decisions about such optimization problems). These suboptimal heuristic estimations can often be quite close to fully optimized, but some cases of input cause the estimation to veer far from optimized (and I believe it may even be undecidable whether all such failure cases can be enumerated).

My conjectured abstraction of NP-hard problems is that they are co-inductive! I need to prove that. Co-induction means the structure is unknown until finality is reached. Co-induction is the process of peeling away an onion to discover the obscured (opaque) structure, and is the mathematical dual of induction, which is the process of recursively adding to the observable (transparent) structure. Solving NP-hard problems is the process of assembling the structure co-inductively. I believe this abstract insight might be one that can prove P ≠ NP.

Whereas factorization is the process of disassembling the inductive process of multiplication. Conceptually we can see that structural disassembly is much less computationally complex than structural assembly, because disassembly can exploit global structure, whereas assembly has no definitive global structure until finality (of the global walk).

A number has a periodic global structure, where the periods are any of the divisors we want to divide that number by. That period cycles over and over through the numerator, and the number of cycles of the period is the output of the division operation. Thus it is not surprising that factorization can be parallelized on a quantum computer, because the problem of factorization can be converted into finding the period of a function, where the Fourier transform to the frequency domain can leverage the 2^n parallel states of quantum mechanics and the decoherence of the adiabatic amplitude.
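To make the period-finding connection concrete, here is a minimal classical sketch (Python; tiny numbers only, and finding the order by brute force is exactly the exponential step the quantum Fourier transform replaces) of the reduction Shor's algorithm exploits: find the multiplicative order r of a random a modulo N, and if r is even with a^(r/2) ≢ −1 (mod N), then gcd(a^(r/2) − 1, N) is a nontrivial factor.

Code:
from math import gcd
import random

def order(a, N):
    # Multiplicative order of a mod N by brute force (the expensive step).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order(N):
    # Classical post-processing of the factoring-to-order-finding reduction (N odd composite).
    while True:
        a = random.randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                                  # lucky guess already shares a factor
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2, N) - 1, N)      # guaranteed nontrivial in this case

print(factor_via_order(15))    # prints 3 or 5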

The trouble is that if we measure the computer's state, we see only one candidate solution x, with probability depending on its amplitude α_x. The challenge is to arrange the computation in such a way that only the x's we wish to see wind up with large values of α_x. For the special case of factoring, Shor [66] showed that this could be done using a polynomial number of operations—but what about for NP-complete problems?

Although the above-quoted paper goes into some wild theories about the potential for solving NP-hard problems on quantum computers, afaics those proposed ideas do not seem to factor in the exponential blowup in costs and/or space.

This distinction is also evident in verification. Whereas the verification of whether a number is prime (i.e. has nontrivial factors or not) is a deterministic exact answer in polynomial time, deterministic exact verification of an NP-hard problem is itself NP-hard (and is in EXP), and deterministic verification in P (of a lower bound for the optimum) is only an estimation.

Thus I think this answer from Alon Amit is short-sighted, in that the search space of factorization does have structure, and he seems to assert (quite brashly) that structure is irrelevant. And, similar to my experience at cs.stackexchange where some professors and PhD students sometimes get basic concepts wrong (and are closed-minded to correction!), I noted Alon's answer was upvoted by such prominent people. To my horror, I visited cs.stackexchange again today while researching this post, and oh my lord, the errors in logic there, which all seem to stem from not having a solid grounding in the definitions of these classes and from the definition of nondeterminism being muddled (e.g. by NFAs). I recommend being very skeptical of anything you read on that site. Whereas most of what Thomas Pornin writes on the cryptography stackexchange is astute and helpful (although I saw one error he made also).


Edit: I haven't developed this thought well yet, but it seems that co-P and other such codomain categorizations in complexity theory may be violating category theory. Perhaps that is what my insight will illuminate. Afair, it isn't possible for a member to be both in its domain and codomain. Perhaps NP should be the codomain of P. I need to study all that more carefully. Probably I am incorrect on that, as one would assume many smart people have already looked at it, unless it has a non-obvious twist of insight or logic.

TPTB_need_war (OP), Sr. Member
December 16, 2015, 07:52:02 PM  #26

Btw, why I am interested in Zerocash:

As for a potential solution to the IP address obfuscation issue, there is a white paper that I was first introduced to by jl777 this year and now someone else has asked me about it in a PM:

http://dedis.cs.yale.edu/dissent/

http://bford.info/pub/net/panopticon-cacm.pdf

Section 3 explains very well some of the major attacks against the onion routing (OR) in Tor and I2P.

The problems with this Dissent protocol, some of which they admit in the section "5. Challenges and Future Work", are:

  • It requires N² communication for N participants. If the entire network isn't included in one grouping, then the next problem results. They offer a federated-server "solution", but I believe this puts jamming (and anonymity?) at risk from collusion of the servers.
  • As for any mixnet (including OR and Cryptonote), if there are multiple groupings (or rings) then users can be unmasked by correlating which groups they participated in (a form of intersection attack). The same problem results from one grouping and the fact that different users are participating at different times. This is a fundamental problem for mixnets (including on-chain mixes such as Cryptonote) that caused me to realize the problem was unsolvable.
  • Anti-jamming is based on an identity. Per the criticism I made against CoinJoin in 2013, we are creating anonymity, so identity can't be ensured. Perhaps we could tie identities to specific UTXOs and confiscate those of participants who jam. I would need to look into the details of that change to their design, as to whether it would violate the anonymity (and I assume yes it would until shown otherwise, because of what I've learned over the past 2 years).
  • It has a simultaneity requirement (similar to Dash's mixing), more so than Tor or I2P.

Why use this complex mixnet stuff (that won't really work well) when Zerocash elegantly solves the problem and is entirely autonomous? To quote smooth (he was referring to Cryptonote, but he should have been referring to Zerocash), "a pigeon could carry your transaction to the block chain and it wouldn't matter". Let me rephrase that: "a truck with your name painted on the side could carry your transaction to the block chain and it wouldn't matter". With Zerocash, everything is hidden, so even if you put your name in the transaction packets it wouldn't affect your anonymity, because no one can see any of the details of the transaction. All they will see is that you put your name on this encrypted blob of data. So you are worried about a compromised Zerocash setup key leading to hidden inflation of the money supply (I was too), but it doesn't affect the anonymity in any case. Even that has solutions, e.g. make multiple sets of keys and sign all transactions with more than one signature, so you have more assurance that not all of the keys were fraudulently generated. Or run Zerocash only as a mixer and periodically net out all the coins in/out, to be sure it is not creating coins out of thin air.

YarkoL, Legendary
December 16, 2015, 08:36:38 PM  #27


Zerocash + a quantum-resistant signature scheme is indeed a kind of holy grail of cryptofinance. (That's why I'm on this magical mystery bus.)

That being said, Tor/i2p is a "pretty good" privacy solution too at the moment, and simple Coinswap-style anonymity is afaics quantum-resistant (and that is the current goal of my development efforts).

TPTB_need_war (OP), Sr. Member
December 16, 2015, 09:21:37 PM  #28

Coinswap requires trusting the intermediary Carol. You could have every participant act as an intermediary for another set of participants, and then run the transactions across numerous swaps so that it becomes one big mix, but this could, I think, be jammed. I don't want to be down on something you are developing. I haven't studied Coinswap in detail, but on a quick glance my opinion of it hasn't improved. You are correct that the anonymity is quantum resistant, but only because there isn't any cryptography in the anonymity; rather, it is trust in the intermediary.

Zerocash is not quantum resistant as currently formulated.

I don't know if you followed the link to Bernstein's paper, where he makes the point that quantum computing may be too expensive to be more of a threat than classical computing.

YarkoL, Legendary
December 16, 2015, 09:32:39 PM  #29


Coinswap is trustless. The parties send refund transactions to each other off the blockchain, to be published later if someone disappears or tries to cheat. The protocol is a little complicated and ugly, but the simplicity of the idea strikes me as beautiful.

Zerocash is not quantum resistant as currently formulated.

I don't know if you followed the link to Bernstein's paper, where he makes the point that quantum computing may be too expensive to be more of a threat than classical computing.


I know Zerocash is not quantum resistant, and if memory serves, you have been working to rectify that for some time already  Wink


TPTB_need_war (OP), Sr. Member
December 16, 2015, 09:56:26 PM  #30

Coinswap is trustless in terms of not losing funds, but afaics the anonymity comes only from trusting that Carol doesn't reveal that Alice is paying Bob, and from a high enough flow of simultaneous swaps through Carol that an external observer sees a mixnet.

Adding quantum computing resistance to Zerocash or Cryptonote rings involves some very intense and experimental math. Some of it hasn't been invented yet. I don't know if I am capable of making that contribution (certainly not without at least another year of study). The quantum resistance of supersingular-isogeny pairings of elliptic curves is attained because they form a non-abelian structure. This, I assume, kills Shor's algorithm, because the commutativity property is lost. But I need to study this more deeply before I can comment intelligently on this matter and other potential methods. I do think it is likely that someone will be able to produce quantum-computing-resistant versions of Zerocash and the underlying zk-SNARKs.

YarkoL, Legendary
December 17, 2015, 11:46:27 AM  #31

Coinswap is trustless in terms of not losing funds, but afaics the anonymity only comes from trust that Carol doesn't reveal that Alice is paying Bob.

Thank you for making that distinction. It is something that can (I think) be alleviated by proper incentives and rendered relatively harmless by other means, mainly safety precautions.

In any case, I like privacy solutions that are implemented via protocol rather than by algorithm, as I think they are closer to the philosophy of decentralization.


Adding quantum computing resistance to Zerocash or Cryptonote rings is some very intense and experimental math. Some of it hasn't been invented yet. I don't know if I am capable of making that contribution (certainly not without another year of study at least). The quantum resistance for super singular isogenies pairings of elliptic curves is attained because it forms a non-abelian group. This I assume kills Shor's algorithm for factoring because the commutativity property is lost. But I need to study this more deeply before I can comment intelligently on this matter and other potential methods. I do think it is likely that someone will be able to produce quantum computing resistant versions of Zerocash and the underlying zk-SNARKs.

Starting to get way over my head again. A rank novice in post-quantum cryptography such as myself usually gets to hear about hash-based schemes like Lamport signatures first. Ignoring their negatives (size, one-time use) for now, what sort of benefits do you see in employing purely mathematical means (such as EC isogenies) instead?


thaaanos, Sr. Member
December 18, 2015, 12:40:27 AM  #32

Hello, glad to see your ideas getting some flesh. I will look deeper, but here are a few thoughts on your first posts; I will return with more.

I too am not very happy with your approach to determinism; I think you need to look into denotational semantics to streamline your thoughts on that matter.

As for the layman's journey, I think Gregory Chaitin's approach to complexity in "Meta Math!: The Quest for Omega" may give you some ideas. An interesting read by an interesting person, btw.
YarkoL, Legendary
December 18, 2015, 11:13:33 AM  #33


As for the layman's journey, I think Gregory Chaitin's approach to complexity in "Meta Math!: The Quest for Omega" may give you some ideas. An interesting read by an interesting person, btw.

Other good books dealing with computational complexity, mostly as it relates to AI:

Douglas Hofstadter: Gödel, Escher, Bach (a classic!)
Rudy Rucker: Infinity and the Mind (transfinite sets, Gödel's proof)
Rudy Rucker: Mind Tools (complexity, Chaitin's Omega)
Roger Penrose: The Emperor's New Mind (anti-AI; the first part of the book is about the incompleteness theorems)

HCLivess, Legendary
December 18, 2015, 01:12:36 PM  #34

I understand that these are some theoretical concepts which have never been implemented in any practical way. But if this is a good outlet for your ADHD, please carry on.

TPTB_need_war (OP), Sr. Member
December 18, 2015, 04:02:13 PM (last edited 04:27:39 PM)  #35

I understand that these are some theoretical concepts which have never been implemented in any practical way.

Most definitely, everything being written about here is running in the practical software you use every day. NP algorithms are used extensively in software of all types.

Perhaps you do not realize how idiotic and embarrassingly ignorant your statement is?

But if this is a good outlet for your ADHD, please carry on.

Well, that explains your irrationality, doesn't it? Carry on attacking people and not even bothering to understand.

How ashamed will you be in the future if I win the $1 million for proving P ≠ NP and simultaneously produce the coin that overtakes Bitcoin? What will you say then, fool? Will you hide under a rock?

I wonder what motivates such gorilla-like behavior. What did I do to you to deserve your irrational temper tantrum?

Financisto, Hero Member
January 18, 2016, 07:27:54 AM  #36

@TPTB_need_war

Very good (tech) articles/posts. A delight to read. Thank you!

BTW

Are those projects still alive?

http://zerocash-project.org

http://zercoin.org

http://z.cash

I haven't seen any action since 2014...  Undecided

They promise so much improvement towards privacy and anonymity.

Are there any active github repositories?

I just found this (inactive) one:

https://github.com/Zerocash

Is it all dead?  Huh

AlexGR, Legendary
May 08, 2016, 10:38:29 AM  #37

I still believe there is some insight to glean from the observation (conjecture or proven?) that converting deterministic intractable problems into tractable problems makes them nondeterministic and unbounded in entropy, but I am not yet seeing how to frame it in a way that proves anything. Perhaps proving that the conversion must be unbounded would prove P ≠ NP. I need to contemplate that.

On proving P ≠ NP, I do not understand why it can't be proven that the algorithmic Kolmogorov complexity of the "traveling salesman problem" is at least O(n²(2^n − n − 1)), i.e. an exponential lower bound, which is just below the O(n² · 2^n) of the Held-Karp algorithm, the best known exact algorithm.  Huh

My logic is so simple that I must be missing something.

Say I want to crack your 4-digit PIN (0000–9999).

No matter what I do, I have to generate the numbers, try them, and verify them.

Simply verifying the answer is just a subset of generating/trying/verifying candidates. So verification time alone is always less than trying candidates AND verifying.

Time spent (cracking + verifying) > (verifying), because verifying alone is at least one step fewer.

So P ≠ NP.
YarkoL, Legendary
May 08, 2016, 11:16:31 AM  #38


So verification time alone is always less than trying candidates AND verifying.

That holds only if you verify a single number again and again.

AlexGR, Legendary
May 08, 2016, 11:52:44 AM  #39


So verification time alone is always less than trying candidates AND verifying.

That holds only if you verify a single number
again and again.

I'm not sure I'm following. My basis is this:

Quote
https://en.wikipedia.org/wiki/P_versus_NP_problem

If the solution to a problem is easy to check for correctness, is the problem easy to solve?

So, I can check for correctness pretty easily, but to solve the problem it could take generating, inputting, and verifying up to 10,000 tries (or, if I were looking for an 8-character DES password, a lot more), versus just checking for correctness.

On one hand I must have a process to increment the counter, input something, check something, and restart the loop; on the other hand I only have a process of inputting/checking the one final answer.

Say the PIN is 6666, or even 0000: in one process I've gone through many more steps generating and checking candidates; in the other I already know the PIN is 6666 or 0000 and I'm just checking it to see whether the computational problem (cracking the PIN) is solved.

The process of solving the computational problem (= finding the right PIN) takes more time than simply verifying that the problem is solved with the right answer (= checking the PIN). If that happens for even one problem, then how can P = NP hold as a broader "rule"? It can't.
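A minimal sketch of the two processes being compared (Python; this only illustrates the fixed 4-digit example above, while the formal P vs. NP question is about how the gap between searching and checking scales with input size, so a fixed-size example is not itself a proof):

Code:
import secrets

def check(pin, guess):
    # Verification: a single comparison, independent of how the guess was produced.
    return guess == pin

def crack(pin):
    # Solving: generate-and-test over the whole 10^4 keyspace in the worst case.
    for n in range(10_000):
        candidate = f"{n:04d}"
        if check(pin, candidate):
            return candidate

pin = f"{secrets.randbelow(10_000):04d}"
assert crack(pin) == pin    # finding may take up to 10,000 checks; verifying takes one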