Bitcoin Forum
Author Topic: Layman's Journey to Understanding Zerocash  (Read 3492 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic.
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 11, 2015, 05:20:52 AM
Last edit: December 13, 2015, 10:43:27 AM by TPTB_need_war
 #1

Join me on a technical journey to try to understand Zerocash (not Zerocoin, which is a separate project, although Zerocash does name its anonymous units zerocoins), starting from first principles.

I am curious and want to broaden and deepen my understanding of the technologies in Zerocash. I want to share my learning experience and perhaps also obtain collaboration with the community in learning and elucidating in layman's terms.

Here is a high-level enumeration of the technologies we need to understand, listed in reverse order of study, i.e. each item requires an understanding of the items below it:

  • Zerocash
  • zk-SNARKS
  • quadratic arithmetic programs (QAPs)[1]
  • quadratic span programs (QSPs)[1]
  • probabilistically checkable proofs (PCPs), as an alternative to QAPs or QSPs
  • span programs
  • computational complexity classes, e.g. P, NC, NP, E, #P, ⊕P

I recently realized that Zerocash is the only way to achieve anonymity without the (virtually impossible to guarantee) requirement to obscure our IP address.

This is a thread for technical education and discussion of the technology, so moderation is present to keep it focused on that (because I understand there are political vested interests regarding anonymity technologies in the altcoin arena).

P.S. sometimes I need a distraction from pure coding, and learning about theories and ideas is an outlet/diversion for me to relieve the stress or mental monotony of the coding process as compared to the "grand ideas" mental process.

[1]http://www.iacr.org/archive/eurocrypt2013/78810623/78810623.pdf
https://eprint.iacr.org/2012/215.pdf

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 11, 2015, 10:26:18 AM
Last edit: December 12, 2015, 08:04:39 PM by TPTB_need_war
 #2

Computational Complexity Classes: Overview

An algorithm (or program) is a set of logical steps that transform some state (i.e. variables). If the algorithm is a function (i.e. not a procedure), then it reads the input state and writes the output state, thus being a transformation from the input state to the output state. (I'm referring to a pure function, which neither reads nor writes any global variables.)
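
As a minimal illustration of the pure-function notion above (my own Python sketch, not taken from any of the referenced papers), the first function below transforms an input state into an output state without touching anything global, whereas the procedure after it mutates external state:

Code:
# Pure function: the output state depends only on the input state.
def increment_balances(balances):
    # input state: mapping of account -> amount; output state: a new mapping
    return {account: amount + 1 for account, amount in balances.items()}

# Impure procedure: reads and writes a global variable.
total = 0
def add_to_total(amount):
    global total
    total += amount   # mutates state outside the input/output

print(increment_balances({"alice": 10, "bob": 5}))  # {'alice': 11, 'bob': 6}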

A computational complexity class defines boundaries on the computation allowed in an algorithm. Some examples of boundaries are:

1. Result type: e.g. a boolean, a count, an optimum, etc.
2. Machine model: e.g. boolean circuit, deterministic Turing, non-deterministic Turing, parallel processing, quantum, etc.
3. Resources: e.g. polynomial time, logarithmic space, constant depth, exponential time, etc.

In other words, a computational complexity class is a statement about the constraints on our program (algorithm). Interesting limitations include 1) the type of result computed, 2) the type of machine the program runs on, and 3) the resources the program employs.

Computational complexity informs us about the bounds on the requirements of a computation, but not about whether the computation is intractable in practice or is significantly removed from any implication of those bounds. Meaning any implications we draw from computational complexity can be overturned by future algorithmic discoveries, i.e. the implications can't be proven, which is the fundamental weakness[1] of (most!) cryptography that relies on conditional (i.e. not information-theoretic) security. Meaning that most cryptography relies on the intractability of the computation required to break the security; but intractability can only be demonstrated for some example cases and can't be proven to apply to all the cases not enumerated (i.e. ubiquitous lower bounds can't be generally proven for all real world scenarios, not even for the asymptotic Big Omega Ω, because for example any stated complexity class depends on a machine model that might have an improved variant in the future, with a possible reduction/conversion to that model).

For example, the elliptic curve discrete logarithm problem (ECDLP)[2] has been shown to be computationally intractable for the types of computation we know can be applied to cracking it, but it completely breaks on a future hypothetical quantum computing machine model (conjectured to require a qubit for every bit of ECC security), and it has recently been mathematically broken for small characteristic fields.

Often the stated computational complexity is for the asymptotic case, e.g. Big O, Big Theta Θ, Big Omega Ω; meaning the complexity class holds for values of the variables of the computation that stress the boundaries to the maximum (which often means as the variables go to infinity). However, the asymptotic case might not be indicative of the real world case, e.g. in the following plot f(x) ≤ cg(x) [red line under blue line] asymptotically, but not for x < 5.



https://www.cs.cornell.edu/Courses/cs3110/2012sp/lectures/lec19-asymp/review.html

Quote
One important advantage of big-O notation is that it makes algorithms much easier to analyze, since we can conveniently ignore low-order terms.

---8<---

Of course, since we are ignoring constant factors, any two linear algorithms will be considered equally good by this measure. There may even be some situations in which the constant is so huge in a linear algorithm that even an exponential algorithm with a small constant may be preferable in practice. This is a valid criticism of asymptotic analysis and big-O notation. However, as a rule of thumb it has served us well. Just be aware that it is only a rule of thumb--the asymptotically optimal algorithm is not necessarily the best one.

Thus although computational complexity class analysis is interesting, aids intuitive insight, and is useful for comparing bounds, it can't settle certain practical questions such as “can an efficient implementation of this algorithm be found” or “will all implementations of algorithms of this class always be intractable”.
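
To make the asymptotic caveat concrete, here is a tiny Python sketch (my own, with made-up f and g since the referenced plot is not reproduced here): f is O(g), yet g is the smaller of the two until x reaches a threshold, so the asymptotic statement says nothing about small x.

Code:
# f(x) = 10x + 50 is O(x^2): with c = 1, f(x) <= c*g(x) holds once x >= 14,
# but for smaller x the "asymptotically larger" g(x) = x^2 is actually smaller.
f = lambda x: 10 * x + 50
g = lambda x: x * x

for x in (1, 5, 13, 14, 100):
    print(x, f(x), g(x), f(x) <= g(x))   # False, False, False, True, True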

For example, RSA's asymptotic security, which since 1978 has weakened only from requiring b^(2+o(1))-bit to b^(3+o(1))-bit keys for b-bit security on non-quantum (a.k.a. “classical”) computers, has in the real world left RSA insecure against an adversary as powerful as the NSA, which only needs to crack one key in order to break the HTTPS (SSL/TLS) privacy of 20% of the top million websites (or, much more easily, just steal the keys of the popular certificate authorities or confiscate them with a national security letter). The caveat or lesson is: be very careful with intuition and claimed implications from computational complexity.

[1]Rolf Oppliger, Contemporary Cryptography, Second Edition, Introduction, §1.2.2.2 Notions of Security, p.11
[2]https://www.certicom.com/index.php/52-the-elliptic-curve-discrete-logarithm-problem
I. Chatzigiannakis, A. Pyrgelis, P. Spirakis, Y. Stamatiou, Elliptic Curve Based Zero Knowledge Proofs and Their Applicability on Resource Constrained Devices, §3. Zero Knowledge Protocols Based on the ECDLP, http://arxiv.org/abs/1107.1626, 8 July 2011.

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 12, 2015, 12:26:20 AM
Last edit: December 12, 2015, 07:03:21 PM by TPTB_need_war
 #3

Pertaining to the Computational Complexity Classes post: these are the negative votes one has to endure when posting insight that is too conceptually unifying and the readers are too thick to grasp that reality.

I've had other examples, but at least this one has come up from a -9 vote to a -2 lately (as more readers start to grasp I was correct all along).

Yet I have another one at -6 which I am very confident I am correct and the voters are idiots.

Edit: it is pitiful that these self-anointed "compsci" idiots have such high reputations (thus fooling the readers and spreading incorrect information as "authorities"). This is really fucking stoopid.


Edit#2: I need to start copying all the comments over here because the one berserk moderator at cs.stackexchange is deleting my information/work again!

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104814_50623

Quote from: myself
Quote from: David Richerby
Quote from: myself
Quote from: David Richerby
That isn't what nondeterminism means in computer science. Nondeterministic algorithms are not "unpredictable."

Nondeterminism is precisely unbounded entropy in computer science. Go review the Halting problem, undecidability, and Turing-completeness. The unpredictability in nondeterministic algorithms is due to not being able to predict if the algorithm will terminate. Period. All the downvoters are ignorant. Whereas, randomization comes from entropy that is orthogonal to the input variables, i.e. under control of an orthogonal source of entropy. I do hope people realize that entropy is disorder and all randomness is due to entropy. Geez. Even /dev/urandom is seeded with entropy.

You have completely misunderstood what nondeterminism means in computer science. It has nothing whatsoever to do with entropy. It does not mean what you think it means: that is why you think all the other answers are wrong. Perhaps the name was badly chosen but that's beside the point. It has a particular meaning in computer science, which is different from the meaning it has in other sciences. You are trying to use the wrong definition, and that is why everything seems completely wrong to you.

You can't seem to grasp that I am unifying the definition compatible with the way it is used in computer science. The use of the term nondeterminism when talking about computational complexity theory (e.g. NP vs. RP vs. P) is clearly all about the entropy as I have explained it in my answer. That finite automata theory decided to abuse the one term "nondeterministic" is just an aberration. The existence of the three terms in NP, RP, and P is strongly canonical. Also canonical bcz consistent with the universe of definitions for nondeterminism. Sorry dude, you are wrong.



Clearly NFA is only congruent with the rest of the universe of definitions for nondeterminism (including the more canonical computer science definition in complexity theory for NP, RP, P) if we realize that the NF go together. Thus NFA is not referring to the subject of this question which is nondeterminism, but rather to nondeterminism in the presence of finite entropy, which is another conceptual entity entirely different from nondeterminism. The mistake in your logic is assuming the N in NFA is distinct from the F, which it obviously isn't! Cripes.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104790_50623

Quote from: myself
I am appalled because other than Alex ten Brink's answer, all the other answers are highly incorrect. And yet I observe readers upvoting those highly incorrect answers and downvoting mine. It is bewildering and flabbergasting. It shows that this topic is highly misunderstood. Computational complexity issues are a very high IQ abstraction apparently. Thus apparently only a small percent of readers really understand these issues. Otherwise explain to me how you think I am incorrect.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104857_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104859_6061

Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
I[your] answer has nothing to with how non-determinism is defined in automata resp. computability theory. You should stop flaming around and get a textbook. If you don't understand the other answers, I don't think we can help you in a comment.

Finite automata is not relevant to the question of non-determinism because finite automata has a finite entropy! That is fundamental. The use of the word "non-determinism" in finite automata is a misnomer and a mirage. You can't just take a few researchers error in appropriate naming and pretend to foist that on correct logic. Also I understand the other answers to the extent I have commented on them, e.g. I understand that finite state machines can't represent an unbounded entropy nor generally run Turing-complete programs.

Quote from: myself
"The use of the term nondeterminism when talking about computational complexity theory [...] is clearly all about the entropy"

-- no, it's not.

Appeal to authority is not argument. Please provide your reasoning or give up. Perhaps these quotes from Minsky can help you gain clarity

I don't need any reasoning. I am only appealing to the formal definition of non-determinism in automata/computability theory. Which, I daresay, you have either never read or not understood.

The formal definition of a non-deterministic finite automaton (NFA) is finite input entropy (data: the 5-tuple). Thus every NFA can run on a deterministic Turing machine, i.e. it doesn't require a nondeterministic Turing-complete machine. Thus NFAs are not in the class of nondeterministic problems. The notion of "nondeterminism" in an NFA is that its determinism (while clearly present, since every NFA can be converted to a DFA) is not explicitly expanded — not the same as nondeterminism of computation.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104860_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104861_6061

Quote from: myself
As I explained already, the notion of non-determinism in NFAs couples the non-determinism with the finite entropy. Thus the non-determinism is a local concept of not expanding the determinism, as a form of compression or convenience; thus we don't say NFAs are non-deterministic, rather they possess the appearance of randomness to an oracle unwilling to compute the deterministic expansion. But it is all a mirage because it can be expanded deterministically bcz the entropy is not unbounded, i.e. finite. If you don't have an argument, I have no more time to waste on your nonsense.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104864_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104865_6061

Quote from: myself
The claimed "non-determinism" in NFAs is really randomness is sense of my definition of the distinction between randomness & nondeterminism. My definition is that randomness is where some of the entropy that is not under the control, knowledge (, or desired non-explicit expansion in the case of a NFA) of the input to the program or function. Whereas, true nondeterminism is the inability to know the entropy in any case, because it is unbounded. This is precisely what distinguished randomized from nondeterminism. So NFA should be an example of the former, not the latter as you claimed.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104858_6061

Quote from: myself
Why are my comments disappearing from your answer? Are you appealing to moderators to delete information?

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104868_50623

Quote from: myself
You all will eventually realize I am correct. There is no other formulation of the information set which is harmonious and congruent. My answer is correct as well as my comments. I have no idea how long it will take you to realize the truth. That isn't my job. It is rather bewildering that cliques of grown men can go off on political tirades of up voting each other's comments, downvoting correct answers, and flagging my comments for deletion by moderators. I am ashamed of you. I am now duplicating all these comments on a public forum to document this abuse.

http://cs.stackexchange.com/questions/367/how-can-it-be-decidable-whether-pi-has-some-sequence-of-digits?noredirect=1#comment9096_367
http://cs.stackexchange.com/questions/367/how-can-it-be-decidable-whether-pi-has-some-sequence-of-digits#comment104869_367

Quote from: myself
Quote from: Raphael
Quote from: myself
@Raphael, for the third time, since you've had the moderator delete this comment on the prior two occasions, the notation I have normally seen in the cryptography papers I read is {0}^n which is less ambiguous bcz {} is the notation for set. I am now duplicating these comments where I have 1000s of readers in order to document moderators abusing their authority and deleting information. Ah you are German, that entirely explains the control freak attitude. The link is from Eric S. Raymond the 150+ IQ genius who coined the moniker "open source".

I am a moderator. I removed your comments because they are redundant/obsolete, which is completely in line with our policy. Stop reposting them. (Did not you tell me that argument by authority is no argument at all?)

How is a comment about the notation you are using redundant? I am pointing out that you used a more ambiguous notation than what I've seen used. Why do you feel not only compelled but entitled to delete that comment? And you've removed it again for the 3rd time.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/6061?noredirect=1#comment104883_6061

Quote from: myself
Quote from: Raphael
I'm removing your comments because they are obsolete (literal copies from the comments below your own answer). They are also not information but misleading tirades. If you want to tell the world how wrong every book and article written about this ever is, start a blog. This is not the place.

So you are the mod here (oh lord help us) and thus abusing your position by being the biased judge in your own case? A highly ethical person would recuse themselves from a conflict of interest. Your characterization of the quality of my comments is laughable. You don't even seem to comprehend what nondeterminism is and yet you are a PhD student in the Algorithms & Complexity group at the University of Kaiserslautern, Germany. An embarrassment to your university. Keep up the ad hominem against me & I will return it in kind.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104882_50623

Quote from: myself
Quote from: Raphael
We can agree on that: please stop trying to "teach" us, it's not helping anybody.

And one piece of advice: if everybody else -- including professors, doctors and graduates of (theoretical) computer science -- tell you that you are wrong, you may want to stop and consider if they are right. As a help for this reflection process, have a look at this learner classification by Wolcott/Lynch (p36). In particular, note the commen weaknesses of the "Biased Jumper".

You continue to appeal to authority. Also you don't know my qualifications. I am still waiting for you to present a coherent logic. I have presented all the coherent logic. Allow the open source peer review from many academics over a sufficient sample, then you will see this initial barrage of nonsense from your private clique here is overturned by reason. P.S. I am very familiar with the concept of the Dunning-Kruger syndrome. I do not like your "I know it all" and very inflexible attitude to learning new concepts. You've been condescending to others who asked reasonable questions.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104872_6079

Quote from: myself
Quote from: myself
-1 Randomness can be deterministic, if it inputs a seed with a bounded number of bits of possibilities (bounded entropy) instead of accessing new seeds an unbounded number of times. Nondeterminism is unbounded possibilities (entropy). Wikipedia defines nondeterministic algorithms to be those which can't be proven to terminate. Thus Wikipedia is not presenting two different definitions. Thus I am downvoting.

Quote from: reinierpost
There is nothing 'unreal' about running a nondeterministic algorithm, it just means that while running, choices need to be made for which the outcome isn't determined by the algorithm. The only difference between 1 and 2 is that in 1 you haven't said when the algorithm is considered to "succeed", whereas in 2 the algorithm must be of the kind that always says "yes" or "no" at the end of a run and you have a decision problem and you define the algorithm's answer to the problem as "yes" if and only if any one of its possible runs answers "yes". So 2 is just a special case of 1.

Everyone is conflating the difference between randomized and nondeterministic. This causes your comment to be muddled. The algorithm responds to the interaction of the input (variable) entropy and its source code's (invariant) internal entropy. Nondeterminism is unbounded entropy. Invariant entropy can even be internally unbounded, such as expanding the digits of π. Randomized means some of the entropy is not coupled to the input as defined (i.e. it may be coming from a system call to /dev/random, or simulated randomness, e.g. an NFA or a PRNG).

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 12, 2015, 06:36:29 PM
 #4

Edit: it is pitiful that these self-anointed "compsci" idiots have such high reputations (thus fooling the readers and spreading incorrect information as "authorities"). This is really fucking stoopid.

The mods over at cs.stackexchange deleted the entire comment trail wherein I proved that the given answer was incorrect. This must have embarrassed the shit out of them because the answer had 75 upvotes from professors, PhD students, etc. The sparkling academic cathedral portion of our world is apparently getting dumber and more corrupt. Here follows the comment trail which I saved before it was deleted.


Quote from: myself
Quote from: Gilles
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: Shagun
Quote from: Carl Mummert
Quote from: Shagun
Quote from: cody
Quote from: danr
Quote from: Raphael
Quote from: Carl Mummert
Quote from: sepp2k
Quote from: gallias
Quote from: Alex ten Brink
Quote from: gallias
1. Take a random Turing machine and a random input. 2. Either the computation will go on for ever or it will stop at some point and there is a (constant) computable function describing each one of these behaviors. 3. Huh 4. Profit! I feel like, following your proof scheme, one could prove that termination is decidable. Did I miss something?

Watch out what the Halting theorem states: it says that there exists no single program that can decide whether a given program halts. You can easily make two programs such that either one computes whether a given program halts: the first always says 'it halts', the second 'it doesn't halt' - one program is always right, we just can't compute which one of them is!

That is exactly my point: how can you claim that a problem is decidable (cf. this thread's title) when you're not able to decide which one of the two algorithms is the good one?

That's not the same thing though. In the case of Alex's example neither of the algorithms will return the right result for all inputs. In the case of this question one of them will. You can claim that the problem is decidable because you know that there is an algorithm that produces the right result for all inputs. It doesn't matter whether you know which one that algorithm is.

The definition of "decidable" is that an algorithm exists to perform the task, not that we know which algorithm it is. As long as we can prove there is at least one suitable algorithm in existence then we know the problem is decidable.

@gallais Good that you raise this point; it is the pitfall most fall into. Think of it this way: The halting problem function wants to decide something for all Turing machines. Here, we only talk about one number. If you extend $f$ here to take a real number as parameter and ask the same question as for $\pi$, now that will be undecidable! The other way round, for any fixed Turing machine the halting problem may very well be decidable (and it is for many).

I am also having a bit of a problem to accept this answer. But I think my issue is that this answer is non-constructive, and I am more inclined to constructive mathematics. But I guess the normal definition of computable is computable in a classic sense, and personally such a definition feels like nonsense.

I agree with @danr: this proof is non-constructive, and it's always unsettling to have non-constructive proofs of statements about computability...

@carl : you pointed out that the definition of "decidable" is that an algorithm exists to perform the task, not that we know which algorithm it is. As long as we can prove there is at least one suitable algorithm in existence then we know the problem is decidable. But can we say that the second algorithm exists without knowing the value of N? Its one thing to say that one of the 2 algorithms is correct always and another thing to say that algorithm uses a constant which is not computable itself.

If $N$ exists, it is just a natural number, and every natural number is computable (because we can just hard-code it).

But doesn't our inability to enumerate and hence test for every natural number matter?

No. It doesn't.

@Raphael et al, the answer is incorrect. It is undecidable whether any algorithm will terminate when searching for a string of 0 values in the digits of π, because the input entropy is unbounded (bcz π is irrational so the digit patterns never repeat), thus the problem is Turing-complete. Any algorithm you invent will for each n either find the string, or never find it & never terminate, and it is undecidable a priori which.

Perhaps you are conflating NP with this problem. NP is computable because we only run a deterministic verification (i.e. always terminates) on a non-deterministic algorithm, e.g. meaning that someone has told us to verify that a particular input n terminates within M number of algorithmic steps with a "yes" result, so verifying that is deterministic (we can terminate after we reach M steps). But this problem is asking to prove that (it is decidable that) n is always computable, which is not the same as asking to verify that a particular value is computable in a defined number of steps.

That is not what "Turing-complete" means. Your argument about algorithms that search for strings in the binary expansion of π does not apply to all algorithms. In particular, none of the algorithms in my answer search for strings in the binary expansion of π. Nevertheless, exhaustive case analysis implies that exactly one of them is always correct. Nobody knows which one, and perhaps nobody can ever know which one, but that doesn't matter. A correct algorithm exists, and therefore the problem is decidable.

No, I am not confusing anything with NP. This problem is not asking us to decide "whether n is always computable". The problem is asking us to prove that the problem "Given an integer n, is there a string of n consecutive zeros in the binary expansion of π?" is computable.

The question does not ask "whether n is always computable". Rather it states "Prove f(n) is computable" (not n but f(n)). Thus the question demands that our algorithm for f(n) is computable. Even the title of the question reaffirms the requirement that the algorithm be computable, "How can it be decidable whether π has some sequence of digits?".

Turing-completeness is the requirement for unbounded input, memory, time, etc so as to be general enough to compute any computation. Since expansion of the series for π is infinite and the patterns don't repeat, then there is no algorithm that doesn't require unbounded steps in the general case of any value for input n. Thus f(n) is Turing-complete and thus is undecidable. QED. You shouldn't just foist nonsense on what is supposed to be accurate site for learning about compsci.

Again, it does not seem as though you knew what you are talking about, at all. "Thus f(n) is Turing-complete and thus is undecidable" -- that's ... not even wrong.

The implication is that f(n) requires a Turing-complete programming language because it requires unbounded recursion to expand the digits of π. Here is a canonical quote for you, "When applied to a programming language, Turing-complete means that it can fully exploit the capabilities of a Turing complete computer". You do a disservice to this site with your abundant nonsense.

Although the digits of π are computable (and only if we ask for infinite digits do we get non-termination), the question asks to compute a search for the existence of a string of 0 digits, which means it is undecidable whether the computation halts or not. Thus the computation is not computable, because we don't know if it is still busy or looping forever. This is just fundamental computable function theory. If you mess this up, you need to go back to Compsci 101.

"f(n) requires a Turing-complete programming language because it requires unbounded recursion to expand the digits of π" -- that's a non-sequitur, and JeffE has proven that $f$ does not have to expand any digits. "If you mess this up, you need to go back to Compsci 101" -- agreed. See you there, I'll be your TA.

The only way that the 1st case of JeffE's answer is correct, is if we can assume that every finite string of 0s is contained within the expanded digits of π, without needing to actually expand the digits. On further thought (not sleepless now), I suppose that is true because the expansion of π is unbounded thus eventually every pattern is produced in the expansion. I don't recall if that has been formally proven & n isn't finite? JeffE hedges against that with the 2nd case which says if there is a maximum N, then we enumerate to it. This hedge doesn't work because N is undecidable.

Thus either the first case is always true or his idea of avoiding actually expanding the digits is undecidable per the logic I presented. When I wrote my comments (for me after being awake for 20+ hours nonstop working) I wasn't really paying attention to the point that JeffE was proposing not to do any computation (expansion) and always return true. So in that case, can we formally prove the first case? If yes, then JeffE's answer is correct. But I think the problem is that n isn't bounded. So I don't think that can be proven.

"N is undecidable" -- that statement does not make sense, and if it did mean what you think it does, it would still not make your case -- because we don't have to determine $N$ algorithmically.

You've almost got it. I am indeed hedging my bets by presenting an infinite sequence of algorithms. I don't know which one is correct; maybe the first one, maybe the second one with N=1234567890 hardcoded in, maybe the second one with N=104378942378493027478326457832 hardcoded in, or maybe the second one with some other value of N hardcoded in. I'm quite willing to believe that no one will ever know which of these algorithms is correct. But I don't have to know which algorithm is correct. All I have to know is that my infinite collection contains a correct algorithm.

You can't know N without running an algorithm to confirm it. Sorry guys. And you can't prove the first case either, c.f. Yuval's comment that not all sequences of 0 are proven to occur in irrational numbers; besides, n is also unbounded (in addition to the sequence being unbounded), which afaik mucks up any attempt to prove the assumption of the first case. Thus afaics this answer is wrong. Raphael, ever faithful to your endearing personality defect, you continue to ASS-U-ME you know what I am thinking.

You can't know N. That's correct. But I don't have to know N.

You need to confirm N because as Yuval pointed out, you can't assume any finite sequence of 0s will exist in a finite sequence of the expansion

No, I don't need to confirm N. I don't need to confirm that any particular finite sequence occurs in π. I need to know that either all strings of 0's appear in π or there is some longest string of 0's in π. If the first alternative is true, the first algorithm is correct. If the second alternative is true, then for some value of N the second algorithm is correct. Either way, at least one of the algorithms is correct.

But you can't compute the second case without knowing the value of N for which n > N should return 0. f(n) must always return a correct value for any n. Without knowing N, f(n) in second case can't return the correct value. I can't believe you are making me write this. What the heck could I possibly not be seeing? It seems so obvious to me.

You can algorithmically compute the correct return value for the second case and be sure to terminate for any finite value of n. But for the unbounded case of n, the second case is undecidable (maximum N can't be known without confirming it, so you don't know whether to return 0 or 1), and the first case is unproven. What am I missing? Your logic is based on not needing to use an algorithm in the unbounded case, but it seems the error in your logic was forgetting that the return value of the second case matters in the unbounded case. 75 wrong upvotes. Oh my.

I have moved over to chat, but please note I am recording this discussion offsite in a popular forum, bcz I view chat as closed source and I believe in open source. Also I will post any final summary back to the comments on the page from whence we originated.

The error is precisely that you assume your proposed second case of f(n) will only be necessary for some finite N and that it can be computed because N is a maximum finite value. But if it is true that N is finite, then your first proposed case for f(n) must always be invoked in the unbounded case. So the precise error is it is undecidable which case should be invoked, because your logic requires that 2nd case be invoked when the first case would fail. That proves your answer is flawed.

Let me rephrase to make it more concrete that the answer is broken. We must know N, because for values above N we need to return either 0 from the second case of the proposed f(n) or 1 from the first case of the proposed f(n). Because your logic assumes the second case of f(n) will be invoked when the first case is inapplicable. But this assumes the first case will always be applicable in the unbounded case, otherwise we must assume that the second case will be invoked with an unbounded value of n. Since we can't prove that the first case is always applicable, then the second case is undecidable. QED. I could probably improve the proof if I wanted to spend more time on it, but this was an enormous waste of my precious time already. Adios to this site!!

Comments are not for extended discussion; this conversation has been moved to chat.

You deleted the entire comment trail because I showed that everyone here was wrong! I will be publishing the entire comment trail and pointing out how corrupt this cs.stackexchange is!

YarkoL
Legendary
*
Offline Offline

Activity: 996
Merit: 1013


View Profile
December 12, 2015, 06:46:29 PM
 #5

Well, my layman's journey did not take very far.
(I remembered I left the oven on so had to go back home).

“God does not play dice"
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 12, 2015, 06:48:17 PM
Last edit: December 12, 2015, 07:35:37 PM by TPTB_need_war
 #6

Well, my layman's journey did not last very far.
(I remembered I left the oven on so had to go back home).

Apologies I will try to bring it back to layman's level of explanation on the next post. Arguing with mathtards over at cs.stackexchange can certainly fly over the heads of laymen.

Some of the first principles stuff is difficult to explain to someone who has no programming and math experience at all. But eventually I will also make summaries that can make sense to laymen who don't even have those rudimentary skills.

I view myself as a layman relative to those who are more (and probably formally) trained in the field of cryptography, such as Adam Back and Gregory Maxwell of Blockstream, and the academics who are writing the white papers I will be trying to understand and explain in this thread.

YarkoL
Legendary
*
Offline Offline

Activity: 996
Merit: 1013


View Profile
December 12, 2015, 06:51:38 PM
 #7


All good and waiting for the next installment  Smiley

“God does not play dice"
bhokor
Legendary
*
Offline Offline

Activity: 966
Merit: 1000


View Profile
December 12, 2015, 07:21:07 PM
 #8

Really beautiful technical explanation, lots of thanks for the beautiful and interesting explanation, one really worthwhile thread
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
December 12, 2015, 11:12:33 PM
Last edit: December 14, 2015, 07:02:34 AM by TPTB_need_war
 #9

Computational Complexity Classes: P, NC, NP, E, #P, ⊕P

P, NP, E, #P, ⊕P, and NC are some of the computational complexity classes that are upper bounded by 1) type of result, 2) machine model employed for the computation, and 3) resource (time and space) consumption.

Class     Result type   Machine model                        Resource consumption
P         boolean       deterministic                        polynomial time
NC        boolean       deterministic, parallel processing   polynomial time
NP        boolean       nondeterministic                     polynomial time and space
EXP or E  boolean       deterministic                        exponential time, polynomial space
#P        count         nondeterministic                     polynomial time
⊕P        count % 2     nondeterministic                     polynomial time
where % means modulo and boolean is a binary (two possible values) result, e.g. "yes or no", "true or false", "1 or 0".

Determinism

In this section we embark on a fantastical journey to scratch the surface of globally (over the entire space of computer science and beyond) holistically integrated overviews about the deep theoretical insights I've accumulated—in my 3 decades of (mostly autodidact) programming experience—about programming language theory combined with theoretical physics. Much of this I've learned in an accelerated fashion in erratic autodidact bursts of study since roughly 2010 when I first started to dive into Haskell and functional programming. Which btw sent my mind into a twirl (with many months of frustration to grasp strange beasts such as Monads, State transformers, category theory, etc) and turned my former C, C++, OOP notion of programming upside down and inside out. Next on my theoretical journey will hopefully be the unification of the Holy Trinity (hey I am getting old so coding Godspiring seems apt). To that practical end, recently I am employing inversion-of-control "don't call me, I will call you" more liberally even writing my own superior Async library that leverages ES6 (Javascript) generators instead of the async/await coming for ES7.



One of the ways to think about the concept of entropy is the minimum possible size that some data can be losslessly compressed to, e.g. the text 'ababab' can be compressed to '3ab' where the first character is the repeated count. This is the Kolmogorov complexity (c.f. Nick Szabo's point that computing the complexity is an NP problem). Thus I suggest conceptualizing entropy in this context as a measurement of the number of unique configurations in some data, such that the repetitive configurations can be discarded via compression without losing any information. Repetition is not additional Shannon information, and thus doesn't increase the complexity.
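
Here is a toy Python sketch (my own) of the 'ababab' → '3ab' idea: if a string is just one block repeated k times, the description "count + block" is a shorter lossless encoding, which is exactly the sense in which repetition adds no new information:

Code:
def compress_repeats(s):
    # If s is some block repeated k times, return the count followed by one copy
    # of the block (e.g. 'ababab' -> '3ab'); otherwise return s unchanged.
    n = len(s)
    for block_len in range(1, n // 2 + 1):
        if n % block_len == 0:
            block = s[:block_len]
            k = n // block_len
            if block * k == s:
                return f"{k}{block}"
    return s  # no repetition found by this (very weak) compressor

print(compress_repeats("ababab"))  # 3ab
print(compress_repeats("abcdef"))  # abcdef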



Figure 2: deceptive appearance of what some laymen would interpret to be complexity, is actually low entropy due to repeating structure

Algorithms produce a result (output) which is the interaction of the program's source code (i.e. the description of the logical steps of the algorithm) and the input state (variables). Thus separately the source code and the input variables each have their own Shannon information content, i.e. a close analog of entropy. Only in perfectly dependently typed programming languages are both sets of entropy statically unified, i.e. where the statically compiled source code is a complete specification of the combined entropy. Otherwise the entropy of the source code is dynamically combined at invocation of the algorithm with the entropy of each variant of input state. The combined entropy determines the output of the algorithm. Determinism means that this combined entropy for all possible inputs is cumulatively finite for finite invocations (state transformations) of the algorithm.  Since the “unique configurations” are finite, it is possible to precompute (i.e. determine) the interaction of all of them for all possible inputs and predetermine whether the algorithm will terminate or not terminate for each possible input state. Determinism doesn't mean the algorithm will terminate (e.g. we can write an algorithm that gets stuck in an endless loop every time it is run, thus being entirely predetermined to not terminate), but rather deterministic (thus finite entropy) means it is possible to precompute whether the algorithm terminates for each possible input state. The formal definition of a deterministic algorithm is a transformation function from some input state to some output state (and the “transformation function” is the source code).



In contrast, a nondeterministic algorithm when combined with all input possibilities has an unbounded entropy and thus has an uncountable number of “unique configurations”. Unbounded means the algorithm may or may not have infinite entropy, but it is impossible to count the limit of the entropy until the algorithm has terminated, if it terminates (which can't be known until the algorithm terminates, i.e. non-termination is never known). Thus the entropy can't be quantified a priori.

Quote from: jmite@http://cs.stackexchange.com/
The thing is, Turing machines don't have an infinite amount of memory. There's never a time where an infinite amount of symbols are written to the tape. Instead, a Turing[-complete] machine has “unbounded” memory, meaning that you can keep getting more sources of memory when you need it. Computers are like this. You can add RAM, or USB sticks, or hard drives, or network storage.

Note there are scenarios where the unbounded entropy derives entirely from the source code, e.g. expanding the infinite, non-repeating sequence of digits of an irrational number such as π. And for example the input entropy can even interact to make the unbounded entropy of the source code finite, e.g. where the input variable is the finite count of the number of digits of  π to output. Yet for example there are cases where the input entropy interacts to retain the unbounded entropy of the source code, e.g. where the input variable is the count of the number of consecutive 0 digits to find in the digits of π (which can't be proven to terminate or not).
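
As a concrete sketch of that last example (my own Python code; the digit stream uses Gibbons' unbounded spigot algorithm), f(n) below terminates iff π contains a run of n consecutive zero digits, and there is no general way to know in advance whether it will:

Code:
def pi_digits():
    # Gibbons' unbounded spigot algorithm: streams the decimal digits of pi forever.
    q, r, t, i = 1, 180, 60, 2
    while True:
        u = 3 * (3 * i + 1) * (3 * i + 2)
        y = (q * (27 * i - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, i = (10 * q * i * (2 * i - 1),
                      10 * u * (q * (5 * i - 2) + r - y * t),
                      t * u,
                      i + 1)

def f(n):
    # Terminates iff a run of n consecutive 0 digits exists; otherwise loops forever.
    run = 0
    for position, digit in enumerate(pi_digits()):
        run = run + 1 if digit == 0 else 0
        if run == n:
            return position  # 0-based index where the run of zeros ends

print(f(1))       # returns quickly (pi's first 0 digit is about 32 decimal places in)
# print(f(10**6)) # nobody knows whether this call ever returns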

Tangentially note that in a perfectly dependently typed programming language such as Agda or Epigram, the statically compiled type(s) of the input state entirely determine this combined entropy of the source code and the input variation; thus such programs are always deterministic (regardless of the dynamic input state of each invocation) due to specifying all the possible dependencies of variables in the input types the compiler enforces. Since this static typing enumerates and enforces a finite number of input dependencies, by definition the entropy can't be unbounded. However, even perfectly dependently typed algorithms become unbounded in entropy when they are composed as modules into derivative algorithms, because the permutations of cross-dependencies between modules are impossible to model in a closed form due to Russell's Paradox; which even Philip Wadler (the creator of the Expression Problem and one of the creators of Haskell) seemed not to address in his comment about modularity that immediately followed mine. The venerable Robert Harper also weighed in on the fact that dynamically typed languages are actually undertyped statically typed languages, which belies the point that dependently typing modularity is impossible. Typing is not a panacea. Thus the algorithm for finding a number of consecutive 0 digits in π can't be implemented in a perfectly dependently typed programming language. Real world program semantics naturally contain module composition and thus an exponential explosion of dependent types, and thus can't realistically be perfectly dependently typed. This is the reason a statically typed compiled program can still contain bugs. (The theoretical physics explanation is that in life, time is not reversible, so the forward dependencies of relativism due to the Butterfly Effect exponentially explode. This is why I have theorized that the only way to model the fundamental matter of the universe is in the higher dimensionality of the chaos frequency domain.)



It is impossible to compute the interaction of uncountable (unquantifiable) entropy. An a priori unknown can't be precomputed. Thus it is impossible to precompute whether a nondeterministic algorithm will terminate or not. The only way to find out is to run the algorithm on some particular input state and wait for a result. For some types of algorithms the unbounded entropy is so uniformly distributed that it is impossible to discern if the algorithm is progressing towards termination, e.g. finding the consecutive number of 0 digits in π. In that case, you could be waiting forever to find out if the algorithm will terminate. In programs that we use in everyday life, which are in fact written in Turing-complete (i.e. for a nondeterministic machine) programming languages, the unbounded entropy is sufficiently non-uniformly distributed such that the nondeterminism manifests as what programmers lovingly named “bugs” (e.g. the infamous Windoze Blue Screen of Death), which are sometimes (but not deterministically so) discerned from random behavior and thus identified as errors incongruent with the expected functionality.



The machine (or programming language) model must match the determinism requirements of the algorithm. A deterministic Turing machine only becomes nondeterministic, and is referred to as Turing-complete, iff it has the unbounded (in space and time) capability to compute all possible configurations of computation. That is equivalent to allowing unbounded (tail optimized or not) recursion, because for example unbounded recursion (or its non-functional programming equivalent, unbounded looping) is required to process input of unbounded size. Unbounded input size is one variant of unbounded input entropy, but not the only variant. Yet in all cases of unbounded entropy, the unbounded recursion of Turing-completeness is required. This fact will be applied to understanding the NP complexity class.



Figure 6: depiction of conceptualized unbounded recursion using mirrors that face (thus reflect or "invoke") each other

Unbounded recursion enables unbounded composition which (is the antithesis of declarative programming and) includes complex cases of functions that invoke functions which invoke other functions, which at some point recursively invoke one of those functions already pushed on the stack (or do the non-functional programming equivalent which is unbounded looping). Unbounded composition is unbounded entropy.

The tradeoff for this generality (to be able to compute any computation) is that a Turing-complete machine model can't generally determine if it will terminate. This is known as the Halting problem, which even has a proof written in the style of Dr. Seuss which employs unbounded recursion to disprove the possible existence of a program that could compute whether a program halts. Another way of visualizing the undecidability of the Halting problem is to note that if a program is designed so it can accept unbounded input size, then it can't be proven whether it will halt or not halt. More abstractly, undecidability results from any rule of containment that contains everything that can contain itself (c.f. last two paragraphs of linked answer), because this unbounded recursion opens the generality to the universe of possibilities where the entropy is unbounded (per the Second Law of Thermodynamics).
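
For completeness, here is the skeleton of that recursive disproof in Python-flavored pseudocode (my own restatement; halts() is the hypothetical decider we assume exists only in order to reach the contradiction, so this is deliberately not runnable):

Code:
# Assume, for contradiction, an oracle halts(prog, arg) that always correctly
# answers whether prog(arg) terminates.
def paradox(prog):
    if halts(prog, prog):
        while True:   # oracle says prog(prog) halts, so loop forever
            pass
    return            # oracle says prog(prog) loops, so halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it doesn't,
# so the assumed oracle cannot exist.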

There is a class of algorithms where the input entropy is not unbounded, yet it increases as an exponential function of some parameters such as exponential computation time. For example, although computing a cryptographic hash function is deterministic and polynomial time, finding the preimage of its output requires (on average) searching half of the 2^N possibilities where N is the number of bits of the hash function. For example, the unfathomably large number of “unique configurations” 2^128 = 3.402823669×10^38! I explained more about the bit security of hash functions in my recent white paper on Winternitz hash-based signatures.
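
A hedged illustration of that preimage search in Python (my own sketch using hashlib; the target is truncated to 24 bits purely so the demo can finish, whereas a full-width target would take on the order of 2^(N-1) hash evaluations):

Code:
import hashlib
from itertools import product

TRUNC = 3  # match only the first 3 bytes (24 bits) so the search is feasible

def public_key(private):
    return hashlib.sha256(private).digest()[:TRUNC]

def verify(pub, private):
    # Verification is a single hash evaluation: cheap (polynomial time).
    return public_key(private) == pub

def find_preimage(pub, max_len=3):
    # Exhaustive preimage search: expected work ~ half of 2^(8*TRUNC) evaluations.
    for length in range(1, max_len + 1):
        for candidate in product(range(256), repeat=length):
            guess = bytes(candidate)
            if verify(pub, guess):
                return guess
    return None

pub = public_key(b"ab")    # demo private key kept deliberately tiny
print(find_preimage(pub))  # found here; hopeless for a full-width hash output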

Note randomization should not be conflated with nondeterminism because randomized or probabilistic algorithms are not necessarily nondeterministic algorithms, for if they don't require unbounded entropy (i.e. unbounded recursion) it may be possible to prove whether they terminate or not.



The unpredictability of nondeterministic algorithms is due to the inability to enumerate all the possible permutations of the algorithm's entropy. The unpredictability of a randomized algorithm is due to decoupling the control of some of the variable entropy from the input entropy. Randomization makes some of the variable entropy equiprobable (or follow any statistical distribution), but it does not require making the entropy uncountable. Sources of statistically controlled randomization could be deterministic (e.g. a PRNG with a chosen seed) or nondeterministic (e.g. /dev/random).
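
A two-line contrast in Python (my own): a PRNG with a chosen seed is deterministic randomization, while the operating system's entropy pool is not under the program's control:

Code:
import os
import random

rng = random.Random(42)                       # deterministic PRNG with a chosen seed
print([rng.randint(0, 9) for _ in range(5)])  # same seed -> same "random" sequence every run

print(list(os.urandom(5)))                    # entropy drawn from the OS pool (/dev/urandom)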

The misnamed nondeterministic finite automaton (NFA) is not nondeterministic, because the state transformation of each invocation of the algorithm is deterministic (thus NFAs are not Turing-complete and can't simulate Turing-complete algorithms), and even the eventual state of many successive NFA transformations is deterministic because the entropy is finite, since every NFA can be converted to a deterministic finite automaton (DFA), albeit such a conversion incurs an exponential combinatorial explosion of explicitly enumerated state. NFAs are more accurately characterized as a compression mechanism where the eventual entropy is not combinatorially expanded, not as the misnomer of nondeterminism.
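
That NFA-to-DFA conversion is the textbook subset construction; here is a compact Python sketch (my own, ignoring epsilon transitions for brevity) showing why the explicitly enumerated DFA state set can blow up toward 2^|NFA states|:

Code:
from collections import deque

def nfa_to_dfa(alphabet, delta, start, accept):
    # delta maps (nfa_state, symbol) -> iterable of nfa_states.
    # Each DFA state is a frozenset of NFA states (subset construction).
    start_set = frozenset([start])
    dfa_delta = {}
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        for symbol in alphabet:
            nxt = frozenset(s for q in current for s in delta.get((q, symbol), ()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    dfa_accept = {s for s in seen if s & set(accept)}
    return seen, dfa_delta, start_set, dfa_accept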

Polynomial time

Polynomial time requires the computation time of the algorithm to be a function of some variable(s) where all exponents are constants. Cobham's thesis is the rule of thumb that polynomial time algorithms are in the asymptotic case not intractable to compute, but remember from Computational Complexity Classes: Overview that such asymptotic intuitions are not absolute.

P

P includes all (boolean result a.k.a. 'decision') algorithms which can be run on a deterministic (non-Turing-complete) machine in polynomial time. In other words, those algorithms which are known to terminate in polynomial time. Deterministic algorithms (and deterministic Turing machines, i.e. not Turing-complete) require a bounded entropy, meaning that the inputs and computations to the program require an upper bound on (i.e. maximum) size and time. Another way of stating this is that it should be decidable (provable) whether the program terminates. Programs that accept input of unbounded size or perform an unbounded loop (recursion) are not deterministic.

Note that since P is deterministic, we can prove all algorithms in class P will terminate and thus we don't have to include the type of non-termination in (a disjunction of) P's boolean output type.

So the significance of the P class of algorithms is:

  • Provably terminates for all possible inputs.
  • Polynomial time often implies the computability is feasible (tractable).

NP

NP includes all (boolean result) algorithms for which the computed results can be deterministically verified on a nondeterministic (Turing-complete) machine in polynomial time (and polynomial space or size for the proof being verified). The point is that the algorithm may require a Turing-complete machine thus it is undecidable (i.e. unknowable a priori) whether the algorithm will terminate. In the cases where the algorithm terminates, the verification will also terminate thus for verification the algorithm behaves deterministically and could even be run on a deterministic (not Turing-complete) machine. Also the verification may even run a different algorithm than the generation of the proof.
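
As a sketch of "verification in P, search potentially intractable" (my own Python, using subset sum, a standard NP problem which the thread itself does not use): checking a claimed certificate is a cheap scan, while finding one by brute force may examine up to 2^n subsets:

Code:
from itertools import combinations

def verify(numbers, target, certificate):
    # The "proof" is a claimed subset; checking it is polynomial time.
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

def search(numbers, target):
    # Finding a certificate by exhaustive search: up to 2^len(numbers) subsets.
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if sum(subset) == target:
                return list(subset)
    return None

print(search([3, 34, 4, 12, 5, 2], 9))          # [4, 5]
print(verify([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True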

So the significance of the NP class of algorithms is:

  • Verification of the proof is an algorithm in P, so all the significance of P applies to the verification.
  • NP algorithms occur in so many natural problems.
  • The significance of verification computed in P coupled with the ability to prove NP computation.
  • The undecidability and/or intractability of algorithms in NP can be employed as an advantage.

An example helps explain the significance of the last two bullet points. Note that NP means deterministic, polynomial time verification of proofs of nondeterministic, polynomial time computation. An analogous example of proving computation exists for a more complex class: instead of NP, the complexity class EXP (or E) coupled with P verification, i.e. deterministic, polynomial time verification of proofs of deterministic, exponential time computation. The example is a cryptographic hash function (which btw approximates infinite Kolmogorov complexity in the Random Oracle model[1]): it requires EXP to find a preimage (analogously intractable to precompute, as is the nondeterminism in NP, due to the intractability of precomputing all the entropy of the hash function); and the algorithm is in P for verifying (the proof of computation from EXP of) a commitment to a preimage. Hash commitment schemes will be crucial for understanding Zerocash.

In simpler words, the point is proof verification can be tractable (because the algorithm is in P) while computing (i.e. generating) the proof can be intractable (because the algorithm is in NP or EXP) when the adversarial (cheating) prover doesn't know the hash preimage (i.e. the private key). So complexity classes of the kind such as NP (or EXP coupled with P verification) which incorporate P for verification and an intractable complexity class for proving, are the conceptual complexity class paradigm for public key cryptography.

Mathematically I am stating that if public key = hash(private key), where hash() is an algorithm in P but hash^-1() is an algorithm in EXP, then the algorithm for verification that public key = hash(private key) is in P, thus computation is theoretically tractable. But the algorithm for computing private key = hash^-1(public key) is in EXP, thus theoretically intractable.

Note the intractability of NP can be due to the cost of not being able to discern when to terminate an algorithm. Thus for some hypothetical public key cryptography algorithm in NP, unless you already know the private key for which the algorithm will terminate, finding an input state which maps to termination is potentially intractable. When NP algorithms do terminate, they do so in polynomial time, so the intractability is not due to an intractable running time of the terminating case as it is for EXP. Nevertheless both NP and EXP can be intractable to compute, which can be employed as an advantage to enable public key cryptography. For example, information-theoretic security (such as for Pedersen commitments) is due to insufficient information, which is equivalent to unbounded entropy because breaking the security is only plausible for an omniscient algorithm; it is thus an example of a problem analogous to those in the NP class (except Pedersen commitments aren't boolean). Again remember from Computational Complexity Classes: Overview that such asymptotic intuitions about intractability are not absolute. Even theoretical intractability due to unbounded entropy can fail under real-world considerations, which must therefore be very carefully studied and weighed in cryptography.
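
Since Pedersen commitments were just mentioned, here is a toy Python sketch of their shape (my own illustration; the parameters are made up and far too small to be secure, and a real scheme requires a second generator whose discrete log relative to the first is unknown). The point is only to show the form C = g^m * h^r mod p and why the hiding is information-theoretic: for any other message there is some blinding factor producing the very same commitment, so the commitment alone carries insufficient information.

Code:
# Toy Pedersen commitment over a small prime-order subgroup
# (insecure toy parameters, for illustration only).
p = 1019                        # prime with p - 1 = 2 * 509
q = 509                         # prime order of the subgroup
g = pow(2, (p - 1) // q, p)     # generator of the order-q subgroup
h = pow(3, (p - 1) // q, p)     # second generator; in a real scheme
                                # log_g(h) must be unknown

def commit(message: int, blinding: int) -> int:
    return (pow(g, message % q, p) * pow(h, blinding % q, p)) % p

def open_commitment(c: int, message: int, blinding: int) -> bool:
    return c == commit(message, blinding)

c = commit(7, 123)
print(open_commitment(c, 7, 123))   # True: the opening verifies
# Hiding: for any other message m', some blinding r' yields this same c,
# which is why the commitment by itself reveals nothing about m.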

...please check back again later, as I am not yet done composing this post...

[1] Margus Niitsoo, One Way Functions and Permutations from Kolmogorov-random Bit Sequences.
https://en.wikipedia.org/wiki/Algorithmically_random_sequence#Three_equivalent_definitions
http://blog.cryptographyengineering.com/2011/09/what-is-random-oracle-model-and-why.html

TPTB_need_war (OP)
December 13, 2015, 08:20:33 AM
#10

I've made some edits to the prior post and I think the section on Determinism is more complete and more accurate now.

TPTB_need_war (OP)
December 13, 2015, 10:03:26 AM
#11

I had a huge error in the NP section which has now been corrected. I was conflating NP with the computational complexity class EXPTIME coupled with P verification. The broad conceptual analogy was correct (in my groggy head), but I had previously messed up the articulation, which is now fixed. Also I have briefly explained why that kind of complexity class is the paradigm for public key cryptography. Note I discovered this error on my own. I was suffering from lack of sleep before (a recurrent issue for me at my current age of 50+, given how many excessive hours a day I am working).

YarkoL
December 13, 2015, 12:18:58 PM
#12

Wow, I actually understood some of that. But only because I've read some popular books dealing with limits of computability; otherwise it would have been all Greek to me.

So to sum it up we have seen a very in-depth exposition of
theoretical underpinnings behind one-way functions. Interesting
to see what all this is leading towards.

Getting a little bit philosophical,
Quote
More abstractly, undecidability results from any rule of containment that can point to itself (cf. the last two paragraphs of the linked answer), because this unbounded recursion opens the generality to the universe of possibilities where the entropy is unbounded.

would you agree that the "rule of containment that can point to itself" is equal to saying "a containment so rich, that it can model all rules"?

TPTB_need_war (OP)
December 13, 2015, 02:54:06 PM
#13

Quote from: YarkoL
Wow, I actually understood some of that. But only because I've read some popular books dealing with limits of computability; otherwise it would have been all Greek to me.

Hopefully not every post will be as deep as that one, especially at the end of this journey once I have the clarity of mastery of the subject matter. I did try to write in common-language terms as much as possible, though not to the n00b extreme of Randall Munroe's Thing Explainer, lol.

I hope to eventually have some posts where I summarize without so much technobabble terminology. I am first making sure I understand the subject matter. Forcing myself to write it down, forces me to catch my mental errors. I typically like to do my learning all conceptually in my head, but I gain clarity and consistency of my thoughts when serializing them out to a written format (which is a love and hate task...hate because so slow to articulate details and I'm not a highly gifted writer...love because it advances my knowledge and is a social sharing given I am a gregarious extrovert with the capability to go introvert when programming).

Quote from: YarkoL
So to sum it up we have seen a very in-depth exposition of theoretical underpinnings behind one-way functions. Interesting to see what all this is leading towards.

I assume you are aware that the post is not done yet, and I still need to cover the remaining computational complexity classes. You are speaking to what I wrote thus far, mostly the Determinism section and the part in the NP section about hash functions.

Quote from: YarkoL
Getting a little bit philosophical,
Quote
More abstractly, undecidability results from any rule of containment that can point to itself (cf. the last two paragraphs of the linked answer), because this unbounded recursion opens the generality to the universe of possibilities where the entropy is unbounded.

would you agree that the "rule of containment that can point to itself" is equal to saying "a containment so rich, that it can model all rules"?

Bertrand Russell's Paradox essentially points out that a set whose containment rule requires it to contain any sets which contain it can't exist unless it is globally total/complete, meaning all possible sets with such rules must contain each other. That would require a static world that can't ever change, and a static world can't exist (for numerous reasons). Russell's Paradox is similar to the paradox of perfectly dependently typed programming languages which can model all the dynamic runtime dependencies of the program in the static type checking done at compile time of the source code. The problem is that only the expected inputs and outputs are allowed. But under composition of modules, new permutations of inputs and outputs are created from the sets of the modules, and these permutations weren't modeled by the modules when they were separately compiled. This is what I meant in the section where I was writing about Philip Wadler, S-modularity, the Expression Problem, and dependently typed languages. You can click the links in that section to dig more into that. For laymen, I am not going to try to explain that from first principles such as "what is a type in a programming language"; that is just too much verbiage and I doubt they would benefit from it (they can always go digging with Google if they want to deepen their understanding of the technobabble terms). They need summaries.

Gödel's incompleteness theorem says that any formal system of arithmetic that is complete must be inconsistent. This is conceptually the same as Russell's Paradox, in that a rule which attempts to contain everything that can contain itself is impossible unless it can contain everything, which can't exist.

All of this is stating that we live in a relativistic world (i.e. where each thing can only make rules about its own perspective, not from total knowledge) by necessity, otherwise change wouldn't exist and thus nothing would exist. Socialism (a total rule not allowing for differing perspectives) is a denial of the existence of relativity and thus doomed to catastrophic failure. I have pointed out in my writings that the omniscience required for a centralized knowledge set would require the speed-of-light to be infinite (so that all real-time actions could be centrally decided without causing the local actors to pause and wait), but if the speed-of-light were infinite, then the past and the present would have no separation in spacetime and the universe would collapse into an infinitesimal point. Thus decentralization, decentralized version control, decentralized decision making, and open source are not philosophical decisions for me; rather they are harmonizing with the reality of the physics of existence, which requires decentralized change and relative perspective. Note CoinCube (who has a degree in Applied Math as well as a medical degree) and I had some discussions about this in the Economic Devastation thread, and he pointed out some math about how a totally undamped (i.e. underdamped, in the differential-equations sense of a model of oscillation) decentralization spawns mutations too fast to anneal on any optimization. This is pointing out that we still need leadership. Decentralization is positive but only up to the limit of the information capacity for change. I hope to PM CoinCube soon to see if he can come over to this thread; I doubt he is aware I created it.

But both Russell's Paradox and Gödel's incompleteness theorems afaics model containment per how it is defined. So if the test for containment is enumeration, then the limitations of total containment apply to enumeration. If the definition of containment is paradigm-shifted to some attribute other than enumeration, perhaps we can get more power into what can be put into an enumerated container. By more power I mean that if an application gains from enumeration, and containment can be defined orthogonally to enumeration, then the enumeration aspects can perhaps gain additional degrees-of-freedom. A new concept I am learning about is Super-Turing computation and Hypercomputation.

In discrete computation even the reals or natural numbers are unbounded. In an analog computation described in functional limits, perhaps not so. I am not mathematical enough to speak about that until I've studied it.

The shift from discrete to analog can shift the assumptions about computation. For example (and I mentioned this in the Overview post), the quantum computing machine model breaks number-theoretic intractability for problems relying on factoring by employing Shor's algorithm, which (in a butchered summarization) uses parallelized (due to quantum mechanics) Fourier analysis to shift the search for potential factors from the time to the frequency domain, where the solution space is reduced to polynomial; e.g. RSA and ECC security is reduced from exponential to polynomial time.
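
To illustrate the reduction Shor's algorithm exploits (without any quantum machinery), here is a classical toy sketch in Python, entirely my own illustration: the period (order) of a^x mod N is found by brute force here, which is the exponential step that the quantum Fourier transform makes polynomial. The sketch only shows why knowing that period lets you split N into factors.

Code:
from math import gcd
from random import randrange

def order(a: int, n: int) -> int:
    """Brute-force order finding: the step Shor's algorithm replaces with
    a quantum Fourier transform to obtain a polynomial-time procedure."""
    x, r = a, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int) -> int:
    """Classical skeleton of Shor's reduction: when the period r of
    a^x mod n is even and a^(r/2) != -1 (mod n), a nontrivial factor
    of n is gcd(a^(r/2) - 1, n)."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d            # lucky guess already shares a factor
        r = order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            return gcd(pow(a, r // 2, n) - 1, n)

print(factor_via_order(15))     # prints 3 or 5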

It appears that Super-Turing computation shifts the model of inclusion from discrete to analog or distributed fuzzy logic. I need to delve into it more before I will fully understand the implications and viability. Btw, I noted the prominent leading researcher Siegelmann is apparently female.

What I expect to remain the generative essence is that any rule of containment that contains everything that can contain itself (and note I need to correct what I wrote in the post to this improved wording), and that does not leak in the type of the computation model employed, would need to be so rich as to model everything that can be modeled, and thus can't exist because the world is open to unbounded composition.

Tangentially I believe my improvement to Satoshi's design essentially hinges on the similar kind of paradigm shift where the containment test for double-spends shifted to an orthogonality via a separation-of-concerns.

That is why I warn people never to underestimate me. Who knows, before this thread is done I may have solved the open $1 million bounty problem of whether P equals NP. (I already have some ideas rolling around in my head of how to paradigm-shift the question. Even one of the prominent researchers in the field pointed out that any meaningful advance would provide paradigmatic shifts of insight into complexity theory, otherwise it wouldn't be interesting and useful.) I encourage everyone to be creative. I don't claim any special ownership of creativity. I enjoy the creativity of others as well. Creativity comes from being in the right place at the right time (such as happening to read something that gives you a Eureka), and being busy learning and discovering new things improves the chances/opportunities for creativity to occur.

Wekkel
December 13, 2015, 04:07:49 PM
#14

looking forward to seeing where you wind up in your journey  Grin

TPTB_need_war (OP)
December 13, 2015, 05:52:08 PM
#15

YarkoL, writing my response to you caused me to find several errors in the post you were commenting on. I have made further improvements to the prose based on your feedback, which I think make it easier to comprehend. On re-reading my prose, I observed numerous weak grammatical and semantic constructions, as well as incorrect logic illuminated by the correct logic in my reply to your comment about the containment rule. Thank you.

Wekkel et al, thank you for reading. I hope this thread will help laymen appreciate some of the complexity of the technology in cryptography, and appreciate it from some level of understanding rather than just as a magical black box.

TPTB_need_war (OP)
December 13, 2015, 10:41:22 PM
#16

My immediately prior installment in this journey was about Turing-completeness, so it is highly relevant to note that Nick Szabo just demonstrated to me that he is lacking knowledge about Turing-completeness compared to this Craig Wright whom some are claiming might be Satoshi.

At roughly the 17 minute mark in this conference video, Wright correctly explains that due to unbounded recursion, the Bitcoin block chain scripting is effectively Turing-complete. Afaics, he is entirely correct and Nick Szabo is wrong, because although the scripting language stack can't loop within one transaction, one can use multiple transactions to simulate looping. This is precisely the point I made in my recent post wherein I explained that under composition it is impossible to prevent unbounded recursion and thus unbounded entropy. Review the Dr. Seuss proof of Turing-completeness. It doesn't matter what the called script answers; the calling script can always change the outcome. If you have the ability to store state on the block chain across multiple invocations of a script, then the block chain becomes the stack. Nick Szabo just demonstrated to me that he isn't as smart as I thought. Dr. Wright makes a relevant point that many people these days seem to forget that in machine code there is no virtual machine that controls what the instruction set can do in terms of which memory it can treat as a stack. Bitcoin's script instruction set can be viewed as machine code w.r.t. its ability to read and store state anywhere in the memory space of the block chain UTXO.

This is what Dr. Wright meant when he said, "the looping function is actually separate from the loop itself ... that would assume the only way of devising code would be to put it directly in the script". Szabo made a really stoopid statement implying that the language can only be Turing-complete if the script stack is, but he completely fails to realize that the block chain is state and thus can be an orthogonal stack. And most definitely then you can loop. When I say "loop", I mean in the sense relative to the block chain as the stack, so I do not mean that any one transaction can loop. Yet such a distinction is arbitrary anyway, because I can have a client interacting with the block chain causing it to loop.
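
To make the "block chain as the stack" argument concrete, here is a toy Python simulation, entirely my own sketch and NOT real Bitcoin Script: a hypothetical loop-free script runs once per transaction and reads/writes state that the chain carries, while an external client keeps publishing transactions until a condition holds. Iteration emerges across transactions even though no single script execution contains a loop.

Code:
# Toy model of looping across transactions. Not real Bitcoin Script.

chain_state = {"counter": 0}          # state carried on the 'block chain'

def script_step(state):
    """A loop-free 'script': reads the prior chain state, writes new state.
    Each invocation terminates after a fixed number of operations."""
    return {"counter": state["counter"] + 1}

def client_drives_loop(target):
    """The client keeps publishing transactions until a condition holds.
    The looping lives in the client plus the chain, not inside any one
    script execution."""
    global chain_state
    txs = 0
    while chain_state["counter"] < target:    # unbounded from the script's view
        chain_state = script_step(chain_state)
        txs += 1
    return txs

print(client_drives_loop(5))   # 5 transactions, each running a loop-free script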

Here is more about conjecture about Craig Wright being Satoshi:

http://www.wired.com/2015/12/bitcoins-creator-satoshi-nakamoto-is-probably-this-unknown-australian-genius/

http://www.theguardian.com/technology/2015/dec/09/bitcoin-creator-satoshi-nakamoto-alleged-to-be-australian-academic

http://www.theguardian.com/technology/2015/dec/09/bitcoin-founder-craig-wrights-home-raided-by-australian-police?CMP=twt_a-technology_b-gdntech


Edit: and add Gregory Maxwell (nullc) to the list of people who don't understand Turing-completeness:

   He's discussion at the All Star Panel, was very odd, and not in any way lucid or clear. Here is /u/nullc take on a transcript I made. https://www.reddit.com/r/Bitcoin/comments/3w027x/dr_craig_steven_wright_alleged_satoshi_by_wired/cxsfy8p

    user /u/jarederaj points out a book Wright wrote full of grammatical errors, nothing related to cryptocurrency, digital cash https://www.reddit.com/r/Bitcoin/comments/3vzgnd/bitcoins_creator_satoshi_nakamoto_is_probably/cxsfeon

jarederaj, nullc (Gregory Maxwell), and Szabo fail to understand Turing-completeness. Craig Wright apparently does.

https://bitcointalk.org/index.php?topic=1282144.msg13239629#msg13239629

Before you downvote, make sure you read my entire thread about Determinism and unbounded entropy. Because you are wrong. Period.

Preventing Turing-completeness is damn hard to accomplish. Idiots think they have.

Craig Wright may or may not be a fraud. I do not have an opinion on that aspect. I am just referring to the allegation about who is correct about Turing-completeness.

Edit: Although I think my explanation in this thread is more insightful as to why, I found this, which explains that it is very difficult to avoid Turing-completeness:

http://www.gwern.net/Turing-complete

Thank you for the sleuthing. Maxwell has demonstrated on occasion that he doesn't do thorough enough analysis to question his own biases, and he lashes out at others.

Theymos, how about the possibility (and almost a certainty) that Satoshi isn't one person and that this Craig Wright is asserting his role? He mentions working in a group on research and also mentions having other "beta coders" involved.

I haven't researched the case enough to form an opinion on the likelihood he is a scammer or other motive.

I offer a theory. I am not claiming this theory is correct or even likely. Take it as one example of why circumstantial assumptions are not the same as irrefutable mathematical proofs.

What we do know is that Kleiman and Wright were associates since at least 2008, Kleiman died in 2013, online documents were edited after this time to illustrate Wright's links to Bitcoin and Wright's companies are tied up in a disastrous situation with the ATO that has been ongoing for years.

Did Wright ever claim he is Satoshi? My vague recollection of what I have perused thus far is that he wrote something akin to saying his level of involvement would be coming out soon, and that he had represented to others that he was deeply involved with Bitcoin. But did he actually claim to be Satoshi? I haven't seen a quote indicating that he made such an explicit claim.

So given the above assumption, why is everyone analyzing whether the man is Satoshi and blaming him for claiming that he is?

Let me assume (without having studied all the background evidence deeply) that he has represented to others that he has a special involvement with Bitcoin, and that he is even doctoring signatures to make it look like he was involved since 2009. If his motive was not to prove to the public that he is Satoshi but rather to convince some investors or an Australian investment board that he had deep involvement with Bitcoin (leaving the question of whether he is Satoshi an intentional enigma), then he may not have cared about getting every last detail correct on the back-dated PGP public keys. He may have been doing the minimum level of publicity to assert this position to those whom he had such a motive to convince. Thus a back-dated public key doesn't necessarily indicate he intended to deceive the Bitcoin community at large, or even that he has no bonafide deep connections.

One plausible possibility is that he was somehow connected to the real Satoshi and somehow that relationship soured or changed in such a way that he now has to fabricate data in order to protect the investments and/or commitments he had formed. For example, what if Kleiman was Satoshi (or was the liaison to Satoshi) and Kleiman unexpectedly died in 2013 (of an antibiotic-resistant staph infection), so Wright has to meet his ongoing commitments somehow.

In other words, I doubt very much that Satoshi was one person. Bitcoin was likely created by some highly secretive group. Maybe that group had some subcontractors. Maybe Kleiman was connected. Maybe Wright rode the coattails of Kleiman. Kleiman dies, Wright tries to rewrite history to maintain the continuity.

My plea is please be more objective and less brash at jumping to conclusions. Reality is sometimes not as binary (black and white). Rationality and reason should prevail if we are to be a sane community.

Again I am not asserting this theory is likely. I too have strong intuitions that Wright is caught up in some level of deception or gaming the system.


The real "Satoshi" (or multiple people) may be dead. If it was a highly secretive plot then it is plausible if not also probable the creators have conveniently died of "natural causes" to preserve the secrecy. It is virtually implausible that the national security agencies don't know who the real Satoshi is/was. Delve a little bit into the impossibility that a government level power was not involved in planting the military-grade nano-thermite that toppled the WTC towers on 9/11. Be realistic. And in that case of 9/11, the discovery phase of evidence has been extensive and is beyond-any-reasonable-doubt. And is nearly irrefutable, but still not a mathematical proof.

As I remember, the accounts we associate with Satoshi stopped communicating after the publicity started for Bitcoin. And afair (from reading his posts) he stated the publicity was one of the issues. This doesn't indicate a fear of the NSA, but rather a fear of the public sector. Normally a person afraid of harassment from sovereign powers wants publicity in order to gain more security. That is one of the small tidbits of circumstantial evidence that led me to believe that Satoshi was connected to the secret and powerful groups behind the curtain of governmental-level power. I wish Satoshi was a Libertarian phenomenon. I have contemplated scenarios where the truth is a mix of both, e.g. a group of researchers outsmarts their handlers. The latter point was inspired by Eric Raymond's blog about how the researchers outsmarted the government in order to create the internet. In other words, you build a system that you know is flawed and will end up handing the power to the government because it can't scale without being controlled by the professional miners and exchanges which can be easily located and regulated. But you outsmart your handlers by realizing that you will launch a paradigm that will be improved by the community and in a grassroots manner will subvert the government while the government is lost in complacency thinking they have Bitcoin regulation under control. Or an alternative scenario (and the one I favor) is that a powerful group wants to destroy nation-state central banking and the nation-state currencies and finance systems to accelerate the move to a one-world reserve currency and netizens using electronic currency. This powerful group might think they benefit from such a transition, given also that electronic currency is more traceable than physical cash (well, at least so far that seems to be the case, but I had a new revelation recently that maybe Zerocash is the only technology that can liberate us from the reality that IP addresses can be correlated to human identities). Another possibility is that Bitcoin was created by some developer (or developers) from the Bittorrent realm, as they would have had the requisite P2P network and decentralization experience (and libertarian ideals as motivation), but based on my interaction with them about decentralized tit-for-tat economics back in 2008, I think that is less likely.

I favor the scenario where Wright is either just another person gaming and trying to make money in the Bitcoin ecosystem, or at best he is a pawn of the people who are really behind the creation of Bitcoin. Or perhaps Satoshi was a lone brilliant engineer, but I place the odds of that at very near to 0. Even the smartest people can't work and develop without collaboration. To execute what Satoshi did as a loner, would be amazing nearly beyond belief.

I have not been able to confirm that he claimed PhDs from CSU. I read he also studied in London. I heard on video he claimed 3 masters degrees and maybe 2 doctorates but he also said, "I forget exactly what I have".

He has 3 more Masters degrees than most of you do.

And he apparently was mining Bitcoin. I can't see where he has claimed to be Satoshi. It is true that miners run Bitcoin. If he was mining back in 2009 and had spent the $1 million he claims on mining equipment, then he was likely literally running Bitcoin. And he may have a lot of mined BTC.

Why do you hate him for mining Bitcoin?

http://www.theguardian.com/technology/2015/dec/09/who-is-craig-wright-and-how-likely-is-it-that-hes-behind-bitcoin

Quote
During the interview, the person the transcript names as Wright says: “I did my best to try and hide the fact that I’ve been running bitcoin since 2009 but I think it’s getting – most – most – by the end of this half the world is going to bloody know.”

Guardian Australia has been unable to independently verify the authenticity of the transcripts published by Gizmodo, or whether the transcript is an accurate reflection of the audio if the interview took place. It is also not clear whether the phrase “running” refers merely to the process of mining bitcoin using a computer.

YarkoL
December 14, 2015, 09:25:56 AM
#17


Quote from: TPTB_need_war
At roughly the 17 minute mark in this conference video, Wright correctly explains that due to unbounded recursion, the Bitcoin block chain scripting is effectively Turing-complete. [...] Bitcoin's script instruction set can be viewed as machine code w.r.t. its ability to read and store state anywhere in the memory space of the block chain UTXO.


You can use the blockchain as a stack (using embedded metadata),
but then it is being manipulated by the client, not by the script.
The script works in the context of a single transaction and does not
know about the blockchain.

Just as a single transaction is a part of the entire blockchain,
the script engine is a submodule of the client. They are on
different logical levels*. The client can loop, the script can't.


_____
*) I use the term "logical level" in the same sense as Russell
and Whitehead did when they resolved the set-of-all-sets paradox
by establishing that it makes no sense to call the collection of
all sets a set, as it is on "higher" logical level or context.


TPTB_need_war (OP)
December 14, 2015, 10:48:26 AM
#18


Quote from: YarkoL
You can use the blockchain as a stack (using embedded metadata), but then it is being manipulated by the client, not by the script. The script works in the context of a single transaction and does not know about the blockchain. Just as a single transaction is a part of the entire blockchain, the script engine is a submodule of the client. They are on different logical levels. The client can loop, the script can't.


Wright explicitly stated that he wasn't claiming the script can loop, but that factors orthogonal to the scripting can implement the paradigm of looping. His point is that the Bitcoin block chain stack is already Turing-complete. And on that point he is correct, because the entropy of the block chain state is unbounded. Anything that can be done with Ethereum due to Turing-completeness and that is not an attack on scripting node resources can also theoretically (in terms of asymptotic computational complexity theory) be done on the Bitcoin block chain.

But remember from my Overview section that intuitions from computational complexity theory are not an absolute guarantee of how it works out in the real world implementation.

So there are real-world implementation differences in efficiency, practicality, atomicity, and storage/state per transaction when comparing Bitcoin and Ethereum. And I don't claim to be very knowledgeable about the scripting capabilities of either; rather I am just speaking from a high-level conceptual view.

For example, there are some protections gained from making the scripting deterministic (bounded, with provable termination), such as that the verification nodes can be sure the script will terminate (though even that alone doesn't prevent the script from consuming too many verification resources). Specifically, the Bitcoin scripting node algorithms are in (a non-boolean variant of) P. So in that sense there is an asymptotic complexity distinction between Bitcoin and Ethereum, except that Ethereum uses "gas" to be sure scripts terminate, so it is in P also (and thus Ethereum scripts are apparently not Turing-complete either, but don't quote me on that because I haven't studied Ethereum's scripting) [on further thought, Ethereum scripts are in some variant related to the RP complexity class, because termination due to "gas" is an "I don't know" result].
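
Here is a toy Python sketch of that distinction, my own illustration loosely modeled on the idea of gas metering (not on the real Ethereum VM): with a gas limit every run terminates, but a run can end in an 'out of gas' outcome instead of a definite boolean, which is the "I don't know" result mentioned above.

Code:
def run_with_gas(program_steps, gas_limit):
    """Toy gas-metered execution. Every call terminates, but the result
    can be the third outcome 'out of gas' rather than a definite
    True/False decision."""
    gas = gas_limit
    state = 0
    for step in program_steps:
        if gas <= 0:
            return "out of gas"        # the "I don't know" result
        gas -= 1
        state = step(state)
    return state % 2 == 0              # a definite boolean decision

inc = lambda s: s + 1
print(run_with_gas([inc] * 4, gas_limit=10))    # True (definite answer)
print(run_with_gas([inc] * 50, gas_limit=10))   # 'out of gas'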

I am stating that Szabo and Maxwell have conflated Turing-completeness with differences in efficiency, practicality, atomicity, and storage/state per transaction.

Conflating an asymptotic attribute (computational complexity) with a set of non-asymptotic attributes (efficiency, practicality, atomicity, and per txn storage/state) is akin to not understanding computational complexity. Which is strange because I know Szabo and Maxwell do both understand. They probably didn't intend to conflate. They've had their minds deep in many issues and just didn't take some moments to deeply think about it. I happened to be writing about this, so my mind was very focused on the distinction.

YarkoL
December 14, 2015, 11:16:31 AM
#19


Quote from: TPTB_need_war
[...] Anything that can be done with Ethereum due to Turing-completeness and that is not an attack on scripting node resources can also theoretically (in terms of asymptotic computational complexity theory) be done on the Bitcoin block chain.

I agree with the last sentence quoted here. What the Ethereum VM provides is a succinct and unified way of writing smart contracts, but these can be implemented on the Bitcoin blockchain as well. It's just more laborious and leads to a proliferation of standards (colored coins, Mastercoin, etc.).

But my point was that even then, the contractual logic, the reading/writing of data on the blockchain, and the Turing-completeness happen on a higher level than that of scripting. (edit: thinking about it, contractual logic can and should be executed in script, but it would need blockchain-derived data handed down to it by the client or user)


Based on the transcription snippet,
Craig Wright seems to envision a separate control stack that is
used by the script, and I doubt he had the blockchain in mind.

TPTB_need_war (OP)
December 14, 2015, 11:32:29 AM
#20

Quote from: YarkoL
Based on the transcription snippet, Craig Wright seems to envision a separate control stack that is used by the script, and I doubt he had the blockchain in mind.

I also did note that he seemed to be thinking about multiple stacks within the scripting, but then at one point in his response to Szabo, he also said, "...that would assume the only way of devising code would be to put it directly in the script".

And he also stated that the objective or overall point is that we don't need to replace the Bitcoin stack with Ethereum (which I agree is probably a dubious claim due to practical concerns).

So although I agree he did make that statement about multiple stacks within Forth, he also seemed to be alluding conceptually to the unbounded situation in general. It seemed as though he hadn't well prepared his thought process and was conflating in his mind how he would accomplish it, and then at the last moment he added the above quote. That he recovered by making that quoted statement evidenced to me that he is reasonably well informed about complexity theory. He was able to recover (in a live setting on a video feed!) from his initial mistake of asserting that multiple stacks within Forth would be anything other than a finite unrolling of a loop.

I haven't yet observed evidence that he is a total dunce. All of us make mistakes sometimes and have to recover from a faulty statement or thought process. I haven't seen evidence that he is astute enough to have created Bitcoin, but I am questioning whether he ever claimed that. I haven't reviewed enough of his work to form an opinion on what level of expert he likely is.

For example, I can tell you for sure that Gregory Maxwell and Nick Szabo are better mathematicians than me, and will embarrass me if we get into any discussion about algebraic geometry. I would need perhaps up to a year or more of focused deep study to catch up with them in that field. And I don't have that time, so I will defer to experts on that field.
