I find it interesting that in 2010, John Nash was contemplating the implications of completeness being stated from a relativistic (i.e. partial order) perspective instead of a total order perspective. This is a different way of thinking about everything, probably including a different way of thinking about gravity, space-time and Einstein's General Relativity, because instead of modeling the universe as if there were a total observer or a totally complete model, we instead model the universe as an unbounded number of partial orders, each from the perspective of an observer (and clusters of observers), i.e. a fractal model of completeness. I had also lately been heading in this same direction with my TOE work. My interpretation above appears to be somewhat more abstracted/generalized (a generative essence) than Nash's axiomatic mechanics conceptualization:
But better than adding the Goedel or the Goedel-Rosser assertion
to the initial system as an axiom one can instead add to it an axiom
of consistency. That is one can add an axiom stating that the initial
system was formally consistent. This does not also say that the new
system including the added axiom is itself consistent.
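To make the quoted passage concrete (my formalization, not Nash's notation): this is Gödel's second incompleteness theorem at work, and adding a consistency axiom just moves the gap up one level:

```latex
% Sketch of the tower of consistency extensions Nash describes.
% Con(T) denotes the arithmetized statement "T is consistent".
\begin{align*}
  T_0     &= \text{the initial system (e.g.\ Peano arithmetic)} \\
  T_{n+1} &= T_n + \mathrm{Con}(T_n)
          &&\text{add an axiom asserting $T_n$ is consistent}
\end{align*}
% By Goedel's second incompleteness theorem, if $T_{n+1}$ is consistent,
% then $T_{n+1} \nvdash \mathrm{Con}(T_{n+1})$: the new system still
% cannot prove its own consistency, exactly as Nash notes.
```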
This was after Nash had created Bitcoin (more evidence of that is coming in my next post), so everything he had learned from that was impacting his thought process. Ditto for myself, as I had learned that a total order of events is unattainable in a distributed system, yet consensus requires one, as part of my research on blockchain consensus algorithms.
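A minimal sketch of what I mean (standard distributed-systems machinery, not anything from Nash): with vector clocks, the happened-before relation among events is only a partial order; two concurrent events are simply incomparable, and a consensus algorithm has to manufacture the total order on top of that.

```python
# Minimal vector-clock sketch: happened-before is a partial order,
# not a total order. Names here are illustrative, not from any
# particular blockchain.

def happened_before(vc_a, vc_b):
    """True if event A causally precedes event B."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Neither event precedes the other: incomparable in the partial order."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

# Two nodes each observe a local event without hearing from the other:
e1 = (1, 0)  # node 0's first event
e2 = (0, 1)  # node 1's first event
e3 = (2, 1)  # node 0's event after receiving node 1's message

print(happened_before(e1, e3))  # True  -> ordered by causality
print(concurrent(e1, e2))       # True  -> no intrinsic order exists;
                                #          only consensus can impose one
```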
Nash reminds me of myself (although he was clearly more accomplished and knowledgeable than I am, especially in the field of mathematics): always questioning the foundational assumptions that others make. We have also both gone through periods of detachment from reality and unbounded creativity, i.e. "crazy".
Nash obviously realized that his ICPI (Industrial Consumption Price Index) could become a problem leading toward a world empire outcome. He realized the flaw that any single standard of absolute value, such as the "kilogram", could become corrupted.

It is interesting to note that in my upthread posts I had predicted Nash would come to the above conclusion, before finding that hidden writing from him on his personal web site.
But here I see a fundamental problem with this approach.
I will try to explain shortly lol
The whole idea of immutable truth comes from Parmenides' thinking, which says only truth (or God) is eternal/immutable; it had a direct influence on idealism, but in any case, all material things are mutable.
The closest things are gold or diamonds, not because of the Federal Reserve or Wall Street, but because if you put a gold bar at the bottom of the ocean, it is still there 1000 years later. That's immutability.
Bitcoin immutable... ha ha, let me laugh; it's not even certain it will still be there in 1 year.
Immutability has resonance in alchemy or Gnosticism (the immortal soul, etc.), but it's more interesting with alchemy, as alchemy studies how time affects things; it also led to algorithms, and to the idea that you can represent the equivalent of time's transformations with an algorithm.
In the context of a computer, the effect of time is a core processing an instruction from a program; it is how time will affect the system represented/maintained by that program. With Turing's theory you can come to the conclusion that you could get a completely deterministic representation of time using a computer program based on mathematical/philosophical theory.
The problem is when you want a 'real-time' simulation, i.e. one driven by a hardware or system timer, like a real-time physics simulation: you will never get exactly the same result after 30 seconds on a multitasking system.

Even for very simple cases using algorithms that are supposed to be deterministic under Turing's theory (and if computers can't compute any physics, what are they good for?), once you introduce even a single external variable like a timer into the algorithm, it stops being truly deterministic.
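A toy illustration of this point (my sketch, not anyone's production code): the same integrator, with identical initial conditions, is reproducible when the timestep is a constant and irreproducible when the timestep is read from the OS clock.

```python
import time

def integrate_fixed(steps, dt=0.01):
    """Pure function of its inputs: prints the same value on every run."""
    x, v = 0.0, 1.0
    for _ in range(steps):
        v -= 9.81 * dt  # toy gravity
        x += v * dt
    return x

def integrate_wallclock(duration):
    """Timestep comes from the system timer, so scheduling jitter leaks
    into the state: two runs of the 'same' simulation differ."""
    x, v = 0.0, 1.0
    start = last = time.perf_counter()
    while time.perf_counter() - start < duration:
        now = time.perf_counter()
        dt, last = now - last, now
        v -= 9.81 * dt
        x += v * dt
    return x

print(integrate_fixed(3000))     # identical across runs
print(integrate_wallclock(0.5))  # differs across runs on any multitasking OS
```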
It's in part why I wanted to make my microkernel: to do real-time physics simulation with all the cores and memory dedicated to the program, even completely disabling interrupts, virtual paging, and that sort of thing, in order to have more linear execution of programs, and to have something more deterministic and closer to a Turing machine.

But running on the average busy Windows or Linux system, even plain real-time physics applying plain deterministic linear algebra driven by a real-time timer (i.e. a source external to the program) is not deterministic.
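For completeness, the standard workaround in physics/game engines (the well-known fixed-timestep accumulator, a general technique rather than the micro-kernel approach described above) is to let the timer decide only how many fixed steps to run; the state after N steps is then reproducible, even though N itself still varies with scheduling:

```python
import time

DT = 0.01  # fixed timestep: the integration itself never reads the timer

def step(state):
    """One deterministic physics step (same toy dynamics as above)."""
    x, v = state
    v -= 9.81 * DT
    return (x + v * DT, v)

def run_fixed_timestep(duration):
    """The wall clock only chooses HOW MANY steps to take; each step is a
    pure function, so the state after N steps is identical across runs
    (only N itself still depends on scheduling)."""
    state = (0.0, 1.0)
    accumulator, steps = 0.0, 0
    start = prev = time.perf_counter()
    while time.perf_counter() - start < duration:
        now = time.perf_counter()
        accumulator += now - prev
        prev = now
        while accumulator >= DT:
            state = step(state)
            accumulator -= DT
            steps += 1
    return steps, state  # state is fully determined by the step count
```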
With something running on top of a worldwide P2P network, thinking that some kind of distributed PoW and synchronization over network packets will give you something very immutable is very idealistic.

Something based on a model of a state-based P2P network can never be immutable at the whole-network scale. Impossible. Or it would need some quantum entanglement thing.

There is no science that describes the immutability of a distributed P2P network.

Even at the fundamental 'real-time' program-execution level, which is how the effect of time on the system is represented, there is no way to apply any deterministic theory to a global program running on millions of P2P nodes.
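The smallest version of that problem (again a toy of my own, not any real protocol): if a node's state is just a fold over the messages it has received, and the update rule is not commutative, then two nodes receiving the same messages in different orders diverge; that is exactly the gap a consensus-imposed total order has to fill:

```python
from functools import reduce

# Hypothetical non-commutative update rule for a 'state-based' node.
def apply_msg(balance, msg):
    op, amount = msg
    if op == "add":
        return balance + amount
    if op == "halve":  # does not commute with "add"
        return balance * 0.5
    return balance

msgs = [("add", 100.0), ("halve", None)]

node_a = reduce(apply_msg, msgs, 0.0)                  # add, then halve
node_b = reduce(apply_msg, list(reversed(msgs)), 0.0)  # halve, then add

print(node_a, node_b)  # 50.0 100.0: same messages, different order,
                       # different state on each node
```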
That's not even speaking of the economic theory and the various theoretical problems to solve in computing ratios of exchange against different things, whether stock options, bonds, other currencies, or direct goods or services, in real time, while keeping the whole system deterministic at the global P2P network scale.

Not even possible.

And indexing stability/immutability on market law never gives any stability at all. Markets are even more unstable than multithreaded programs.