So you are basically saying that every number or text that physically exists in the world as an object has at minimum as much entropy as the computational representation of it?
No. You have that backwards.
I'm saying that any representation of a number has at minimum as much entropy as the choice of the number itself. Entropy isn't lost from the selection method when the selection is represented, regardless of whether that representation is a die, pencil on paper, a computer file, or a wall painting.
So if we have the number 2384923482983, and this number is painted on a wall in Chinese characters, and we take a picture of that wall, then that picture will have at minimum as much entropy as the number itself? Or if I say the number 2384923482983 out loud in German and record it in an MP3 file, then that MP3 file will also have at minimum as much entropy as the number itself?
Note that "a number" (such as 2384923482983) doesn't have ANY entropy at all. In order for that number on that wall to have an amount of informational entropy, there needs to be some randomness to what that value could be. If that number on that wall was a choice between 2384923482981 and 2384923482985, then it has very little entropy at all (no matter how it was chosen). If it was a completely random selection between 0 and 1000^1000 where every value had equal probability of being selected, then it has significantly more entropy.
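The contrast between those two cases can be made concrete with Shannon's formula, H = -Σ p·log2(p). A minimal sketch in Python (the specific probabilities are just the two examples above):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An even choice between just two values (e.g. 2384923482981 and
# 2384923482985) carries exactly 1 bit, no matter how large the numbers are.
two_choice = shannon_entropy([0.5, 0.5])

# A uniform choice over N equally likely values has entropy log2(N).
# For N = 1000**1000 that is 1000 * log2(1000), about 9965.8 bits.
uniform_bits = 1000 * math.log2(1000)

print(two_choice)   # 1.0
print(uniform_bits)
```

The point is that the entropy depends entirely on the selection process (how many outcomes, with what probabilities), not on the digits of the value that was ultimately selected.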
https://en.wikipedia.org/wiki/Entropy_(information_theory)
Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known. The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
Generally, entropy refers to disorder or uncertainty.
That being said, I think it should be self-evident that any representation of a number, such that all representations in the selection set are sufficiently distinguishable from each other, will have at minimum as much entropy as the number selection method itself. Note that in some languages (or representations) a number may NOT be sufficiently distinguishable from another number. In that case, entropy CAN be lost. For example, if you write down a randomly generated number in English, and your handwriting is sloppy such that all your 7's look like your 1's, then there will be less entropy in your written representation than in the number generated, since your written number will effectively never have any 7's (they will always be 1's) in the written form.
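The sloppy-handwriting case is a non-injective mapping: two source symbols (7 and 1) collapse into one, and that collapse is exactly where entropy is lost. A small sketch quantifying the loss per random decimal digit:

```python
import math
from collections import Counter

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniformly random decimal digit carries log2(10) ≈ 3.32 bits.
digits = list(range(10))
before = entropy_bits([1 / 10] * 10)

# Sloppy handwriting: every 7 is written (and read) as a 1.
# The mapping is not injective, so information is destroyed.
def sloppy(d):
    return 1 if d == 7 else d

# In the written form, '1' now occurs with probability 2/10 and '7' never.
counts = Counter(sloppy(d) for d in digits)
after = entropy_bits([c / 10 for c in counts.values()])

print(before)  # ≈ 3.3219 bits per digit
print(after)   # ≈ 3.1219 bits per digit
```

Here the written representation carries about 0.2 bits less per digit than the generated number did; with a perfectly legible (injective) mapping the two entropies would be equal.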
A very interesting theory. Do you know of any scientific papers on it? I would like to read more.
Entropy is a measure of randomness in information. If the information being represented is a number, and each representation of that information is distinguishable from the other representations, then the randomness of the representation will never be less than the randomness of the information itself, since the actual information hasn't changed (only the representation).
Think about this for a moment...
Let's say I have a machine that gives me truly random numbers between 0 and 2^256 (perhaps it uses radioactive decay as a source of randomness). Is the entropy of the representation any less (or more) if I represent that number in binary than if I represent it in decimal, base 58, or hexadecimal? The "information" (the concept of the number itself) hasn't been lost with any of those representations, has it?
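One way to see why the base doesn't matter: each of those representations is a bijection, so the original value (and therefore all of its entropy) is recoverable exactly from any of them. A sketch, using Python's `secrets` module to stand in for the hypothetical radioactive-decay machine:

```python
import secrets

# A uniformly random 256-bit number, standing in for the output of the
# hypothetical true-randomness machine described above.
n = secrets.randbits(256)

# The same value in three different representations.
as_decimal = str(n)
as_binary = format(n, 'b')
as_hex = format(n, 'x')

# Each representation round-trips to the identical value, so no
# information (and hence no entropy) is lost by the change of base.
assert int(as_decimal, 10) == n
assert int(as_binary, 2) == n
assert int(as_hex, 16) == n
```

(Base 58 works the same way in principle; it's omitted here only because it isn't in the standard library.)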