Bitcoin Forum
Author Topic: Black Holes and The Internet  (Read 2995 times)
Roger_Murdock
Sr. Member
Activity: 342
Merit: 250

November 03, 2012, 02:23:02 PM
#21

1) information = inverse of entropy

2) the entropy of a closed system always increases with time (2nd law of thermodynamics)

3) the Universe is a closed system

4) There is no such law as "conservation of information." Information can easily be destroyed: kick a jigsaw puzzle, for example, or crash a hard disk by dropping it on the floor.

Are you sure about number 3?
flynn
Hero Member
Activity: 728
Merit: 540

November 03, 2012, 02:32:16 PM
#22

1) information = inverse of entropy

2) the entropy of a closed system always increases with time (2nd law of thermodynamics)

3) the Universe is a closed system

4) There is no such law as "conservation of information." Information can easily be destroyed: kick a jigsaw puzzle, for example, or crash a hard disk by dropping it on the floor.

Are you sure about number 3?


tbh I am sure of none of these. But as far as thermodynamics is concerned, yes. And as of today, the predicted future of the Universe is a cold place at maximum entropy, with all information gone.

Maybe some places could be used as vaults to preserve some of it for a time ...



intentionally left blank
ElectricMucus
Legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP

November 03, 2012, 02:33:08 PM
#23

There is a major fallacy within this whole argument.

The Universe is information itself. When you store information somewhere, you haven't created new information. Instead, all you've done is change the information which already existed, plus change the existing information within your brain to interpret it as being meaningful to you.

No new quantity of information is created.

Information is a concept, the universe is made of objects.
But aren't "objects" also a concept? I've considered the "universe is information" formulation before and found it appealing. I've also considered the possibility that the universe is consciousness. But perhaps the most we can say is that "the universe is."

It doesn't matter what you find appealing. Objects are not a concept; they exist.
Information is a property of an object, interpreted by our brains. The notion that objects consist of information is ludicrous, a typical fallacy perpetrated by those who follow esoteric teachings.
It might suit their purposes, but in a scientific sense it is just plain wrong.
MysteryMiner
Legendary
Activity: 1512
Merit: 1049
Death to enemies!

November 03, 2012, 02:45:10 PM
#24

Quote
information = inverse of entropy
Are you sure? The universe increases in complexity over time, so the interpretable properties of the universe that someone might call information also increase.
Quote
the entropy of a closed system always increase with time (2nd law of thermodynamics)
True.
Quote
the Universe is a closed system
I'm not so sure, but it is likely. Scientists do not have a definite answer about such fundamental properties of our universe, but we will see.
Quote
There is no such law as "conservation of information." Information can easily be destroyed: kick a jigsaw puzzle, for example, or crash a hard disk by dropping it on the floor.
In both cases the information is preserved. A jigsaw can be reassembled by examining the trajectories of the flying pieces. And the hard drive will not be damaged by dropping it on the floor: I once had an accident where a rack collapsed and spare hard drives fell onto a hard floor from a height of almost 3 meters, and all the drives tested OK. Even if a hard drive is damaged by dropping, the information is still there, only inaccessible. Of course, information can be destroyed by other means.

bc1q59y5jp2rrwgxuekc8kjk6s8k2es73uawprre4j
flynn
Hero Member
Activity: 728
Merit: 540

November 03, 2012, 03:07:02 PM
#25


Quote
information = inverse of entropy
Are you sure? The universe increases in complexity over time, so the interpretable properties of the universe that someone might call information also increase.

See Shannon.
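
For reference, the quantity Shannon defined, in bits per symbol, for a source whose symbols occur with probabilities pᵢ:

H(X) = -Σᵢ pᵢ log₂ pᵢ

A fair coin (p = 0.5 for each side) gives H = 1 bit per toss; a certain outcome gives H = 0.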

Quote
the Universe is a closed system
I'm not so sure, but it is likely. Scientists do not have a definite answer about such fundamental properties of our universe, but we will see.

What I think is important is not to mix models here. I know quantum physicists have exotic models that exchange things from one Universe to another, but I am staying with a thermodynamic POV here.

Quote
There is no such law as "conservation of information." Information can easily be destroyed: kick a jigsaw puzzle, for example, or crash a hard disk by dropping it on the floor.

In both cases the information is preserved. A jigsaw can be reassembled by examining the trajectories of the flying pieces. And the hard drive will not be damaged by dropping it on the floor: I once had an accident where a rack collapsed and spare hard drives fell onto a hard floor from a height of almost 3 meters, and all the drives tested OK. Even if a hard drive is damaged by dropping, the information is still there, only inaccessible. Of course, information can be destroyed by other means.

Although I admit that dropping an HDD is a bad example, the very real relation between entropy and information needs to be understood. Information is a way of ordering things; entropy is the destruction of that order.

intentionally left blank
nebulus (OP)
Hero Member
Activity: 490
Merit: 500
... it only gets better...

November 03, 2012, 03:28:12 PM
Last edit: November 03, 2012, 03:39:54 PM by nebulus
#26

As far as the 2nd law of thermodynamics goes, it only applies to classically behaving things.
When you get to the quantum scale, I think this law no longer applies...

The 2nd law of thermodynamics holds macroscopically, but it is definitely not the entire picture. I think the public is completely off the mark when they say "But wait! The laws of thermodynamics!"

Take, for example, quantum entanglement. According to this:

http://www.technologyreview.com/view/428670/entangled-particles-break-classical-law-of-thermodynamics-say-physicists/

entanglement breaks the classical laws of thermodynamics...

Black holes are quantum entities.

For the sake of argument, let's say that entropy is actually information itself ("Hey, this egg is broken"). If that's the case, then black holes decrease entropy (and so information) and convert it into a simple particle of gravity (graviton, Higgs, what have you). I think it is established that the more stuff goes in, the more pull a black hole has.

MysteryMiner
Legendary
Activity: 1512
Merit: 1049
Death to enemies!

November 03, 2012, 04:02:03 PM
#27

Quote
I know quantum physicists have exotic models that exchange things from one Universe to another, but I am staying with a thermodynamic POV here
We still have no "theory of everything," and for now our best bet is to apply the laws that are applicable to the system we are talking about. Your car runs on Newtonian physics, and it is hard to describe your car's engine using quantum mechanics.
Quote
Information is a way of ordering things; entropy is the destruction of that order.
The disorder still contains information about both the current disorder and the previous order. I don't know how this relates, but I think the quantity of information and the entropy are not strictly proportional.
Quote
black holes decrease entropy (and so information) and convert it into a simple particle of gravity (graviton, Higgs, what have you). I think it is established that the more stuff goes in, the more pull a black hole has.
A black hole has no more gravity than its mass provides. The gravity of our Sun and the gravity of a black hole with the same mass as the Sun would be equal.

bc1q59y5jp2rrwgxuekc8kjk6s8k2es73uawprre4j
flynn
Hero Member
Activity: 728
Merit: 540

November 03, 2012, 04:07:01 PM
#28


http://en.wikipedia.org/wiki/Entropy_%28information_theory%29

intentionally left blank
FirstAscent
Hero Member
Activity: 812
Merit: 1000

November 03, 2012, 04:30:59 PM
Last edit: November 03, 2012, 05:28:21 PM by FirstAscent
#29

Fair coins produce greater entropy in their sequence of tosses; unfair coins produce less. Imagine a coin so unfair that all it produces is heads: that's zero entropy. It takes more bits to encode a sequence of fair coin tosses.

A fair coin will produce an image of white noise. A picture of a perfectly clear blue sky will be only one color, the opposite of white noise. Thus white noise has greater entropy, and a picture of a single color has next to no entropy.

Which has more information? A picture of pure white noise? A picture of a pure clear blue sky where all pixels are one color? A picture of an interior with many diverse objects?

I would be disinclined to claim that the inverse of entropy is information.
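
A quick way to see the numbers (a minimal Python sketch; binary_entropy is my own helper, not a library call):

Code:
import math

def binary_entropy(p):
    """Shannon entropy, in bits per toss, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no surprise at all
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))   # 1.0   -- fair coin: one full bit per toss
print(binary_entropy(1.0))   # 0.0   -- the all-heads coin: zero entropy
print(binary_entropy(0.99))  # ~0.08 -- heavily weighted: almost predictable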
nebulus (OP)
Hero Member
Activity: 490
Merit: 500
... it only gets better...

November 03, 2012, 05:15:52 PM
#30

I think people still use the term entropy just like they did in the 19th century. They did not think about it in terms of information.

I would be disinclined to claim that the inverse of entropy is information.
I agree with this. The more entropy there is, the more information.

MysteryMiner
Legendary
Activity: 1512
Merit: 1049
Death to enemies!

November 03, 2012, 05:24:33 PM
#31

Quote
I think people still use the term entropy just like they did in the 19th century.
Did anyone try to read that Wikipedia article and the additional links? Even a partial understanding would add sense to this discussion.

bc1q59y5jp2rrwgxuekc8kjk6s8k2es73uawprre4j
FirstAscent
Hero Member
Activity: 812
Merit: 1000

November 03, 2012, 05:35:09 PM
#32

From the Wikipedia entry:

Read it very carefully (emphasis is mine).

Quote
Entropy, in an information sense, is a measure of unpredictability. For example, consider the entropy of a coin toss. When a coin is fair, that is, the probability of heads is the same as the probability of tails, the entropy of a coin toss is as high as it could be. There is no way to predict what will come next based on knowledge of previous coin tosses, so each toss is completely unpredictable. A series of coin tosses with a fair coin has one bit of entropy, since there are two possible states, each of which is independent of the others. A string of coin tosses with a coin with two heads and no tails has zero entropy, since the coin will always come up heads, and the result can always be predicted. Most collections of data in the real world lie somewhere in between. It is important to realize the difference between the entropy of a set of possible outcomes, and the entropy of a particular outcome. A single toss of a fair coin has an entropy of one bit, but a particular result (e.g. "heads") has zero entropy, since it is entirely "predictable".

Zero entropy is not encoding more information.
FirstAscent
Hero Member
Activity: 812
Merit: 1000

November 03, 2012, 05:42:49 PM
#33

The best compression algorithms will produce a stream of data that is indistinguishable from white noise. That means it cannot be compressed any further, because white noise is not compressible. The size of your file, after perfect compression, is a good indicator of how much information lies within.
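
That claim is easy to try out (a toy sketch; zlib here is only a stand-in for "the best" compressor):

Code:
import os
import zlib

n = 100_000
blue_sky = bytes([200]) * n   # a "picture" that is the same color everywhere
white_noise = os.urandom(n)   # random bytes: white noise by construction

print(len(zlib.compress(blue_sky, 9)))     # a few hundred bytes at most
print(len(zlib.compress(white_noise, 9)))  # slightly larger than 100,000: no gain

The single-color image compresses to almost nothing; the noise does not compress at all.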

An egg on the table has structure and pattern. Assuming all eggs are exactly alike, I can convey the structure of it to you with the term 'egg'. A broken egg on the floor cannot be conveyed as precisely. I might have to use words like this: "There is a fragment of shell 1/4" in size over here, a splattering of yolk over there," and so on.

Each broken egg is different.
ElectricMucus
Legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP

November 03, 2012, 05:53:14 PM
#34

The best compression algorithms will produce a stream of data that is indistinguishable from white noise. That means it cannot be compressed any further, because white noise is not compressible. The size of your file, after perfect compression, is a good indicator of how much information lies within.

Except there is no best compression algorithm. It depends on the nature of the data and the understanding of it. If I take all prime numbers up to a certain magnitude and add up the corresponding oscillations, one per wavelength, until I have used up all the bandwidth, the resulting signal will be indistinguishable from white noise. Still, knowing how the signal was created makes it possible to reproduce it from just the algorithm.
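
A loose sketch of that construction (the details - the prime bound, the amplitude, the mod-256 quantization - are mine, just to make the point concrete):

Code:
import math
import zlib

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

# One sinusoid per prime "wavelength", summed; keeping only the low byte of a
# large-amplitude copy of the smooth sum scrambles it into a noise-like stream.
ps = primes_up_to(1000)
signal = bytes(
    int(1000 * sum(math.sin(2 * math.pi * t / p) for p in ps)) % 256
    for t in range(50_000)
)

# zlib should gain little or nothing here, yet the whole stream is reproducible
# from the few lines above: a tiny program, a noise-like output.
print(len(signal), "->", len(zlib.compress(signal, 9)))

This is the gap being pointed at: Shannon entropy measures statistical structure, while the "shortest program" view (Kolmogorov complexity) captures something a general-purpose compressor can never fully exploit.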

Information entropy, with respect to computer science, is pseudoscience; any mathematician would be laughed at if he were to present such a non-rigorous concept.
fergalish
Sr. Member
Activity: 440
Merit: 250

November 03, 2012, 05:58:46 PM
#35

4) There is no such law as "conservation of information." Information can easily be destroyed: kick a jigsaw puzzle, for example, or crash a hard disk by dropping it on the floor.
AFAIK information cannot be destroyed. You can change its form, you can scatter it so it looks nonsensical to us, but the information is still there. In the case of dropping a hard disk, you can hardly say that the magnetic bits have vanished.

Next, when something falls into a black hole, its information is preserved. Assuming you knew PRECISELY everything that had ever fallen into the black hole, and you were willing to wait a VERY long time to observe ALL information that the black hole gives out as it evaporates, then you could THEORETICALLY reconstruct the object you lost to the black hole.

Shannon's theorems prove that information is equivalent to entropy. The entropy of the universe is continuously increasing, and therefore so is the information contained in the universe. In the "heat death" scenario of the universe, in the far far faaaar future (let's say, far enough away that bitcoin keys might be cracked [anyone wanna do the calculation to see which will happen first?]), the entire universe is a uniform cloud of individual, randomly placed particles. Since a random signal contains maximal information, the entropy of the universe will at that point be maximal.

OP should look at the "Holographic Principle". To put it briefly, imagine a pile of computer memory chips. You can imagine adding more and more chips to the pile, increasing their density, and so forever increasing the density of information in the pile. Eventually, however, the pile will be so dense that it will collapse into a black hole, and the surface area of the black hole will be directly proportional to the amount of information in the pile. In a certain sense, all the information about everything that ever fell into the black hole is "written" on the surface. Now, instead of a black hole, consider the whole universe, and the inescapable conclusion is that we, and all of our 3D universe, are actually a hologram, equivalent to, and derivable from, information and the laws governing the interaction of those quanta of information, all written somewhere, very far away, on the 2D surface enclosing the (observable) universe. Fascinating, eh? New Scientist had a great article describing it a few years ago, but it's paywalled.
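
For a feel for the numbers, here is a rough sketch assuming the standard Bekenstein-Hawking relation (entropy proportional to horizon area, roughly one bit per 4·ln 2 Planck areas):

Code:
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c ** 2     # Schwarzschild radius: about 2.95 km
area = 4 * math.pi * r_s ** 2    # horizon area, m^2
planck_area = hbar * G / c ** 3  # about 2.6e-70 m^2

bits = area / (4 * planck_area * math.log(2))
print(f"{bits:.1e} bits")        # ~1.5e77 bits for a solar-mass black hole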

Now, after that tangent: OP is confusing correlation with causation. Just 'cos transistors are getting smaller and using less mass, thereby decreasing the gravity due to a single bit of human-stored information, doesn't mean that the lesser gravity is due to information itself somehow requiring less 'gravity'; it is due to humans requiring less mass in order to encode information. Think: once upon a time you needed a 10 kg stone to write 100 bytes or so (think Moses).

Finally, the amount of quantum information in the memory chip of your computer is far far far more than the bytes of memory it contains. OP should read more of Shannon's theories of informational entropy. That's the theory you're looking for. The 'internet' is not creating information. It is merely storing information (very inefficiently, at that) that is already available, if only we could somehow understand it.

Whoops! 9 new posts since I started.
FirstAscent
Hero Member
Activity: 812
Merit: 1000

November 03, 2012, 06:00:47 PM
#36

The best compression algorithms will produce a stream of data that is indistinguishable from white noise. That means it cannot be compressed any further, because white noise is not compressible. The size of your file, after perfect compression, is a good indicator of how much information lies within.

Except there is no best compression algorithm. It depends on the nature of the data and the understanding of it. If I take all prime numbers up to a certain magnitude and add up the corresponding oscillations, one per wavelength, until I have used up all the bandwidth, the resulting signal will be indistinguishable from white noise. Still, knowing how the signal was created makes it possible to reproduce it from just the algorithm.

Information entropy, with respect to computer science, is pseudoscience; any mathematician would be laughed at if he were to present such a non-rigorous concept.

Whatever your domain is, there is indeed a best compression algorithm. If the Universe is deterministic, then I suspect the best compression algorithm starts with the seed of its beginnings.
fergalish
Sr. Member
Activity: 440
Merit: 250

November 03, 2012, 06:09:40 PM
#37

From the Wikipedia entry:

Read it very carefully (emphasis is mine).

Quote
Entropy, in an information sense, is a measure of unpredictability. For example, consider the entropy of a coin toss. When a coin is fair, that is, the probability of heads is the same as the probability of tails, the entropy of a coin toss is as high as it could be. There is no way to predict what will come next based on knowledge of previous coin tosses, so each toss is completely unpredictable. A series of coin tosses with a fair coin has one bit of entropy, since there are two possible states, each of which is independent of the others. A string of coin tosses with a coin with two heads and no tails has zero entropy, since the coin will always come up heads, and the result can always be predicted. Most collections of data in the real world lie somewhere in between. It is important to realize the difference between the entropy of a set of possible outcomes, and the entropy of a particular outcome. A single toss of a fair coin has an entropy of one bit, but a particular result (e.g. "heads") has zero entropy, since it is entirely "predictable".

Zero entropy is not encoding more information.
This Wikipedia entry is a bit misleading. If the coin toss is truly random, then irrespective of whether it's fair or not, there is still no way to predict the next toss based on previous tosses - each toss is still completely unpredictable. I don't know enough to say how many bits of entropy there are in a weighted coin toss; by the definition quoted above it would still be one bit, since there are always two possible outcomes, even if (e.g.) the coin is weighted 99% in favor of heads. But that seems a little strange, since the string of bits from a 99% weighted coin would be much more compressible than one from a fair 50% coin.
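
One way to check that intuition (a sketch; the bit-packing and zlib are my choices, and zlib only approximates an ideal entropy coder):

Code:
import random
import zlib

random.seed(1)  # make the experiment repeatable

def toss_bits(p_heads, n=100_000):
    """Pack n tosses into bytes, one bit per toss (1 = heads)."""
    acc, out = 0, bytearray()
    for i in range(n):
        acc = (acc << 1) | (random.random() < p_heads)
        if i % 8 == 7:
            out.append(acc)
            acc = 0
    return bytes(out)

fair = toss_bits(0.50)      # 12,500 bytes of uniformly random bits
weighted = toss_bits(0.99)  # mostly long runs of 1s

print(len(zlib.compress(fair, 9)))      # about 12,500: random bits give zlib nothing to use
print(len(zlib.compress(weighted, 9)))  # far smaller (Shannon's floor here is ~1,010 bytes)

By Shannon's definition the 99% coin carries about 0.08 bits per toss, not one bit; "one bit" is the capacity of a two-outcome channel, not the entropy of the weighted source.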

As far as I remember, a best compression algorithm *does* exist. However, it is either impossible, or intractably hard, to prove that any given algorithm is actually the best possible one (can't remember which). This relates to Turing's work, and also Gödel's. A better expert than me is surely visiting these forums.
FirstAscent
Hero Member
Activity: 812
Merit: 1000

November 03, 2012, 06:17:24 PM
#38

From the Wikipedia entry:

Read it very carefully (emphasis is mine).

Quote
Entropy, in an information sense, is a measure of unpredictability. For example, consider the entropy of a coin toss. When a coin is fair, that is, the probability of heads is the same as the probability of tails, the entropy of a coin toss is as high as it could be. There is no way to predict what will come next based on knowledge of previous coin tosses, so each toss is completely unpredictable. A series of coin tosses with a fair coin has one bit of entropy, since there are two possible states, each of which is independent of the others. A string of coin tosses with a coin with two heads and no tails has zero entropy, since the coin will always come up heads, and the result can always be predicted. Most collections of data in the real world lie somewhere in between. It is important to realize the difference between the entropy of a set of possible outcomes, and the entropy of a particular outcome. A single toss of a fair coin has an entropy of one bit, but a particular result (e.g. "heads") has zero entropy, since it is entirely "predictable".

Zero entropy is not encoding more information.
This Wikipedia entry is a bit misleading. If the coin toss is truly random, then irrespective of whether it's fair or not, there is still no way to predict the next toss based on previous tosses - each toss is still completely unpredictable. I don't know enough to say how many bits of entropy there are in a weighted coin toss; by the definition quoted above it would still be one bit, since there are always two possible outcomes, even if (e.g.) the coin is weighted 99% in favor of heads. But that seems a little strange, since the string of bits from a 99% weighted coin would be much more compressible than one from a fair 50% coin.

As far as I remember, a best compression algorithm *does* exist. However, it is either impossible, or intractably hard, to prove that any given algorithm is actually the best possible one (can't remember which). This relates to Turing's work, and also Gödel's. A better expert than me is surely visiting these forums.

There is a difference between a weighted coin and a two-headed coin, though, and it is the two-headed coin that the Wikipedia article uses as an example.

Regarding coins in general, and infinite flips, I believe the important thing to note (and you alluded to it in your prior post about how informational content only changes, but still contains information) is the fact that one picture may be more compressible than another, but its capacity for information storage does not change. A 500 x 500 pixel image, whether an even blue sky, or a detailed still life, still has the same capacity for information storage. I think it's important to distinguish between capacity and content.
ElectricMucus
Legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP

November 03, 2012, 06:24:26 PM
#39

The best compression algorithms will produce a stream of data that is indistinguishable from white noise. That means it cannot be compressed any further, because white noise is not compressible. The size of your file, after perfect compression, is a good indicator of how much information lies within.

Except there is no best compression algorithm. It depends on the nature of the data and the understanding of it. If I take all prime numbers up to a certain magnitude and add up the corresponding oscillations, one per wavelength, until I have used up all the bandwidth, the resulting signal will be indistinguishable from white noise. Still, knowing how the signal was created makes it possible to reproduce it from just the algorithm.

Information entropy, with respect to computer science, is pseudoscience; any mathematician would be laughed at if he were to present such a non-rigorous concept.

Whatever your domain is, there is indeed a best compression algorithm. If the Universe is deterministic, then I suspect the best compression algorithm starts with the seed of its beginnings.

The Big Bang theory has nothing to do with determinism. On the contrary, it actually reverses it by making assumptions about the past.
nebulus (OP)
Hero Member
Activity: 490
Merit: 500
... it only gets better...

November 03, 2012, 06:29:32 PM
#40

Next, when something falls into a black hole, its information is preserved.
What exactly makes you draw this conclusion? I think it is rather the opposite: information is destroyed inside the black hole; in fact, it may be the only place where such destruction can happen.

Quote
Eventually, however, the pile will be so dense that it will collapse into a black hole, and the surface area of the black hole will be directly proportional to the amount of information in the pile.
You are assuming two things: 1) that you definitely need mass to store information, and 2) that there is a limit to how much information can be stored in a given amount of mass.

Quote
Now, after that tangent: OP is confusing correlation with causation. Just 'cos transistors are getting smaller and using less mass, thereby decreasing the gravity due to a single bit of human-stored information, doesn't mean that the lesser gravity is due to information itself somehow requiring less 'gravity'; it is due to humans requiring less mass in order to encode information. Think: once upon a time you needed a 10 kg stone to write 100 bytes or so (think Moses).

I was not suggesting that gravity decreases because information goes up. I was only stating the fact that less mass is needed to store the same amount of information, and hence there is a drop in gravity per bit. I was trying to illustrate the point that the opposite process happens in a black hole - a black hole minimizes the amount of information to increase gravity (a conversion, if you like).

Quote
The 'internet' is not creating information. It is merely storing information (very inefficiently, at that) that is already available, if only we could somehow understand it.
Just like a black hole, I guess, which merely stores gravity? According to the article, information is converted into gravity. I was using the term internet as a black-box term for a supermind, something capable of producing information. This discussion, for example, is a product of the internet (our minds, or what have you). So yes, the internet produces information.
