Author Topic: Who could be trusted to do governance?  (Read 3367 times)
iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 02, 2017, 07:21:59 AM (last edit: March 02, 2017, 08:35:59 AM by iamnotback)
#41

This is where you are simply wrong concerning the concept of entropy, and where you deify it.

That is not a rebuttal.

You don't seem to comprehend that you can't just grab an initial condition out-of-your-ass (as if you weren't a product of the continuous living human network and the environment) and declare that your inertial frame's entropy depends only on your perceived macrostates at that instant in time. That is fundamentally incorrect. Sorry. You may not realize it, but your paradigmatic conceptualization implies the assumption of the reversibility of thermodynamic processes, which is of course impossible.

You'd like a top-down model of the universe to be true, but sorry mate, the microstates are just as intertwined also.

Please talk to someone like Roger Penrose or Nicholas Taleb.

I added to my prior post to aid your understanding:

You are conflating the perception of information by any finite perspective (i.e. any partial order) with the entropy of the unbounded universe. The perception of a total order can't exist (because it would require an unquantifiable speed-of-light, and the past and future would collapse into indistinguishability), but the (Butterfly) effects of the total entropy exist via unbounded space-time.

Existence is a very complex, unbounded system. We don't just pop into existence out of nothing with only a finite entropy from our history contributing to our initial conditions. The history of the universe is also unbounded, as is the future. Those who try to compute the start of the universe give me a good laugh. Such computations are only bounded by our (humanity's ability to share a common) partial order of perceptions.

P.S. The entropic force is not a deification:

https://en.wikipedia.org/wiki/Entropic_gravity#Erik_Verlinde.27s_theory
https://en.wikipedia.org/wiki/Entropic_force#Gravity

I resent the slander. Please make cogent arguments if you have any.
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 02, 2017, 08:41:09 AM (last edit: March 02, 2017, 09:09:39 AM by dinofelis)
#42

This is where you are simply wrong concerning the concept of entropy, and where you deify it.

That is not a rebuttal.

You don't seem to comprehend that you can't just grab an initial condition out-of-your-ass (as if you weren't a product of the continuous living human network and the environment) and declare that your inertial frame's entropy depends only on your perceived macrostates at that instant in time. That is fundamentally incorrect. Sorry. You may not realize it, but your paradigmatic conceptualization implies the assumption of the reversibility of thermodynamic processes, which is of course impossible.

You'd like a top-down model of the universe to be true, but sorry mate, the microstates are just as intertwined also.

Please talk to someone like Roger Penrose

I'm very much aware of Penrose's work, and he's right about almost everything.  But this has not much to do with what I'm trying to make you see.  You are confusing the "entropy of the universe" (which is an ill-defined concept) with the entropy of a sub-system in relation to an "observer", which is entirely well-defined, and which almost everything that is scientifically claimed about entropy is actually about.
This entropy is defined as a function of the possible cases that the observer needs/wants to consider, and hence limited to the sub-system under study.

The (quantum) entanglement you are talking about is in fact exactly the generator of entropy in a system state: it is the fact of limiting one's attention to a sub-system, while this subsystem entangles with the rest of the universe, that changes the quantum state of the subsystem from a pure state into a mixed state, and while the entropy of a pure state (which implies it is also a *known* state) is zero, a mixed state has a (well-defined) entropy.

I don't know how much you're acquainted with this, I can hardly write out a three-year course on quantum physics and statistical physics here, but essentially:

If you have a sub-system A that is a priori isolated from the environment B ("the rest of the universe"), and you happen to know the exact quantum state of A, then the whole universe is in a special product state |a> |b> ; where |a> is the quantum state of A.  The entropy of A as such, with respect to you, is zero, because you know its micro state perfectly, it is |a>.

Once A interacts unavoidably with the rest of the universe, A gets entangled with it, and there's no specific quantum state of A any more.  However, you can still limit your attention to system A.
The quantum state of the whole universe is now a sum |a1>|b1> + |a2>|b2> + ... + |an>|bn>, where we have been running over all possible micro states of A: there are n of them.
The quantum description of just system A now reduces to a density operator, which takes on the form:

| a1 > < a1 | (<b1 | b1>) + |a2 > < a2 | (<b2|b2>) + ... |an > <an | (<bn | bn > ).

Here, ( <bi | bi > ) are positive real numbers, the norms of the quantum states of "the rest of the universe" that are entangled with our system's quantum state ai.  There are only n such states of the universe that MATTER even though there are infinitely more of course, but they are not solicited by the entanglement with our system.

As such, our density operator consists of n quantum states, with statistical weights given by ( <bi | bi> ).  In the worst case, these weights are all equal, which means that our density operator corresponds to "total ignorance of the micro state of A".  Usually, we take extra conditions on the interaction with the rest of the universe, like "energy conserving" or "thermal equilibrium".  This changes the values of ( <bi | bi >), and leads to things like micro-canonical ensemble, canonical ensemble and so on.

In the worst case, we don't know which of the n micro states our system is in (due to entanglement with the rest of the universe), and then its entropy is log_2(n) in bits, or k_B ln(n) in thermodynamic units.

From an information perspective, if a system can be in N possible states, its entropy is (at most) log_2 (N).
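To make that concrete, here is a minimal sketch (my own illustration, not from the post) of the pure-to-mixed transition described above: a qubit A maximally entangled with an "environment" B, whose reduced density operator carries exactly log_2(2) = 1 bit of von Neumann entropy.

Code:
import numpy as np

# |psi> = (|0>_A |0>_B + |1>_A |1>_B) / sqrt(2): A entangled with environment B
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)  # basis order: |00>, |01>, |10>, |11>

# Density matrix of the combined (pure) state: rho = |psi><psi|
rho = np.outer(psi, psi.conj())

# Partial trace over B: reshape to indices (a, b, a', b') and sum over b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Von Neumann entropy in bits: S = -sum_i p_i log2(p_i) over eigenvalues of rho_A
eigvals = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log2(p) for p in eigvals if p > 1e-12)

print(rho_A)  # 0.5 * identity: total ignorance of A's micro state
print(S)      # 1.0 bit = log_2(n) for n = 2 equally weighted micro states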

Now, let us compare.  A cup of hot water at 80 centigrade, 100 grams say, will contain MORE entropy than just the entropy increase from heating it from 0 to 80 degrees, right?

Now, the heat delivered to 100 grams of water to go from 0 to 80 centigrade is 418.6 J/K x 80 K = 33488 J.
This heat is transferred at a temperature of less than 80 centigrade, so less than 360 K.  As such, the entropy *increase* is at least 33488 J / 360 K, or about 93 J/K.

Now, using ln(N) = S / k_B, this corresponds to a number of states N equal to e^(6.739 x 10^24), or to a number of bits equal to 9.7 x 10^24.
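A quick back-of-the-envelope check of those numbers (my own sketch, using the standard value of the Boltzmann constant):

Code:
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
heat = 418.6 * 80           # J: heat capacity of 100 g of water (418.6 J/K) times 80 K
T_bound = 360.0             # K: upper bound on the transfer temperature (80 C is ~353 K)

dS = heat / T_bound         # entropy increase in J/K
ln_N = dS / k_B             # ln of the number of micro states, from S = k_B ln(N)
bits = ln_N / math.log(2)   # convert nats to bits

print(dS)    # ~93.0 J/K
print(ln_N)  # ~6.74e24
print(bits)  # ~9.7e24 bits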

Note that I didn't even consider the entropy of icy water, which is not zero.

So my cup of hot water already has an entropy of about 10^25 bits.  No computing system on earth commands a memory capacity (*) of 10^25 bits, not even the whole internet.

(*) I take memory capacity as "state space" proxy.
topesis | Hero Member | Activity: 630 | Merit: 500
March 02, 2017, 09:32:57 AM
#43


Essentially they copied Dash which has a sneaky "bug" "pre"mine so that the insiders got a huge portion of the tokens and thus control the money supply and voting

I can tell you there was no bug; this was done intentionally. If it had been a bug, they would have relaunched the code and the mining process.



Those doing governance would need to be very technical so they understand the visionary developers. Yet they need to be solidly grounded. And they need to not have a conflicting agenda.


I think the main thing is having people who put the interest of the project above their own selfish interests. I think this is what makes Bitcoin a success. I have heard arguments that Satoshi was better at economics than at coding, which is why he left the project to others, and this type of succession has continued. Another thing is the separation of power between the developers and the miners, though their interests need to be aligned to move the project forward.

Most people in crypto now are in it for the money; Satoshi is the only one who was in it because he had the vision. Most people will ask about the 1 million Bitcoin stash, but who has any proof that he has been spending it? Today you see developers investing in other projects, hedging their profit.
kelsey | Legendary | Activity: 1876 | Merit: 1000
March 02, 2017, 11:43:57 AM
#44

Whereas, telling me not to create the Bitcoin Killer advance because you don't want me to get paid what I am worth is nonsensical. If you could somehow stop me, then I would go work on some other project not in crypto and earn the (up to) $350,000 a year I used to earn before.

then please do, if this is your opinion then honestly the development of a 'bitcoin killer' isn't for you

Communists have never been wrong.

'tis nothing to do with political ideology; in fact, it's the lack thereof.

@AusKipper refuted your logic.

AusKipper's point had little to do with what I was referring to by greedy bankers.
kelsey | Legendary | Activity: 1876 | Merit: 1000
March 02, 2017, 12:45:30 PM
#45

what's the point of being an alternative to the greedy bankers' system if we ourselves become the greedy bankers?

The greedy bankers' system we have today concentrates more and more power in the greedy bankers. I.e., because they have money, they can get more money; because they are in charge of the interest rates etc., they can affect the economy.

In a cryptocurrency greedy-banker scenario, as the banker "consumes" his wealth (stake in the cryptocurrency) he becomes less relevant, because as a percentage his share is going down.

E.g.: I release a coin and pre-mine 50% of the coins. People start buying the other 50% and put the price up, so I decide I'm going to dump 50% of my holdings. I make some nice alternate currency (i.e., USD) and feel very happy, fat and greedy, but now I only own 25% of the currency. That's not really how it goes in the current system of greedy bankers, where they get rich, spend it, then print some more and give it to themselves.

I think you really misunderstand what I meant by crypto's greedy bankers. Your example is comparing apples with oranges, i.e. comparing fiat bankers with the premine scam-coin pump-and-dumpers (which are far removed from the new breed of greedy bankers forming in crypto).

iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 02, 2017, 01:59:06 PM (last edit: March 02, 2017, 09:43:16 PM by iamnotback)
#46

what's the point of being an alternative to the greedy bankers' system if we ourselves become the greedy bankers?

The greedy bankers' system we have today concentrates more and more power in the greedy bankers. I.e., because they have money, they can get more money; because they are in charge of the interest rates etc., they can affect the economy.

In a cryptocurrency greedy-banker scenario, as the banker "consumes" his wealth (stake in the cryptocurrency) he becomes less relevant, because as a percentage his share is going down.

E.g.: I release a coin and pre-mine 50% of the coins. People start buying the other 50% and put the price up, so I decide I'm going to dump 50% of my holdings. I make some nice alternate currency (i.e., USD) and feel very happy, fat and greedy, but now I only own 25% of the currency. That's not really how it goes in the current system of greedy bankers, where they get rich, spend it, then print some more and give it to themselves.

I think you really misunderstand what I meant by crypto's greedy bankers. Your example is comparing apples with oranges, i.e. comparing fiat bankers with the premine scam-coin pump-and-dumpers (which are far removed from the new breed of greedy bankers forming in crypto).

I made a point which preceded your posts:

Thus there is nothing wrong with the creator obtaining some exponentially diminishing seigniorage.

Any premine becomes an ever-diminishing portion of the total supply if the emission of new money supply is perpetual (as sketched below), whereas the greedy banksters and their fiat system continue to extract seigniorage ongoing, and that doesn't diminish. Any blockchain which continues to extract tokens to pay the developers is akin to the greedy banksters' fiat system. They may employ democratic governance to obscure the fact that they are really in control of voting to pay themselves from the blockchain. Notice I have not named the blockchains which do this, but you can figure it out.
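For illustration, a minimal dilution sketch (my own, with made-up numbers) of how a fixed premine shrinks as a share of a perpetually inflating supply:

Code:
# Hypothetical numbers: a 1,000,000-token premine out of a 2,000,000 launch
# supply, followed by a perpetual emission of 500,000 new tokens per year.
premine = 1_000_000
launch_supply = 2_000_000
emission_per_year = 500_000

for year in range(0, 21, 5):
    total = launch_supply + emission_per_year * year
    print(f"year {year:2d}: premine is {premine / total:.1%} of supply")

# year  0: premine is 50.0% of supply
# year  5: premine is 22.2% of supply
# year 10: premine is 14.3% of supply
# year 15: premine is 10.5% of supply
# year 20: premine is 8.3% of supply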

Edit: more about this.
iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 02, 2017, 02:05:23 PM
#47

Most people in crypto now are in it for the money; Satoshi is the only one who was in it because he had the vision. Most people will ask about the 1 million Bitcoin stash, but who has any proof that he has been spending it?

For example, to say I am in it only for the money and not for my vision would be a mistake on your part.

Creators never do something only for the money. If I mostly wanted money, I would have leveraged my reputation and launched a snazzy ICO already.

In my case, I am doing it for: a) expression of my creativity, b) love of the process and challenge of creating (i.e. not boring), and c) because I want to have an impact. Yes, I do need money, but I choose my work with a - c as my overriding priorities. I trust society to award money (capital) to those who prove they allocate capital resources efficiently.

We don't know how much Satoshi was paid by someone (hint: Rothschilds). The odds that Satoshi was one person are very slim (one person doesn't produce what he did, with the organization and foresight he had, and then disappear without a trace ... only covert agencies or their equivalents can do that).
iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 02, 2017, 02:27:31 PM (last edit: March 02, 2017, 02:40:59 PM by iamnotback)
#48

the "entropy of the universe" (which is an ill-defined concept)...

That is what I've been trying to tell you. (not ill-defined, rather unbounded)

...in relation to an "observer", which is entirely well-defined

And provable to no other observer.

for you?

So my cup of hot water has already an entropy of 10^25 bits.  No computing system on earth masters a memory capacity (*) of 10^25 bits, not even the whole internet.

As if memory capacity were the entropy of the Internet (and which memory, specifically?). Since when was the Internet an isolated system, not entangled with its users' lives? Someone posts something to Facebook, shares it with a friend who talks about it offline, and hours later that friend posts something back to the Internet. The offline surprises are not included in the entropy of the Internet? How did you decide where to draw this arbitrary border around what the Internet is? Can you define this border unambiguously (be careful!)?
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 02, 2017, 03:11:41 PM
#49

the "entropy of the universe" (which is an ill-defined concept)...

That is what I've been trying to tell you. (not ill-defined, rather unbounded)


It really is ill-defined, and in as much as you'd like to estimate it, you'll probably end up with 0.  Yes, zero.  Just as the total energy content of the universe is also most probably zero, in as much as that could even have a meaning.

Most people talking about the "entropy of the universe" are really talking about the "entropy of the REST OF THE UNIVERSE with respect to an observer inside that universe".  But there's a subtle difference between what one would call the entropy of the universe (with respect to an "outside observer", hence ill-defined) and the "entropy of the rest of the universe".


Quote
As if memory capacity is the entropy of the Internet (and which memory specifically?). Since when was the Internet an isolated system and not entangled with the user's lives?

That really doesn't matter.  My cup of hot water is also entangled with the rest of the universe, but that doesn't increase its entropy with respect to me.
The "states of the internet" is the number of different states the internet can be in, and that is limited to the technical capacity of that system.  If we limit ourselves to the *digital states* (and not to, say, the heat in a micro processor - like I'm not taking into account the nuclear states when accounting for the entropy of my cup of coffee) of the devices that make up the internet: the nodes, the network devices and so on, their digital state is entirely determined by all bi-stable digital components, essentially memory bits (and a few register bits in processor units and FPGA).  If you know every single state of every single bi-stable circuitry of every device that makes up the internet, then you know the entire state of the internet.   That's equivalent to me knowing the quantum state of the entire set of molecules making up my hot cup of water: i'd know the microstate of my hot water.  The IGNORANCE of that microstate is what makes up the entropy I have about that cup of hot water, and the IGNORANCE I have of the state of every bistable state of every component of the internet is what makes up the entropy I have about the internet.

My rather secure guess is that the internet doesn't have 10^25 bi-stable circuits.  That would mean about 10 billion devices, each with a petabit of memory state.  We aren't there yet (in a few years, maybe we will be).

Quote
Someone posts something to Facebook shares with a friend who is talking about and hours later posts something back to the Internet. The offline surprises are not included in the entropy of the Internet?

No, of course not, no more than the swimming of a whale (which is entangled with every molecule in my cup of hot water for historical reasons) increases the entropy of my cup of coffee.  Because that's exactly what entropy is about: my ignorance of *just the cup of water*, and NOT its environment.

The posts contain at most the amount of bits needed to download the page (in compressed format).  And those bits were already taken into account by counting the empty bits on the page's server's hard disks.  So someone posting some stuff on the internet doesn't really increase its entropy.  A technician adding a few more disk units in a data center does.

Quote
How did you decide where to draw this arbitrary border around what is the Internet? Can you define this border unambiguously (be careful!)?

Ah, if you call "the internet" all of physical spacetime between here and Andromeda, of course we're talking about different stuff.  I'm talking about all the data that is available to be transferred and stored by all nodes with an internet connection.
Now, of course, you can say there are sensors connected to the internet, so they bring in an "unbounded amount of entropy".  No, not really.  When that camera takes a picture of, say, 8 MB, then these 8 MB are available, and they are entropy to you if you didn't know the picture.  But if at the next instant the camera takes a new picture, the previous one is gone.  There's still 8 MB of entropy because of that camera.  If the previous picture wasn't STORED and made available for download, the new pictures don't add entropy.  If you are looking at each picture, you get 8 MB of information THROUGH the internet.  But each time that picture gets refreshed and not stored on an internet-connected device (neither on the camera side, nor at a cache site, nor at your side), the internet's entropy doesn't increase.  In fact, when the camera takes the picture, its entropy rises (for you) by 8 MB.  When you look at it, you receive 8 MB of information, and hence the internet's entropy LOWERS by 8 MB wrt you.  When the camera takes a new picture you haven't seen yet, the internet's entropy rises again by 8 MB.  When you look at it, it lowers again by 8 MB.  Etc.

This is like a telephone wire.  The (digital) state of the wire is 1 bit at a time.  Whole conversations can go through a telephone wire, but the state of the wire is at most 1 bit.  When you haven't received the bit yet, it has 1 bit of entropy.  When you read it out, it has 0 bits of entropy, until the transmitter puts another bit on it.  Etc...
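That read-and-reset cycle can be phrased as a toy state machine (my own sketch, not from the post): the observer's entropy about the wire is log_2 of the number of states the observer still considers possible, so it alternates between 1 bit and 0 bits.

Code:
import math
import random

class OneBitWire:
    """Toy model: a wire carrying one bit, tracked from one observer's viewpoint."""
    def __init__(self):
        self.bit = None
        self.known_to_observer = False

    def transmit(self):
        # The transmitter puts a (to the observer, unknown) bit on the wire.
        self.bit = random.randint(0, 1)
        self.known_to_observer = False

    def read(self):
        # The observer reads the wire; the bit is now known to them.
        self.known_to_observer = True
        return self.bit

    def entropy_bits(self):
        # Entropy = log2 of the number of states the observer considers possible.
        return math.log2(1 if self.known_to_observer else 2)

wire = OneBitWire()
wire.transmit(); print(wire.entropy_bits())  # 1.0: bit on the wire, unknown
wire.read();     print(wire.entropy_bits())  # 0.0: bit read out, entropy gone
wire.transmit(); print(wire.entropy_bits())  # 1.0: next unknown bit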

The internet, as defined by all the devices connected to it and their digital states, has less entropy than my cup of hot water.

A sensor looking at the entire universe has only the entropy of the data it holds at a certain point in time, as long as you haven't looked at it.  From the moment you know the data, its entropy falls back to zero, until the next (unknown) data are taken.

iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 02, 2017, 03:19:35 PM
#50

...in relation to an "observer", which is entirely well-defined

And provable to no other observer.

for you?

You ignored the most damning point.

Are you chasing your tail yet?
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 02, 2017, 03:29:19 PM (last edit: March 02, 2017, 03:39:23 PM by dinofelis)
#51

You ignored the most damning point.

Ok, if you really want to get into this: physics is a relationship between an observer and "the rest of the universe".  That "rest of the universe" contains entities which the first observer might identify as observers (I will call them "secondary observers"), and in order for his physics to be consistent, whatever the first observer observes from those secondary observers must be coherent with what he observes directly, and with what he would observe if he were in the place of those secondary observers.  In other words, the only consistency required is between what the first observer observes "directly" and what he observes when observing those secondary observers.  This is the "consistent history" view of physics.  At no point does an observer "need to prove" anything to a secondary observer.  He simply needs a consistent view between his "direct observations of nature" and his "observations of other observer entities".

The idea that there is a single objective reality out there goes out of the window with that view, but for an observer, the consistent view of his observations of nature, and his observation of secondary observers, is what comes closest to his best guess of what his objective reality might look like.

This kind of consistent-histories approach (first formulated, if I remember correctly, by Gell-Mann in the 1960s) explains very easily a lot of quantum "paradoxes", like the EPR "paradox", and also explains apparent paradoxes like Maxwell's demon.

In as much as those "secondary observers" are actual observers of their own, what is needed for everything to be consistent is that they too, in THEIR view of reality, see the first observer as consistent with that.

The simplest solution to this is that all observers observe the same objective reality, which is a classical viewpoint, but difficult to reconcile with quantum mechanics.  But it is not the only solution to this, and "consistent histories" is another, much more complicated, but quantum mechanically compatible, view of things.  If you want to stick to the classical "objective world" view, you have to introduce many strange things in order to explain quantum mechanical effects, such as "spooky action at a distance", "non-causal effects going back in time" and other weird things, while consistent histories don't need all that stuff.

In the same vein, entropy is a relationship between an observer and a system: it is his ignorance about the system's micro state.  In as much as another observer DOES know this micro state (like Maxwell's demon), that system has no entropy wrt that secondary observer, but that secondary observer now has entropy wrt the primary one.

However, in almost all cases, "human observers" don't know microstates, and hence they all agree more or less on the physical entropy of most objects, give or take a few bytes.

A secret key has no entropy with respect to its owner, and has (hopefully) full entropy wrt an attacker.  Maxwell's demon is like the guy with his secret key: he knows the micro state of the gas; but as such, the demon has as much entropy wrt us as the gas itself.

iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 03, 2017, 12:20:03 AM (last edit: March 03, 2017, 01:06:00 AM by iamnotback)
#52

In as much as those "secondary observers" are actual observers of their own, what is needed for everything to be consistent is that they too, in THEIR view of reality, see the first observer as consistent with that.

Do ALL humans communicate ALL of their detailed existence to ALL humans? How could they even communicate this without an instantaneous speed-of-light? The state of everyone's perspective would always be on a lag at best, thus not ALL-inclusive of the instantaneous present (otherwise stated as "the present never exists", because it's always gone instantly, before we can perceive/decohere it as mutual information).

In other words, is it plausible that everyone on earth could know (or at least have received communication about) everything about everyone on earth?

Obviously it is a rhetorical question, and it is intended to point out an inconsistency in your holistic conceptualization (although of course many of the points you've made are correct).

The following may be interesting:

http://esr.ibiblio.org/?p=690
TheKoolaider | Member | Activity: 84 | Merit: 10
March 03, 2017, 01:38:51 AM
#53

I'm new, as you know, so take this with a grain of salt, as you will...

1. The top 500 (arbitrary number; insert whatever number you see fit) miners in the last 30 days or X blocks should be able to vote. The same people are the only ones able to submit ideas to be funded.

Their economic incentives don't always align with the best directions for the development of the coin. Also they may be deficient in technical understanding of complex issues.

3. No end time on the treasury, ongoing.

Problem is, it becomes a centralized resource to fight over. Eventually it will be controlled by those at the top of the power-law distribution. So it will be a "the rich collect rents and parasite" formula. So I don't think perpetual is a good idea. The centralized bootstrap should get out of the way and let the ecosystem fund itself decentralized, as Satoshi did when he stepped aside.



How about decentralized voting by the token owners wherein they vote their stake in the treasury separately from the others, i.e. not monolithic appropriation?

But after developers get this ICO money, it is down to them how to use and allocate it, how to control the development process, etc. I don't think there is any procedure to punish them.

An idea popped into my mind.

What if ICO coin buyers vote on each release of the budget? They only get to vote up to the value of the tokens they own. They can vote any fraction of their tokens on any budget release. Once they've voted all their tokens, they can't vote any more.

The approved releases are taken from the pool of BTC. Any of the pool not released after a certain period of time, is returned back to all ICO investors proportionally.

What do you think? See any flaws in it?

One flaw is that it means some ICO owners can hold the other owners hostage, by refusing to fully fund what the developer thought had been raised already. But I am not sure that is really a flaw. It means the devs have an ongoing incentive to perform. I am very sleepy, so I might have a major flaw in this idea. (A rough sketch of the mechanism follows below.)
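A minimal sketch of that vote-per-tranche idea (my own illustration of the quoted proposal, with entirely hypothetical names, numbers and threshold; not an implementation from any project):

Code:
class TreasuryVote:
    """ICO buyers spend token weight to approve budget tranches; spent weight is gone."""
    def __init__(self, holdings):
        # holdings: {buyer: tokens bought in the ICO}
        self.remaining = dict(holdings)

    def vote(self, buyer, tranche, amount):
        # A buyer commits part of their unspent token weight to one tranche.
        if amount > self.remaining[buyer]:
            raise ValueError("cannot vote more tokens than remain unspent")
        self.remaining[buyer] -= amount
        tranche["votes"] += amount

    @staticmethod
    def approved(tranche, total_tokens, threshold=0.5):
        # A tranche is released once committed weight passes the threshold.
        return tranche["votes"] / total_tokens > threshold

holders = {"alice": 600, "bob": 300, "carol": 100}
pool = TreasuryVote(holders)
tranche1 = {"btc": 50, "votes": 0}

pool.vote("alice", tranche1, 400)
pool.vote("bob", tranche1, 200)
print(TreasuryVote.approved(tranche1, total_tokens=1000))  # True: 600 of 1000 voted
print(pool.remaining)  # alice and bob have less weight left for later tranches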

Wouldn't the ICO buyers just be another form of centralization?  What about the people who want to participate in the voting because they like an idea? Because they didn't participate in the ICO for whatever reason, but own tokens, they don't get to vote? Unless these tokens are specific tokens identifiable as ICO tokens?

I guess now that I think about it a bit more, having everyone vote would almost lead to an issue similar to Bitcoin's current issue, no?
iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 03, 2017, 01:49:15 AM (last edit: March 03, 2017, 02:19:04 AM by iamnotback)
#54

That really doesn't matter.  My cup of hot water is also entangled with the rest of the universe, but that doesn't increase its entropy with respect to me.

Afaics, your conceptualization mistakes aliasing error for reality.

You take a point sample in spacetime of pouring a cup of coffee and presume that the microstates are ignored by you because they don't impact your perception of any event or state which is perceivable by you. But you are not factoring in the future Butterfly effect of those microstates' entanglement with the environment, and the changes to the future you will perceive based on the microstates in the cup of coffee you are pouring at this moment in time.

Thus your entire conceptualization collapses.

Quantum mechanics models the microstates superimposed, because otherwise measuring the position at any point sample in time is aliasing error (a random result). Analogously we must model the entropy as superimposed into the future in our macro perspective. Your conceptualization of entropy as relative to an observer ignores the fact that the entire future is entangled with the entire past. Our perception of reality (decoherence) is but an aliasing error illusion. Information (order) is our mutual synchronization of a specific instance of illusion.

I hope you realize I'm stating that objective reality is an unbounded superimposition of multiverses. Our perception of our universe is but an aliasing-error illusion. And even humans don't share one consistent illusion (c.f. my prior post). The entropy of the universe is unbounded. Entropy w.r.t. an observer is an ill-defined concept.

(P.S. I am still having significant issues with cognitive energy and its root concomitant gut/liver health. I am not able to explore this space we are discussing with the assimilation vigor that I feel I had been accustomed to. So do please understand if the limits of my current health prevent me from continuing the discussion or making the level of insights and explanations that I might be capable of at some other time.)
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 03, 2017, 05:55:49 AM
#55

In as much as those "secondary observers" are actual observers of their own, what is needed for everything to be consistent is that they too, in THEIR view of reality, see the first observer as consistent with that.

Do ALL humans communicate ALL of their detailed existence to ALL humans?

No, of course not, and that is not necessary.

In order for a perception to be consistent, only those things that are *observed* from the secondary observer need to be consistent.  The rest doesn't matter.  If I can't ever observe it, I don't care; in a certain sense, it doesn't exist.

Take the following case:

There is me, Joe, and a light bulb (in my reality).

I see the lightbulb glowing green.  (direct observation).
I see Joe saying "the light is green" (my observation of Joe)

This is a consistent world view, a consistent history of observations.

Nothing stops one, however, from considering now Joe's point of view (in Joe's reality).

Joe sees the lightbulb glowing red (direct observation)
Joe sees me saying "the light is red" (Joe's observation of me)

This is (another) consistent world view, a(nother) consistent history of observations.

In the consistent history view of things, both are not in contradiction.  My observing a green light, and observing Joe say "green light" is not in contradiction with Joe observing a red light and observing me say "red light".  Joe, as primary observer, simply has a different consistent history than me.

What would be inconsistent, is that I see a green light, and I see Joe saying "red light".  Or Joe seeing a red light, and seeing me say "green light".

Joe only exists for me to the extent that I can observe him, and that I can observe other things that might observe Joe, and so on.  If all these observations are consistent, then that's my consistent view of reality.  Which may be entirely different for another primary observer, but to which I have no access.
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 03, 2017, 06:26:18 AM
#56

That really doesn't matter.  My cup of hot water is also entangled with the rest of the universe, but that doesn't increase its entropy with respect to me.

Quantum mechanics models the microstates superimposed, because otherwise measuring the position at any point sample in time is aliasing error (a random result). Analogously we must model the entropy as superimposed into the future in our macro perspective.

You are losing me here.  I can read that 10 times and still not know what it might mean.  I try to wrap my mind around what you mean by "microstates superimposed".  If the system is isolated and in a pure state, it is of course in one single state (which may be a superposition of the "base states" in which I like to express this state, but it is still just one single quantum state).  If the system is entangled with the rest of the universe (which it most certainly is), it is entangled, which means exactly what I wrote earlier: several individual quantum states of the cup are, well, entangled with states of the environment.  And that is exactly the entropy we are talking about: the number of those states that is involved.  If there were only one state, the entropy would be 0, and if there are many, the logarithm base 2 of that number of states is the entropy of my cup (I'm assuming a uniform distribution, which is not necessarily true, but a non-uniform one would only diminish the entropy).

There's no such thing as "cumulative entropy over time".  I think that is the error you are making, by thinking that entropy is about the *entire history - and future* of a system.  No, entropy is the ignorance of the *current state*, not of its entire past and future.  That wouldn't make sense, because if it were the case, entropy wouldn't be a time-dependent concept, and the second law could not even be formulated!

It is as if you were saying that "the velocity of an object is the whole of the movement that the object did and will do in the future" or something.  But if that were the case, you couldn't talk about acceleration (the *change* in velocity) or write down Newton's second law!  Velocity is instantaneous, now, and doesn't care about the state of motion yesterday or tomorrow.  If it did, velocity would be an a-temporal notion, and you couldn't ask how velocity changes over time.

In the same way, entropy is an "instantaneous" notion, because the second law of thermodynamics tells us how entropy needs to CHANGE over time.  If all our past and future ignorance were already included, the entropy tomorrow would be the same as today's, as all ignorance would already be included, today as well as tomorrow.

I think you are confusing the notion of entropy itself with the notion of dynamics.  If I know the dynamics of an isolated system perfectly well (which is very rare), then there's a kind of Liouville theorem that tells me that my knowledge of the microstate (even if imperfect) is conserved.  When I know something about the microstate today, that knowledge is still pertinent tomorrow, and my entropy about that micro state hasn't grown.
If, however, my knowledge of the dynamics of the system is not perfectly accurate, OR the system interacts with the environment (of which, by definition, I don't consider the dynamics), then my knowledge of the microstate today will probably get lost gradually over time: my entropy of the system grows, until it reaches its full value of my total ignorance of it, given by one or another canonical statistical ensemble, constrained only by a few macroscopic parameters.

This "loss of knowledge over time" is intimately related to the second law.  It is always related to "the environment" of which the unknown microstate is much more entropy-rich than my system, and is an infinite source (for all practical purposes) of ignorance (= entropy).

This is also why the notion of "entropy of the universe" is problematic.  The universe cannot be entirely observed from the outside, and cannot connect to "its environment" ; as such, this notion breaks down.

iamnotback (OP) | Sr. Member | Activity: 336 | Merit: 265
March 06, 2017, 08:16:35 PM
#57

@dinofelis, due to my ongoing medication for TB, I don't have the cognitive energy to (assimilate all the information I need to) finish our discussion/debate right now. Maybe soon...

And I apologize that I don't explain the part that wasn't clear, but I'd rather not encourage the discussion to continue until I am back up to full brain power.

In the meantime, some tidbits:

The human brain consists of about one billion neurons. Each neuron forms about 1,000 connections to other neurons, amounting to more than a trillion connections. If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive. Yet neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows.


Kurzweil's Singularity (nonsense!) also has an energy efficiency deficiency compared to humans:

The efficiency of the two systems depends on what SNR (signal to noise) ratio you need to maintain within the system.

One of the other differences between existing supercomputers and the brain is that neurons aren’t all the same size and they don’t all perform the same function. If you’ve done high school biology you may remember that neurons are broadly classified as either motor neurons, sensory neurons, and interneurons. This type of grouping ignores the subtle differences between the various structures — the actual number of different types of neurons in the brain is estimated between several hundred and perhaps as many as 10,000 — depending on how you classify them.

Compare that to a modern supercomputer that uses two or three (at the very most) CPU architectures to perform calculations and you’ll start to see the difference between our own efforts to reach exascale-level computing and simulate the brain, and the actual biological structure.

...

All three charts are interesting, but it’s the chart on the far right that intrigues me most. Relative efficiency is graphed along the vertical axis while the horizontal axis has bits-per-second. Looking at it, you’ll notice that the most efficient neurons in terms of bits transferred per ATP molecule (ATP is a biological unit of energy equivalent to bits-per-watt in computing) is also one of the slowest in terms of bits per second. The neurons that can transfer the most data in terms of bits-per-second are also the least efficient.

So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60 watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. IBM's Watson, the supercomputer that defeated Jeopardy! champions, depends on ninety IBM Power 750 servers, each of which requires around one thousand watts.

I've been trying to make the point to you that raw processing speed isn't a sufficient condition to be indicative of superiority. Such a conclusion is very simplistic.
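For scale, the power figures quoted above work out as follows (a back-of-the-envelope check of my own; the 12 W brain estimate and the 90 servers at roughly 1 kW each come from the quoted text):

Code:
brain_watts = 12              # quoted estimate for a typical adult human brain
watson_watts = 90 * 1000      # 90 IBM Power 750 servers at ~1000 W each

ratio = watson_watts / brain_watts
print(f"Watson draws ~{ratio:,.0f}x the power of a human brain")  # ~7,500x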
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 07, 2017, 06:24:41 AM
#58

@dinofelis, due to my ongoing medication for TB, I don't have the cognitive energy to (assimilate all the information I need to) finish our discussion/debate right now. Maybe soon...

And I apologize that I don't explain the part that wasn't clear, but I'd rather not encourage the discussion to continue until I am back up to full brain power.

In the meantime, some tidbits:

The human brain consists of about one billion neurons. Each neuron forms about 1,000 connections to other neurons, amounting to more than a trillion connections. If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive. Yet neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows.


"neurons" aren't, of course, "processors" in the machine paradigm, but are run-time objects (computer science objects).  In a run-time environment, objects are just as varied as biological neuron types.  The variety of "neurons" in a biological brain corresponds, if you want to, to the class structure of the running software in the object oriented paradigm.   

The "processor" is the underlying layer: the bio-chemistry of neurons, at least the part of the bio-chemistry that is essential for the data processing (the metabolism biochemistry is not necessary).

But again, the idea is most probably NOT that we implement a bio brain imitation in silicon. 

Quote
Kurzweil's Singularity (nonsense!) also has an energy efficiency deficiency compared to humans:

This is not a problem.  You cannot upscale the human brain to a 1 MW processor: it would simply catch fire.  You can easily upscale a silicon machine to 1 MW.  And cheap energy is available to machines, much more so than it is available to human brains.

That's the point I am trying to make: the human brain is a brilliant invention of evolution.  But it is what it is, and can only evolve very slowly.  In as much as it is radically altered artificially, it is btw also not a human any more, but a machine.  By definition, a human will be a thing with a current-human-like brain.  It has huge capacity and so on, but it evolves only very slowly.
Technology under Moore's law evolves millions of times faster, and so the hypothesis of the singularity is simply that a faster-growing curve will, at some point in the future, cross an almost constant curve, and that this is inevitable.

The two exceptions are that the crossing point may be further in the future than one thinks, or that the rising curve may flatten out (end of Moore's law).  But if not, then their crossing is inevitable.

Quote
One of the other differences between existing supercomputers and the brain is that neurons aren’t all the same size and they don’t all perform the same function.

Again, you're confusing run-time objects and the execution layer.  Neurons and their organisation and connections are "run time", like objects in a run-time environment.  The execution layer is the biochemistry of neurons on one side, and silicon circuits on the other ("processors", though maybe not von Neumann architectures but rather FPGA-like structures, who knows).
And we don't have to implement brain-like run-time systems on silicon; they may take totally different configurations: there's more than one path to intelligence.

Quote
If you’ve done high school biology you may remember that neurons are broadly classified as either motor neurons, sensory neurons, and interneurons. This type of grouping ignores the subtle differences between the various structures — the actual number of different types of neurons in the brain is estimated between several hundred and perhaps as many as 10,000 — depending on how you classify them.

You may very well have run time environments with thousands of classes of objects.
dinofelis | Hero Member | Activity: 770 | Merit: 629
March 07, 2017, 06:30:34 AM
#59

I've been trying to make the point to you that raw processing speed isn't a sufficient condition to be indicative of superiority. Such a conclusion is very simplistic.

I think it is.  Of course, not immediately.  Raw processing speed is a *necessary* condition.  But it will become a sufficient condition, because in this kind of thing, what is possible will happen.  Simply because its not happening is a meta-stable situation: if it happens once, that's enough, and it will introduce the equivalent of a phase transition.  It is like supercooled water: it has the potential to freeze, but it simply didn't... yet.  One perturbation that starts nucleating the freezing, and the water undergoes a phase transition.
There is no "stabilizing force" that takes any step towards machine intelligence "back again"; on the contrary, once machine intelligence can improve itself, there's no stopping the run-away effect.

dinofelis | Hero Member | Activity: 770 | Merit: 629
March 07, 2017, 08:20:09 AM
#60

I consider that it is not an investment as such, because there's no economic value produced by the investment - well, there is the ultra-small economic value produced by the utility of the crypto currency, which is essentially zilch: almost no consumer good has, in the end, seen the daylight that wouldn't have seen the daylight if crypto currencies didn't exist, and consumer goods/services are of course the bottom line of economic value.

It is difficult for me to fathom how you can be so myopic.

Incredible value has already been created by the degrees-of-freedom of being able to move funds globally without permission. (I had already mentioned my personal examples to you in another thread, yet you cling to your myopic viewpoint, so you see Monero and anonymity as the only features of interest.)

What value?  The actual consumer-happiness value (the ultimate source of all value) created by the existence of bitcoin is essentially nothing.  It is the economic creation of value that would not have existed if bitcoin didn't exist that counts.  What is that value?  What better cars are running because of bitcoin?  What better food is being eaten because of bitcoin?  What better education is sold because of bitcoin?  What markets became more efficient and allocated their resources better thanks to bitcoin?  I mean, outside of the little crypto world itself?  Yes, probably a few people are "creating value" by earning salaries when setting up coindesk.com and so on.
But apart from a self-serving little world, what value creation did bitcoin allow that fiat didn't ?
There is, at the moment, only one true application: dark markets.  There is true value creation in dark markets, especially of drugs.  People (consumers) really value drugs a lot.  The value creation for these consumers is huge.  But because of the legal barriers to that value creation, dark markets are risky to do with fiat.  So, yes, value creation happens on dark markets thanks to crypto in general, and bitcoin in particular.

There's another utility, which is related: privacy-related services on the internet, VPNs for instance.  This is value creation that is facilitated by the existence of crypto.

However, on these two aspects, bitcoin is dangerous, because it is not private enough.  Only crypto with privacy-related features can, in the end, deliver on this.

But how much is this in terms of value creation, compared to everything done with fiat?  Crypto has enabled WAY WAY WAY less than 1/30000 of the world's value creation, because essentially all of the market cap of crypto (bitcoin included) comes NOT from Fisher's formula applied to its demand for use as a currency, but just from greater-fool speculation.  The *REAL* currency market cap of bitcoin is ridiculously small, probably less than 1% of its official market cap.  Remove all hope for "greater fools", let bitcoin's price crash to its level of demand as a currency (its only "value creation enabling" feature), and you end up with the true economic value of bitcoin, which is essentially insignificant.
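Fisher's formula here is the equation of exchange, M x V = P x T: the monetary base M needed to support a yearly transaction value P x T at velocity V is M = (P x T) / V. A minimal sketch of the estimate (my own, with entirely made-up illustrative numbers, not dinofelis's figures):

Code:
# Fisher's equation of exchange: M * V = P * T
#   M = monetary base required, V = velocity (turnovers per year),
#   P * T = annual value of real transactions settled in the currency.

annual_txn_value = 2e9   # hypothetical: $2B/year of genuine BTC-settled commerce
velocity = 10.0          # hypothetical: each coin turns over 10 times per year

monetary_cap = annual_txn_value / velocity
print(f"currency-demand market cap: ${monetary_cap:,.0f}")        # $200,000,000

official_cap = 20e9      # hypothetical official market cap for comparison
print(f"share backed by currency use: {monetary_cap / official_cap:.0%}")  # 1%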