> Your error is you don't seem to deeply understand Chaos theory (although I presume you at least understand it superficially or definitionally).
If you knew me, you'd understand that that comment is, well, misguided. But you don't know me, and you needn't. Of course it is pretentious to claim to know a subject deeply, and there are people on this planet who know much more about it than I do, but they are not very numerous, I daresay. I'll easily concede that I'm not in the top 10, nor even in the top 100. Beyond that, we'd have to discuss
![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
I also know enough about it to know that it is of no great relevance to what I'm saying here. It has some importance, but that importance is vastly over-estimated in your rebuttals.
But let me frame more precisely what I'm trying to demonstrate, because there may very well be a big misunderstanding about it.
My theorem is the following:
"Insofar as we can rely on Moore's law in the coming century, and insofar as there is no major setback in technological, economic and political development, the day that machines gain political, economic and scientific superiority over humans will fall within this century. From that day on, we are no longer the dominant species on earth; they are."
My demonstration is that the premises lead to the existence of a network of nodes which surpasses the human network in all respects concerning political, economic and scientific intelligence: the nodes individually outperform individual humans, and the network performs at least as well as the human network.
The last bit is simple to demonstrate: as the human network is built ON TOP of a machine network for most of its information flow, the machine network, as a network, necessarily performs at least as well as the human network built on top of it. This also goes for "public knowledge". All public knowledge "on the internet" is of course also available to machines.
So it is not on the network side that humans will continue to outperform machines, as the human network is built on top of theirs: machines can very easily run a virtual network that is just as performant, and that has just as much access to public knowledge (political, financial, scientific, historical, social, ...) as humans have through that same network.
This is why the thing that matters is the comparison of individual nodes. I take it that the machine network will be of similar size to the human network (a few billion nodes), i.e. that there will be (at least) as many machine nodes as there are human nodes. So the only thing that makes the difference is individual node intelligence. When individual node intelligence surpasses individual human intelligence, my theorem is demonstrated.
So how can I demonstrate that node intelligence will, at a certain point, be capable of surpassing human intelligence? I don't know what FORM machine intelligence will take. It most probably will NOT be an imitation of the human brain, because the human brain's structure is not adapted to silicon, and what performs well with carbon chemistry may be extremely wasteful of resources in silicon. But a WORST CASE would be machines implementing human-brain-like processing. As I said, this is most probably not what will happen, as in silicon there are probably *much faster ways* to implement brain-like intelligence, but the worst case is imitating the human brain in silicon.
For this, we need to have:
1) the hardware capacity
2) the software running on it
Moore's law suggests that in about 20-30 years, an individual PC will reach or surpass the raw memory and processing power of a human brain. So provided we can run the right software on it, an individual PC will be able to do the same kinds of processing as your average human brain. The estimates used here are most probably exaggerated: the processing power needed for our *intelligent political, financial, economic and scientific thinking* is probably much, much smaller than that, and it certainly cannot exceed the total processing capacity of our brain. So even in the worst case, individual nodes will have sufficient raw hardware capacity to do all the needed processing.
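As a rough illustration, the 20-30 year figure follows from simple arithmetic. The numbers below are my assumptions, not established facts: ~10^16 operations per second as a generous estimate of brain throughput, ~10^12 for a present-day PC, and one doubling every two years:

```python
import math

# Assumed figures (rough, deliberately generous for the brain):
brain_ops_per_s = 1e16       # common upper-range estimate for the brain
pc_ops_per_s_today = 1e12    # ~1 TFLOPS for a current consumer machine
years_per_doubling = 2       # classic Moore's-law cadence

# How many doublings until a PC matches the brain, and how long that takes.
doublings = math.log2(brain_ops_per_s / pc_ops_per_s_today)
years = years_per_doubling * doublings
print(f"doublings needed: {doublings:.1f}, roughly {years:.0f} years")
```

With these assumptions it comes out to roughly 27 years, squarely inside the 20-30 year window; even if the brain estimate is off by a factor of 100, Moore's law only adds about 13 years.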
The question that remains is the software. You may have a gigantic supercomputer, but if the software running on it does nothing but calculate prime numbers, that machine will never do any political or economic thinking. The question is: is it *thinkable* to have software that implements sufficiently intelligent thinking, processing the way the human brain does? It might be that the "software" the human brain runs is immensely complex. Is it?
That was the essence of my previous posts: no, it isn't that terribly complex. The human brain is a processing system whose fundamental software cannot exceed, in information content, the full genetic information needed to *build a brain*. Most probably it is a very small part of that, because that genetic information must also specify how digestion works, how procreation works; and even in the construction of the brain, there are many aspects that don't matter for the processing, but are just the metabolism of brain cells. I (strongly over)estimated the total information content of genetic and epi-genetic information (everything needed to physically build a human body) at 4 TB. But in reality, the part that corresponds to the computational aspects of the human brain must be a very, very small fraction of it, simply because much of this genetic and epigenetic information is shared with fish, mollusks and chimps.
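For comparison, the raw genome itself is far smaller than even that 4 TB bound (the 4 TB is meant to cover epigenetic information too):

```python
# Information content of the raw human genome, standard approximations:
base_pairs = 3.2e9        # haploid human genome, ~3.2 billion base pairs
bits_per_base = 2         # 4 possible bases (A, C, G, T) -> 2 bits each

genome_bytes = base_pairs * bits_per_base / 8
print(f"raw genome: ~{genome_bytes / 1e6:.0f} MB")
```

So the genome proper is on the order of 800 MB, i.e. it would fit on a single DVD; the brain-specific construction code is some fraction of that.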
So the specific instruction set needed to arrive at the computational structure of the human brain can't be such a big deal.
Now, the human brain is a self-modifying piece of software, which "learns" by obtaining sensory information. You're entirely correct on that. But to "make a brain" you only need to BOOTSTRAP its construction, in exactly the same way as the human brain is constructed from genetic and epi-genetic information in the womb of the mother.
One shouldn't confuse the "run-time" structure of the human brain (which can be very complex) with the code needed to implement that run-time. For instance, the code needed to set up a computational neural network with 1 billion nodes and 10 layers can probably be written in a few pages. That's all that is needed to implement such a huge neural network on a running machine. THAT is the code I'm talking about, that must be smaller (much, much smaller) than 4 TB. THAT is the code that implements the computational aspects of the "raw" human brain, with its "native" (literally) structure (the "pre-wired things").
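A minimal sketch of what I mean, in Python: the *constructor* for an arbitrarily large layered network is a handful of lines, while the run-time state it produces can be as large as you like. The layer sizes here are toy placeholders; a real instance with a billion nodes would of course need serious hardware, not different code:

```python
import random

def build_network(layer_sizes):
    """Return randomly initialised weight matrices for a fully
    connected layered network: one matrix per pair of adjacent layers."""
    return [
        [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    ]

# Toy instance; on sufficient hardware the same few lines would set up
# e.g. 10 layers of 10^8 nodes each.
net = build_network([4, 8, 8, 2])
print(len(net), "weight matrices")  # 3
```

The point is precisely this asymmetry: the bootstrap code stays tiny no matter how large `layer_sizes` gets, just as the genetic "code" is tiny compared to the grown brain.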
Once you get that raw brain up and running on sufficiently powerful hardware, you can FEED it stimuli similar to those a small baby receives from its sensory inputs, the most important being the visual ones. We're talking about fluxes of less than a few MB/s, which can very easily be fed into the running object that is the "raw brain", which modifies itself just as the brain modifies itself while a baby grows up. Let this object run for 20 years with stimuli similar to a human brain's, and you obtain a thinking, adult, brain-like run-time state.
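To get a feel for the volumes involved, take an assumed sensory flux of 5 MB/s (my figure, on the generous side) sustained for 20 years:

```python
# Total "childhood" input volume at an assumed 5 MB/s sensory flux.
mb_per_s = 5
seconds = 20 * 365 * 24 * 3600   # 20 years

total_tb = mb_per_s * seconds / 1e6   # megabytes -> terabytes
print(f"~{total_tb:.0f} TB of stimuli over 20 years")
```

That is on the order of 3 petabytes in total, a large but perfectly ordinary quantity for a data centre, and trivially streamable at 5 MB/s. And nothing forces the simulation to run at biological speed: on faster hardware the same 20 "subjective" years could be replayed far more quickly.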
If all of this succeeds, you will end up with a run-time equivalent of a human brain.
I only wanted to demonstrate that all these steps fall largely within reasonable boundaries: totally feasible in principle on the informational side, some even today, and all of them very easily, if Moore's law holds, within a few decades, not even a century.
So the "worst case human brain simulation" is feasible. And since in silicon much more efficient and different ways will most probably be found to implement intelligence, rather than clumsily simulating a human brain, there is no fundamental problem, neither on the software side nor on the hardware side, to obtaining nodes with sufficiently intelligent individual behaviour to outsmart us politically, socially, scientifically, economically and financially.
But what is more, biological nature cannot clone a brain state, while silicon can do so very easily. The learning doesn't need to be repeated for every brain. You simply do it a few times, to get some diversity in the obtained mature brain states, and then clone those a billion-fold into other nodes.
There are no information/entropy fluxes that are problematic for silicon in this respect. Once we have a few thousand alternative mature brain states, they can be cloned, distributed, mixed, .... to make a myriad of different mature brain states in, most probably, a matter of days, on billions of machines.
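A back-of-envelope sketch supports the "matter of days" claim, under assumed numbers (4 TB per brain state, reusing my generous upper bound; 10 Gbit/s per-node links; doubling fan-out, where every node that has the state forwards it to one more):

```python
import math

state_size_tb = 4        # assumed size of one mature brain state (upper bound)
link_gbit_s = 10         # assumed per-node network bandwidth
target_nodes = 1e9       # a billion machine nodes

# Time for one node-to-node transfer of the full state.
transfer_s = state_size_tb * 8e12 / (link_gbit_s * 1e9)  # 8e12 bits per TB

# With doubling fan-out, reaching a billion nodes takes log2(1e9) hops.
hops = math.ceil(math.log2(target_nodes))
total_hours = hops * transfer_s / 3600
print(f"{hops} hops of ~{transfer_s/60:.0f} min each: ~{total_hours:.0f} hours")
```

Roughly 30 hops of under an hour each: about a day to put a mature brain state on a billion nodes, even with my deliberately fat 4 TB state.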
> You don't seem to understand that a total perspective on information is always contingent on the future outcomes (i.e. to distinguish information from noise requires understanding the future outcomes to which the current body of entropy will be applied) and due to the Butterfly effect then you will egregiously underestimate the possible permutations of outcomes. That is why it is incalculable. And this is also the reason that the network is the vastly greater portion of the entropy and why it is itself also alive. If we had more time and inclination, we could elucidate this more formally.
You are making a major mistake here. You are perfectly right that it is essentially impossible to reproduce exactly a VERY PARTICULAR brain state: the brain state of Mary on Monday morning. That depends on details and is prone to the chaotic divergence you talked about. But we don't need Mary's brain state on Monday morning. These details don't matter. If Mary hadn't watched a particular movie when she was 7 years old, she would have been a different person last Monday. But we don't care: the different Mary will do just as well. Hers is also an intelligent brain that can think politically, economically, financially and scientifically, in a totally different way than the Mary who saw the movie, but that doesn't matter. The Mary who saw the movie and the Mary who didn't are both human brains that outsmart chimps. In the same way, the exact brain state our silicon arrives at doesn't matter, as long as it systematically outsmarts most humans. This is why chaos theory and so on don't matter here.
It is sufficient that the possibility exists; sooner or later it will be realized, and as its realization is irreversible, once is enough. You are totally right that the KIND of society that will evolve is not predictable, because it is subject to chaotic, impossible-to-trace effects, but that's not what I'm talking about. This kind of discussion is like me saying that a big meteorite is going to hit the earth and eradicate a lot of species, and you telling me that I can't know that because I cannot predict every detail of the collision: where will which piece of rock fly? I don't need those (indeed impossible) predictions to know that the impact will kill off a lot of species. I would need that impossible knowledge if I had to predict which new species would arise afterwards. But predicting the broad lines of the extinction doesn't require delving into the details.
> Let's use the equation for Pi as an example. We can communicate all of the digits of Pi by simply sending the equation for it. So it seems the entropy is very low in isolation. Now let's introduce a network of actors which respond to input by computing from Nth digit as a function of the input and their prior state, plus the unbounded nondeterminism of the communication latency across the network. Now you have unbounded entropy. That is Chaos theory. The entropy is incalculable and unbounded because it is alive. This is why top-down control always fails. This is the why the free market anneals better because the decisions are made by actors closer to their local gradients.
This is totally wrong. What I'm saying is that, indeed, sending the equation tells you how to calculate Pi. If there is enough raw computing power, you will be able to calculate the 100 billionth digit, while I sent you under one KB of information. So *it is possible to calculate Pi's 100 billionth digit* with just 1 KB of crucial information. You don't need the more than 100 GB of run-state information to do so. Thank you for giving an example that illustrates what I'm saying. That it would be difficult or impossible to predict the EXACT STATE of a network of nodes trying to calculate that digit doesn't change the fact that, in the end, the digit can be calculated. That's the point. We don't care about the exact state of a particular realisation of that computation. We only want to show that it is possible, and not even very difficult.
You could say that those 100 billion digits don't contain much entropy: they contain much less than 1 KB of entropy. That is all the difference between pseudo-random and truly random number generation, and it is of the utmost importance in cryptography.
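To make the point concrete: here is a complete program, well under 1 KB of source, that produces as many digits of Pi as you like. It uses Gibbons' unbounded spigot algorithm; any other Pi formula would make the same point:

```python
def pi_digits(n):
    """Return the first n decimal digits of Pi (Gibbons' spigot)."""
    digits = []
    # State of the linear fractional transformation the algorithm maintains.
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is certain: emit it and rescale the state.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), 10 * (3 * q + r) // t - 10 * m
        else:
            # Consume one more term of the series to narrow the interval.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Given enough time and memory, `pi_digits(100_000_000_000)` is the 100-billion-digit case: the crucial information stays under 1 KB while the output, and the run-time state of any machine computing it, grows without bound.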
> P.S. another problem is it is very likely that the Singularity has become an ideological cause or religion for you. You've likely invested a lot into it being true. So it not being true is going to be a big blow.
No, not really. I'm actually much more of a half-solipsist, inspired by the many-minds interpretation of quantum theory. (If all worlds exist, of which I observe only one, then I am the creator of that world, if you see where I'm coming from. Of course, when I die, that world doesn't disappear, but it loses its specificity: it becomes one amongst all possible ones. The Landscape style of thing.)
What is nice about the singularity argument, is that you can stop worrying about the world.