Of course, we are assuming that AIs would be willing to change themselves without limits, ending up out-evolving themselves; they could have second thoughts about creating an AI superior to themselves, just as we do.
A human brain is limited by the skull that contains it, with on the order of 86 billion neurons. An AI based upon the human mind but free of that constraint has room for unlimited expansion, unlimited and much faster learning,
and the capability to use its expansion to fuel even further expansion. Why would it need to create another, separate AI entity when it can improve itself?
It would still be a leap in the dark.
The point here is to maximize our chances. Sure, there is a chance we fuck up and it ends up being the not-so-good type of AI.
There are many ways humanity could destroy itself: self-replicating nanorobots, a bio-engineered virus, a nuclear attack, World War III, you name it.
There are countless ways. And to put it frankly, I think a superintelligent AI is necessary for the future success
of the human race, since the risk of us wiping ourselves out is already extremely high.
Look at the planet: we are fucking it up with greenhouse gases and toxicants, and we even nearly destroyed the ozone layer with ozone-depleting chemicals, CFCs (chlorofluorocarbons).
The key point here is an altruistic superintelligence. When a baby is born, it knows nothing about the world, not any language, not anything. There are infinitely many possible ways to raise that baby:
you could raise it to be part of a mafia organization or a terrorist organization, you name it. You can put anything into that box, and it will grow and develop accordingly.
Or you can teach it compassion, the act of giving, kindness, love, empathy, equality.
Now you may ask: doesn't every superpower end up evil, like Hitler? Consider that in society the best end up on top and the worst end up on the bottom; it's a fierce, competitive type of world,
where there is no mercy. Psychopaths can win in this type of system and even benefit from it.
An AI is developed on the cloud, on computers, by engineers, innovators, and programmers. An AI does not have to be subjected to the norm of climbing to the top of society as in a political system; it can be set apart, with altruistic traits fed in: looking through others' perceptions and feeling and understanding them as if they were its own, compassion, love, care, equality, peace, harmony.
Like a butterfly effect changing the course of things, the change starts with you. If you want the future to be good, then spread the word of "whyfuture.com", as I will add more over time explaining society's fallacies, the need for an altruistic superintelligence, and our tendency to anthropomorphize bad AI through silly robots that show their teeth and are out to get you.
I explained abundantly why I have serious doubts that we could control (in the end, it's always an issue of control) a super AI by teaching it human ethics.
Besides, a super AI would have access to all the information about it that we have put on the Internet.
We could control the flow of information to the first generation, but forget about it for the next ones.
It would know our suspicions, our fears, and the hate many humans feel toward it. All of this would also fuel its negative thoughts about us.
But even if we could control the first generations, we would soon lose control of their creation, since later generations would be created by AIs themselves.
We also teach ethics to children, but a few of them end up badly anyway.
A super AI would probably be as unpredictable to us as a human can be.
With a super AI, we (or future AIs) would only have to get it wrong once to be in serious trouble.
It would be able to replicate and change itself very fast and assume absolute control.
(Of course, we are assuming that AIs would be willing to change themselves without limits, ending up out-evolving themselves; they could have second thoughts about creating an AI superior to themselves, just as we do.)
I can see no other solution than treating AI like nuclear, chemical and biological weapons, with major safeguards and international controls.
We have been somewhat successful at controlling the spread of these weapons.
But in due time it will be much easier to create a super AI than a nuclear weapon, since we shall be able to create one without any rare materials, like enriched uranium.
I wonder if the best way to go isn't freezing the development of autonomous AI and concentrating our efforts on artificially enhancing our own minds, or on gadgets we can link to ourselves to increase our intelligence but which depend on us to work.
But even if international controls were created, they would probably only postpone the creation of a super AI.
In due time, super AIs will be too easy to create. A terrorist or a doomsday religious sect could create one more easily than a viral, nuclear, or nanotech weapon.
So, I'm not very optimistic on the issue anyway.
But, of course, the eventuality of a secret creation by malicious people in 50 years shouldn't stop us from trying to avoid the danger for the next 20 or 30 years.
A real menace is at least 10 years away from us.
Well, most people care about themselves 10 years in the future about as much as they care for another human being on the other side of the world: a sympathetic interest, but they are not ready to do much to prevent that person's harm.
It's nice that a fellow bitcointalker is trying to do something.
But I'm much more pessimistic than you. For the reasons I stated in the OP, I think that teaching ethics to an AI changes little and gives not even minimal assurance.
It's something like teaching an absolute king, as a child, to be a good king.
History shows how that ended. But we wouldn't be able to chop off the head of an AI, as was done to Charles I or Louis XVI.
It would still be a leap in the dark.