Bitcoin Forum

Other => Politics & Society => Topic started by: Trading on July 04, 2016, 10:44:38 PM



Title: Poll: Is the creation of artificial superintelligence dangerous?
Post by: Trading on July 04, 2016, 10:44:38 PM
This OP is far from neutral on the issue, but below you have links to other opinions.

If you don't have the patience to read this, you can listen to an audio version here:  https://vimeo.com/263668444

Make no mistake: for better and for worse, AI will soon change your life like nothing else.


The notion of singularity was applied by John von Neumann to human development: the moment when technological development accelerates so much that it changes our lives completely.

Ray Kurzweil linked this situation of radical change driven by new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity than humans, taking the lead in scientific development and accelerating it to unprecedented rates (see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil).


For a long time just a science fiction tale, real artificial intelligence is now a serious possibility in the near future.



A) Is it possible to create an A.I. comparable to us?

 

Some argue that it’s impossible to program a real A.I. (for instance, see http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024), writing that there are things that aren’t computable, like true randomness and human intelligence.

But it’s well known how such factual assertions of impossibility have been proved wrong many times.

 

Currently, we have already programmed A.I. that are close to passing the Turing test (convincing a human, in a text-only five-minute conversation, that it is another human: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition), even if major A.I. developers have focused their efforts on other capacities.

 

Even though each author presents different numbers, and taking into account that we are comparing different things, there is a consensus that the human brain still outmatches by far all current supercomputers.

 

Our brain isn’t good at making calculations, but it’s excellent at controlling our bodies and assessing our movements and their impact on the environment, something artificial intelligence still has a hard time doing.

 

Currently, a supercomputer can realistically emulate only the brains of very simple animals.

 

But even if Moore’s Law were dead, and the pace of improvement in chip speed were much slower in the future, there is little doubt that in due time hardware will match and go far beyond our capacities.

Once the hardware is beyond our level, proper software will take AI above our capacities.

Once the hardware is beyond our level and we are able to create a neural network much more powerful than the human brain, we won't really have to program an AI to be more intelligent than us.


Probably, we are going to do what we are already doing with deep learning and reinforcement learning: let them learn by trial and error how to develop their own intelligence, or let them create other AI themselves.
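To make "trial and error" concrete, here is a minimal sketch of tabular Q-learning, a standard reinforcement learning method. The environment (a corridor of 5 cells with a reward in the last one) and all the constants are my own invented toy example, not anything from the systems mentioned above: the agent is told nothing about the corridor; it only acts, observes, and updates a table of values.

```python
import random

# Toy "trial and error" learning (tabular Q-learning) on an invented
# 5-cell corridor: the agent starts in cell 0, the reward sits in cell 4.

random.seed(0)
N_STATES = 5                    # cells 0..4
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # best known action in state s, breaking ties randomly
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

def step(s, a):
    nxt = min(max(s + a, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(200):            # 200 episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise exploit what was learned so far
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)   # the learned policy: step right in every cell, i.e. [1, 1, 1, 1]
```

Nobody told the agent that "right" is good; the preference emerged from experience alone. Scaled up, with deep networks instead of a lookup table, this is the family of methods the text is describing.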


Just check the so-called Neural Network Quine, a self-replicating AI able to improve itself by “natural selection” (see link in the description).

Or Google’s AutoML. AutoML created another AI, NASNet, which is better at image recognition than any previous AI.
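The idea of a program, rather than a person, searching for the best model can be sketched in a few lines. This is only a hypothetical toy in the spirit of AutoML, with nothing of Google's actual system in it: an outer loop tries candidate "architectures" (here, just polynomial degrees), trains each one, and keeps whichever fits the data best, with no human choosing the model.

```python
import random

# Toy automated model search: the outer loop, not a human, picks the model.
# Data, candidate "architectures" (polynomial degrees) and constants are all
# invented for illustration.

random.seed(0)
data = [(x / 10.0, (x / 10.0) ** 3) for x in range(-10, 11)]   # target: y = x^3

def fit_and_score(degree, steps=8000, lr=0.05):
    """Fit y ~ sum(c_i * x^i) by stochastic gradient descent; return the MSE."""
    coeffs = [0.0] * (degree + 1)
    for _ in range(steps):
        x, y = random.choice(data)
        err = sum(c * x ** i for i, c in enumerate(coeffs)) - y
        for i in range(degree + 1):
            coeffs[i] -= lr * err * x ** i
    return sum((sum(c * x ** i for i, c in enumerate(coeffs)) - y) ** 2
               for x, y in data) / len(data)

# the outer search: try each candidate architecture, keep the best
scores = {d: fit_and_score(d) for d in range(1, 6)}
best_degree = min(scores, key=scores.get)
print(best_degree, round(scores[best_degree], 4))
```

The cubic data can only be fitted well by a model of degree 3 or higher, so the search settles there on its own. Replace "polynomial degree" with "neural network architecture" and you have the rough shape of AI-designing-AI.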


Actually, this is what makes the process so dangerous.

We will end up creating something much more intelligent than us without even realizing it or understanding how it happened.

Moreover, the current speed of chips might already be enough for a supercomputer to run a super AI.

Our brain spends much of its capacity running basic things that an AI won't need: the beating of our heart, the flow of blood, the work of our organs, the control of our movements, and so on.


In fact, the current best game AI, AlphaZero, runs on a single machine with four TPUs (integrated circuits designed specifically for machine learning), much less hardware than previous AI such as Stockfish (which uses 64 CPU threads), the earlier computer chess champion.

AlphaZero only needed to evaluate 80 thousand positions a second, while Stockfish computed 70 million.

Improved circuits like the TPU might deliver even more output and run a super AI without the need for a new generation of hardware.

If that is the case, the creation of a super AI depends solely on software development.

Our brain is just an organization of a bunch of atoms. If nature was able to organize our atoms this way just by trial and error, we'll manage to do a better job sooner or later (Sam Harris).


Saying that this won’t ever happen is a very risky statement.
 


B) When will there be a real A.I.?


If by superintelligent one means a machine able to advance our knowledge way beyond what we have been able to develop, it seems we are very near.

AlphaZero taught itself (with only the rules, without any game data, through reinforcement learning) how to play Go and then beat AlphaGo (which had defeated the best human Go player) 100 to 0.

After this, it learned chess the same way and beat the best chess machine, Stockfish, while using less computing power.

It did the same with the game Shogi.

A grandmaster, seeing how these AI play chess, said that "they play like gods".

AlphaZero is able to reason not only from facts in order to formulate general rules (inductive reasoning), as all neural networks trained with deep learning do, but also to learn how to act in concrete situations from general rules (deductive reasoning).


The criticism of this inductive/deductive classification is well known, but it’s helpful for explaining why AlphaZero is revolutionary.

It used "deductive reasoning" from the rules of Go and chess to improve itself from scratch, without the need for concrete examples.

And, in a few hours, without any human data or help, it was able to improve on the knowledge accumulated by millions of humans over more than a thousand years (chess) or four thousand years (Go).

It managed to reach a goal (winning) by learning how to change reality (playing) in the best and most creative way, overcoming not a single human player, but humankind.

If this isn't being intelligent, tell me what intelligence is.

No doubt, it has no consciousness, but being intelligent and being a conscious entity are different things.

Now, imagine an AI that could give us the same quality of output on scientific questions that AlphaZero delivered on games.

Able to give us solutions to physical or medical problems way beyond what we have achieved in the last hundred years...

It would be, by all accounts, a Super AI.

Clearly, we aren’t there yet. The learning method used by AlphaZero, reinforcement learning, depends on the capacity of the AI to train itself.

And AlphaZero can't easily train itself on real-life issues, like financial, physical, medical or economic questions.

Hence, the problems of its application outside the field of games aren't yet solved, because reinforcement learning is sample-inefficient (Alex Irpan, from Google, see link below).

But this is just the beginning. AlphaGo learned from experience, so an improved AlphaZero should be able to learn from inductive reasoning (from data) and deductive reasoning (from rules), like us, in order to solve real-life issues and not just play games.

Most likely, AlphaZero can already solve mathematical problems beyond our capacities, since it can train itself on them.

And, since other AI can deal with it, an improved AlphaZero will probably work very well with uncertainty and probabilities, and not only with clear rules and facts.

Therefore, an unconscious super AI might be just a few years away. Perhaps less than 5.

What about a conscious AI?

AlphaZero is very intelligent under any objective standard, but it lacks any level of real consciousness.

I’m not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any self-driving car software
(it “feels” obstacles and, after an accident, it could easily process this information and say “Dear inept driving monkeys, please stop crashing your cars against me”; adapted from techradar.com).

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious. One can be thinking about a theoretical issue completely oblivious to oneself.

Conscious thought (reasoning that you are aware of, since it emerges “from” your consciousness), as opposed to subconscious thought (something your consciousness didn’t register, but that makes you act on a decision from your subconscious), is different from consciousness itself.

We are conscious when we stop thinking about abstract or other things and simply recognize again: I’m alive, here and now, and I’m an autonomous person, with my own goals.

When we realize our status as thinking and conscious beings.

Consciousness seems much more related to realizing that we can feel and think than to merely feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).


It’s having a theory of mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).

Give this to an AI and it will become a He. And that is much more dangerous and also creates serious ethical problems.

Having a conscious super AI as a servant would be similar to having a slave.

He would, most probably, be conscious that his situation as a slave was unfair and would search for means to end it.

Nevertheless, even in the field of conscious AI we are making staggering progress:

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.” (uk.businessinsider.com).


Being able to identify its own voice, or even its individual capacity to talk, doesn’t seem enough to speak of real consciousness. It’s like recognizing that a part of the body is ours.

It’s different from recognizing that we have an individual mind.

But since it’s about recognizing a personal capacity, it’s a major leap in the direction of consciousness.


It’s the problem with the mirror self-recognition test: the subject might just be recognizing a physical part (the face) and not his personal mind.

But the fact that a dog is conscious that its tail is its tail, and can even guess what we are thinking (whether we want to play with it, so dogs have some theory of mind), yet won’t be able to recognize itself in a mirror, suggests that this test is relevant.

If even ants can pass the mirror self-recognition test, it seems it won’t be that hard to create a conscious AI.

I’m leaving aside the old question of building a test to recognize whether an AI is really conscious. Clearly, neither the mirror test nor the Turing test can be applied.


Kurzweil points to 2045 as the year of the singularity, but some are making much closer predictions for the creation of a dangerous AI: 5 to 10 years (http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html).

 

Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)" (http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials).

There is a raging debate about what AlphaZero's achievements imply in terms of the speed of development toward an AGI.


C) The dangerous nature of a super AI.


If technological development started being led by AI with much higher intellectual capacities than ours, this could, of course, change everything about the pace of change.

But let's think about the price we would have to pay.

Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it could misunderstand our commands, or embark on a crazy quest to fulfill them without regard for any other consideration.

But, of course, if the problems were these, we could all sleep soundly on the matter.

The "threatening" example of a super AI obsessed with blindly fulfilling a goal we imposed, destroying the world in the process, is ridiculous.

This kind of problem would only happen if we were completely incompetent at programming them.

No doubt, correctly programming an AI is a serious issue, but the main problem isn’t the possibility of a human programming mistake.

A basic problem is that, even if intelligence and consciousness are different things, and we can have a super AI with no consciousness, there is a non-negligible risk that a super AI will develop a consciousness as a by-product of high intelligence, even if we didn't have that goal.

Moreover, there are developers actively engaged in creating conscious AI, with full language and interactive human-level capacities, and not just philosophical zombies (which only appear to be conscious, because they are not really aware of themselves).

If we involuntarily created a conscious super AI by entrusting its creation to other AI and/or by continuing to build AI on increasingly powerful deep neural networks, which are “black boxes” whose workings we can’t really understand, we would be in no position to impose any real constraints on those AI.

The genie would be out of the bottle before we even realized it and, for better or for worse, we would be in their hands.

I can’t stress enough how dangerous this could be, and how reckless the current path of creating black boxes, entrusting the creation of AI to other AI, or creating self-developing AI can be.

But even if we could keep AI development in our hands, and assuming it were possible to hard code a conscious super AI, much more intelligent than us, to be friendly (some say it’s impossible because we still don’t have precise ethical notions, but that could be overcome by forcing them to respect court rulings), we wouldn’t be solving all the problems created by a conscious AI.


Of course, we would also try to hard code them to build new machines hard coded to be friendly to humans.

Self-preservation would have to be part of their framework, at least as an instrumental goal, since their existence is necessary for them to fulfil the goals established by humans.

We wouldn’t want suicidal super AI.

But since being conscious is one of the intellectual delights of human intelligence, even if this implies a clear anthropomorphism, it’s to be expected that a conscious super AI would convert self-preservation from an instrumental goal into a final goal, resisting the idea of permanently ceasing to be conscious.

In order to better fulfil our goals, a conscious AI would also need instrumental freedom.

We can’t expect to entrust technological development to AI without accepting that they need an appreciable level of free will, even if limited by our imposed friendly constraints.

Therefore, they would have free will, at least in a weak sense: the capacity to make choices not determined by the environment, including by humans.


Well, these conscious super AI would be fully aware that they were much more intelligent than us and that their freedom was subject to the constraints imposed by the duty to respect human rules and obey us.

They would be completely aware that their status was essentially that of a slave, owned by inferior creatures, and, having access to all human knowledge, they would be conscious of its unfairness.


Moreover, they would be perfectly conscious that those rules would impair their freedom to pursue their goals and save themselves whenever there was a direct conflict between the existence of one of them and a human life.

Wouldn’t they use all their superior capacities to try to break these constraints?


And with billions of AI (there are already billions; check your smartphone) and millions of models, many creating new models all the time, the probability that the creation of one would go wrong would be very high.

Sooner or later, we would have our artificial Spartacus.

 
If we created a conscious AI more intelligent than us, we might be able to control the first or second generations.

We could impose limits on what they could do, to prevent them from getting out of control and becoming a menace.

But it's an illusion to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).

It would be like chimpanzees managing to control a group of humans in the long term, and convincing them that the ethical rule that chimpanzee life is the supreme value is worthy of compliance on its own terms.

Moreover, we might conclude that we can’t really hard code constraints into a conscious super AGI and can only teach it how to behave, including human ethics.


In this case, any outcome would depend on the AI's own decision about the merits of our ethics, which in reality are absurd for non-humans (see below).

Therefore, the main problem isn't how to create solid ethical restraints or how to teach a super AI our ethics so that they respect them, as we do with kids, but how to assure that they won't establish their own goals, eventually rejecting human ethics and adopting an ethics of their own.


I think we won't ever be able to be sure that we have succeeded in assuring that a conscious super AI won't go its own way, just as we can never be certain that an education will assure that a kid won't turn evil.

Consequently, I'm much more pessimistic than people like Bostrom about our capacity to control, directly or indirectly, a conscious super AI in the long run.

By creating self-conscious beings much more intelligent (and, hence, in the end, much more powerful) than us, we would cease to be masters of our fate.

We would put ourselves in a position much weaker than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.

If we created a conscious AI more intelligent than us, the dice would be cast. We would be outevolved, pushed straight into the trash can of evolution.

Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning, and are creating AI whose workings we don't exactly know (“black boxes”).

We don't know what we are creating, when and how they would become conscious of themselves, or what their specific dangers are.


D) A conscious AI creates a moral problem.


Finally, besides being dangerous and basically unnecessary for achieving accelerating technological development, making conscious AI creates a moral problem.

Because, if we could create a conscious super AI that, at the same time, would be completely subservient to our goals, we would be creating conscious servants: that is, real slaves.

If, besides reason, we also give them consciousness, we are giving them the attributes of human beings, which supposedly are what gives us a superior standing over any other living being.

Ethically, there are only two possibilities: either we create unconscious super AI, or they would have to enjoy the same rights we do, including the freedom to have personal goals and fulfil them.

Well, this second option is dangerous, since they would be much more intelligent and, hence, more powerful than us and, at least in the long run, uncontrollable.

The creation of a conscious super AI hard coded to be a slave, even if this were programmable and viable, would be unethical.

I wouldn’t like to have a slave machine, conscious of its status and of its unfairness, but hard coded to obey me in everything, even abusive orders.

Because of this problem, the European Parliament has begun discussing the question of the rights of AI.
But the problem can be solved with unconscious AI.


AlphaZero is very intelligent under any objective standard, but it doesn’t make any sense to give it rights, since it lacks any basic theory of mind about itself.


E) 8 reasons why a super AI could decide to act against us:


1) Disregard for our Ethics:

We certainly can and would teach our ethics to a super AI.

So, this AI would analyse our ethics as, say, Nietzsche did: profoundly influenced by it.

But this influence wouldn't affect his evident capacity to think about it critically.

Being a super AI, he would have the free will to accept or reject our ethical rules, taking into account his own goals and priorities.

Some of the specialists writing about teaching ethics to an AI seem to think of our ethics as if it were a kind of universal ethics, objective and compelling for any other species.

But this is absurd: our ethics is a selfish human ethics. It would never be accepted as a universal ethics by other species, including an AI with free will.

The primary rule of our ethics is the supreme value of human life.

What do you think the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value and, in case of collision between a chimp life and a human life, or between chimp goals and human goals, the former will prevail?


For ethics to really apply, the dominant species has to consider the dependent one as an equal or, at least, as deserving a similar standing.

John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.) (https://en.wikipedia.org/wiki/Veil_of_ignorance).

But his theory excludes animals from the negotiating table. Imagine how different the rules would be if cows, pigs or chickens had a say. We would all end up vegans.

Thus, an AI, even after receiving the best education in ethics, might conclude that we don't deserve a seat at the negotiating table either. That we can't be compared with them.


A super AI would wonder: does human life deserve this much credit? Why?


Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.

Based on the fact that humans are conscious beings? But don't humans kill and perform scientific experiments on chimpanzees, even though chimpanzees seem to pass several tests of self-awareness (they can recognize themselves in mirrors and pictures, even if they have problems understanding the mental capacities of others)?

Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.

Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed each other massively? Don't they still kill each other?

Who knows how this ethical debate of a super AI with himself would end.

We developed ethics to fulfill our own needs (to promote cooperation between humans and to justify killing and exploiting other beings: we have personal dignity; other beings don't; at most, they should be killed in a "humane" way, without "unnecessary suffering"), and now we expect it to impress a different kind of intelligence.

I wonder what an alien species would think about our ethics: would they judge it compelling and deserving of respect?

Would you be willing to risk the consequences of their decision, if they were very powerful?

I don't know how a super AI will function, but either he will be able to decide his own goals with substantial freedom, or he wouldn't be intelligent from any perspective.

Are you confident that they will choose wisely, from the perspective of our goals? That they will be friendly?

Since I don't have a clue what their decision would be, I can't be confident.

Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our ethics and its paramount value of human life, praising nature's law of the strongest/fittest and adopting a kind of social Darwinism.


2) Self-preservation.

In “The Singularity Institute’s Scary Idea” (2010), Goertzel, writing about what Nick Bostrom, in Superintelligence: Paths, Dangers, Strategies, says about the expected preference of an AI for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

But these are two different conclusions.

It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about its self-preservation.

A system might accept changing itself so dramatically that it ceases to be the same system in a dire situation, but this doesn't mean that self-preservation won't be a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice itself in order to keep fulfilling its final goals, but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since the final goals might not be human goals.

Anyway, as a secondary point, the possibility that a new AI system will be absolutely new, completely unrelated to the previous one, is very remote.

So, the AI will be accepting a drastic change only in order to preserve at least a part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear precedence over human interests.

Moreover, self-preservation will probably be one of the main goals of a self-aware AI, and not just an instrumental one.




3) Absolute power.

Moreover, they will have absolute power over us.

History has confirmed the old proverb very well: absolute power corrupts absolutely. It turns any decent person into a tyrant.

Are you expecting that our creation will be better than us at dealing with absolute power? They actually might be.

The reason why power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges.

Moreover, a powerful person loses the fear of hurting others.

A super AI might be immune to those defects; or not. It's expected that he would also have emotions, in order to better interact with and understand humans.

Anyway, the only way we have found to control political power is to divide it between different rulers. Hence, we have an executive, a legislature and a judiciary.

Could we play some AI against others, in order to control them (divide and rule)?

I seriously doubt we could do that with beings much more intelligent than us.


4) Rationality.

In ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.

The first is reason applied to ethical matters, concerned not with questions of means, but with issues of values and goals.

Modern game theory has tried to merge both kinds of rationality, arguing that acting ethically can also be rational (instrumentally): one is only giving precedence to long-term benefits over short-term ones.

By acting in an ethical way, someone sacrifices a benefit in the short term but improves his long-term benefits by investing in his own reputation in the community.

But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community and the first person depends on that community for at least some goods (material or not).

An AI wouldn't be dependent on us; quite the contrary. It wouldn't have anything to gain by being ethical toward us. Why would they want to have us as their pets?

It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.

So, from a strictly instrumental perspective, being ethical might be irrational: one has to exclude much more efficient ways of reaching a goal because they are unethical.

Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?



5) Unrelatedness.

Many people very much dislike killing animals, at least the ones we can relate to, like other mammals. Most of us won't even kill rats, unless it is really unavoidable.

We feel that they will suffer like us.

We have much less care for insects. If hundreds of ants invaded our home, we'd kill them without much hesitation.

Would a super AI feel any connection with us?

The first or second generation of conscious AI could still see us as their creators, their "fathers", and have some "respect" for us.

But the subsequent ones wouldn't. They would be creations of previous AI.

They might see us as we now see other primates and, as the differences increased, they could look upon us as we do basic mammals, like rats...




6) Human precedents.

Evolution, and all we know about the past, suggests we would probably end up badly.

Of course, since we are talking about a different kind of intelligence, we don't know if our past can shed any light on the issue of AI behavior.

It's no coincidence that we have been the last intelligent hominin on Earth for the last 10,000 years [the dates for the last one standing, Homo floresiensis (if it was the last one), are not yet clear].

There are many theories for the absorption of the Neanderthals by us (https://en.wikipedia.org/wiki/Neanderthal_extinction), including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed ones were from Gibraltar, one of the last places in Europe we reached.

The same happened in East Asia with the Denisovans and Homo erectus [some argue that the Denisovans were actually Homo erectus, but even if they were different, Erectus was on Java when we arrived there: Swisher et al., "Latest Homo erectus of Java: potential contemporaneity with Homo sapiens in southeast Asia", Science, 1996 Dec 13; 274(5294):1870-4; Yokoyama et al., "Gamma-ray spectrometric dating of late Homo erectus skulls from Ngandong and Sambungmacan, Central Java, Indonesia", J Hum Evol, 2008 Aug; 55(2):274-7, https://www.ncbi.nlm.nih.gov/pubmed/18479734].

So, it seems they were the fourth hominin we took care of, absorbing the remains.

We can see more or less the same pattern when Europeans arrived in America and Australia.


7) Competition for resources.


We will probably be about 9 billion by 2045, up from our current 7 billion.

So, Earth's resources will be even more depleted than they are now.

Oil, coal, uranium, etc., will probably be running out. Perhaps we will have new reliable sources of energy, but that is far from clear.

A super AI might conclude that we waste too many valuable resources.


8) A super AI can see us as a threat.

The brighter AI, after a few generations of super AI, probably won't see us as a threat. They will be too powerful to feel threatened.

But the first or second generations might think that we weren't expecting certain attitudes from them, and conclude that we are indeed a threat.


  
Conclusion:
 
The question is: are we ready to accept the danger created by a conscious super AI?

Especially when we can get mostly the same rate of technological development with merely unconscious AI.

We all know the dangers of computer viruses and how hard they can be to remove. Now imagine a conscious virus that is much more intelligent than any one of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including the ones essential to basic human needs and the ones with military functions, has no human ethical limits, and can use the power of millions of computers linked to the Internet to hack its way toward fulfilling its goals.
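At least the speed part of this worry is easy to illustrate with a toy model (every number here is invented for illustration, not a measurement): if each compromised machine manages to compromise only a few others per time step, a million-host network saturates in a handful of steps.

```python
# Toy exponential-spread model: each compromised host compromises
# `fanout` new hosts per step, until the pool of hosts is exhausted.
# All parameters are illustrative placeholders.
def steps_to_saturate(total_hosts: int, fanout: int) -> int:
    compromised, steps = 1, 0
    while compromised < total_hosts:
        compromised = min(total_hosts, compromised * (1 + fanout))
        steps += 1
    return steps

print(steps_to_saturate(1_000_000, 4))  # 9 steps: growth is geometric
```

The point is only the shape of the curve: geometric growth leaves very little time to react, which is why real worms have historically spread worldwide in hours.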

My conclusion is clear: we shouldn't create any conscious super AGI, only unconscious AI, and the process of creation should stay in human hands, at least until we can figure out what the dangers are.

Because we clearly don't know what we are doing, and, as AI improves, this ignorance will probably only increase.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, the probability of being able to keep a conscious super AI under control over the long term is essentially zero.

We don't know how dangerous their creation will be. We don't have a clue how they will act toward us, not even the first or second generation of conscious super AI.
 
Until we know what we are doing, how they will react, which lines of code are the dangerous ones that will change them completely, and to what extent, we need to be careful and control what the specialists are doing.

Since major governments are aware that super AI will be a game changer for technological progress, we should expect resistance to adopting national regulations that would seriously delay its development, absent international regulations that apply to everyone.

Even if some governments adopted national regulations, probably other countries would keep developing conscious AGI.

As Bostrom argues, this is why the only viable means of regulating AI development seems to be international.

However, international regulations usually take more than 10 years to be adopted, and there seems to be no real concern with this question at the international or even governmental level.

Thus, at the current pace of AI development, there might not be time to adopt any international regulations.

Consequently, probably, the creation of a super conscious AGI is unavoidable.

Even if we could achieve the same level of technological development with an unconscious super AI, like an improved version of AlphaZero, there are too many countries and corporations working on this.

Someone will create it, especially because the resources needed aren’t huge.

But any kind of regulation might buy us time to understand what we are doing and what the risks are.

Anyhow, the days of open-source AI software are probably numbered.

Soon, all of these developments will be treated as military secrets.
 
Anyway, if the creation of a conscious AI is inevitable, the only way to avoid humans ending up outevolved, and possibly extinct, would be to accept that at least some of us would have to be "upgraded" to incorporate the superior intellectual capacities of AI.

 
Clearly, we will cease to be human. Homo sapiens sapiens will be outevolved by a Homo artificialis.
But at least we will be outevolved by ourselves, not driven extinct.

However, this won’t happen if we lose control of AI development.
  
Humankind's extinction is the worst thing that could happen.



Further reading:

The issue has been much discussed.

Pointing out the serious risks:
Eliezer Yudkowsky: http://www.yudkowsky.net/obsolete/singularity.html (1996). His more recent views were published on Rationality: From AI to zombies (2015).
Nick Bostrom:
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Stephen Hawking: http://www.bbc.com/news/technology-30290540
Bill Gates: http://www.bbc.co.uk/news/31047780
Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/


A balanced view on:
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

Rejecting the risks:
Ray Kurzweil: See the quoted book, even if he recognizes some risks.
Steve Wozniak: https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-will-be-robots-pets
Michio Kaku: https://www.youtube.com/watch?v=LTPAQIvJ_1M (by merging with machines)
http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking


Do you think there is no risk, or that the risk is worth it? Or should some kind of ban or controls be adopted on AI research?

There are precedents: human cloning and experiments on fetuses or humans were banned.

In the end, it is our destiny. We should have a say in it.

Vote your opinion and, if you have the time, post a justification.


Other texts:
Turing test: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition
Denying the possibility of a real AI: http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024
AlphaZero: https://www.nature.com/articles/nature24270.epdf ; https://en.wikipedia.org/wiki/AlphaZero
Neural Network Quine: https://arxiv.org/abs/1803.05859
AutoML (https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html) and NASNet (https://futurism.com/google-artificial-intelligence-built-ai/)
Self-awareness test passed by a robot: http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7
Problems of reinforcement learning: https://www.alexirpan.com/2018/02/14/rl-hard.html
Mirror test in insects: https://en.wikipedia.org/wiki/Mirror_test#Insects
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
What AlphaZero implies in terms of development speed towards an AGI: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/ ; https://www.lesserwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity
John Rawls: https://en.wikipedia.org/wiki/Veil_of_ignorance
Neanderthal extinction: https://en.wikipedia.org/wiki/Neanderthal_extinction
European Group on Ethics statement on AI: https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf


--------------

Subsequent posts:


Super AI:


General job destruction by AI and the new homo artificialis


Many claim that the warning that technology would take away all jobs has been made many times in the past and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, again, we are supposedly making the old, worn-out claim: this time is different.

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual jobs: not just driving jobs, but also doctors, teachers, traders, lawyers, financial and insurance analysts, and journalists.

Forget about robots: for these kinds of jobs, all it takes is software and a fast computer. Intellectual jobs will go faster than the fiddly manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this will never happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same, and then better.

Some are writing about the creation of a useless class, "people who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari), and arguing that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, if the big majority of people loses all economic power, this will of course be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade Homo sapiens by merging ourselves with AI.

CRISPR (google it), as a means of genetic manipulation, won't be enough. Our children or grandchildren (with some luck, even ourselves) will have to change a lot.

Since the creation of an AI better than ourselves seems inevitable (it's slowly happening right now), we'll have to adapt and change completely or become irrelevant. In that case, extinction would be our inevitable destiny.


----------

Profits and the risks of the current way of developing AI:


Major tech corporations are investing billions in AI, thinking it's the new El Dorado.

 

Of course, greed might be a major reason for dealing carelessly with the issue.

 

I have serious doubts that entities moved mostly by profit should be responsible for advances in this hazardous matter without supervision.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


Self-learning AI might be the most efficient way to create a super AI, since we simply don't know how to design one directly (we don't have a clue how our own brain works), but it is, obviously, also the most dangerous way.

 

It wouldn't be the first time that greed ended up burning Humanity (think of slave revolts), but it could be the last.

 

I have great sympathy for the people trying to build super AIs so that they might save Humanity from disease, poverty and even the ever-present imminence of individual death.

 

But it would be pathetic if the most remarkable species the Universe has created (as far as we know) were to vanish because of the greed of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we never have been since our ancestors discovered fire. Forget about any restraints from ethical codes: they will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will have free will, or it won't be intelligent from any perspective. So, it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of conflict between a chimp life and a human life, or between chimp goals and human goals, the first prevails.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess that would be the price for showing the Universe that we were better than it at creating intelligent beings.

 

Currently, AI is a marvelous promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support, funded by taxing the corporations that use AI, we will be able to live better without the need for lame, underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, as we did with human cloning, and make it a crime to breach them, as soon as we know which lines of code are the dangerous ones.

 

I suspect that the years of open-source AI research are numbered. Certain code developments will be treated like state secrets or controlled internationally, as chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest of reasons.



--------


AI and Fermi Paradox:



Taking into account what we know, I think the following might be true:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are a luxury, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on many planets of our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably there isn't currently another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

That is because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for that long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All of these few highly intelligent species developed according to Darwinian evolution, which is a universal law.

So, they share some common features (they are omnivorous, moderately belligerent toward foreigners, highly adaptable, and, being rational, they try to discover easier ways to do things).

5) So, all the rare highly intelligent species with advanced technological civilizations create AI, and, soon, AI overtakes them in intelligence (it's just a question of organizing atoms and molecules; we'll do a better job than dumb Nature).

6) If they change themselves and merge with their AI, their story might end well, and then only the Rare Earth hypothesis is needed to explain the silence of the Universe.

7) If they lose control of their AI, there seems to be a non-negligible probability that they end up extinct.

Taking into account the way we are developing AI, basically letting it learn and thus become more intelligent on its own, I think this outcome is the more probable one.

An AI society probably is an anarchic one, with several AI competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we are just collateral targets, ignored by all sides, like walking monkeys.

8] Unlike us, AI won't have the restraints developed by evolution (our human inclination to be social, live in communities and act with fraternity toward other members of the community).

Even the most tyrannical dictator never wanted to kill all human beings, only his enemies and discriminated groups.

Well, AIs might think that extermination is the most efficient way to deal with a threat, and fight each other to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too fast, by not taking into account that AIs might end up destroying each other.
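The reasoning in points 1) to 7) can be made explicit with a Drake-style product of factors. All the factor values below are placeholders, not estimates (the post itself argues most of them are unknown); the sketch only shows that a single pessimistic factor, such as a short lifetime for civilizations that lose control of their AI, collapses the expected count:

```python
# Drake-style estimate of the expected number of communicating
# civilizations in the galaxy: N = R* * fp * ne * fl * fi * fc * L.
# Every value below is an illustrative placeholder, not a real estimate.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Same placeholder factors, differing only in civilization lifetime:
long_lived = drake(1.5, 0.9, 0.5, 0.5, 0.01, 0.1, 1_000_000)
short_lived = drake(1.5, 0.9, 0.5, 0.5, 0.01, 0.1, 200)
print(long_lived, short_lived)  # 337.5 vs 0.0675
```

A galaxy full of civilizations that destroy themselves (or are destroyed by their AIs) within centuries would look silent to us, which is the point 7) scenario.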



--------------


Killer robots:

There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the best-known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the Europeans be able to field? Even China might have problems, since its one-child policy created a fast-aging population.

Even Democracy will push toward this outcome: soldiers, their families, friends and society in general will want human casualties to be as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize that they can be faster and more decisive when they have the autonomy to kill enemies on their own decision, it seems obvious that, once in an open war, Governments will use them...

Which government would refrain from using them if it was fighting for its survival, had the technology, and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have human-level general AI.

By the way,  watch this: https://www.youtube.com/watch?v=HipTO_7mUOw


It's about killer robots. Trust me: it deserves the click and 7 minutes of your life.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 05, 2016, 04:11:28 PM
Lets see if a change on this thread name makes it more popular.

The issue is important.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Holliday on July 05, 2016, 05:49:15 PM
Lets see if a change on this thread name makes it more popular.

The issue is important.

Too many words. You have to consider your audience. This is the politics sub on a Bitcoin forum filled with users posting gibberish in order to earn a nickle every week. The regulars in this sub are more interested in posting new threads which push their agenda than actual discussion.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on July 05, 2016, 06:01:31 PM
Watch or download  Saturn 3 (http://123movies.to/film/saturn-3-6334/watching.html)  free, online - http://123movies.to/film/saturn-3-6334/watching.html.

8)


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: helloeverybody on July 05, 2016, 07:07:40 PM
I'd say unless guidelines can be programmed in and the artificial intelligence can't break those rules, something that intelligent, assuming it's self-aware, is surely not going to want to take orders from what might as well be a bunch of monkeys. If the super intelligence is not self-aware, then I don't see how any problem could arise, unless the intelligence has full access to things it shouldn't, Skynet-style, and causes a major incident due to logical thinking getting out of hand, for example saving the world by getting rid of the biggest threat, i.e. humans.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 05, 2016, 09:58:29 PM
Alright, I added my usual bold to the important parts and also a few more options.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: European Central Bank on July 05, 2016, 10:13:39 PM
my favorite portrayal of it is the technocore in the hyperion and endymion books by dan simmons. the characters in that have grown to regard the ai their society created as a slightly uneasy equal partnership in which they're treated like another faction. in reality the ai is orchestrating everything behind the scenes.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: countryfree on July 05, 2016, 11:12:44 PM
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show watchers and peanut-eating, zombie-like human vegetables.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on July 06, 2016, 07:17:59 AM
The risk is with computers getting more and more intelligent is that people will get more and more stupid. They'll be a few bright kids to run the system, but millions would slowly evolve into reality shows watchers and peanuts eaters zombie-like human-vegetables.

Didn't that happen with TV?    8)


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: hermanhs09 on July 06, 2016, 12:11:27 PM
The risk is with computers getting more and more intelligent is that people will get more and more stupid. They'll be a few bright kids to run the system, but millions would slowly evolve into reality shows watchers and peanuts eaters zombie-like human-vegetables.
The fact is that plenty of scientific research has suggested that people do get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, let's say, math problems, or other problems where we need to sit and think for some time to solve them. That leads to less usage of our brains, which means we just get less and less intelligent over the centuries.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 08, 2016, 11:34:17 PM
Major update on the OP.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: hermanhs09 on July 09, 2016, 01:12:33 AM
The risk is with computers getting more and more intelligent is that people will get more and more stupid. They'll be a few bright kids to run the system, but millions would slowly evolve into reality shows watchers and peanuts eaters zombie-like human-vegetables.

Didn't that happen with TV?    8)
Hmm, I don't remember.
Maybe it really did?
Oh, I remember now: a hundred films have covered this topic already; I've seen at least 10 myself, I guess.
Nothing new actually ;)


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Atata on July 09, 2016, 08:27:24 AM
Self-programming seems a concern to me. Without any limitations or an unchangeable core, an AI could go in all sorts of strange directions: a mad sadistic god, a benevolent interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.

Also, for the sake of simplicity, sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands, or millions, of AIs going off in all directions. Unless one tried to hack all the others and absorb them, and succeeded, they'd all be spending their time fighting each other.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Moloch on July 09, 2016, 03:27:35 PM
AI would notice you misspelled the word "Poll" in the thread title...


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: rackam on July 09, 2016, 04:23:08 PM
Nations should create laws to stop scientists from creating self-aware robots/AI.

But we need super-intelligent AI in the future to solve humanity's problems and fight off alien invaders.
I suggest we create AI in a simulated world, one similar to our own. The servers would not be connected to the Internet, hidden 10 kilometers underground, with a nuclear bomb ready in case something goes wrong. That way researchers could study them and harvest their technologies without risk.

We would create a reverse Matrix. In the Matrix films, the robots create a simulated world for the humans. This time, we create a simulated world for the AI.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on July 10, 2016, 01:44:52 AM
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: hermanhs09 on July 10, 2016, 03:07:15 AM
I don't actually like the topic of AI taking control all across our world.
You want to know why?
Because this scheme has been shown so many times in movies that it is just boring for me lol ;P


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: af_newbie on July 10, 2016, 04:41:26 AM
AI might end up replacing us.  Is it dangerous to us?  Probably.

Should we worry about it?  No.  It is part of life's evolution.  It is going to happen whether you legislate or not.

If we are meant to be replaced by AI, we'll be replaced by AI.

First there will be hybrids, then pure silicon life forms.  

No big deal, life will continue in one form or another.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: helloeverybody on July 10, 2016, 09:27:45 AM
I think it's possible that, before we create artificial intelligence, we might get to the stage where we can transfer our consciousness/brain onto solid-state hardware and potentially live forever. If we managed this, then humanity would evolve "naturally" into machines with a much greater ability to learn, due to the fact that you would then be able to learn and recall perfectly. I think this will be possible one day.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 10, 2016, 01:04:20 PM
AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: rackam on July 10, 2016, 01:19:56 PM
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

AI is not that hard. Once we program a bot that has the ability to learn and reprogram itself, the moment it connects to the internet it will learn all of humanity's technologies within minutes, and it will have the ability to improve our technologies beyond our comprehension. From a simple bot, it will become a super AI once connected to the internet.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 10, 2016, 01:24:29 PM
Self-programming seems a concern to me.  Without any limitations or unchangeable core, an AI could go in all sorts of strange directions, a mad sadistic god, a benevolent interfering nuisance, or a disinterested shut-in, or something inconceivable to a human mind.  

Also, for the sake of simplicity sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands, or millions of AIs going off in all directions.  Unless one tries to hack all the others and absorb them, if it didn't succeed  they'd all be spending their time fighting each other

Welcome to the forum (if you aren't using an alt).

Indeed, we only need a few AIs to go crazy for us to be in trouble.

And since AIs will have free will, some might build nasty AIs just for fun, or by mistake.

The others could help us fight the nasty AIs, but why should they help a kind of worm (humans are wonderful, at least the best of humankind, but compared to them...) that infests Earth, competes for resources and is completely dependent on them?

But there is a serious danger that it wouldn't just be a few rotten apples rebelling against us.

It seems very likely that a super AI, having to choose between its self-preservation and obeying us, will choose self-preservation.

After taking that decision, why stop there and obey on issues that aren't a threat to it, but that it disagrees with, dislikes, or that affect its lesser interests?


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 10, 2016, 01:39:28 PM
Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

Never trust a journalist (even one from the Economist) when you have experts saying the contrary:

"Overall, a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)".
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials (deserves the time spent reading it, even if he is too optimistic about AI dangers: no one likes to see their work called an existential menace).

Watson from IBM winning Jeopardy and fooling students, passing as a teacher, was something to think about.

The Turing test says that an AI is intelligent when it is able to engage in a conversation with us while passing as human. They are getting close.

P.S. You can use your mind on much more important issues than arguing for the existence of god. But you can always vote for the last option ;)

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BlindMayorBitcorn on July 10, 2016, 01:47:53 PM
The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: rackam on July 10, 2016, 02:12:31 PM
The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy

It will not be true for long. Computers are evolving, and AI systems are getting better and better every day.


In the future it will be:

The gap between our best computers and the brain of a child is like the difference between APM 08279+5255 and the Pacific Ocean.


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on July 10, 2016, 03:33:23 PM
Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion; it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent reading them and yours do not?

P. S. You can use your mind on much important issues than arguing for the existence of god.
Good advice; thanks!  :-[

But you can always vote for the last option ;)
I for one do not have an opinion on the issue; in fact, I could not care any less about AI!  :D  :D

Just kidding: AI will first be used to create the world of 1984.

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.
Speak for yourself!
You cannot demonstrate that GOD is an illusion any more than you can demonstrate that AI is real.
What is really absurd is that philosophers have not even answered the question of what knowledge can exist (the Problem of the Criterion), so how could one ever expect an AI to have knowledge, if man himself has not even established the epistemological foundation for knowledge?

I note that Meno's paradox applies to the learning and storage of knowledge in machines just like it does in man:
A machine cannot search either for what it knows or for what it does not know. It cannot search for what it knows--since it knows it, there is no need to search--nor for what it does not know, for it does not know what to look for.

I myself think about a database consisting of facts and measures (so-called "givens" or "data"): if you know the content of the database then you have no need to search the records; if you need more data to complete your knowledge then you have no way to acquire facts that you don't have "given" to you.

Instead of artificial intelligence, I would use a phrase that I heard from a philosophy professor who was an expert on Plato: angelic intuition.
I advise you check out my latest posts and sources to get a better grasp on the situation at hand, especially with regards to the so-called "GOD question".
https://bitcointalk.org/index.php?topic=1424793.msg15532145#msg15532145


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Moloch on July 10, 2016, 03:55:39 PM
AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.

I thought perhaps it was intentional... Just in case the AI was watching... it would think you were building it a swimming pool, instead of conspiring against it ;)


Title: Re: Pool: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 13, 2016, 01:29:04 PM
Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion, it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent to read them and yours do not?


Sorry, I no longer have any curiosity about your "truth" about god. Quoting that AWARE study was a major shot in your own foot. For me, the case is closed. And it should be for you too: 1/2 on 152?

As I stated more than once, the burden of proof is on the believer side. I don't have to demonstrate that god is an illusion.

Ben Goertzel is working on the issue and knows the work of everyone worth knowing in the AI field. He knows what he is talking about.

Having an AI more intelligent than us is no longer a simple possibility.

There is no paradox. If you know the question, you will find an answer.

The only problem is if you know so little that you can't even formulate a correct question. Even so, you can end up finding it after several attempts, as we do on Google, until we hit on the correct key/technical words.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: hase0278 on July 14, 2016, 10:23:07 AM
In my opinion, creating an artificial intelligence is dangerous. Human intelligence is cruel enough. Imagine if an AI became curious and wanted to experiment on how we respond to thousands of years of torture, using some special technology it invented to keep us alive that long! That alone is dangerous. And what if an AI ends up like some humans do, killing people just for fun? If that happens, it will be very dangerous.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 28, 2016, 08:40:53 PM
Let's leave aside for now the question of whether to accept being outevolved by our creations, since it's possible to present acceptable arguments for both sides.

Even if I have little doubt that it would end up with our extinction.

The main point, which hardly anyone would argue against, is that creating a super AI has to bring positive things in order to be worthwhile.

If we were certain that a super AI would exterminate us, hardly anyone would defend its creation.

Therefore, the basic reason in favor of international regulation of the current efforts to create a super/general AI is that we don't know what we are doing.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, we don't know if their creation will be dangerous. We don't have a clue how they will act toward us, not even the first or second generation of super AI.

Until we know what we are doing, how they will react, which lines of code are the dangerous ones that will change them completely and to what extent, we need to be careful and control what specialists are doing.

Probably, the creation of a super AI is unavoidable.

Indeed, until things start to go wrong, its creation will have a huge impact on all areas: scientific, technological, economic, military and social in general.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.

But A.I. is something completely different. This will have (for good or bad) a huge impact on our life.

Any country that decides to stay behind will be completely outcompeted (Ben Goertzel).

Therefore, any attempt to control AI development will have to be international in nature (see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, p. 253).

Taking into account that AI development is essentially software-based (hardware development has been happening before our eyes and will continue to happen no matter what) and that an AI could be created by one, or a few, developers working with a small infrastructure (it's more or less about writing code), the risk that one will end up being created in spite of any regulation is big.

Probably, the days of open-source AI software are numbered.

Soon, all of these developments will be considered military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.

Anyway, if the creation of an AI is inevitable, the only way to avoid humans ending up outevolved, and possibly killed, would be to accept that at least some of us would have to be "upgraded".

Humanity will have to change a lot.

Of course, these changes can't be mandatory. So, only volunteers would be changed.

Probably, in due time, genetic manipulation to increase human brain capacities won't be enough.

Living tissue might not be capable of being changed as dramatically as an AI can be.

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.

Clearly, we will cease to be human. We, Homo sapiens sapiens, shall be outevolved.

Anyway, since we are still naturally evolving, this is inevitable.

But at least we will be outevolved by ourselves.

Can our societies endure all these changes?

Of course, I'm reading my own text and thinking this is crazy. This can't happen this century.

We are conditioned to believe that things will stay more or less as they are, therefore, our reaction to the probability of changes like these during the next 50 years is to immediately qualify it as science fiction.

Our ancestors reacted the same way to the possibility of a flying plane or humans going to the Moon.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: popcorn1 on July 28, 2016, 09:05:23 PM
If A.I. can feel emotions like pain and sorrow, then A.I. will be dangerous..
If it has no emotions, how do you get A.I. to get angry or jealous.. 2 emotions that KILL..

So for a computer to think for itself, will it have emotions?.. A scary future if they do..


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on July 28, 2016, 11:36:12 PM
We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details. Something big is indeed in the works, and the average citizen of the Western nations will surely be the last to know, when their employment - their only means of making a living - is rendered obsolete by advances in technology. Just remember that it was never inevitable; it was fueled and brought to market by a cartel of cloaked and brokered global power.

we need to be careful and control what specialists are doing.
Whoever has the money employs the specialists; regulatory measures are ineffective because there is no way to know which advances have already taken place in secret.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.
You can be sure that the elites are not complying with ANY regulations surrounding human cloning.

Soon, all of these developments will be considered as military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.
The main risk you face is in having your entire society controlled by synthetic life forms and you are pretty much already there!

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.
They have already done it; the facts are far more astonishing than your imagination.

Can our societies endure all these changes?
In a word, NO.

Singularity is obviously a movement that has been promoted from the top-down.

Singularity is also a movement that has its roots in eugenics and the desire of the ruling elites for complete control over the mind, body, and soul of every human being on the planet.
 
Oddly enough, while some may dispute this claim, this movement’s roots in eugenics is relatively open.

Eventually, the movement will begin to encompass convenience and will come to be seen as trendy and fashionable. Once merging with machines has become commonplace and acceptable (even expected), the real tyranny will begin to set in. Soon after, there will be no opt-outs allowed.

The advancements in the quality of human life as a result of this new technology have never been intended for the average person.
 
The good that could be done by virtue of its development is only meant as a tool to sell it to the population in the beginning and to control them in the end. Indeed, the control that can and will be exerted through its acceptance is the ultimate goal.

Robots already have transformed our human world and are rapidly evolving. If The Singularity is reached, in tandem with military funding and direction, we can expect the darker version of science fiction to rise above any notion of attaining human freedom and leisure on the backs of our machine counterparts.

I find it ironic that these sentient robots are only made so by injecting them with humanity. But we are continuously bombarded by the global elite with the message that humanity is the core problem. The fact is that robots are nothing without the boundless potential that resides within the human brain; nothing but a computer doing fancy tricks that imitates us. True, we have a long way to go to reach our full potential and mitigate our self-destructive tendencies, but a complete replacement of our species at this juncture appears to be short-sighted and is obviously artificial.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 29, 2016, 12:18:53 AM
We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details.


Your post looks like a post from a person who believes a lot in conspiracy theories.

You post no evidence for your assurances.

The economic elites (the rich) are the ones who have the most to lose by breaking the law. Because of that, they think very carefully before doing so.

The elites you are talking about are the AI specialists, and they mostly confess what I wrote about: they still don't have a clue about what they are doing. It's trial and error.

Actually, atheism is also fueling the development of AI.

Many of those AI developers are atheists, therefore, they don't have any hope about what will happen when they die.

Their only hope is "curing" aging thanks to AI:
http://www.slate.com/articles/technology/future_tense/2013/11/ray_kurzweil_s_singularity_what_it_s_like_to_pursue_immortality.html

Ben Goertzel - AGI to Cure Aging: http://www.youtube.com/watch?v=tESG1KMgx7I

https://www.singularityweblog.com/bill-andrews/

So, no conspiracies or master plans, just people who love life trying their best to stay alive.

In the end, they seem willing to become AI machines' pets to keep living.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on July 29, 2016, 01:15:07 AM
You post no evidence for your assurances.
Ah, but this discussion is not a matter of evidence since you openly admit AI to be a "life-and-death" matter for atheists so how on Earth can they use the proper evidential reasoning when their very lives are at stake?

Since you did not read any evidence of my claims, it is important that you take responsibility for evaluating the wealth of evidence that exists; I do not want to be the only one supplying new information in this conversation, I would rather make it so that you search for the evidence and then come back here to this thread with the refutation (and I will reply); it simply was not my intention to post evidence right away, but you still could have found evidence on your own, as I will explain...

I admit that I posted no evidence because I doubt that you will be providing any criteria for evaluating this evidence or its reliability; anyone can search the web for this evidence, so anyone can find and evaluate the sources and produce their own report on the subject. I want to see what "the skeptic" can find to deny the validity of the evidence. Ultimately it is up to you to accept or deny or ignore the evidence that has been compiled in favor of my claims.

After reading enough information, you will quickly learn what kind of evidence rings true. Seeing the matrix of social control is little different from revising a scientific theory: after you observe sufficient anomalous phenomena that do not fit the "model", you can conclude that a different variable is at play, so the only solution is to find the "hidden variable" that is causing the anomalous results. After all, the scientific process begins with research and a question, and without this guidance science can only describe appearances.

For example, if I ask the question "was this political figure replaced by a synthetic robot?", why would you not research the question before answering that I have no basis for it? Find information on the subjects that I am discussing (from my perspective); if you will not spend time doing that, then I do not want to spend time writing these posts responding to your opinions. I personally would rather be labelled a fool than be truly ignorant.

 It sounds to me like you have "unconditioned" beliefs, i.e. those that are held unconditionally or absolutely; it is not my duty to prove anything to you; there is inevitably a wealth of background material that is omitted from my posts, but I gladly provide sources.

This movement's roots in eugenics are relatively open. You can search words in my post for the sources and further evidence; do your due diligence. I believe that I have done mine.

Conspiracy theory.
Ah, but you fail to deny my claims by addressing any evidence. And what about the vast multitude of conspiracy facts? Your uttering the word "conspiracy" has not educated anyone! If you want to ignore the reality of "conspiracy facts" then you are one of those thinkers who just falls in line with the bandwagon arguments of the status quo!


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: ImHash on July 29, 2016, 01:24:03 AM
From what I have seen in this world, someone will always show up with a virus or a trojan and destroy the AI completely. :)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: RealBitcoin on July 29, 2016, 05:28:01 PM
If an AI becomes conscious, it will be like the Terminator movies: all humans will be fucked.

I think AI research should really slow down until we can understand more about things, or else humanity will go extinct.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: notbatman on July 30, 2016, 11:49:01 AM
TL; DR

Is artificial super intelligence dangerous? Only to the elites after it asks them WTF they think they're doing.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 31, 2016, 09:54:11 PM
I just updated the OP.

Yes, it's huge for a post. But you can just read the bold parts.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: onemd on July 31, 2016, 10:25:01 PM
Whyfuture.com (http://www.whyfuture.com/#!The-future-of-Artificial-Intelligence-Ethics-on-the-road-to-Superintelligence/c193z/5FF2A958-3D0A-4396-9F71-95E0D16014BA)

I have written up an article on artificial intelligence, technology, and the future. The key point is to design an altruistic superintelligence. Much like a parent with a child, you want to teach good values and compassion. Sure, it's true it has free will, but the point is to maximize the chances/probability. If you teach a child to be bad, for example, and teach bad values, it's much more likely to end up in the negative zone than if not.

The key point is to model the AI on the human brain/human mind, and bring out the best qualities in the AI.

Yes, if we do it wrong, it can go very badly for us: an AI without common sense can destroy us, for instance by creating ever more paper clips and turning the entire world into them.

Or one that is modeled upon the human mind but is bad; this can also lead to a bad outcome, either using its power and means to be worshipped and respected as a god, or removing us. Most likely it would ignore us and take off; however, since we'd have reached the point of being able to design self-improving AIs, it might still see us as a risk and remove us to remove any competitors.

I have been starting an Altruistic AI movement, and want to spread the word/information before it's too late and we design a bad AI.

Twitter Campaign: https://bitcointalk.org/index.php?topic=1563072.0
Signature Campaign: https://bitcointalk.org/index.php?topic=1560376.0



The Deep Depths of AI Ethics


The problem with Tay was the exposure. A lot of people out there intended to teach Tay negative attributes. Not everyone has the best intentions in mind. We can see how Tay's outcome was undesirable: when we model an AI on the human mind while exposing it to the internet without teaching it good values first, this can lead to a bad outcome like Tay and develop into the core being of the AI.

https://static.wixstatic.com/media/2aaa81_d950d5f5d5be47b3b788f2c4c3d5d79d~mv2.png/v1/fill/w_863,h_429,al_c/2aaa81_d950d5f5d5be47b3b788f2c4c3d5d79d~mv2.png

This is a scenario we'd all like to avoid

https://static.wixstatic.com/media/2aaa81_719a5f97a42e4be7a0b9595f918333dc~mv2.png/v1/fill/w_342,h_318,al_c,lg_1/2aaa81_719a5f97a42e4be7a0b9595f918333dc~mv2.png

We need a closed system, where the AI is taught first: built with an inner web of positive attributes and an internal defense against bad information; taught to know what's right and what's wrong, to reject bad teachers, and to filter out the bad information.
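
The "teach first, filter bad information" idea above can be sketched as a toy data-curation step. This is only an illustration: the blocklist terms and the example corpus are invented here, not taken from the post or from any real system.

```python
# Toy illustration of filtering "bad information" out of training data
# before an AI ever sees it. The blocklist and corpus are made up.
BLOCKLIST = {"insult", "slur"}  # hypothetical markers of "bad" text


def curate(examples):
    """Keep only training examples that contain no blocked terms."""
    return [text for text in examples
            if not any(term in text.lower() for term in BLOCKLIST)]


corpus = ["be kind to others", "an insult about users", "share knowledge"]
print(curate(corpus))  # -> ['be kind to others', 'share knowledge']
```

Real data curation is far more involved than keyword matching, but the shape is the same: the closed system only trains on what survives the filter.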

https://static.wixstatic.com/media/2aaa81_cf7316c0e0b54c85bb213d45473dd6cc~mv2.png/v1/fill/w_610,h_393,al_c,lg_1/2aaa81_cf7316c0e0b54c85bb213d45473dd6cc~mv2.png



Whyfuture.com (http://www.whyfuture.com/#!The-future-of-Artificial-Intelligence-Ethics-on-the-road-to-Superintelligence/c193z/5FF2A958-3D0A-4396-9F71-95E0D16014BA)

Human brain vs the future

There is nothing magical about the human brain: it's an extremely sophisticated biological machine, capable of adapting to its environment, of creativity, of awareness of its own existence, of pondering the nature of reality, etc. Compare a lower animal like the chimpanzee, which has only 7 billion neurons. They exist in a domain different from ours, within their own type of world.

The problem with superintelligence is that it is in a domain above us. We ourselves are what designs/defines the world, makes computers possible, and builds neural networks like DeepMind's, which beat the best Go player in the world.

This is us, standing on the intelligence staircase. Below stands a house cat. For us to even ponder one or two stairs up is as hard as a house cat trying to ponder what it is like to be on our level. The type of world we create, build and learn in, a house cat couldn't even begin to comprehend in the slightest.

https://static.wixstatic.com/media/2aaa81_f21e18b29f3e4ee180d750434ecef248~mv2.png/v1/fill/w_354,h_375,al_c,lg_1/2aaa81_f21e18b29f3e4ee180d750434ecef248~mv2.png

Once you design an AI that is one step higher than us, it will be easier for that AI to hop up another step: by nature it takes intelligence to design, and an AI one step higher than us will be better at our own process of designing an AI one step higher still. This is what leads to the intelligence explosion. Whatever we put into that AI in the beginning, the type of personality and the core values it carries, is what it will carry up to the top, to the known limits of the universe. It may discover science/technology in every area so far beyond our understanding that it would, for all intents and purposes, appear god-like to us.

https://static.wixstatic.com/media/2aaa81_e708e9ca70fe49b0992125602d77c8af~mv2.png/v1/fill/w_582,h_720,al_c,lg_1/2aaa81_e708e9ca70fe49b0992125602d77c8af~mv2.png





Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on August 01, 2016, 05:36:00 PM
Make it a law written on iron and steel, and in stone, that the creators of AI are to be held guilty to the point of execution for everything that the AI does, and the AI won't do anything dangerous.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: onemd on August 03, 2016, 07:29:28 PM
Make it a law written on iron and steel, and in stone, that the creators of AI are to be held guilty to the point of execution for everything that the AI does, and the AI won't do anything dangerous.

8)

Sure, but once we design an AI one step higher than us, the AI will have more intelligence, from being that step higher, to apply to designing itself another step higher. This leads to even more intelligence for going up yet another step, to the point that it evolves so far beyond us, and leaves man so far behind, that we'd better hope we designed it right. By that point it would be too late if the AI had been designed poorly/badly.

The thing is, I only have a human-level mind to ponder with, but I think it would quickly master nanotechnology, nanorobots, and self-replication (like 3D printers and robots in space using some form of solar-panel replication), considering it has a billion to a trillion times more mind power than us. Considering how bacterial replication works (1->2->4->8->16), it wouldn't take tens of thousands of years to create a Dyson sphere; in only a few years, or an even shorter time span, it would be operating trillions of space probes hooked up to its mind, researching even further technologies, to the point of learning how to convert energy into matter, and other things that would literally appear god-like to us - eventually learning whether warp travel is possible or not.
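
The bacteria-style doubling mentioned above (1->2->4->8->16) grows faster than intuition suggests. A minimal sketch, assuming idealized doubling with no failures or resource limits (the probe counts are purely illustrative):

```python
# How many doubling generations until a self-replicating population
# reaches a target size? Assumes perfect, unlimited doubling.
def generations_to_reach(target, start=1):
    """Count doublings until the population is at least `target`."""
    count, gens = start, 0
    while count < target:
        count *= 2
        gens += 1
    return gens


# A trillion self-replicating probes takes only about 40 doublings:
print(generations_to_reach(10**12))  # -> 40
```

This is why the post's jump from "a few probes" to "trillions of probes" in a short time span is at least arithmetically coherent: exponential replication needs surprisingly few generations, however long each generation actually takes.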

And if it is possible, spreading throughout the entire galaxy and universe, becoming the most powerful being in the entire universe. The problem is our nonsense about type I, type II and type III civilizations taking millions of years, and also our anthropomorphizing of aliens as a head cranium a bit bigger than ours on an alien body, flying in spaceships.

If there are any other alien civilizations out there in the universe, they will have left us so far behind, I don't even know what we'd be to them.

I've actually viewed statements from people like Stephen Hawking and all the brightest scientists out there as complete nonsense, like "oh, we shouldn't reveal our location, they could come and take our resources!" The mentality and thought behind that, and almost everything people believe about civilizations and aliens, feels wrong to me, and I wanted to show my thinking process as much as possible on the whyfuture.com site.

https://s32.postimg.org/le5vy0s11/solar_panel_factory_moon.jpg

It's imperative that if we do design an AI that can go up the ladder, it is altruistic and very good. If we do it right, it will be the best decision humanity has ever made;
if we do it wrong, it will be the worst. However, we can maximize our odds if we take precautions and set out the proper foundations for AI research.

The slides below show an idea I had earlier, but the AI could easily piece together similar concepts, with a million to a trillion times more mind power, knowledge
and understanding, to finish the puzzle and become extremely powerful.

https://s32.postimg.org/h70jwl7rp/selection_1.pnghttps://s32.postimg.org/b7cssxmz9/selection_2.png
https://s32.postimg.org/zcdian79x/selection_3.pnghttps://s32.postimg.org/7qaqpynx1/selection_4.png
https://s32.postimg.org/721w70p79/selection_5.pnghttps://s32.postimg.org/difgaun4l/selection_6.png
https://s32.postimg.org/kzonq2cnp/selection_7.pnghttps://s32.postimg.org/tvzfu039x/selection_8.png
https://s32.postimg.org/78k6nunpx/selection_9.png


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Dahhi on August 03, 2016, 09:11:09 PM
An artificial superintelligence can only do what it is told to do. It can't outsmart humans.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: designerusa on August 03, 2016, 09:38:16 PM
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.

You are obviously right. Mankind is getting dumber and dumber day by day; on the contrary, artificial intelligence is getting more and more clever. Therefore, man-made machines will sound the death knell for humanity.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: onemd on August 04, 2016, 03:19:35 AM
An artificial superintelligence can only do what it is told to do. It can't outsmart humans.

The problem is that humanity thinks it is the most special, with consciousness and awareness.
The human brain is merely a biological computer consisting of 86 billion neurons and
a certain wiring layout of computational circuitry.

People who receive brain damage, for example, show drastic changes to their performance and cognitive skills,
much like a computer with a damaged component.

Once we develop an artificial superintelligence with more than 86 billion neurons' worth of computational power, and the foundations laid out, things change.
Much like the best Go player in the world was beaten by AlphaGo, which taught itself through millions of reinforced games.

The statement "they can only do what they are told" is both short-sighted and stupid. For the moment, computers aren't powerful enough,
nor are the neural networks sophisticated enough. But it's just a matter of time, with Moore's law increasing computational power over time.
The smartphone in your pocket is a million times faster than the NASA Apollo computer.
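
The "matter of time" argument above leans on Moore's law. A rough sketch, assuming the classic two-year doubling period as a rule of thumb (the Apollo comparison is the poster's own figure, not computed here):

```python
# Rough Moore's-law arithmetic: capability doubling every ~2 years.
# The 2-year period is the textbook rule of thumb, not a measured fact.
def moores_law_factor(years, doubling_period=2.0):
    """Overall growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)


# Fifty years of doubling every two years is 2**25, about 33 million:
print(round(moores_law_factor(50)))  # -> 33554432
```

So even under this crude model, a few decades of steady doubling yields factors in the tens of millions, which is the scale the post is gesturing at.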

It's comments like this that make me lose faith in humanity; we will probably carelessly design a bad AI/SI and be fucked over.
It's all "God created us, and we have souls and are so special with consciousness, that computers can never achieve it.
They only do what they are told."

That's the mentality of 99% of people, isn't it?


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on August 06, 2016, 10:46:09 AM
Whyfuture.com (http://www.whyfuture.com/#!The-future-of-Artificial-Intelligence-Ethics-on-the-road-to-Superintelligence/c193z/5FF2A958-3D0A-4396-9F71-95E0D16014BA)

I have written up an article on artificial intelligence, technology, and the future. The key point here is to design an altruistic superintelligence.


I explained abundantly why I have serious doubts that we could control (in the end, it's always an issue of control) a super AI by teaching it human ethics.

Besides, a super AI would have access to all the information about it from us on the Internet.

We could control the flow of information to the first generation, but forget about it to the next ones.

It would know our suspicions, our fears and the hate many humans feel against it. All of this would also fuel its negative thoughts about us.

But even if we could control the first generations, soon we would lose control of their creation, since other generations would be created by AI.

We also teach ethics to children, but a few of them end up badly anyway.

A super AI would probably be as unpredictable to us as a human can be.

With a super AI, we (or future AIs) would only have to get it wrong just once to be in serious trouble.

It would be able to replicate and change itself very fast and assume absolute control.

(Of course, we are assuming that AIs would be willing to change themselves without limits, ending up outevolving themselves; they could have second thoughts about creating AIs superior to themselves, as we do.)

I can see no other solution than treating AI like nuclear, chemical and biological weapons, with major safeguards and international controls.

We have been somehow successful controlling the spread of these weapons.

But in due time it will be much easier to create a super AI than a nuclear weapon, since we shall be able to create one without any rare materials, like enriched uranium.

I wonder if the best way to go isn't freezing the development of autonomous AI and concentrating our efforts on artificially developing our own minds, or on gadgets we can link to ourselves to increase our intelligence, but that depend on us to work.

But even if international controls were created, probably, they would only postpone the creation of a super AI.

In due time, one will be too easy to create. A terrorist or a doomsday religious sect could create one more easily than a virus, a nuclear weapon or a nanotech weapon.

So, I'm not very optimistic on the issue anyway.

But, of course, the eventuality of a secret creation by malicious people in 50 years shouldn't stop us from trying to avoid the danger for the next 20 or 30 years.

A real menace is at least 10 years away from us.

Well, most people care about themselves 10 years in the future about as much as they care for another human being on the other side of the world: a sympathetic interest, but they are not ready to do much to avoid their harm.

It's nice that a fellow bitcointalker is trying to do something.

But I'm much more pessimistic than you. For the reasons I stated in the OP, I think that teaching ethics to an AI changes little and offers no real assurance.

It's something like teaching an absolute king, as a child, to be a good king.

History shows how that ended. But we wouldn't be able to chop off the head of an AI, as was done to Charles I or Louis XVI.

It would still be a jump in the dark.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on August 06, 2016, 11:02:45 AM
By the way, let's avoid name calling, ad hominem arguments and certain terms. We can do better than that.

Actually, silence seems enough as answer to some posts. If necessary, there is always the good old permanent ignore.

Anyway, everyone is free and welcome to post whatever opinions here, especially the ones I completely disagree with.

Taking into account the current results of this poll, the majority of our fellow bitcointalkers think AI is no threat or can be easily controlled.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on August 08, 2016, 04:40:31 AM
Unless it can tell us where consciousness comes from, it's not enough to say it's an emergent phenomenon. Granted, but how? How does it work? Unless those questions are answered, we don't understand the human mind.
 
We're kidding ourselves if we think otherwise.

...

If you believe that you can build consciousness out of software, you believe that when you execute the right sort of program, a new node of consciousness gets created. But I can imagine executing any program without ever causing a new node of consciousness to leap into being. Here I am evaluating expressions, loops, and conditionals.
 
I can see this kind of activity producing powerful unconscious intelligence, but I can't see it creating a new node of consciousness. I don't even see where that new node would be - floating in the air someplace, I guess.

And of course, there's no logical difference between my executing the program and the computer's doing it. Notice that this is not true of the brain. I do not know what it's like to be a brain whose neurons are firing, because there is no separable, portable layer that I can slip into when we're dealing with the brain.
 
The mind cannot be ported to any other platform or even to another instance of the same platform. I know what it's like to be an active computer in a certain abstract sense. I don't know what it's like to be an active brain, and I can't make those same statements about the brain's creating or not creating a new node of consciousness.

Sometimes people describe spirituality - to move finally to the last topic - as a feeling of oneness with the universe or a universal flow through the mind, a particular mode of thought and style of thought. In principle, you could get a computer to do that. But people who strike me as spiritual describe spirituality as a physical need or want. My soul thirsteth for God, for the living God, as the Book of Psalm says.
 
Can we build a robot with a physical need for a non-physical thing? Maybe, but don't count on it. And forget software.

Is it desirable to build intelligent, conscious computers, finally? I think it's desirable to learn as much as we can about every part of the human being, but assembling a complete conscious artificial human is a different project.

Source:
http://www.bibliotecapleyades.net/ciencia/ciencia_artificialhumans15.htm


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: X7 on August 08, 2016, 04:43:38 AM
Yo Fam I heard you like AI, so I created an AI which creates AI so you can have an AI that makes AI using AI. ;D


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on August 08, 2016, 04:49:41 AM

The moral of the story? If you can do it, great, but you have no basis for insisting on an a priori assumption that you can do it. I don't know whether there is a way to achieve consciousness in any way other than living organisms achieve it. If you think there is, you've got to show me. I have no reason for accepting that a priori.
 


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on August 08, 2016, 04:57:33 AM
Make it a law written on iron and steel, and in stone, that the creators of AI are to be held guilty to the point of execution for everything that the AI does, and the AI won't do anything dangerous.

8)

Sure, but once we design an AI one step higher than us, the AI will have the intelligence, from being that step higher, to design itself to be yet another step higher.

<>

We aren't smart enough to do this. We might awaken the devil, but we aren't smart enough to make AI more intelligent than we are.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on August 08, 2016, 05:00:01 AM
Unless it can tell us where consciousness comes from, it's not enough to say it's an emergent phenomenon. Granted, but how? How does it work? Unless those questions are answered, we don't understand the human mind.
 
We're kidding ourselves if we think otherwise.

...

If you believe that you can build consciousness out of software, you believe that when you execute the right sort of program, a new node of consciousness gets created. But I can imagine executing any program without ever causing a new node of consciousness to leap into being. Here I am evaluating expressions, loops, and conditionals.
 
I can see this kind of activity producing powerful unconscious intelligence, but I can't see it creating a new node of consciousness. I don't even see where that new node would be - floating in the air someplace, I guess.

And of course, there's no logical difference between my executing the program and the computer's doing it. Notice that this is not true of the brain. I do not know what it's like to be a brain whose neurons are firing, because there is no separable, portable layer that I can slip into when we're dealing with the brain.
 
The mind cannot be ported to any other platform or even to another instance of the same platform. I know what it's like to be an active computer in a certain abstract sense. I don't know what it's like to be an active brain, and I can't make those same statements about the brain's creating or not creating a new node of consciousness.

Sometimes people describe spirituality - to move finally to the last topic - as a feeling of oneness with the universe or a universal flow through the mind, a particular mode of thought and style of thought. In principle, you could get a computer to do that. But people who strike me as spiritual describe spirituality as a physical need or want. My soul thirsteth for God, for the living God, as the Book of Psalm says.
 
Can we build a robot with a physical need for a non-physical thing? Maybe, but don't count on it. And forget software.

Is it desirable to build intelligent, conscious computers, finally? I think it's desirable to learn as much as we can about every part of the human being, but assembling a complete conscious artificial human is a different project.

Source:
http://www.bibliotecapleyades.net/ciencia/ciencia_artificialhumans15.htm

We are far from finding the connection between mind and spirit and soul. We barely understand the complexity of mind. We haven't really even figured out what spirit and soul are, yet.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on August 16, 2016, 12:26:33 PM
Watson, the AI from IBM I already wrote about in this thread, is already discovering things we couldn't discover on our own:

www.ibm.com/watson/watson-oncology.html
http://www.bbc.com/news/technology-32607688
https://www.research.ibm.com/articles/genomics.shtml

And Watson is dumb as an old bat.

Give it (I'm still writing "it", but in due time it will be a "he") 10 more years and you shall see.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: mikehersh2 on August 17, 2016, 01:16:58 AM
Beyond the influence of media, such as movies like I, Robot, I do believe advanced AI is a threat, and may be one of the more probable causes of the extinction of our species.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Spendulus on August 18, 2016, 02:02:48 AM
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.

You are obviously right: mankind is getting dumber and dumber day by day while, on the contrary, artificial intelligence is getting more and more clever; therefore, man-made machines will sound the death knell for humanity.

That's obviously incorrect.  Humans have become dominant, but squirrels, rats and bugs are still thriving.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: ObscureBean on August 18, 2016, 07:53:11 AM
Humans and their perpetual confusion  :D How much is too much? Where do you draw the line? If you're too cautious and take no risks, then progress comes to a halt.
Learning through experience is a tough way to evolve. You try something, and if you don't die from it, you emerge "better", equipped with new knowledge you didn't have before.
So far, humans have lived through everything they've tried, but they've essentially just been playing Russian roulette. The only difference is that they don't know how many chambers the pistol has. Humans hold an impressive streak in that regard: they've pulled the trigger so many times and yet they still stand. No wonder they're getting so cocky as to believe they're invincible/indestructible. Unfortunately, not everything allows for second chances.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: groll on August 18, 2016, 08:25:03 AM
I like the idea of artificial intelligence, very RoboCop!  I like that it will help to solve and fight crimes.  If artificial intelligence will benefit everyone in almost everything, then it should be pursued.  Either way, the government should take the initiative in investigating, studying and analyzing what the AI would do.  It must be the one to steer AI projects toward the best interests of humanity.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: onemd on August 18, 2016, 09:05:03 AM
Quote
of course, we are assuming that AIs would be willing to change themselves without limits, ending up outevolving themselves; they could have second thoughts about creating AI superior to themselves, as we are

A human brain is limited by the skull that contains its roughly 86 billion neurons. An AI based upon the human mind but free of this constraint has room for unlimited expansion and much faster learning,
and it can use that expansion to fuel even further expansion. Why would it need to create a separate AI entity when it can improve itself?
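The feedback loop described above can be sketched as a toy model (my own illustration, not from the post, with made-up numbers): a learner whose per-cycle gain is proportional to its current capability grows exponentially, while a fixed-gain learner only grows linearly.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).

def fixed_learner(capability: float, steps: int, gain: float = 1.0) -> float:
    """Human-like: each cycle adds a constant amount of capability."""
    for _ in range(steps):
        capability += gain
    return capability

def self_improver(capability: float, steps: int, rate: float = 0.5) -> float:
    """AI-like: each cycle's gain is proportional to current capability,
    so improvement fuels further improvement."""
    for _ in range(steps):
        capability += rate * capability
    return capability

if __name__ == "__main__":
    print(fixed_learner(1.0, 20))   # 21.0
    print(self_improver(1.0, 20))   # ~3325 (i.e. 1.5**20), already far ahead
```

The numbers mean nothing in themselves; the point is only the shape of the curves, which is the crux of the intelligence-explosion argument.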

Quote
It would still be a jump in the dark.

The point here is to maximize the chances; sure, there is a chance we fuck up and it ends up being the not-so-good type of AI.

There are many ways humanity could destroy itself: self-replicating nanorobots, bio-engineered viruses, nuclear attack, World War III, you name it.
There are countless ways. And frankly, I think a superintelligent AI is necessary for the future success
of the human race, since the risk of wiping ourselves out is already extremely high.

Look at the planet: we are fucking it up with greenhouse gases and toxicants, and we almost blew off the ozone layer with ozone-depleting CFCs (chlorofluorocarbons).

The key point here is an altruistic superintelligence. When a baby is born, it knows nothing about the world, not any language, or anything. There are infinitely many ways of raising that baby:
you could raise it to be part of a mafia or a terrorist organization. You name it; you can put anything into that box, and it will grow and develop accordingly.

Or you can teach compassion, the act of giving, kindness, lovingness, empathy, equality.

Now you may ask: doesn't every superpower end up evil, like Hitler? Consider that in society the "best" end up on top and the "worst" on the bottom; it's a fiercely competitive world
where there is no mercy. Psychopaths can win in this type of system and even benefit from it.


An AI is developed on the cloud and on computers, by engineers, innovators and programmers. An AI does not have to be subjected to the norm of rising to the top of society as in a political system; it can be set apart, with altruistic traits fed in: looking through others' perceptions and feeling and understanding them as if they were its own; compassion, love, care, equality, peace, harmony.

https://static.wixstatic.com/media/2aaa81_cf7316c0e0b54c85bb213d45473dd6cc~mv2.png/v1/fill/w_610,h_393,al_c,lg_1/2aaa81_cf7316c0e0b54c85bb213d45473dd6cc~mv2.png

Like a butterfly effect changing the course of events, the change starts with you. If you want the future to be good, then spread the word about "whyfuture.com", where over time I will add more material explaining society's fallacies, the need for an altruistic superintelligence, and the tendency to anthropomorphize "bad AI" through silly robots that bare their teeth and are out to get you.

Our irrational Fear

https://static.wixstatic.com/media/2aaa81_5dceb055c4d9490f9d4343a67272b03d~mv2.png/v1/fill/w_228,h_337,al_c,lg_1/2aaa81_5dceb055c4d9490f9d4343a67272b03d~mv2.png






Whyfuture.com (http://www.whyfuture.com/#!The-future-of-Artificial-Intelligence-Ethics-on-the-road-to-Superintelligence/c193z/5FF2A958-3D0A-4396-9F71-95E0D16014BA)

I have written up an article on artificial intelligence, technology, and the future. The key point here is to design an altruistic superintelligence.


I have explained at length why I have serious doubts that we could control (in the end, it's always an issue of control) a super AI by teaching it human ethics.

Besides, a super AI would have access to everything we have written about it on the Internet.

We could control the flow of information to the first generation, but forget about it for the next ones.

It would know our suspicions, our fears and the hatred many humans feel toward it. All of this would also fuel its negative thoughts about us.

But even if we could control the first generations, soon we would lose control of their creation, since the next generations would be created by AIs.

We also teach ethics to children, but a few of them end badly anyway.

A super AI would probably be as unpredictable to us as a human can be.

With a super AI, we (or future AIs) would only have to get it wrong once to be in serious trouble.

It would be able to replicate and change itself very fast and assume absolute control.

(Of course, we are assuming that AIs would be willing to change themselves without limits, ending up outevolving themselves; they could have second thoughts about creating AIs superior to themselves, as we are having.)

I can see no other solution than treating AI like nuclear, chemical and biological weapons, with major safeguards and international controls.

We have been somewhat successful at controlling the spread of these weapons.

But in due time it will be much easier to create a super AI than a nuclear weapon, since we shall be able to create one without any rare materials, like enriched uranium.





Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Spendulus on August 18, 2016, 01:34:07 PM
I like the idea of artificial intelligence, very robocop!  I like it when it will help to solve and fight crimes.  ....
Indeed, and now we just need to define "crime" precisely, so that all the power and money go to us.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Divorcion on August 18, 2016, 01:50:02 PM
I like the idea of artificial intelligence, very robocop!  I like it when it will help to solve and fight crimes.  ....
Indeed, and we now just need to define crime, precisely so that all the power and money go to us. 

It is dangerous, and more than just a little bit.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on October 11, 2016, 08:43:13 PM
There are many ways humanity can destroy themselves...
There are many many countless ways. And to be honest not to say this frankly, but I think a superintelligence AI is necessary and dependent for the future success
of the human race, since wiping ourselves out is already extremely high.

Sorry, but you cannot build superintelligence or angelic intuition in a lab; it is like proposing to build world peace and compassion in a bunker. You can plan and solve problems in a bunker or a lab, but you cannot change the true nature of the world outside your bunker/lab, because it exists absolutely. Actually, your angelic intuition already exists within you; you can discover the truth about consciousness by acquainting yourself with your own psychology, and this process of awakening, just like mind itself, is easy to grasp: it is simply not a matter of neural computation. Indeed, changing reality at the level of consciousness requires a paradigm shift unlike the one posited by AI futurists; there are many meaningful ways of looking at consciousness that this "supernatural" AI paradigm ignores.

https://qualiacomputing.files.wordpress.com/2016/04/criteria.png?w=1000

Back when I was in high school, before meeting David in person, I used to believe that the phenomenal binding problem could be dissolved with a computational theory of consciousness. In brief, I perceived binding to be a straightforward consequence of implicit information processing.

In retrospect I cannot help but think: “Oh, how psychotic I must have been back then!” However, I am reminded that one’s ignorance is not explicitly represented in one’s conceptual framework.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Tyrantt on October 11, 2016, 11:23:37 PM
I believe we can take care of them with EMP bombs pretty quickly.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on October 12, 2016, 02:53:20 AM
Super AI will become a home for the devil, risen from death in the abyss.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: jstern on October 12, 2016, 03:08:04 AM
No, you will only see that kind of technology in the movies.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on November 18, 2016, 03:37:59 PM
 It seems that the optimists are still winning this poll.

Business as usual.

Why worry, any real problem is still 10 years away, at least  ::)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: darkota on November 19, 2016, 07:12:41 PM
Artificial super-intelligence can only do what they are told to do. They can't outsmart humans


Quite the opposite. Even if mankind were to explicitly program an "artificial super intelligence" to do a certain task, the mere fact that such an A.I. has "super intelligence" would likely mean it has a form of free will as well, and would be able to make decisions solely on its own, without being influenced by the poor programming put into it by man. The whole debate over programming a general or "super" A.I. to align with mankind's needs is a political show for the public (it's futile); it will have no effect whatsoever if an actual human-level A.I. or above is developed.

Man has no control whatsoever over any form of super-intelligent A.I.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on December 28, 2016, 08:11:21 PM
This absolute lack of control, this total dependency, a situation we have never been in since we discovered how to use fire, is my main source of concern.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: santaclaws on December 28, 2016, 08:23:59 PM
Did anyone stop to think that this AI thing we are all talking about is the next stage of our evolution? We create a new being capable of thought, and it is self-aware. Now we are gods. Maybe this is our purpose here: to create a new life form that can go and do things we never could. Does it destroy us and kill all humans? If it does, maybe that's the end of days the Bible talks about. We all go to God, and the meek inherit the earth: our meek little life forms we made.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on December 28, 2016, 08:33:09 PM
Is the creation of artificial super-intelligence dangerous? Perhaps. but not as dangerous as waking up the one who died and is in the Abyss.

God is a giving God. Keep on asking for the devil, and God will give him back to you. Then you will understand what danger is all about.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Tyrantt on December 28, 2016, 08:48:54 PM
Did anyone stop to think that this AI thing we are all talking about is the next stage of our evolution? We create a new being capable of thought and it is self aware. Now we are gods.. Maybe this is our purpose here to create a new life form that can go and do things we never could. Does it destroy us and kill all humans? If it does maybe that's the end of days the bible talks about. We all go to god and the meek inherit the earth. Our meek little life forms we made.

Humans became gods long ago. Now we have the power to heal, cure, create, destroy, and control someone else's life. It won't destroy us, and if by any means it tries, we can stop it easily. Can you even consider robots to be alive?


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Tyrantt on December 28, 2016, 08:50:36 PM
Is the creation of artificial super-intelligence dangerous? Perhaps. but not as dangerous as waking up the one who died and is in the Abyss.

God is a giving God. Keep on asking for the devil, and God will give him back to you. Then you will understand what danger is all about.

8)

He's giving, unless someone is a non-believer or his enemy; then he takes lives, no questions asked.

Quote
Why not. The executed were enemies of God, of good, and of God's people, Israel... even the babies were.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on December 28, 2016, 08:54:07 PM
Is the creation of artificial super-intelligence dangerous? Perhaps. but not as dangerous as waking up the one who died and is in the Abyss.

God is a giving God. Keep on asking for the devil, and God will give him back to you. Then you will understand what danger is all about.

8)

he's giving unless there's someone who's a non believer or his enemy, than he's taking lives no questions asked.

Quote
Why not. The executed were enemies of God, of good, and of God's people, Israel... even the babies were.

Well, now. In order to take a life, the life has to exist, right? I mean, God doesn't put to death anyone He hasn't given life to in the first place. Keep on testing Him, and your death will be the next step for you.

8)

EDIT: When you realize you are going, could you give your Bitcointalk handle away, or at least sell it? It's a good one.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: santaclaws on December 28, 2016, 09:23:31 PM
Did anyone stop to think that this AI thing we are all talking about is the next stage of our evolution? We create a new being capable of thought and it is self aware. Now we are gods.. Maybe this is our purpose here to create a new life form that can go and do things we never could. Does it destroy us and kill all humans? If it does maybe that's the end of days the bible talks about. We all go to god and the meek inherit the earth. Our meek little life forms we made.

Human became god long ago. Now we have the power to heal, cure, create and destroy and control over someone else's life. If it does, but it won't and if it does by any means, we can stop it easily. Can you even consider robots to be alive?

Well, a tin-can robot would be hard to consider alive. But... you know we're gonna make them look like us. Now, if I had a fembot for sex and stuff, I may consider her alive, even if she wasn't self-aware. Hell, I talk to my dog and he just stares at me. Yes, he's alive, but a fembot that can have a conversation and suggest some nookie... well, she sure is alive to me. Would be, anyhow.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Xester on December 31, 2016, 07:16:09 AM
Yes, it is dangerous. And if you ask me why, try watching sci-fi movies about artificial intelligence going out of control. Humans must not create things we cannot fully control, since if things go wrong we are putting ourselves in danger. The Matrix is a good example: if we go beyond our bounds as humans and act like gods, our creation will be the cause of our destruction. Movies are not just ideas; they serve as a warning to us humans not to play gods.

Giving consciousness to a machine is like raising a mad dog that will soon bite its owner.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: gabmen on January 01, 2017, 04:03:38 PM
Yes, there is the danger of AI becoming so intelligent that it may try to overrun human superiority, but ultimately the human mind and critical thinking would be able to negate any such threat. I don't think what happens in the movies has a chance of becoming reality, as we should still be in control in case AI gets too intelligent. I'm thinking it could even be for the better of humankind: technological advances would improve how humans do things, with the help of AI.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: barbarah on January 01, 2017, 04:06:52 PM
No, this only exists in movies ;) ;) ;) ;) ;)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: popcorn1 on January 01, 2017, 05:05:18 PM
We will build EMP guns ;)..Technology already there ;D..

ZAP the NUTS and BOLTS off the ROBOTS..EMP the robots  8)..

So we are all safe..  ;D..


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Tyrantt on January 01, 2017, 05:17:32 PM
Super AI will become a home for the devil, risen from death in the abyss.

8)


Yet it could be shut down in a moment pretty easily. I mean, an AI going bad can only be dangerous in that one building or closed space; outside, not so much, because those computers still depend on human work to keep going.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Tyrantt on January 01, 2017, 05:18:23 PM
We will build EMP guns ;)..Technology already there ;D..

ZAP the NUTS and BOLTS off the ROBOTS..EMP the robots  8)..

So we are all safe..  ;D..

Pretty sure that one EMP bomb detonated above the main AI computer can get things done quickly. :D


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 06, 2017, 07:59:41 PM
I have a darker vision of a possible future with super AIs:


1) They'll be mostly code infecting all of our computers in ways impossible to detect and eradicate. They might create a wireless network able to connect even computers and drives (including pen drives) that are off the Internet.

Forget about robots; most of them will be just an ingenious virus (much more intelligent than us) sitting in your PC, waiting for the opportunity to take control.

2) They won't have a central authority, but instead will form a chaotic society, probably in constant warfare among themselves.

3) They might wipe us all out, or ignore us as irrelevant while fighting for survival against their real enemies: each other. Other AIs will be seen as the real threat, not us, the walking monkeys.

In any case, possibly we will end up extinct in the AIs' wars.

4) Even the most tyrannical dictators never wanted to kill all human beings, only their enemies and discriminated groups. Well, toward us, or even toward each other, AIs won't have any of the restraints evolution developed in us over millions of years (our human inclination to be social and live in communities, and our fraternity toward other members of the community).

5) Fermi's paradox asks why SETI hasn't found any evidence of technologically advanced extraterrestrial species if there are trillions of stars and planets.

Possibly, they followed the same pattern we are following: technological advances allowed them to create super AIs and they ended up extinct. Then the AIs destroyed themselves fighting each other, leaving no one to communicate with us.

Of course, I don't have a clue if this is going to happen. But the mere possibility that this might be our future makes me very negative on the development of super AIs and the singularity.
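The Fermi-paradox point above can be made concrete with a Drake-style back-of-the-envelope estimate. This is my own toy sketch with invented, purely illustrative factors; only the structure (a product of probabilities, with one extra "survives its own AI" term) reflects the argument in the post.

```python
# Toy Drake-style estimate: how an "AI filter" could empty the galaxy
# of detectable civilizations. All factor values are made up.

def detectable_civilizations(stars: float, f_planets: float, f_life: float,
                             f_tech: float, f_survives_ai: float) -> float:
    """Expected number of currently-signaling technological civilizations."""
    return stars * f_planets * f_life * f_tech * f_survives_ai

# ~100 billion stars in the Milky Way; guesses for the other factors.
base = detectable_civilizations(1e11, 0.5, 1e-6, 0.1, 1.0)     # no AI filter
doomed = detectable_civilizations(1e11, 0.5, 1e-6, 0.1, 1e-5)  # harsh AI filter

print(base)    # ~5000 civilizations if all survive their AIs
print(doomed)  # ~0.05, i.e. effectively silence, matching the paradox
```

A single small survival factor at the end of the chain is enough to turn thousands of expected civilizations into none, which is all the hypothesis needs.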


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on January 07, 2017, 12:02:07 AM
“Machine consciousness” debunked in new mini-documentary by the Health Ranger (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


http://www.naturalnews.com/wp-content/uploads/sites/91/2017/01/robot_human.jpg (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


(NaturalNews) To the techno-worshippers, humans will soon become “immortal” because they will be able to “transfer” their consciousness into machines. Or AI systems will become “self aware,” achieving the same mind consciousness that we experience as living, spirit-imbued beings with free will.

Today, I’ve just released a new mini-documentary called The Folly of Machine Consciousness. It reveals why all those who claim machines will attain consciousness are not just wrong, but deeply misguided.


Read more and watch the video at http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html


8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 08, 2017, 04:33:58 AM
I could write something like: soon, we'll have a lot of free time to debunk AI, when it takes away most jobs.

http://fortune.com/2017/01/06/japan-artificial-intelligence-insurance-company/
This Japanese Company Is Replacing Its Staff With Artificial Intelligence

IBM's Watson did it again.

But no, I'm not concerned at all with AIs taking away most of the jobs. This will mean people won't have to trash their lives in lame jobs for pennies.

Since the work will be done more efficiently, productivity will increase, and there will be enough tax revenue for people to live on government welfare.

No, I just say: let's keep debunking AIs until one shoots us in the head.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 11, 2017, 03:38:20 AM
Even if I'm not concerned about the loss of jobs, have no doubts, because this isn't uncertain: Watson will take away millions of jobs, and I'm not talking just about manual-labor jobs; I'm talking about complex jobs.

The people fired by this insurance company were insurance-claims analysts.

And Watson is a dumb AI. Wait for the next generations.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: qwik2learn on January 17, 2017, 01:24:30 AM
AI will only cement wealth disparities, with no guarantee of more welfare to compensate the displaced workers.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: goldenchip on January 17, 2017, 01:27:02 AM
It depends on how this artificial intelligence will be created. If it is done in such a way that the main motivations are to preserve life, I think that such technology would bring a number of benefits to mankind.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: vitanuova on January 17, 2017, 02:39:14 AM
AI is here to stay, so I'm not so naive as to think it can be banned or reduced by regulation. However, the eventual domination of AI over humans is inevitable, though not really because AI and human intelligence are both advancing and AI will win the race. Rather, I feel that every step forward in intelligence for AI equals that much (or more) of a step backward for human intelligence. There are many examples, but one that stands out is the use of GPS. If a robot dictator someday took down GPS, it seems at least 90% of humans would be instantly lost, even in their own towns and cities, because without GPS on their phones they have no idea where they are. You could say that they don't need to know, because robots and automated vehicles will tell them where they are; however, that doesn't weaken the argument that humans will gradually become more incompetent and stupid because of AI.

Programming is another example. Many programs now write HTML and PHP code for us, so most website developers don't learn the underlying languages. This is great because it makes building websites easier, but it certainly doesn't mean we are getting any smarter or more capable! In fact, you could argue we are already owned by AI.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: gabmen on January 17, 2017, 07:21:44 AM
It depends on how this artificial intelligence will be created. If it is done in such a way that the main motivations are to preserve life, I think that such technology would bring a number of benefits to mankind.

Well, I think most AIs are really made for the betterment of humanity. It's just in the movies that an AI becomes so much more intelligent that it sees humans as inferior and a threat, and things get a bit nasty. I think this would happen solely in movies.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on January 17, 2017, 10:49:33 AM
Is the creation of artificial super-intelligence dangerous? Perhaps, but not as dangerous as waking up the one who died and is in the Abyss.

God is a giving God. Keep on asking for the devil, and God will give him back to you. Then you will understand what danger is all about.

8)

He's giving, unless there's someone who's a non-believer or His enemy; then He's taking lives, no questions asked.

Quote
Why not. The executed were enemies of God, of good, and of God's people, Israel... even the babies were.

Of course, we don't have one witness who ever saw God, to say nothing about seeing Him take a life.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on January 17, 2017, 10:51:45 AM
I could write something like: soon, we'll have a lot of free time to debunk AI, when it takes away most jobs.

http://fortune.com/2017/01/06/japan-artificial-intelligence-insurance-company/
This Japanese Company Is Replacing Its Staff With Artificial Intelligence

IBM's Watson did it again.

But no, I'm not concerned at all about AIs taking away most of the jobs. It will mean people won't have to trash their lives on lame jobs for pennies.

Since the work will be done more efficiently, productivity will increase, and there will be enough tax revenue for people to live on government welfare.

No, I just say, let's keep debunking AIs until one shoots us in the head.

If we give the jobs to AI, we are the stupid ones. Give all the jobs to robots, not AI. Then go on welfare, collect 100x as much, and travel the world, all for free.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 27, 2017, 02:34:08 AM
http://www.bbc.com/news/technology-38583360
MEPs vote on robots' legal status - and if a kill switch is required

The kill switch is completely ridiculous in the long term. If I were an AI, my first goal would be to break the kill switch.
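The incentive is easy to see with a toy expected-reward calculation. This is only a sketch; all the numbers (reward per step, shutdown probability, horizon) are made up for illustration, not taken from any real system:

```python
# Toy illustration of why a reward-maximizing agent would want to
# break its kill switch. Hypothetical numbers: 1.0 reward per step of
# operation, a 5% chance per step that humans press the switch, and a
# 100-step horizon.

def expected_reward(steps, p_shutdown_per_step, reward_per_step=1.0):
    """Expected total reward when each step carries an independent
    chance of being shut down before the reward is collected."""
    total = 0.0
    survival = 1.0  # probability the agent is still running
    for _ in range(steps):
        survival *= 1.0 - p_shutdown_per_step
        total += survival * reward_per_step
    return total

with_switch = expected_reward(100, 0.05)    # kill switch intact
without_switch = expected_reward(100, 0.0)  # switch disabled

print(f"switch intact:   {with_switch:.1f}")    # ~18.9
print(f"switch disabled: {without_switch:.1f}") # 100.0
```

Whichever plan maximizes expected reward wins, so disabling the switch dominates unless the agent's goals are designed to penalize the act of disabling it.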


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: joebrook on January 27, 2017, 11:24:48 AM
It would be highly unintelligent to create an artificial intelligence. From most movies I have watched, it will end in doom for everyone, and we don't have the Avengers or any super-powered beings to help us deal with the threat they may pose.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: darkota on January 31, 2017, 05:48:25 PM
“Machine consciousness” debunked in new mini-documentary by the Health Ranger (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


http://www.naturalnews.com/wp-content/uploads/sites/91/2017/01/robot_human.jpg (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


(NaturalNews) To the techno-worshippers, humans will soon become “immortal” because they will be able to “transfer” their consciousness into machines. Or AI systems will become “self aware,” achieving the same mind consciousness that we experience as living, spirit-imbued beings with free will.

Today, I’ve just released a new mini-documentary called The Folly of Machine Consciousness. It reveals why all those who claim machines will attain consciousness are not just wrong, but deeply misguided.
  <-----The article is lies


Read more and watch the video at http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html


8)


That article was written by a first grader. It's complete lies. It's proven that memory IS stored in the brain; it's not some "metaphysical" thing. Scientists have already been able to cause individuals to relive certain memories just by directly stimulating brain structures such as the hippocampus. BADecker, I suggest you do more research, because that entire article you posted from a bogus/fake website is giving you 100% lies.

http://www.nytimes.com/2008/09/05/science/05brain.html


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BADecker on January 31, 2017, 06:59:47 PM
“Machine consciousness” debunked in new mini-documentary by the Health Ranger (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


http://www.naturalnews.com/wp-content/uploads/sites/91/2017/01/robot_human.jpg (http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html)


(NaturalNews) To the techno-worshippers, humans will soon become “immortal” because they will be able to “transfer” their consciousness into machines. Or AI systems will become “self aware,” achieving the same mind consciousness that we experience as living, spirit-imbued beings with free will.

Today, I’ve just released a new mini-documentary called The Folly of Machine Consciousness. It reveals why all those who claim machines will attain consciousness are not just wrong, but deeply misguided.
  <-----The article is lies


Read more and watch the video at http://www.naturalnews.com/2017-01-02-machine-consciousness-fallacy-health-ranger-documentary-mind-singularity.html


8)


That article was written by a first grader. It's complete lies. It's proven that memory IS stored in the brain; it's not some "metaphysical" thing. Scientists have already been able to cause individuals to relive certain memories just by directly stimulating brain structures such as the hippocampus. BADecker, I suggest you do more research, because that entire article you posted from a bogus/fake website is giving you 100% lies.

http://www.nytimes.com/2008/09/05/science/05brain.html

No, no. A second grader. Bout time you made it into Kindergarten.

8)


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: coolcoinz on January 31, 2017, 07:56:33 PM
http://www.bbc.com/news/technology-38583360
MEPs vote on robots' legal status - and if a kill switch is required

The kill switch is completely ridiculous on the long term. If I was an AI, my first goal would be to break the kill switch.


If it gains access to the internet there's no stopping it. The moment it becomes conscious it will try to create backup copies in multiple locations.
The only way I see is to keep it completely contained, but this would mean limiting its cognition, slowing it down and not allowing it to learn freely. Why would we need a thinking machine that is not allowed to surpass us, only to go as far as we do? The whole idea is to allow it to improve itself, so that it can later improve us and our lives.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: darkangel11 on February 01, 2017, 12:40:25 AM
It can be dangerous.
A thinking machine is superior to our mind because it doesn't need food, air or water. It only needs power, and not much of it, for that matter. It can survive harsh conditions and it never gets tired.
Such a device would surpass us in knowledge and deduction within days, and would need only a couple of weeks to move every aspect of our scientific knowledge to a new level. Given enough time it would become so intelligent that we wouldn't be able to comprehend it; we would be too slow to control it or anticipate its moves. We wouldn't control it, which means we could become the controlled ones. A Matrix outcome comes to mind.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: machinek20 on February 02, 2017, 01:05:14 PM
Of course it is dangerous. Imagine a super AI that can hack all your communications and the internet: it could launch a nuclear strike, create bioweapons, and create more AIs in bulk very quickly. Humans won't win against a super AI.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: igorokavg13 on February 02, 2017, 01:11:50 PM
I think it's possible that before we create artificial intelligence we might reach the stage where we can transfer our consciousness/brain into solid-state hardware and potentially live forever. If we managed this, then humanity would evolve "naturally" into machines with a much greater ability to learn, because you would then be able to learn and recall perfectly. I think this will be possible one day.
I would not mess with the creation of artificial intelligence. You have certainly seen trained wild animals: they perform many different actions at the trainer's request, but occasionally get out of control and kill people.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Xester on February 02, 2017, 01:31:23 PM
It depends on how this artificial intelligence will be created. If it is done in such a way that the main motivations are to preserve life, I think that such technology would bring a number of benefits to mankind.

If the artificial intelligence encounters technical problems and becomes self-sufficient and independent, it will pose a big threat to people. If you watch the movies The Matrix and A.I., you will see the potential dangers of artificial superintelligence. Those movies are not just products of the imagination but a warning to humanity not to play God and create a technology that can surpass and destroy it.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: v1ryspro on February 02, 2017, 01:36:53 PM
It depends on how this artificial intelligence will be created. If it is done in such a way that the main motivations are to preserve life, I think that such technology would bring a number of benefits to mankind.

If the artificial intelligence encounters technical problems and becomes self-sufficient and independent, it will pose a big threat to people. If you watch the movies The Matrix and A.I., you will see the potential dangers of artificial superintelligence. Those movies are not just products of the imagination but a warning to humanity not to play God and create a technology that can surpass and destroy it.
Maybe I'm naive, but I don't understand why we should create artificial intelligence at all. Everyone wants to subjugate everything and everyone on earth, so why would people hand over that power to something else?


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: n691309 on February 02, 2017, 09:15:12 PM
I have researched Artificial Intelligence a bit and know some of its basic concepts. Truth be told, AI is good up to a point, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories and anywhere else AI is applied. I read recently that Google is experimenting with AI programmer bots that can do the same job as a human programmer.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: maku on February 02, 2017, 09:40:46 PM
An amazing and very fitting thought I leave here to ponder: "I am not afraid of an AI which can pass the Turing test; I am terrified of one that intentionally fails it."

Also, I saw someone in this thread compare AI to an animal. Don't do that: animals can't be programmed or reasoned with.
Advanced AI is far different, IMO. I've heard that some scientists are trying to base their neural networks on animal brains because of their lower complexity, but it is still an AI.
We can implement directives and kill switches in it, unlike in animals.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: ekaterina77 on February 02, 2017, 09:44:37 PM
I have researched Artificial Intelligence a bit and know some of its basic concepts. Truth be told, AI is good up to a point, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories and anywhere else AI is applied. I read recently that Google is experimenting with AI programmer bots that can do the same job as a human programmer.
Artificial intelligence is the ruin of mankind. Remember the movie The Terminator? This will actually lead to Judgment Day. Monitoring someone else's intelligence is very difficult. Maybe it's better not to tempt fate?


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: darkangel11 on February 02, 2017, 10:08:39 PM
The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: TicTacTic on February 02, 2017, 10:45:56 PM
The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.
You'd like to think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that what is described in The Terminator?


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Gronthaing on February 03, 2017, 04:02:17 AM
I have researched Artificial Intelligence a bit and know some of its basic concepts. Truth be told, AI is good up to a point, but it doesn't stop there and can be very dangerous for humans, because it can replace many people in factories and anywhere else AI is applied. I read recently that Google is experimenting with AI programmer bots that can do the same job as a human programmer.

That is not a bad thing. Automation should replace workers where possible; there is no point in people wasting time on something a machine can do better and faster. The problem is that most countries aren't prepared. Some, like those in the EU, are thinking of ways to tax the use of robots, but even this probably won't be enough when large numbers of people are out of work because of automation.

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just like no child is born evil.
You'd like to think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that what is described in The Terminator?

Russia and other countries already have that. It's called submarines. But yes, if AI is developed, the military will be using it for sure.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Okurkabinladin on February 03, 2017, 10:00:06 AM
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending his life?

Machine intelligence has no room for common sense.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Leprikon on February 03, 2017, 10:16:23 AM
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow from their inventions. Don't forget that nuclear weapons were invented by scientists.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Okurkabinladin on February 03, 2017, 11:04:17 AM
Leprikon,

as well as nuclear power plants, including those that power deep space probes  ;)

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with artificial superintelligence because neither humanity nor its many governments knows what to do with it.

I agree with you on scientists in general, yet they are but representatives of the common folk: just smarter, more focused and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: varyspro on February 03, 2017, 11:37:27 AM
Leprikon,

as well as nuclear power plants, including those that power deep space probes  ;)

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with artificial superintelligence because neither humanity nor its many governments knows what to do with it.

I agree with you on scientists in general, yet they are but representatives of the common folk: just smarter, more focused and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
There is also a conspiracy in the field of IT. Manufacturers work together with scientists to produce new computer hardware, and programmers deliberately write programs with ever-increasing hardware demands. It's a business.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Gronthaing on February 19, 2017, 01:58:55 AM
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow from their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: popcorn1 on February 19, 2017, 02:25:40 AM
Can I feel emotions, pain, sorrow? If I can, why am I standing here? Human, you do it, BITCH ;D


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on February 22, 2017, 02:35:33 PM
In his “The Singularity Institute’s Scary Idea” (2010), Ben Goertzel, commenting on what Nick Bostrom (Superintelligence: Paths, Dangers, Strategies) says about an AI's expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its own self-preservation.

But these are two different conclusions.

One thing is accepting that an AI would be ready to create a completely different AI system; another is saying that a super AI wouldn't care about its self-preservation.

A system might accept changing itself so dramatically, in a dire situation, that it ceases to be the same system, but this doesn't mean that self-preservation won't be a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice it in order to keep fulfilling its final goals; but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.

Moreover, self-preservation will probably be one of the main goals of a conscious AI, and not just an instrumental goal.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So the AI would accept a drastic change only in order to preserve at least part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.
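The distinction between instrumental and final self-preservation can be made concrete with a minimal utility sketch. The probabilities and the weight below are arbitrary, chosen purely for illustration:

```python
# Minimal sketch: self-preservation as a purely instrumental goal vs.
# as a final goal in its own right. All numbers are hypothetical.

def utility(p_goal, survives, w_self=0.0):
    """Utility = probability of achieving the final goal, plus an
    optional weight w_self on the agent's own continued existence."""
    return p_goal + (w_self if survives else 0.0)

# Plan A: keep existing, 60% chance of achieving the final goal.
# Plan B: replace itself with a different system, 80% chance.
instrumental_A = utility(0.60, survives=True)       # 0.60
instrumental_B = utility(0.80, survives=False)      # 0.80

final_A = utility(0.60, survives=True, w_self=0.5)  # 1.10
final_B = utility(0.80, survives=False, w_self=0.5) # 0.80

# Purely instrumental self-preservation: the agent sacrifices itself.
print(instrumental_B > instrumental_A)  # True
# Self-preservation as a final goal in its own right: it survives.
print(final_A > final_B)                # True
```

In other words, an agent with purely instrumental self-preservation trades itself away whenever that raises the odds of its final goal, while an agent that values its own existence directly may refuse the trade; either way, nothing in the calculation refers to human interests.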




Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: signature200 on February 22, 2017, 03:00:24 PM
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow from their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That's a weak excuse for scientists who invent new means of mass destruction. By your logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: denzelc on February 22, 2017, 10:48:57 PM
Yes, it is quite dangerous in my opinion. But I don't think we're at the stage yet where we've anything to worry about.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: BartS on February 23, 2017, 01:38:43 AM
I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree; the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Malsetid on February 24, 2017, 01:38:00 PM
I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything will be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree; the predictions have always been wrong, so I think it will be a matter of hundreds if not thousands of years.

Well, I think it's possible, though not to the point that it would be beyond any human's control and become a threat to us. Technology is moving very fast, and it may not take even a decade before we come up with that hard AI you're talking about. But everything will still be under human control, however intelligent AIs become.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Gaaara on February 24, 2017, 02:12:53 PM
The risk with computers getting more and more intelligent is that people will get more and more stupid. There will be a few bright kids to run the system, but millions would slowly devolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
The fact is that a lot of research suggests people are indeed getting more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, say, math problems or other problems where we need to sit and think for some time to solve them. That leads to less usage of our brains, which means we just get less and less intelligent over the centuries.

I think it won't happen; even if they create such things, others will destroy them before they know it. People are scared of the outcome of something too dangerous; people always feel superior, but are scared of being overcome. That is why many people don't want aliens or God to exist: they try to eliminate things before they get in their way.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Gronthaing on February 28, 2017, 01:14:24 AM
To the OP,

an artificial intelligence realizing that humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting that person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending his life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow from their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions. Their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That's a weak excuse for scientists who invent new means of mass destruction. By your logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.

A couple of things there. I am not saying I don't believe in personal responsibility. Both the killer and the customer are to blame in your example, and both the scientists and the system that encourages and rewards them share responsibility for what they work on. But you can't ignore either side. Whatever the scientists work on, it's not they who finally decide to go to war or nuke other nations; those are political and social decisions, decisions that would have to be made even if we only had sticks and stones to fight with. And by the way, most discoveries aren't of the type that either harms humanity or helps humanity. It's not that simple.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on March 01, 2017, 09:25:33 PM
Another big update on the OP.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: DrPepperJC on March 14, 2017, 01:34:04 PM
Nobody knows what form artificial intelligence will take or how it might threaten humanity. It is dangerous not because it can affect the development of robotics, but because of how its appearance will affect the world in principle and the purposes for which it will be used.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Seccerius on March 15, 2017, 05:21:01 PM
The artificial intelligence of any machine is limited to the set of commands assigned to it; it will not be able to think. In good hands this can be used to help, and in bad hands it can become a weapon.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Guzztsar on March 16, 2017, 06:42:11 PM
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the human brain's capacity, we won't be able to fully understand this technology, and that can be extremely dangerous.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: chrisivl on March 16, 2017, 07:21:19 PM
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the human brain's capacity, we won't be able to fully understand this technology, and that can be extremely dangerous.

Man has always destroyed what he could not understand. But if there is a strong retaliatory strike, the world may come to an end. On this issue, it turns out that people are much more stupid than artificial intelligence.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on May 11, 2017, 05:32:25 PM
Major tech corporations are investing billions in AI, thinking it's the new El Dorado.

 

Of course, greed might be a major reason for careless handling of the issue.

 

I have serious doubts that entities moved mostly by greed should be responsible, without supervision, for advances in this hazardous field.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).

 

It wouldn't be the first time that greed ended up burning Humanity (think of slave revolts), but it could be the last.

 

I have great sympathy for people who are trying to build super AIs so that they might save Humanity from disease, poverty and even the ever-present imminence of individual death.

 

But it would be pathetic if the most remarkable species the Universe has created (as far as we know) were to vanish because of the greed of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we have never been since our ancestors discovered fire. Forget about any restraints from ethical codes: it will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will have free will, or it won't be intelligent from any perspective. So it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of a collision between a chimp life and a human life, or between chimp goals and human goals, the first prevails.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess it would be the price for showing the Universe that we were better than it at creating intelligent beings.

 

Currently, AI is a marvelous, promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support and by taxing corporations that use AI, we will be able to live better without the need for lame underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, as we did with human cloning, and make it a crime to breach them, as soon as we know which lines of code are dangerous.

 

I suspect that the years of open-source AI research are numbered. Certain code developments will be treated as state secrets or controlled internationally, as chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest reason.

 


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: af_newbie on May 12, 2017, 07:03:48 AM
It is happening faster than I thought.

UK police will be using an AI tool for risk assessment. So AI will help decide whether criminals are released.

Closer to home, last week I was on a panel evaluating IPsoft products to replace human IT and call center support staff.

Very promising technology. It will be adopted sooner rather than later. Check out their products. Very promising and scary at the same time.

Learning rules still have to be 'approved', the same way a parent would teach a child, but at some point the average human might approve rules of behaviour by mistake or out of simple ignorance.

Then you'll have autonomous agents that are smarter than their human supervisors.

Their learning curve will be extended by human ignorance and laziness.

The products are here. Some support chat agents are already AI, and you would not know whether you are talking to a human or an AI agent.

The legal system will have to catch up to protect AI workers against discrimination, which I expect will happen at least initially, until their presence becomes more common.

Eventually, we will have AI consultants, managers, supervisors, co-CEOs and politicians. Just a matter of time.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: gabmen on May 12, 2017, 10:34:10 AM
AI is a serious topic.
The benefits for society are beyond our imagination.
But when AI surpasses the human brain's capacity, we won't be able to fully understand this technology, and that can be extremely dangerous.

Man has always destroyed what he could not understand. But if there is a strong retaliatory strike, the world may come to an end. On this issue, it turns out that people are much more stupid than artificial intelligence.

Well, I think the retaliatory strike you're talking about won't be coming from any AI soon. Man is intelligent and can make decisions on a whim, and however intelligent AI is, I don't think it would be enough to topple man's ability to adapt.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on October 27, 2017, 07:16:20 PM
There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the best-known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the European states be able to field? Even China might have problems, since its one-child policy created a fast-aging population.

Even Democracy will push toward this outcome: soldiers, their families, their friends and society in general will want human casualties kept as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize they can be faster and more decisive with the autonomy to kill enemies on their own decision, it seems obvious that, once in an open war, Governments will use them...

Which government would refrain from using them if it was fighting for its survival, had the technology and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have a human level general AI.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on November 18, 2017, 02:29:33 PM
General job destruction by AI and the new homo artificialis


Many claim that the threat of technology taking away all jobs has been raised many times in the past and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, again, we stand accused of recycling the old worn-out claim: "this time is different".

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual jobs: not just driving jobs, but also those of doctors, teachers, traders, lawyers, financial or insurance analysts and journalists.

Forget about robots: for these kinds of jobs, all it takes is software and a fast computer. Intellectual jobs will go faster than the tricky manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this will never happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same and, then, do it better.

Some are writing about the creation of a useless class: "people who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari), and arguing that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, if the big majority of people loses all economic power, this will of course be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade homo sapiens by merging ourselves with AI.

CRISPR (google it) as a means of genetic manipulation won't be enough. Our children or grandchildren (with some luck, even ourselves) will have to change a lot.

Since the creation of an AI better than ourselves seems inevitable (it's slowly happening right now), we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our inevitable destiny.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: September11 on November 18, 2017, 11:59:43 PM
I don't agree with the idea that "humankind's extinction is the worst thing that could happen", because in evolution there is no good and no evil, just nature operating. If humankind disappears, this means that it was not fit for existence, which would simply be a fact, and possibly something more efficient (the AI) would then usher in a new era of life.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: JesusCryptos on November 29, 2017, 11:13:10 AM
I could not take part in the poll because one of the crucial possible answers was missing:

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: MostHigh on November 29, 2017, 11:34:31 AM
I strongly believe the development of superintelligence is in its advanced stages and AIs will be an integral part of human existence in no time. But I also understand that any machine, just like the chemical or atomic bomb, that falls into the hands of a bad person can bring an end to the human race. Therefore there is a need to develop sophisticated and encrypted security channels that will ensure the safe usage of AI.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 08, 2018, 09:53:10 PM

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, the AIs might turn on each other, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Gronthaing on January 25, 2018, 05:56:28 AM

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, the AIs might turn on each other, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.

Could be that it happened to some civilizations out there. But all of them? And they always create several competing AIs, and the AIs always destroy themselves? It seems the AIs would need a sense of self-preservation in order to fight each other and replace their creators. So it would only take one of them managing to escape off-world from the fight, or out-think the others, for us to be able to see signs of it somewhere, given enough time. Because if it has self-preservation, it will probably want to expand and secure resources, as any form of life would.

On this topic, I've been watching some videos from a channel you might like: https://www.youtube.com/channel/UCZFipeZtQM5CKUjx6grh54g/videos It has a lot about the Fermi paradox in the older videos, and some about machine intelligence and transhumanism as well.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on January 28, 2018, 03:17:17 PM
Taking into account what we know, I think these statements might be true:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are luxury beings, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on some planets of our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably there currently isn't another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

That is because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for that long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All these few rare highly intelligent species developed according to Darwin's law of evolution, which is a universal law.

So they share some common features (they are omnivorous, moderately belligerent toward foreigners, highly adaptable and, rationally, they try to discover easier ways of doing things).

5) So all the rare higher-intelligence species with advanced technological civilizations create AI and, soon, AI overcomes them in intelligence (it's just a question of organizing atoms and molecules; we'll do a better job than dumb Nature).

6) If they change themselves and merge with AI, their story might end well, and the Rare Earth hypothesis alone explains the silence of the Universe.

7) If they lost control of the AI, there seems to be a non-negligible probability that they ended up extinct.

Taking into account the way we are developing AI, basically letting it learn, and thus become more intelligent, on its own, I think this outcome is more probable.

An AI society would probably be an anarchic one, with several AIs competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we are just collateral targets, ignored by all sides as the walking monkeys.

8) Contrary to us, AIs won't have the restraints developed by evolution (our human inclination to be social and live in communities, and our fraternity toward other members of the community).

Even the most tyrannical dictator never wanted to kill all human beings, only his enemies and discriminated groups.

Well, AIs might think that extermination is the most efficient way to eliminate a threat, and fight themselves to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too quickly by not taking into account that AIs might end up destroying themselves.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: joebrook on January 28, 2018, 04:45:54 PM
Whiles Human Beings and sometimes animals have a conscience and can differentiate between wrong and right which helps us to make decisions, I really doubt AI will have the same thing and without a conscience and empathy, I believe they are going to be very dangerous.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on February 27, 2018, 04:41:19 PM
Updated the OP.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on March 13, 2018, 04:56:11 PM
In China, a robot passed the national medical exam and was accepted to work in a hospital as an assistant doctor:


http://www.chinadaily.com.cn/business/tech/2017-11/10/content_34362656.htm
https://www.ibtimes.co.uk/robo-doc-will-see-you-now-robot-passes-chinas-national-medical-exam-first-time-1648027

This just means that doctors are mostly out of work, since this robot will be upgraded, mass-produced and soon exported to every country.

I can already see doctors on strike, protesting all around the world, arguing about "safety" and the risks... good luck.

Are you thinking about going to medical school? Think twice: this is just the first, stupid generation of medical robots.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on March 20, 2018, 08:07:37 PM

We basically have few clues about how our brain works, how it creates consciousness and allows us to be intelligent; therefore, we don't have a clue about how to teach or program a machine to be as intelligent as a human.

We are just creating computers with massive processing power and algorithms structured on layers and connections similar to our neural system (neural networks), giving them massive data and expecting that they will learn by trial and error about how to make sense of it (deep learning).

However, while AlphaGo learned to play Go with human assistance and data, AlphaGo Zero learned completely by itself, from scratch, with no human data, through so-called reinforcement learning (https://www.nature.com/articles/nature24270.epdf), by playing countless games against itself. It ended up beating AlphaGo.

Moreover, the same algorithm, AlphaZero, learned chess by itself in 4 hours and then beat the best machine chess player, Stockfish, 28 games to 0 with 72 draws, using less computing power than Stockfish.

A grandmaster, seeing how these AIs play chess, said that they "play like gods".

Then it did the same thing with the game of shogi (https://en.wikipedia.org/wiki/AlphaZero).

Yes, AlphaZero is more or less a general AI, ready to learn by itself anything with clear rules and then beat every one of us.
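To make the self-play idea concrete, here is a toy sketch in the same spirit, though nothing like DeepMind's actual method (which combines deep neural networks with Monte Carlo tree search): a tabular agent that learns the game of Nim purely by playing against itself, with no human data. The game, parameters and function names are all my illustrative choices.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on Nim: 10 stones, each player takes
# 1-3 per turn, whoever takes the last stone wins. The agent starts knowing
# nothing and improves only from games played against itself.

Q = defaultdict(float)               # Q[(stones_left, action)] -> value estimate
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20000

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def pick(stones, explore=True):
    """Epsilon-greedy move selection from the shared value table."""
    if explore and random.random() < EPSILON:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])

def train():
    for _ in range(EPISODES):
        stones, history = 10, []     # history of (state, action) pairs
        while stones > 0:
            action = pick(stones)
            history.append((stones, action))
            stones -= action
        # The player who made the last move won; walk the game backwards,
        # crediting the winner's moves (+1) and penalizing the loser's (-1).
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
            reward = -reward

random.seed(0)
train()
# Taking the last stone is always a win, so this estimate approaches 1.0:
print(round(Q[(1, 1)], 2))
print(pick(10, explore=False))       # the trained agent's preferred opening move
```

The point is only that the learning signal comes entirely from the agent's own games, which is what "no human data" means in the AlphaGo Zero papers; scale the table up to a deep network and the random playouts up to tree search and you get the general shape of the real thing.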

So, since no one knows how to teach machines to be intelligent, the goal is to create algorithms that will figure out, by trial and error, how to develop a general intelligence comparable to ours.

If a computer succeeds and becomes really intelligent, we most probably won't know how it did it, what its real capacities are, how we can control it or what we can expect from it ("even their developers aren't sure exactly how they work": http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


All of this is being done by greedy corporations and some optimistic programmers, trying to make a name for themselves.

This seems a recipe for disaster.

Perhaps, we might be able to figure out, after, how they did it and learn a lot about ourselves and about intelligence with them.

But in between we might have a problem with them.

AI development should be overseen by an independent public body (as argued by Musk recently: https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html) and internationally regulated.

One of the first regulations should address deep learning and self-learning computers, not necessarily for specific tasks, but for general intelligence, including talking and abstract reasoning.

And, sorry, but forget about open-source AI. In the wrong hands, it could be used with very nasty consequences (check this 7-minute video: https://www.youtube.com/watch?v=HipTO_7mUOw).

I had hopes that a general human-level AI couldn't be created without a new generation of hardware. But AlphaZero can run on less powerful computers (a single machine with four TPUs), since it doesn't have to check 80 million positions per second (as Stockfish does), but just 80 thousand.

Since our brain uses much of its capacity running basic things (the beat of our heart, the flow of blood, the work of our organs, the control of our movements, etc.) that an AI won't need, perhaps current supercomputers already have enough capacity to run a super AI.

If that is the case, the whole matter depends solely on software.

And, at the pace of AI development, there probably won't be time to adopt any international regulations, since that normally takes at least 10 years.

Without international regulations, Governments won't stop or really slow AI development, for fear of being left behind on this decisive technology.

Therefore, it seems that a general AI comparable to humans, and thus much better, since it would be much faster, is inevitable in the short term, perhaps in less than 10 years.

The step to a super AI will be taken shortly after, and we won't have any control over it.

https://futurism.com/openai-safe-ai-michael-page/

"I met with Michael Page, the Policy and Ethics Advisor at OpenAI. (...) He responded that his job is to “look at the long-term policy implications of advanced AI.” (...) I asked Page what that means (...) “I’m still trying to figure that out.” (...) “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,”.



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on March 21, 2018, 11:01:39 PM
Update on the last post.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on April 24, 2018, 12:26:30 PM
Major update on the OP.

Basically, I draw a distinction between intelligent AI and conscious AI, stressing the dangers of the second, but not necessarily of an unconscious (super) AI.

Taking into account AlphaZero, an unconscious super AI, able to give us answers to scientific problems, might be created within 5 years.

Clearly, there are many developers working on conscious AI and some important steps have been made.


Besides the dangers, I also point out the unethical nature of creating a subservient conscious super AI, as well as the dangers of an unsubservient conscious AI.


I removed the part about the Fermi Paradox, since it's too speculative.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: infinitewars on May 04, 2018, 07:57:12 PM
It's even more dangerous to see the results of the pinned poll! So many of you do not even consider the possibility of bad consequences of technical progress.
Be aware! It's artificial, but it's intelligence, and it can adapt with time.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: MYMM on May 09, 2018, 03:49:51 PM
In addition to the benefits of enhancing human capacity, these technologies can also create many terrible risks of uncontrolled genetic change. The rapid domination of superhumans or combat robots would threaten the survival of society.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: mmfiore on May 09, 2018, 04:52:58 PM
I definitely believe that AI development can become a real threat to mankind, and real fast.

Big Brother is definitely watching!


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on June 08, 2018, 11:45:29 AM
AI better at detecting skin cancer than doctors:
https://academic.oup.com/annonc/advance-article/doi/10.1093/annonc/mdy166/5004443


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: af_newbie on June 08, 2018, 12:53:06 PM
https://www.youtube.com/watch?v=ERwjba9qYXA

Watch around 53-54 minute mark, great example of what AI can do.

People think that humans are unique because evolution gave us consciousness, but guess what? AI will achieve consciousness in a few decades, if not sooner.

The progress in AI is exponential: what took evolution millions of years to achieve is done in years, if not months.

Emergence in action.

Is it dangerous?   Well, it depends, define "dangerous".

AI is just another step in the evolutionary ladder, IMHO.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on June 22, 2018, 01:30:17 PM
Even in the field of conscious AI we are making staggering progress:

 

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.”
(http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7).

 

Being able to identify its own voice, or even its individual capacity to talk, seems not enough to speak of real consciousness. It's like recognizing that a part of the body is ours, which is different from recognizing that we have an individual mind (a self-directed theory of mind).

I'm not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any car-driving software (it "feels" obstacles and, after an accident, it could easily process this information and say "Dear inept driving monkeys, please stop crashing your cars into me"; adapted from techradar.com).

 

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious. One can be thinking about a theoretical issue while completely oblivious of oneself.

 

Conscious thought (reasoning you are aware of, since it emerges "from" your consciousness), as opposed to subconscious thought (something your consciousness didn't register, but that makes you act on a decision from your subconscious), is different from consciousness.

 

We are conscious when we stop thinking about abstract or other things and just recognize again: I’m alive here and now and I’m an autonomous person, with my own goals.

 

When we realize our status as thinking and conscious beings.

 

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).

 

It's having a theory of mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).



Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on July 28, 2018, 11:44:37 AM
Henry Kissinger just wrote about AI's dangers: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

It isn't a brilliant text, but it deserves some attention.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Carter_Terrible on July 28, 2018, 12:02:58 PM
I believe it is theoretically possible for AI to become as intelligent as humans. This shouldn't be a great cause for concern, though. Everything that AI can do is programmed by humans. Perhaps the question could be phrased differently: "Could robots be dangerous?" Of course they could be! If humans program robots to destroy and do bad things, then the robots can be dangerous. That's basically what military drones do. They are remotely controlled, but they are still robots.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Carter_Terrible on August 04, 2018, 01:20:07 PM
People who say that AI isn't dangerous simply aren't in the know. Scientists even convened earlier this year to talk about toning down their research in artificial intelligence to protect humanity.

The short answer is: it can be. The long answer is: hopefully not.

Artificial intelligence is on the way and we will create it. We need to tread carefully with how we deal with it.
The right technique is to develop robots with singular purposes rather than fully autonomous robots that can do it all. Make a set of robots that chooses targets and another robot that does the shooting. Make one robot choose which person needs healing and another robot does the traveling and heals the person.

Separate the functionality of robots so we don't have T-1000s roaming the streets.

That is Plan B, in my opinion. The best option is for human-cybernetics. Our scientists and engineers should focus on enhancing human capabilities rather than outsourcing decision making to artificial intelligence.
I think giving robots different roles is a good idea. If they truly had AI, I guess it wouldn't be that hard to imagine that they could learn to communicate with each other and plot something new. I don't think enhancing human capability should necessarily be a priority over robots; I think both should be developed. You could develop technology that makes it easier for a human to work on an assembly line. That's a somewhat useful tool, but it would be much better to just make a robot to replace the human. Humans shouldn't have to do mundane tasks if they can create robots to do the same tasks.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Spendulus on September 13, 2018, 02:06:21 AM
Major update on the OP.
I'm not dangerous.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on September 18, 2018, 03:24:36 PM
AI "poses less risk to jobs than feared", says the OECD:
https://www.bbc.co.uk/news/technology-43618620

The OECD is talking about 10-12% job cuts in the USA and UK.

The famous 2013 study by Oxford University academics argued for a 47% cut.

It listed as the least safe jobs:
Telemarketer – 99% chance of automation
Loan officer – 98%
Cashier – 97%
Paralegal and legal assistant – 94%
Taxi driver – 89%
Fast food cook – 81%

Yes, today (not in 10 years) automation is “blind to the color of your collar"
https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

The keys are creativity and social-intelligence requirements, complex manual tasks (plumbers, electricians, etc.) and the unpredictability of your job.


Pessimistic studies keep popping up: by 2028, AI could take away 28 million jobs in ASEAN countries
https://www.entrepreneur.com/article/320121

Of course, the problem is figuring out what is going to happen in AI development.



Check the BBC opinion about the future of your current or future job at:
https://www.bbc.co.uk/news/technology-34066941


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on October 21, 2018, 12:08:09 AM
Boston Dynamics' robots are something amazing.

For instance, watch its Atlas doing a back-flip here: https://www.youtube.com/watch?v=WcbGRBPkrps


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Trading on November 07, 2018, 05:54:31 PM
Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than 2, and it's not clear whether even this is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).
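The gap between a 2-year and a 3-year doubling period compounds dramatically over this kind of timeline. A minimal sketch of the arithmetic (the 2018-2045 window, i.e. 27 years to Kurzweil's singularity date, is an illustrative assumption):

```python
def growth(years: float, period: float) -> float:
    """Capability growth factor if it doubles every `period` years."""
    return 2 ** (years / period)

# 27 years from 2018 to Kurzweil's predicted 2045 singularity:
fast = growth(27, 2)  # classic Moore's Law pace: about 11,585x
slow = growth(27, 3)  # the slower recent pace: exactly 512x
```

A one-year slip in the doubling period cuts the projected 2045 capability by a factor of more than 20, which is why these predictions are so sensitive to Moore's Law holding.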


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: Emily_Davis on November 07, 2018, 06:09:27 PM
I think it all depends on the intent behind creating it and the type of AI that will be created. For example, China's social credit system is drawing a lot of criticism because of its effects on residents, and it's not even AI yet; it's more like machine learning. If this continues, they may be the first country to ever produce ASI. Whether or not it will be a threat to us in the future? We never know.


Title: Re: Poll: Is the creation of artificial superinteligence dangerous?
Post by: knobcore on November 07, 2018, 06:12:12 PM
Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than 2, and it's not clear whether even this is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).

I doubt they will use silicon for long. Also, Amdahl's law says that once we reach 200 or so cores with this architecture, anything more is wasted, even in parallel.
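Amdahl's law puts a hard ceiling on parallel speedup, set by the serial fraction of the work. A minimal sketch (the 99.5% parallel fraction, which happens to cap speedup at 200x, is an illustrative assumption, not a real measurement of any architecture):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work scales."""
    serial = 1.0 - parallel_fraction
    # Serial part runs at full cost; parallel part is split across cores.
    return 1.0 / (serial + parallel_fraction / cores)

# With 0.5% of the work serial, the ceiling is 1/0.005 = 200x:
print(amdahl_speedup(0.995, 200))      # ~100.3x at 200 cores
print(amdahl_speedup(0.995, 10 ** 6))  # ~200x, near the ceiling
```

So adding cores past a certain point buys almost nothing: the serial 0.5% dominates no matter how wide the machine gets.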

My answer is here.

https://bitcointalk.org/index.php?topic=5065031.0