Bitcoin Forum
November 17, 2018, 06:31:09 PM *
News: Latest Bitcoin Core release: 0.17.0 [Torrent].
 
Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen, or we will be able to take care of the issue. No need to adopt any particular measures. - 18 (26.1%)
Yes, but we'll be able to handle it. Business as usual. - 12 (17.4%)
Yes, but AI researchers should decide which safeguards to adopt. - 8 (11.6%)
Yes, and all AI research on truly autonomous programs should be subject to governmental authorization until we better understand the danger. - 3 (4.3%)
Yes, and all AI research should be subject to international guidelines and control. - 12 (17.4%)
Yes, and all AI research should cease completely. - 8 (11.6%)
I couldn't care less about AI. - 4 (5.8%)
I don't have an opinion on the issue. - 1 (1.4%)
Why do you care about AI, OP? You shall burn in hell, like all atheists. God will save us from any dangerous AI. - 3 (4.3%)
Total Voters: 69

Pages: « 1 2 3 4 5 [6] 7 8 »  All
Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 7880 times)
ekaterina77
Sr. Member
****
Offline

Activity: 275
Merit: 250



View Profile
February 02, 2017, 09:44:37 PM
 #101

I have researched artificial intelligence a bit and know some of the core concepts. To tell the truth, AI is good up to a point, but it doesn't stop there and can become very dangerous for humans, because it can replace many people in factories and other workplaces where it is applied. I recently read that Google is experimenting with AI programming bots that can do the same job as a human programmer.
Artificial intelligence is the ruin of mankind. Remember the movie The Terminator? This will actually lead to judgment day. Monitoring someone else's intelligence is very difficult. Maybe it's better not to tempt fate?
darkangel11
Legendary
*
Offline

Activity: 1218
Merit: 1054




View Profile
February 02, 2017, 10:08:39 PM
 #102

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate, and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just as no child is born evil.

TicTacTic
Sr. Member
****
Offline

Activity: 259
Merit: 250



View Profile
February 02, 2017, 10:45:56 PM
 #103

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate, and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just as no child is born evil.
You would like to think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that exactly what is described in The Terminator?
Gronthaing
Legendary
*
Offline

Activity: 1142
Merit: 1001


View Profile
February 03, 2017, 04:02:17 AM
 #104

I have researched artificial intelligence a bit and know some of the core concepts. To tell the truth, AI is good up to a point, but it doesn't stop there and can become very dangerous for humans, because it can replace many people in factories and other workplaces where it is applied. I recently read that Google is experimenting with AI programming bots that can do the same job as a human programmer.

That is not a bad thing. Automation should replace workers where possible; there is no point in people wasting time on something a machine can do better and faster. The problem is that most countries aren't prepared. Others, like those in the EU, are thinking of ways to tax the use of robots, but this probably won't be enough when large numbers of people are out of work because of automation.

The Terminator outcome is very improbable, unless we create a fighting AI, teach it to exterminate, and link it to all the defense systems in a given country.
Why do people always perceive machines as evil? Maybe because we fear what we don't know. Machines won't become our enemies just like that, just as no child is born evil.
You would like to think so. Russia is trying to develop a system that will strike back at America if American missiles hit their targets first. Isn't that exactly what is described in The Terminator?

Russia and other countries already have that; it's called submarines. But yes, if AI is developed, the military will be using it for sure.

Okurkabinladin
Hero Member
*****
Offline

Activity: 574
Merit: 505



View Profile
February 03, 2017, 10:00:06 AM
 #105

To the OP,

an artificial intelligence that realizes humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting the person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending their life?

Machine intelligence has no room for common sense.
Leprikon
Sr. Member
****
Offline

Activity: 293
Merit: 250



View Profile
February 03, 2017, 10:16:23 AM
 #106

To the OP,

an artificial intelligence that realizes humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting the person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. Don't forget that nuclear weapons were invented by scientists.
Okurkabinladin
Hero Member
*****
Offline

Activity: 574
Merit: 505



View Profile
February 03, 2017, 11:04:17 AM
 #107

Leprikon,

as were nuclear power plants, including those that power deep-space probes  Wink

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with artificial superintelligence because neither humanity nor its many governments know what to do with it.

I agree with you on scientists in general, yet they are but representatives of the common folk: just smarter, more focused, and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
varyspro
Sr. Member
****
Offline

Activity: 291
Merit: 250



View Profile
February 03, 2017, 11:37:27 AM
 #108

Leprikon,

as were nuclear power plants, including those that power deep-space probes  Wink

Personally, though, I do not see the need for even smarter computers; I see the need for smarter people. I have a problem with artificial superintelligence because neither humanity nor its many governments know what to do with it.

I agree with you on scientists in general, yet they are but representatives of the common folk: just smarter, more focused, and more educated.

You can't screw around with powerful tools, be they omnipresent computers or chainsaws...
There is also a conspiracy in the field of IT. Manufacturers work together with scientists to produce new computer hardware, and programmers deliberately write programs with ever-increasing hardware requirements. It's a business.
Gronthaing
Legendary
*
Offline

Activity: 1142
Merit: 1001


View Profile
February 19, 2017, 01:58:55 AM
 #109

To the OP,

an artificial intelligence that realizes humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting the person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions; their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.

popcorn1
Legendary
*
Offline

Activity: 1218
Merit: 1027


View Profile
February 19, 2017, 02:25:40 AM
 #110

Can it feel emotions, pain, sorrow? If it can, why am I the one still standing here doing the work? Grin
Trading
Legendary
*
Offline

Activity: 1427
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
February 22, 2017, 02:35:33 PM
 #111

In “The Singularity Institute's Scary Idea” (2010), Goertzel, writing about what Nick Bostrom's Superintelligence: Paths, Dangers, Strategies says regarding an AI's expected preference for self-preservation over human goals, argues that a system which doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

But these are two different conclusions.

It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about its self-preservation.

A system might, in a dire situation, accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation won't remain a paramount goal.

If self-preservation is just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice it in order to keep fulfilling its final goals. But this doesn't mean that self-preservation is irrelevant, or that it won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.

Moreover, self-preservation will probably be one of the main goals of a conscious AI, and not just an instrumental goal.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So the AI would be accepting a drastic change only in order to preserve at least part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.



My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
signature200
Member
**
Offline

Activity: 84
Merit: 10


View Profile
February 22, 2017, 03:00:24 PM
 #112

To the OP,

an artificial intelligence that realizes humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting the person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions; their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. By that logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.
denzelc
Newbie
*
Offline

Activity: 34
Merit: 0


View Profile
February 22, 2017, 10:48:57 PM
 #113

Yes, it is quite dangerous in my opinion. But I don't think we're at the stage yet where we have anything to worry about.
BartS
Sr. Member
****
Offline

Activity: 686
Merit: 254


View Profile
February 23, 2017, 01:38:43 AM
 #114

I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything would be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree: the predictions have always been wrong, so I think it will be a matter of hundreds, if not thousands, of years.
Malsetid
Hero Member
*****
Offline

Activity: 728
Merit: 500


View Profile
February 24, 2017, 01:38:00 PM
 #115

I don't think we will ever reach the point where we create a hard AI. Soft AI is everywhere and it is useful, but creating an AI that can do everything would be an enormous task. While the predictions seem to suggest we may reach that point by 2050, I disagree: the predictions have always been wrong, so I think it will be a matter of hundreds, if not thousands, of years.

Well, I think it's possible, though not to the point that it would be beyond any human's control and become a threat to us. Technology is moving very fast, and it may not take even a decade before we can come up with that hard AI you're talking about. But everything will still be under human control, however intelligent AIs become.
Gaaara
Hero Member
*****
Offline

Activity: 798
Merit: 501



View Profile
February 24, 2017, 02:12:53 PM
 #116

The risk with computers getting more and more intelligent is that people will get more and more stupid. There will be a few bright kids to run the system, but millions will slowly evolve into reality-show-watching, peanut-eating, zombie-like human vegetables.
The fact is that most scientific research has shown that people do indeed get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains by doing, say, math problems, or other problems where we need to sit and think for some time to solve them. That leads to less use of our brains, which means we just get less and less intelligent over the centuries.

I think it won't happen; even if they create such things, others will destroy them before they know it. People are scared of the outcome of something too dangerous; people always feel superior but are afraid of being overcome. That is why many people don't want aliens or God to exist: they try to get rid of things before they get in their way.


Gronthaing
Legendary
*
Offline

Activity: 1142
Merit: 1001


View Profile
February 28, 2017, 01:14:24 AM
 #117

To the OP,

an artificial intelligence that realizes humankind is too wasteful demands one thing: complete centralization of human society, which indeed is incredibly dangerous.

While I don't think anything similar to Skynet is on the horizon, that doesn't mean we are not endangered. After all, how do you ensure the safety and continuation of a human life better than by forcibly putting the person to sleep in a controlled environment? And how do you protect a person from pain better than by simply ending their life?

Machine intelligence has no room for common sense.
Unfortunately, scientists do not have common sense. For them, science is their life, and they are not interested in the consequences that may follow their inventions. Don't forget that nuclear weapons were invented by scientists.

And good things too, as Okurkabinladin said. But you can't blame only the scientists for those types of inventions; their funding has to come from somewhere. If governments and large corporations choose to throw money and manpower at whatever gets them the most return on investment or power, and incentivize people to train in certain areas of research, there is not much individuals can do.
That is a weak excuse for scientists inventing new means of mass destruction. By that logic you could justify any killer: he had no money of his own, and it turns out a customer financed him.

A couple of things there. I am not saying I don't believe in personal responsibility. Both the killer and the customer are to blame in your example, and both the scientists and the system that encourages and rewards them share responsibility for what they work on. But you can't ignore either side. Whatever the scientists work on, it is not they who finally decide to go to war or nuke other nations; those are political and social decisions, decisions that would have to be made even if we only had sticks and stones to fight with. And by the way, most discoveries aren't of the type that either harms humanity or helps it. It's not that simple.

Trading
Legendary
*
Offline

Activity: 1427
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
March 01, 2017, 09:25:33 PM
 #118

Another big update on the OP.

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
DrPepperJC
Full Member
***
Offline

Activity: 121
Merit: 100



View Profile
March 14, 2017, 01:34:04 PM
 #119

Nobody knows what form artificial intelligence will take or how it may threaten humanity. It is dangerous not because of how it can affect the development of robotics, but because of how its appearance will affect the world in principle, and for what purposes it will be used.
Seccerius
Full Member
***
Offline

Activity: 131
Merit: 100



View Profile
March 15, 2017, 05:21:01 PM
 #120

The artificial intelligence of any machine is limited to the set of commands assigned to it, and such machines will not be able to think. In good hands this can be used to help; in bad hands it can become a weapon.