Bitcoin Forum
January 21, 2019, 12:53:10 PM *
News: Latest Bitcoin Core release: 0.17.1 [Torrent]
 
Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen, or we can take care of the issue. No need to adopt any particular measure. - 18 (25.7%)
Yes, but we'll be able to handle it. Business as usual. - 12 (17.1%)
Yes, but AI researchers should decide which safeguards to adopt. - 8 (11.4%)
Yes, and all AI research on truly autonomous programs should require government authorization until we understand the danger better. - 3 (4.3%)
Yes, and all AI research should be subject to international guidelines and control. - 12 (17.1%)
Yes, and all AI research should cease completely. - 8 (11.4%)
I couldn't care less about AI. - 5 (7.1%)
I don't have an opinion on the issue. - 1 (1.4%)
Why do you, OP, care about AI? You shall burn in hell, like all atheists. God will save us from any dangerous AI. - 3 (4.3%)
Total Voters: 70

Pages: « 1 2 3 4 5 6 7 [8]  All
Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 7935 times)
MYMM
Newbie
*
Offline

Activity: 1
Merit: 0
May 09, 2018, 03:49:51 PM
 #141

In addition to the benefits of enhancing human capacity, such technologies could also create many terrible risks of uncontrolled genetic change. The rapid dominance of super-enhanced humans or combat robots would threaten the survival of society.
mmfiore
Hero Member
*****
Offline

Activity: 810
Merit: 502
May 09, 2018, 04:52:58 PM
 #142

I definitely believe that AI development can become a real threat to mankind, and real fast.

Big Brother is definitely watching!


Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
June 08, 2018, 11:45:29 AM
 #143

AI better at detecting skin cancer than doctors:
https://academic.oup.com/annonc/advance-article/doi/10.1093/annonc/mdy166/5004443

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
af_newbie
Legendary
*
Offline

Activity: 1162
Merit: 1206
June 08, 2018, 12:53:06 PM
Merited by Majormax (1)
 #144

https://www.youtube.com/watch?v=ERwjba9qYXA

Watch around the 53-54 minute mark; it's a great example of what AI can do.

People think that humans are unique because evolution gave us consciousness, but guess what? AI will achieve consciousness in a few decades, if not sooner.

The progress in AI is exponential: what took evolution millions of years to achieve is done in years, if not months.

This is emergence in action.

Is it dangerous? Well, it depends; define "dangerous".

AI is just another step on the evolutionary ladder, IMHO.
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
June 22, 2018, 01:30:17 PM
 #145

Even in the field of conscious AI we are making staggering progress:

 

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.”
(http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7).

 

Being able to identify its own voice, or even its individual capacity to talk, does not seem enough to establish real consciousness. It's like recognizing that a part of the body is ours, which is different from recognizing that we have an individual mind (a self-directed theory of mind).

I’m not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any self-driving car software (it “feels” obstacles and, after an accident, could easily process this information and say “Dear inept driving monkeys, please stop crashing your cars into me”; adapted from techradar.com).

 

The issue is very controversial, but even when we are reasoning we might not be exactly conscious: one can be thinking about a theoretical issue while completely oblivious to oneself.

 

Conscious thought (reasoning you are aware of, since it emerges “from” your consciousness), as opposed to subconscious thought (something your consciousness didn’t register, but that still makes you act on a decision from your subconscious), is different from consciousness itself.

 

We are conscious when we stop thinking about abstract or other matters and simply recognize again: I’m alive, here and now, and I’m an autonomous person with my own goals.

 

When we realize our status as thinking and conscious beings.

 

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).

 

It’s having a theory of mind (the ability to see things from another person’s perspective) about ourselves (Janet Metcalfe).


My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
July 28, 2018, 11:44:37 AM
 #146

Henry Kissinger just wrote about AI's dangers: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

It isn't a brilliant text, but it deserves some attention.

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Carter_Terrible
Newbie
*
Offline

Activity: 19
Merit: 0
July 28, 2018, 12:02:58 PM
 #147

I believe it is theoretically possible for AI to become as intelligent as humans. This shouldn't be a great cause for concern, though. Everything AI can do is programmed by humans. Perhaps the question could be phrased differently: "Could robots be dangerous?" Of course they could be! If humans program robots to destroy and do bad things, then the robots could be dangerous. That's basically what military drones do. They are remotely controlled, but they are still robots.
Carter_Terrible
Newbie
*
Offline

Activity: 19
Merit: 0
August 04, 2018, 01:20:07 PM
 #148

People who say that AI isn't dangerous simply aren't in the know. Scientists even convened earlier this year to discuss toning down their artificial-intelligence research to protect humanity.

The short answer is: it can be. The long answer is: hopefully not.

Artificial intelligence is on the way and we will create it. We need to tread carefully with how we deal with it.
The right technique is to develop robots with singular purposes rather than fully autonomous robots that can do it all. Make one set of robots that chooses targets and another that does the shooting. Have one robot choose which person needs healing and another travel to and heal that person.

Separate the functionality of robots so we don't have T-1000s roaming the streets.

That is Plan B, in my opinion. The best option is for human-cybernetics. Our scientists and engineers should focus on enhancing human capabilities rather than outsourcing decision making to artificial intelligence.
I think giving robots different roles is a good idea. If they truly had AI, I guess it wouldn't be hard to imagine that they could learn to communicate with each other and plot something new. I don't think enhancing human capability should necessarily be a priority over robots; I think both should be developed. You could develop technology that makes it easier for a human to work on an assembly line. That's a somewhat useful tool, but it would be much better to just make a robot to replace the human. Humans shouldn't have to do mundane tasks if they can create robots to do them.
Spendulus
Legendary
*
Offline

Activity: 2128
Merit: 1138
September 13, 2018, 02:06:21 AM
 #149

Major update on the OP.
I'm not dangerous.
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
September 18, 2018, 03:24:36 PM
 #150

AI 'poses less risk to jobs than feared' says OECD
https://www.bbc.co.uk/news/technology-43618620

The OECD is talking about 10-12% job cuts in the USA and UK.

The famous 2013 study by Oxford University academics argued for a 47% cut.

It listed these as the least safe jobs:
Telemarketer - 99% chance of automation
Loan officer - 98%
Cashier - 97%
Paralegal and legal assistant - 94%
Taxi driver - 89%
Fast food cook - 81%

Yes, today (not in 10 years) automation is “blind to the color of your collar"
https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

The key factors are creativity and social-intelligence requirements, complex manual tasks (plumbers, electricians, etc.), and the unpredictability of your job.


Pessimistic studies keep popping up: "By 2028, AI Could Take Away 28 Million Jobs in ASEAN Countries"
https://www.entrepreneur.com/article/320121

Of course, the problem is figuring out what is going to happen in AI development.



Check the BBC opinion about the future of your current or future job at:
https://www.bbc.co.uk/news/technology-34066941

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
October 21, 2018, 12:08:09 AM
 #151

Boston Dynamics' robots are amazing.

For instance, watch their Atlas robot doing a backflip here: https://www.youtube.com/watch?v=WcbGRBPkrps

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
November 07, 2018, 05:54:31 PM
 #152

Ray Kurzweil's predictions of human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles roughly every two years, with a corresponding increase in speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than 2, and it's not clear whether even that is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).
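As a rough illustration (my own sketch, not from the post), the gap between a 2-year and a 3-year doubling period compounds dramatically over Kurzweil's 2045 horizon. The start year and doubling periods below are assumptions chosen only to show the arithmetic:

```python
# Rough sketch: how much raw capacity grows by 2045 under different
# Moore's-Law doubling periods (illustrative figures only).

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Capacity multiplier after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

horizon = 2045 - 2018  # years until Kurzweil's singularity date (assumed start)

for period in (2.0, 2.5, 3.0):
    print(f"doubling every {period} years -> x{growth_factor(horizon, period):,.0f}")
```

At a 3-year doubling period, capacity grows roughly 512x by 2045 instead of the more-than-11,000x a strict 2-year Moore's Law would give, which is why the exact doubling rate matters so much for such predictions.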

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Emily_Davis
Newbie
*
Offline

Activity: 71
Merit: 0
November 07, 2018, 06:09:27 PM
 #153

I think it all depends on the intent behind creating it and the type of AI that will be created. For example, China's social credit system is drawing a lot of criticism because of its effects on residents, and it's not even AI yet; it's more like machine learning. If this continues, they may be the first country to ever produce an ASI. Whether or not it will be a threat to us in the future, we can never know.
knobcore
Jr. Member
*
Offline

Activity: 33
Merit: 10
November 07, 2018, 06:12:12 PM
 #154

Ray Kurzweil's predictions of human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases them on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles roughly every two years, with a corresponding increase in speed) is facing challenges.

Currently, the doubling period is closer to 2.5 or 3 years than 2, and it's not clear whether even that is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).

I doubt they will use silicon for long. Also, Amdahl's law says that once we reach 200 or so cores with this architecture, anything beyond that is wasted, even in parallel workloads.
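For context (a sketch of my own, not from the linked post), Amdahl's law caps parallel speedup at 1/(1-p), where p is the fraction of the work that parallelizes. A ceiling around 200x corresponds to a serial fraction of about 0.5%; that figure is an assumption for illustration:

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work.

def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.995  # 99.5% of the work parallelizes (hypothetical figure)
for cores in (8, 64, 200, 1000, 10**6):
    print(f"{cores:>7} cores -> speedup {amdahl_speedup(p, cores):6.1f} (ceiling {1 / (1 - p):.0f})")
```

Even with a million cores, the speedup never exceeds 1/(1-p) = 200, which is the sense in which extra cores are "wasted" past a point.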

My answer is here.

https://bitcointalk.org/index.php?topic=5065031.0
Trading
Legendary
*
Offline

Activity: 1430
Merit: 1018

Nothing like healthy scepticism and hard evidence
January 02, 2019, 05:05:22 AM
 #155

We won't ever have a merely human-level AI.

Once machines have a general intelligence similar to ours, they will be far ahead of humans, because their signals travel close to the speed of light while signals in human brains travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than us.
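The speed gap is easy to quantify (a back-of-the-envelope sketch of my own; the 120 m/s figure for fast myelinated nerve fibers is a commonly cited upper bound, and the chip-signal speed is an order-of-magnitude assumption):

```python
# Back-of-the-envelope: electronic signals vs. nerve impulses.
# Both figures are rough assumptions for illustration only.

LIGHT_SPEED_M_S = 3.0e8   # electronic signals are within this order of magnitude
NERVE_SPEED_M_S = 120.0   # fast myelinated axons, roughly 430 km/h

ratio = LIGHT_SPEED_M_S / NERVE_SPEED_M_S
print(f"Raw signal-speed advantage: {ratio:,.0f}x")
```

That is roughly a 2.5-million-fold difference in raw signal speed alone, before counting any difference in switching rate.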

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Spendulus
Legendary
*
Offline

Activity: 2128
Merit: 1138
January 05, 2019, 02:10:10 AM
 #156

We won't ever have a merely human-level AI.

Once machines have a general intelligence similar to ours, they will be far ahead of humans, because their signals travel close to the speed of light while signals in human brains travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than us.

Really?

Is speed of computation related to level of intelligence?

No.

Intelligence is most easily thought of as the ability to understand things that people of lower intelligence cannot understand, ever.

This is really pretty simple. There are many subjects in advanced math that many people will never understand. I am not going to say "you" or "you or I", because I don't have a clue what you might or might not understand.

One example of this that many people have heard of is P versus NP.

In my opinion, another is general relativity.

I am not talking here about the popular "buzz" on the subject, but the exact subject.