Bitcoin Forum
Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen, or we can take care of the issue. No need to adopt any particular measures. - 20 (24.4%)
Yes, but we'll be able to handle it. Business as usual. - 15 (18.3%)
Yes, but AI researchers should decide what safeguards should be adopted. - 11 (13.4%)
Yes, and all AI research on real autonomous programs should be subject to governmental authorization until we know the danger better. - 3 (3.7%)
Yes, and all AI research should be subject to international guidelines and control. - 14 (17.1%)
Yes, and all AI research should cease completely. - 8 (9.8%)
I couldn't care less about AI. - 6 (7.3%)
I don't have an opinion on the issue. - 1 (1.2%)
Why do you, OP, care about AI? You shall burn in hell, like all atheists. God will save us from any dangerous AI. - 4 (4.9%)
Total Voters: 82

Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 24649 times)
Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
July 28, 2018, 11:44:37 AM
#141

Henry Kissinger just wrote about AI's dangers: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

It isn't a brilliant text, but it deserves some attention.

The Rock Trading Exchange forges its order books with bots, uses them to scam customers and is trying to appropriate 35000 euro from a forum member https://bitcointalk.org/index.php?topic=4975753.0
Carter_Terrible
Newbie
Activity: 19
Merit: 0
July 28, 2018, 12:02:58 PM
#142

I believe it is theoretically possible for AI to become as intelligent as humans. This shouldn't be a great cause for concern, though. Everything that AI can do is programmed by humans. Perhaps the question could be phrased differently: "Could robots be dangerous?" Of course they could be! If humans program robots to destroy and do bad things, then the robots could be dangerous. That's basically what military drones do. They are remotely controlled, but they are still robots.
Carter_Terrible
Newbie
Activity: 19
Merit: 0
August 04, 2018, 01:20:07 PM
#143

People who say that AI isn't dangerous simply aren't in the know. Scientists even convened earlier this year to talk about toning down their research in artificial intelligence to protect humanity.

The short answer is: it can be. The long answer is: hopefully not.

Artificial intelligence is on the way and we will create it. We need to tread carefully with how we deal with it.
The right technique is to develop robots with singular purposes rather than fully autonomous robots that can do it all. Make a set of robots that chooses targets and another robot that does the shooting. Make one robot that chooses which person needs healing and another robot that travels to that person and heals them.

Separate the functionality of robots so we don't have T-1000s roaming the streets.

That is Plan B, in my opinion. The best option is human cybernetics. Our scientists and engineers should focus on enhancing human capabilities rather than outsourcing decision making to artificial intelligence.
I think giving robots different roles is a good idea. If they truly had AI, I guess it wouldn't be that hard to imagine that they could learn to communicate with each other and plot something new. I don't think enhancing human capability should necessarily be a priority over robots; I think both should be developed. You could develop technology that would make it easier for a human to work on an assembly line. It's a somewhat useful tool, but it would be much better to just make a robot to replace the human. Humans shouldn't have to do mundane tasks if they can create robots to do the same tasks.
Spendulus
Legendary
Activity: 2898
Merit: 1386
September 13, 2018, 02:06:21 AM
#144

Major update on the OP.
I'm not dangerous.
Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
September 18, 2018, 03:24:36 PM
#145

AI 'poses less risk to jobs than feared' says OECD
https://www.bbc.co.uk/news/technology-43618620

The OECD is talking about 10-12% job cuts in the USA and the UK.

The famous 2013 study from Oxford University academics argued for a 47% cut.

It identified these as the least safe jobs:
Telemarketer: 99% chance of automation
Loan officer: 98% chance of automation
Cashier: 97% chance of automation
Paralegal and legal assistant: 94% chance of automation
Taxi driver: 89% chance of automation
Fast food cook: 81% chance of automation

Yes, today (not in 10 years) automation is "blind to the color of your collar"
https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

The key factors are creativity and social-intelligence requirements, complex manual tasks (plumbers, electricians, etc.) and the unpredictability of your job.


Pessimistic studies keep popping up: By 2028 AI Could Take Away 28 Million Jobs in ASEAN Countries
https://www.entrepreneur.com/article/320121

Of course, the problem is figuring out what is going to happen in AI development.



Check the BBC opinion about the future of your current or future job at:
https://www.bbc.co.uk/news/technology-34066941

Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
October 21, 2018, 12:08:09 AM
#146

Boston Dynamics' robots are amazing.

For instance, watch its Atlas robot doing a backflip here: https://www.youtube.com/watch?v=WcbGRBPkrps

Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
November 07, 2018, 05:54:31 PM
#147

Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases his predictions on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling time is closer to 2.5 or 3 years than 2, and it's not clear whether even this is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).
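
Purely as an illustration (the numbers below are my own, not taken from Kurzweil or the linked articles), here is a small Python sketch of how sensitive long-range forecasts are to the doubling time:

Code:
# Illustrative only: compare hypothetical compute growth under a 2-year
# doubling (classic Moore's Law) versus slower 2.5- and 3-year doublings.
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiplicative growth after `years`, given a doubling time."""
    return 2 ** (years / doubling_time_years)

for doubling in (2.0, 2.5, 3.0):
    print(f"Doubling every {doubling} years -> "
          f"x{growth_factor(10, doubling):.1f} after 10 years, "
          f"x{growth_factor(25, doubling):.0f} after 25 years")

A one-year slip in the doubling time cuts the 25-year growth from thousands of times to a few hundred, which is why a slowing Moore's Law matters so much for these predictions.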

Emily_Davis
Newbie
Activity: 71
Merit: 0
November 07, 2018, 06:09:27 PM
#148

I think it all depends on the intent behind creating it and the type of AI that will be created. For example, China's social credit system is drawing a lot of criticism because of its effects on residents, and it's not even AI yet; it's more like machine learning. If this continues, they may be the first country to ever produce an ASI. Whether it will be a threat to us in the future, we never know.
knobcore
Jr. Member
Activity: 32
Merit: 10
November 07, 2018, 06:12:12 PM
#149

Ray Kurzweil's predictions of a human-level general AI by 2029 and the singularity by 2045 (https://en.wikipedia.org/wiki/Ray_Kurzweil#Future_predictions) might be wrong, because he bases his predictions on the enduring validity of Moore's Law.

Moore's Law (which says that the number of components on an integrated circuit doubles every two years and, hence, so does its speed) is facing challenges.

Currently, the doubling time is closer to 2.5 or 3 years than 2, and it's not clear whether even this is sustainable.

As the nodes on chips keep shrinking, quantum mechanics steps in and electrons become hard to control (https://en.wikipedia.org/wiki/Moore's_law).

I doubt they will use silicon for long. Also, Amdahl's law says that once we reach 200 or so cores' worth of this architecture, anything beyond that is wasted, even in parallel.

My answer is here.

https://bitcointalk.org/index.php?topic=5065031.0
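
Amdahl's law puts a hard ceiling of 1 / (1 - p) on parallel speedup, where p is the fraction of the work that parallelises. A minimal sketch (the 99.5% figure is my own assumption, picked only because it yields the 200x ceiling mentioned above):

Code:
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when only `parallel_fraction` of the work scales across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.995  # assumed: 99.5% of the work parallelises, so the ceiling is 1/(1-p) = 200x
for n in (16, 64, 200, 1000, 100000):
    print(f"{n:>6} cores -> {amdahl_speedup(p, n):6.1f}x speedup (ceiling: {1 / (1 - p):.0f}x)")

Past a few hundred cores the curve flattens against the ceiling, so extra cores buy almost nothing.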
Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
January 02, 2019, 05:05:22 AM
#150

We won't ever have a merely human-level AI.

Once they have a general intelligence similar to ours, they will be far ahead of humans, because they function very close to the speed of light while signals in the human brain travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than we can.
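
Taking those figures at face value, a rough back-of-the-envelope comparison (the nerve-signal speed is an assumed ballpark for fast myelinated axons):

Code:
SPEED_OF_LIGHT_KM_H = 299_792.458 * 3600  # electromagnetic signalling, ~1.08 billion km/h
NERVE_SIGNAL_KM_H = 430                   # assumed: fast myelinated axons, roughly 120 m/s

ratio = SPEED_OF_LIGHT_KM_H / NERVE_SIGNAL_KM_H
print(f"Signal-speed ratio: roughly {ratio:,.0f} to 1")  # about 2.5 million to 1

Real chips signal well below light speed, but the gap over biological signalling is still several orders of magnitude.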

Spendulus
Legendary
Activity: 2898
Merit: 1386
January 05, 2019, 02:10:10 AM
#151

We won't ever have a merely human-level AI.

Once they have a general intelligence similar to ours, they will be far ahead of humans, because they function very close to the speed of light while signals in the human brain travel at less than 500 km per hour.

And they will be able to do calculations at a much higher rate than we can.

Really?

Speed of computation is related to level of intelligence?

No.

Intelligence is most easily thought of as the ability to understand things that people of lower intelligence cannot understand. EVER.

This is really pretty simple. There are many subjects in advanced math that many people will never understand. I am not going to say "You" or "You or I" because I don't have a clue what you might or might not understand.

One example of this that many people have heard of is "P versus NP."

In my opinion, another is general relativity.

I am not talking here about the popular "buzz" on the subject, but the exact subject.
Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
February 21, 2019, 02:17:48 AM
Last edit: February 21, 2019, 02:31:21 AM by Trading
#152

Really?

Speed of computation is related to level of intelligence?

No.


I wrote that once we have a human-level AI it will be far ahead of us, because one of them will be able to do in a day what millions of humans can do in the same period.

And since millions of people are hard to coordinate when doing intellectual labor, the AI will be able to reach goals that millions of humans, even cooperating, won't.

Intelligence may be defined as the capacity to gather information, elaborate models/knowledge of reality and change reality with them to reach complex goals.

A human-level AI will reach those goals faster than millions of us working together.

If by human-level AI we get AI with the intelligence of some of the best of humanity, it will be as if we had millions of Einsteins working together. Just think of the possibilities.

Intelligence isn't only a qualitative capacity. Memory, speed and the capacity to handle an incredible amount of data are also a part of intelligence. A part that is decisive for reaching goals.

AlphaZero managed to discover new moves in Go and chess that all of humanity never found in more than a thousand years. So AlphaZero is intelligent, even though it completely lacks consciousness and is just a relatively simple AI based on reinforcement learning:

https://en.wikipedia.org/wiki/Reinforcement_learning
https://medium.freecodecamp.org/an-introduction-to-reinforcement-learning-4339519de419
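
For readers who haven't met the term, here is a minimal, purely illustrative Q-learning sketch (a toy corridor problem of my own, nothing like AlphaZero's actual combination of self-play, tree search and deep networks): an agent learns from reward alone which action to prefer in each state.

Code:
import random

# Toy corridor: 5 cells in a row, reward 1 for reaching the rightmost cell.
N_STATES, ACTIONS = 5, ("left", "right")
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.3, 500

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # Q-value table

def step(state, action):
    """Move one cell left or right; the episode ends with reward 1 at the last cell."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(EPISODES):
    state = 0
    for _ in range(10_000):  # safety cap on episode length
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
        if done:
            break

# The learned policy should prefer "right" in every non-terminal cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

AlphaZero's training is driven by the same reward-signal idea, just scaled up enormously with neural networks and self-play.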

Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
March 21, 2019, 03:22:53 PM
Last edit: March 21, 2019, 04:56:43 PM by Trading
#153

Faced with the predictions that AI will remove millions of jobs and create millions of unemployed persons, most economists answer that this prediction was made several times in the past and failed miserably.

That millions of people did indeed lose their jobs, but found new ones in new sectors.

Just think about agriculture. In 1900, France had 33% of its workforce in agriculture. Now it has 2.9%. England had 15% in 1900. Now it has 1.2%.
https://ourworldindata.org/employment-in-agriculture

There can be no doubt that mechanization destroyed millions of jobs.

But that was compensated by the creation of new jobs.

The question is: will this time be different?

It will be if AI can replace humans faster than the pace at which the economy can create new jobs.

AI (particularly intelligent robots) isn't cheap, so currently this isn't happening, but it could happen in the near future.

It will also be if AI can take over all the functions of the average human (manual and intellectual), leaving them no alternative jobs to do.

Again, this is far from happening. But it could happen in the next 10 to 20 years.

Think about horses.

In 1900, there were 21.5 million horses in the USA, even though its human population was much smaller than today. By 1960 there were only 3 million horses. Since then, the number has fluctuated, but stayed below 5 million: http://www.humanesociety.org/sites/default/files/archive/assets/pdfs/hsp/soaiv_07_ch10.pdf

Will we be the new horses?

Some AI experts are building bunkers, fearing that society will collapse because of violent reactions to rising unemployment:
https://www.independent.co.uk/life-style/gadgets-and-tech/silicon-valley-billionaires-buy-underground-bunkers-apocalypse-california-a7545126.html
https://www.theguardian.com/news/2018/feb/15/why-silicon-valley-billionaires-are-prepping-for-the-apocalypse-in-new-zealand

Spendulus
Legendary
Activity: 2898
Merit: 1386
March 21, 2019, 03:46:14 PM
#154

Really?

Speed of computation is related to level of intelligence?

No.


I wrote that once we have a human-level AI it will be far ahead of us, because one of them will be able to do in a day what millions of humans can do in the same period.

And since millions of people are hard to coordinate when doing intellectual labor, the AI will be able to reach goals that millions of humans, even cooperating, won't.

Intelligence may be defined as the capacity to gather information, elaborate models/knowledge of reality and change reality with them to reach complex goals.

A human-level AI will reach those goals faster than millions of us working together.

If by human-level AI we get AI with the intelligence of some of the best of humanity, it will be as if we had millions of Einsteins working together. Just think of the possibilities.

Intelligence isn't only a qualitative capacity. Memory, speed and the capacity to handle an incredible amount of data are also a part of intelligence. A part that is decisive for reaching goals.

AlphaZero managed to discover new moves in Go and chess that all of humanity never found in more than a thousand years. So AlphaZero is intelligent, even though it completely lacks consciousness and is just a relatively simple AI based on reinforcement learning:

https://en.wikipedia.org/wiki/Reinforcement_learning
https://medium.freecodecamp.org/an-introduction-to-reinforcement-learning-4339519de419

No. This is sloppy logic coupled with imprecise terms together buttressing the initial premise.
spadormie
Sr. Member
Activity: 840
Merit: 268
March 21, 2019, 04:49:07 PM
#155

I attended a seminar that talked about AI. I really enjoyed it; we discussed how AI could be a solution to ever-growing pollution and to climate change. I think AI could be a threat and, at the same time, could be helpful to humanity. I think it's good for us to have AI on our side, since it can bring a better world for us. Right now we are really hurting mother nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.




Spendulus
Legendary
Activity: 2898
Merit: 1386
March 21, 2019, 07:34:22 PM
#156

I attended a seminar that talked about AI. I really enjoyed it; we discussed how AI could be a solution to ever-growing pollution and to climate change. I think AI could be a threat and, at the same time, could be helpful to humanity. I think it's good for us to have AI on our side, since it can bring a better world for us. Right now we are really hurting mother nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.

AI doesn't care about your seminar, and it doesn't care what you think. It doesn't care about your ideas about climate change, or moderation, or pollution, or being "helpful to humanity," or "hurting Mother Nature."

We're not able to ask AI what it does care about, or predict it.

A great short story relevant to "AI paranoia" is Charles Stross's "Antibodies." IMHO Stross's work varies: some of it is rather creepy, some brilliant. This one is the latter.

http://www.baen.com/Chapters/9781625791870/9781625791870___2.htm
Malsetid
Hero Member
Activity: 924
Merit: 502
March 26, 2019, 01:58:29 PM
#157

I attended a seminar that talked about AI. I really enjoyed it; we discussed how AI could be a solution to ever-growing pollution and to climate change. I think AI could be a threat and, at the same time, could be helpful to humanity. I think it's good for us to have AI on our side, since it can bring a better world for us. Right now we are really hurting mother nature, and because of that our suffering has become worse. AI could be good for us, but it needs moderation, for AI could destroy us all with domination around the globe.

I agree. Weigh the positive and negative effects of a super AI and I think it'll lean more toward the former than the latter. Technology is progressing because we ourselves are becoming more intelligent. I don't think there will be a time when AI will be able to overthrow human minds. We're adaptable. We react to terrible situations so we can survive. AI will always need humans to run the show.


Spendulus
Legendary
Activity: 2898
Merit: 1386
March 26, 2019, 09:05:31 PM
#158

....Technology is progressing because we ourselves are becoming more intelligent. ....

Over the last several thousand years, humans have not become more intelligent.
Trading (OP)
Legendary
Activity: 1455
Merit: 1033
Nothing like healthy scepticism and hard evidence
November 20, 2019, 04:11:06 AM
Last edit: November 20, 2019, 04:29:56 AM by Trading
#159

Bank of America Merrill Lynch just published another study predicting that AI could eliminate 800 million jobs in the next 15 years:

https://www.inverse.com/article/60919-automation-jobs-millions-lost

As AI develops along its predictable path toward human level, more and more jobs will be automated.

When AI reaches normal human level (not genius level, which might take more time), what kind of job will a normal human being be able to take, when AI will be faster, cheaper and able to work 24 hours a day, 365 days a year?

There might still be opportunities for jobs that depend on empathy and the human touch, but what else?

Technology might not create new kinds of jobs in sufficient numbers. It would be as if the agricultural workers of the forties and fifties had had no alternative jobs to go to.

This time might be different.

Spendulus
Legendary
Activity: 2898
Merit: 1386
November 20, 2019, 04:48:14 AM
#160

Bank of America Merrill Lynch just published another study predicting that AI could eliminate 800 million jobs in the next 15 years: ...

Let me know when Bank of America Merrill Lynch is replaced by AI, and when that AI does a study on jobs to be lost by AI.