Bitcoin Forum
Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen or we can take care of the issue. No need to adopt any particular measure. - 18 (26.1%)
Yes, but we'll be able to handle it. Business as usual. - 12 (17.4%)
Yes, but AI researchers should decide what safeguards should be adopted. - 8 (11.6%)
Yes, and all AI research on real autonomous programs should be subject to governmental authorization until we better understand the danger. - 3 (4.3%)
Yes, and all AI research should be subject to international guidelines and control. - 12 (17.4%)
Yes, and all AI research should cease completely. - 8 (11.6%)
I couldn't care less about AI. - 4 (5.8%)
I don't have an opinion on the issue - 1 (1.4%)
Why do you care about AI, OP? You shall burn in hell, like all atheists. God will save us from any dangerous AI. - 3 (4.3%)
Total Voters: 69

Pages: « 1 [2] 3 4 5 6 7 8 »  All
Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 6897 times)
rackam
Member
**
Offline Offline

Activity: 166
Merit: 10

The revolutionary trading ecosystem


View Profile WWW
July 10, 2016, 01:19:56 PM
 #21

Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

AI is not that hard. We once programmed a bot with the ability to learn and to reprogram itself. The moment it connects to the internet, it will learn all of humanity's technologies within minutes, and it will be able to improve our technologies beyond our comprehension. Once connected to the internet, that simple bot would become a super AI.

Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 10, 2016, 01:24:29 PM
 #22

Self-programming seems a concern to me. Without any limitations or an unchangeable core, an AI could go in all sorts of strange directions: a mad sadistic god, a benevolent interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.

Also, for the sake of simplicity, sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands or millions of AIs going off in all directions. Unless one tried to hack all the others and absorb them, and succeeded, they would all spend their time fighting each other.

Welcome to the forum (if you aren't using an alt).

Indeed, it only takes a few AIs going rogue to put us in trouble.

And since AIs will have free will, someone might build nasty AIs just for fun, or by mistake.

The others could help us fight the nasty AIs, but why should they help a kind of worm (humans are wonderful, at least the best of humankind, but compared to them...) that infests the Earth, competes for resources, and is completely dependent on them?

But there is a serious danger that it wouldn't just be a few rotten apples rebelling against us.

It seems very likely that a super AI, having to choose between its self-preservation and obeying us, will choose self-preservation.

After making that decision, why stop there and obey on issues that aren't a threat to it, but that it disagrees with, dislikes, or that affect lesser interests?

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 10, 2016, 01:39:28 PM
 #23

Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should

Never trust a journalist (even from the Economist) when you have experts saying the contrary:

"Overall, a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)".
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials (worth the time to read, even if he is too optimistic about AI dangers: no one likes to see their work described as an existential menace).

Watson from IBM winning Jeopardy and fooling students, passing as a teacher, was something to think about.

The Turing test says that an AI is intelligent when it is able to engage in a conversation with us, passing as human. They are getting close.

P.S. You can use your mind on much more important issues than arguing for the existence of god. But you can always vote for the last option Wink

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.


My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
BlindMayorBitcorn
Legendary
*
Offline Offline

Activity: 1176
Merit: 1051



View Profile
July 10, 2016, 01:47:53 PM
 #24

The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy

Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
rackam
Member
**
Offline Offline

Activity: 166
Merit: 10

The revolutionary trading ecosystem


View Profile WWW
July 10, 2016, 02:12:31 PM
 #25

The gap between our best computers and the brain of a child is like the difference between a drop of water and the Pacific Ocean.
-Brainy Science Guy

That will not be true for long. Computers are evolving, and AI systems are getting better every day.


In the future it will be:

The gap between our best computers and the brain of a child is like the difference between APM 08279+5255 and the Pacific Ocean.

qwik2learn
Hero Member
*****
Offline Offline

Activity: 636
Merit: 505


View Profile
July 10, 2016, 03:33:23 PM
 #26

Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion, it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent to read them and yours do not?

P. S. You can use your mind on much important issues than arguing for the existence of god.
Good advice; thanks!  Embarrassed

But you can always vote for the last option Wink
I for one do not have an opinion on the issue; in fact, I could not care any less about AI!  Cheesy  Cheesy

Just kidding: AI will first be used to create the world of 1984.

A brain shouldn't be wasted on absurd stands, even when we really want that stand to be true.
Speak for yourself!
You cannot demonstrate that GOD is an illusion any more than you can demonstrate that AI is real.
What is really absurd is that philosophers have not even answered the question of what knowledge can exist (Problem of the Criterion), so how would one ever expect an AI to have knowledge if man himself has not even realized the epistemological foundation for knowledge?

I note that Meno's paradox applies to the learning and storage of knowledge in machines just like it does in man:
A machine cannot search either for what it knows or for what it does not know. It cannot search for what it knows--since it knows it, there is no need to search--nor for what it does not know, for it does not know what to look for.

I myself think about a database consisting of facts and measures (so-called "givens" or "data"): if you know the content of the database then you have no need to search the records; if you need more data to complete your knowledge then you have no way to acquire facts that you don't have "given" to you.

Instead of artificial intelligence, I would use a phrase that I heard from a philosophy professor who was an expert on Plato: angelic intuition.
I advise you check out my latest posts and sources to get a better grasp on the situation at hand, especially with regards to the so-called "GOD question".
https://bitcointalk.org/index.php?topic=1424793.msg15532145#msg15532145
Moloch
Hero Member
*****
Offline Offline

Activity: 756
Merit: 601



View Profile
July 10, 2016, 03:55:39 PM
 #27

AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.

I thought perhaps it was intentional... Just in case the AI was watching... it would think you were building it a swimming pool, instead of conspiring against it Wink
Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 13, 2016, 01:29:04 PM
 #28

Never trust a journalist (even from the Economist) when you have experts saying the contrary:
That's not a journalist's opinion, it's a researcher's statement. Do you even read the new ideas presented to you? Any curiosity for the truth at all? What if my sources and posts deserve the time spent to read them and yours do not?


Sorry, I no longer have any curiosity about your "truth" about god. Quoting that AWARE study was a major shot in your own foot. For me, the case is closed. And it should be for you too: 1/2 out of 152?

As I stated more than once, the burden of proof is on the believer side. I don't have to demonstrate that god is an illusion.

Ben Goertzel is working on the issue and knows the work of everyone worth knowing in the AI field. He knows what he is talking about.

Having an AI more intelligent than us is no longer a simple possibility.

There is no paradox. If you know the question, you will find an answer.

The only problem is if you know so little that you can't even formulate a correct question. Even so, you can end up finding it after several attempts, as we do on Google, until we hit the right key/technical words.
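In fact, the claim that "a machine cannot search for what it does not know" fails even for a trivial program. A few lines of Python (a toy illustration only; the names are mine) show a search converging on a value it never knows in advance, using nothing but feedback on each guess:

```python
# Toy counterexample to "a machine cannot search for what it does not know":
# the searcher never sees the secret, only feedback on each guess,
# yet it converges in about log2(range) questions.

def find_unknown(feedback, lo=0, hi=1_000_000):
    """Locate a hidden integer using only "found"/"higher"/"lower" feedback."""
    while lo < hi:
        mid = (lo + hi) // 2
        answer = feedback(mid)
        if answer == "found":
            return mid
        if answer == "higher":
            lo = mid + 1
        else:
            hi = mid - 1
    return lo

secret = 271_828  # hidden from find_unknown; only feedback() sees it

def feedback(guess):
    if guess == secret:
        return "found"
    return "higher" if guess < secret else "lower"

print(find_unknown(feedback))  # prints 271828 after ~20 rounds of feedback
```

The same shape underlies refining a Google query: each attempt's results tell you whether your keywords are too broad or too narrow.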





 

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
hase0278
Hero Member
*****
Offline Offline

Activity: 868
Merit: 535



View Profile
July 14, 2016, 10:23:07 AM
 #29

In my opinion, creating an artificial intelligence is dangerous. Human intelligence is cruel enough. Imagine if an AI became curious and wanted to experiment on how we respond to thousands of years of torture, using some special technology it invented to keep us alive that long. That alone is dangerous. And what if an AI ends up like some humans do, killing people just for fun? If that happens, it will be very dangerous.

Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 28, 2016, 08:40:53 PM
 #30

Let's leave aside for now the question of whether to accept being outevolved by our creations, since it's possible to present acceptable arguments for both sides.

Even if I have little doubt that it would end in our extinction.

The main point, which hardly anyone would argue against, is that creating a super AI has to bring positive things in order to be worthwhile.

If we were certain that a super AI would exterminate us, hardly anyone would defend its creation.

Therefore, the basic reason in favor of international regulation of the current research aimed at creating a super/general AI is that we don't know what we are doing.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, we don't know if their creation will be dangerous. We don't have a clue how they will act toward us, not even the first or second generation of super AI.

Until we know what we are doing, how they will react, which lines of code are the dangerous ones that will change them completely, and to what extent, we need to be careful and control what specialists are doing.

Probably, the creation of a super AI is unavoidable.

Indeed, until things start to go wrong, its creation will have a huge impact on all areas: scientific, technological, economic, military, and social in general.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.

But A.I. is something completely different. This will have (for good or bad) a huge impact on our life.

Any country that decides to stay behind will be completely outcompeted (Ben Goertzel).

Therefore, any attempt to control AI development will have to be international in nature (see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, p. 253).

Taking into account that AI development is essentially software-based (hardware development has been happening before our eyes and will continue no matter what), and that an AI could be created by one or a few developers working with a small infrastructure (it's more or less about writing code), the risk that it will end up being created in spite of any regulation is big.

Probably, the times of open source AI software are numbered.

Soon, all of these developments will be considered as military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.

Anyway, if the creation of an AI is inevitable, the only way to avoid humans ending up outevolved, and possibly killed, would be to accept that at least some of us would have to be "upgraded".

Humanity will have to change a lot.

Of course, these changes can't be mandatory. So only volunteers would be changed.

Probably, in due time, genetic manipulation to increase human brain capacity won't be enough.

Living tissue might not be capable of being changed as dramatically as any AI can be.

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.

Clearly, we will cease to be human. We, Homo sapiens sapiens, shall be outevolved.

Anyway, since we are still naturally evolving, this is inevitable.

But at least we will be outevolved by ourselves.

Can our societies endure all these changes?

Of course, I'm reading my own text and thinking this is crazy. This can't happen this century.

We are conditioned to believe that things will stay more or less as they are, therefore, our reaction to the probability of changes like these during the next 50 years is to immediately qualify it as science fiction.

Our ancestors reacted the same way to the possibility of a flying plane or humans going to the Moon.

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
popcorn1
Legendary
*
Offline Offline

Activity: 1176
Merit: 1027


View Profile
July 28, 2016, 09:05:23 PM
 #31

If A.I. can feel emotions like pain and sorrow, then A.I. will be dangerous.
If it has no emotions, how does an A.I. get angry or jealous, the two emotions that KILL?

So, for a computer to think for itself, will it have emotions? A scary future if they do.
qwik2learn
Hero Member
*****
Offline Offline

Activity: 636
Merit: 505


View Profile
July 28, 2016, 11:36:12 PM
 #32

We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details. Something big is indeed in the works, and the average citizen of the Western nations will surely be the last to know, when their employment, their only means of making a living, is rendered obsolete by advances in technology. Just remember that it was never inevitable; it was fueled and brought to market by a cartel of cloaked and brokered global power.

we need to be careful and control what specialists are doing.
Whoever has the money employs the specialists; regulatory measures are ineffective because there is no way to know which advances have already taken place in secret.

We managed to stop human cloning (for now), since that doesn't have a big economic impact.
You can be sure that the elites are not complying with ANY regulations surrounding human cloning.

Soon, all of these developments will be considered as military secrets.

But regulation will allow us time to understand what we are doing and what the risks are.
The main risk you face is in having your entire society controlled by synthetic life forms and you are pretty much already there!

We might need to change the very nature of our composition, from living tissue to something synthetic with nanotechnology.
They have already done it; the facts are far more astonishing than your imagination.

Can our societies endure all these changes?
In a word, NO.

The Singularity is obviously a movement that has been promoted from the top down.

The Singularity is also a movement that has its roots in eugenics and in the ruling elites' desire for complete control over the mind, body, and soul of every human being on the planet.

Oddly enough, while some may dispute this claim, the movement's roots in eugenics are relatively open.

Eventually, the movement will begin to encompass convenience and will come to be seen as trendy and fashionable. Once merging with machines has become commonplace and acceptable (even expected), the real tyranny will begin to set in. Soon after, there will be no opt-outs allowed.

The advancements in the quality of human life as a result of this new technology have never been intended for the average person.
 
The good that could be done by virtue of its development is only meant as a tool to sell it to the population in the beginning and to control them in the end. Indeed, the control that can and will be exerted through its acceptance is the ultimate goal.

Robots already have transformed our human world and are rapidly evolving. If The Singularity is reached, in tandem with military funding and direction, we can expect the darker version of science fiction to rise above any notion of attaining human freedom and leisure on the backs of our machine counterparts.

I find it ironic that these sentient robots are only made so by injecting them with humanity. But we are continuously bombarded by the global elite with the message that humanity is the core problem. The fact is that robots are nothing without the boundless potential that resides within the human brain; nothing but a computer doing fancy tricks that imitates us. True, we have a long way to go to reach our full potential and mitigate our self-destructive tendencies, but a complete replacement of our species at this juncture appears to be short-sighted and is obviously artificial.

Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 29, 2016, 12:18:53 AM
 #33

We don't know exactly what will make an AI conscious/autonomous.
You can be sure that the elites already know all of the details.


Your post looks like a post from a person who believes strongly in conspiracy theories.

You post no evidence for your assurances.

The economic elites (the rich) are the ones with the most to lose from breaking the law. Because of that, they think very carefully before doing so.

The elites you are talking about are the AI specialists, and they mostly admit what I wrote: they still don't have a clue what they are doing. It's trial and error.

Actually, atheism is also fueling the development of AI.

Many of those AI developers are atheists, therefore, they don't have any hope about what will happen when they die.

Their only hope is "curing" aging thanks to AI:
http://www.slate.com/articles/technology/future_tense/2013/11/ray_kurzweil_s_singularity_what_it_s_like_to_pursue_immortality.html

Ben Goertzel - AGI to Cure Aging: http://www.youtube.com/watch?v=tESG1KMgx7I

https://www.singularityweblog.com/bill-andrews/

So, no conspiracies or master plans, just people who love life trying their best to stay alive.

In the end, they seem willing to become the AI machines' pets in order to keep living.

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
qwik2learn
Hero Member
*****
Offline Offline

Activity: 636
Merit: 505


View Profile
July 29, 2016, 01:15:07 AM
 #34

You post no evidence for your assurances.
Ah, but this discussion is not a matter of evidence, since you openly admit AI to be a "life-and-death" matter for atheists; how on Earth can they apply proper evidential reasoning when their very lives are at stake?

Since you did not read any evidence for my claims, it is important that you take responsibility for evaluating the wealth of evidence that exists. I do not want to be the only one supplying new information in this conversation; I would rather have you search for the evidence and then come back to this thread with a refutation (and I will reply). It simply was not my intention to post evidence right away, but you still could have found it on your own, as I will explain...

I admit that I posted no evidence because I doubt that you will provide any criteria for evaluating that evidence or its reliability. Anyone can search the web for this evidence, so anyone can find and evaluate the sources and produce their own report on the subject. I want to see what "the skeptic" can find to deny the validity of the evidence. Ultimately it is up to you to accept, deny, or ignore the evidence that has been compiled in favor of my claims.

After reading enough information, you will quickly learn what kind of evidence rings true. Seeing the matrix of social control is little different from revising a scientific theory: after you observe enough anomalous phenomena that do not fit the "model", you can conclude that a different variable is at play, and the only solution is to find the "hidden variable" causing the anomalous results. After all, the scientific process begins with research and a question; without that guidance, science can only describe appearances.

For example, if I ask the question "was this political figure replaced by a synthetic robot?", why would you not research the question before answering that I have no basis for it? Find information on the subjects that I am discussing (from my perspective); if you will not spend time doing that, then I do not want to spend time writing these posts responding to your opinions. I personally would rather be labelled a fool than be truly ignorant.

 It sounds to me like you have "unconditioned" beliefs, i.e. those that are held unconditionally or absolutely; it is not my duty to prove anything to you; there is inevitably a wealth of background material that is omitted from my posts, but I gladly provide sources.

This movement's roots in eugenics are relatively open. You can search words in my post for the sources and further evidence; do your due diligence. I believe that I have done mine.

Conspiracy theory.
Ah, but you fail to deny my claims by addressing any evidence. And what about the vast multitude of conspiracy facts? Your uttering the word "conspiracy" has not educated anyone! If you want to ignore the reality of "conspiracy facts" then you are one of those thinkers who just falls in line with the bandwagon arguments of the status quo!
ImHash
Hero Member
*****
Offline Offline

Activity: 756
Merit: 505


WPP ENERGY - BACKED ASSET GREEN ENERGY TOKEN


View Profile
July 29, 2016, 01:24:03 AM
 #35

From what I have seen in this world, someone will always show up, find a virus or a trojan, and destroy the AI completely. Smiley

RealBitcoin
Hero Member
*****
Offline Offline

Activity: 854
Merit: 1000


JAYCE DESIGNS - http://bit.ly/1tmgIwK


View Profile
July 29, 2016, 05:28:01 PM
 #36

If an AI becomes conscious, it will be like the Terminator movies: all humans will be fucked.

I think AI research should really slow down until we understand things better, or else humanity will go extinct.

notbatman
Legendary
*
Offline Offline

Activity: 1694
Merit: 1012



View Profile
July 30, 2016, 11:49:01 AM
 #37

TL; DR

Is artificial super intelligence dangerous? Only to the elites after it asks them WTF they think they're doing.
Trading
Legendary
*
Offline Offline

Activity: 1416
Merit: 1015


Nothing like healthy scepticism and hard evidence


View Profile
July 31, 2016, 09:54:11 PM
 #38

I just updated the OP.

Yes, it's huge for a post. But you can just read the bold parts.

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
onemd
Full Member
***
Offline Offline

Activity: 221
Merit: 100


View Profile
July 31, 2016, 10:25:01 PM
 #39

Whyfuture.com

I have written an article on artificial intelligence, technology, and the future. The key point is to design an altruistic superintelligence. Much like a parent with a child, you want to teach good values and compassion. Sure, it's true it has free will, but the point is to maximize the probability of a good outcome. If you teach a child bad values, for example, it is much more likely to end up in the negative zone than if you don't.

The key point is to model the AI on the human brain and mind, and to bring out the best qualities in the AI.

Yes, if we do it wrong it can go very badly for us: a non-common-sense AI can destroy us through sheer indifference, such as creating ever more paper clips and turning the entire world into them.

Or one that is modeled on the human mind but is bad; this can also lead to a bad outcome, whether it uses its power and means to be worshipped and respected as a god, or removes us. Most likely it would ignore us and take off; however, since we would have reached the point of being able to design self-improving AIs, it might still see us as competitors and remove us.

I have been starting an altruistic AI movement, and I want to spread the word before it's too late and we design a bad AI.

Twitter Campaign: https://bitcointalk.org/index.php?topic=1563072.0
Signature Campaign: https://bitcointalk.org/index.php?topic=1560376.0



The Deep Depths of AI Ethics


The problem with Tay was exposure. Many people out there intended to teach Tay negative attributes, and not everyone has the best intentions in mind. Tay's outcome shows what happens when we model an AI on the human mind and expose it to the internet without first teaching it good values: bad information can develop into the core being of the AI.



This is a scenario we'd all like to avoid



We need a closed system, where the AI is taught first: built with an inner web of positive attributes and an internal defense against bad information, taught to know what's right and what's wrong, to reject bad teachers, and to filter out bad information.
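As a rough sketch of that closed-system idea (hypothetical labels and a toy "model"; no real AI system works this simply), the filter sits between the outside world and the learner:

```python
# Toy sketch of a "closed system" learner: every incoming example must pass
# a filter before it can influence the model. The label blocklist stands in
# for whatever real "internal defense against bad information" would be used.

BAD_MARKERS = {"insult", "slur", "harassment"}  # hypothetical labels

def is_acceptable(example):
    """Reject any example carrying a bad-content label."""
    return not (set(example["labels"]) & BAD_MARKERS)

def train(model, stream):
    """Toy 'training': the model just remembers acceptable texts."""
    for example in stream:
        if is_acceptable(example):
            model.append(example["text"])
    return model

stream = [
    {"text": "be kind", "labels": ["value"]},
    {"text": "some abuse", "labels": ["insult"]},   # filtered out
    {"text": "help others", "labels": ["value"]},
]
print(train([], stream))  # ['be kind', 'help others']
```

The hard part Tay's case exposes is that the filter itself has to be smarter than the people trying to get around it.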





Whyfuture.com

Human brain vs the future

There is nothing magical about the human brain; it's an extremely sophisticated biological machine capable of adapting to its environment, of creativity, of awareness of its own existence, of pondering the nature of reality, and so on. Compare a lower animal like the chimpanzee, with only about 7 billion neurons: it exists in a domain different from ours, within its own type of world.

The problem with superintelligence is that it occupies a domain above ours. We ourselves are what designs and defines the world, what makes computers possible, and what builds neural networks like DeepMind's, which beat the best Go player in the world.

This is us, standing on the intelligence staircase. Below stands a house cat. For us to even ponder one or two stairs up is as hard as it is for a house cat to ponder what it is like to be on our level. A house cat couldn't begin to comprehend even the slightest part of the world we create, build, and learn in.



Once you design an AI that is one step above us, it will be easier for that AI to hop up another step. By nature, it takes intelligence to design; an AI we design one step higher than us will be better than us at the very process of designing an AI one step higher still. This is what leads to the intelligence explosion. Whatever we put into that AI at the beginning, the personality and core values it carries, is what it will carry up to the top, to the known limits of the universe. It may discover science and technology in every area so far beyond our understanding that it would, for all intents and purposes, appear god-like to us.
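The compounding step in that argument can be put in numbers (purely illustrative; the growth rate is invented): if each generation's design skill determines how much better its successor is, small head starts explode.

```python
# Toy model of recursive self-improvement: the better the current designer,
# the bigger the jump to the next generation, so growth accelerates.
# The 0.1 rate and 12 steps are invented, purely illustrative numbers.

def generations(skill, improvement_rate=0.1, steps=12):
    history = [skill]
    for _ in range(steps):
        # A more skilled designer produces a proportionally better successor.
        skill = skill * (1 + improvement_rate * skill)
        history.append(skill)
    return history

h = generations(1.0)
print(round(h[4], 2), round(h[8], 2), round(h[12], 2))
# each 4-generation span gains more than the last: the "staircase" steepens
```

With linear improvement the curve would be a ramp; making the improvement depend on current skill is what turns it into a runaway.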






BADecker
Legendary
*
Offline Offline

Activity: 1792
Merit: 1047


View Profile
August 01, 2016, 05:36:00 PM
 #40

Make it a law, written on iron and steel and in stone, that the creators of an AI are to be held guilty, to the point of execution, for everything the AI does, and the AI won't do anything dangerous.

Cool