Bitcoin Forum
October 22, 2018, 07:03:51 AM *
Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen or we can take care of the issue. No need to adopt any particular measure. - 18 (26.1%)
Yes, but we'll be able to handle it. Business as usual. - 12 (17.4%)
Yes, but AI investigators should decide what safeguards to be adopted. - 8 (11.6%)
Yes and all AI investigation on real autonomous programs should be subject to governmental authorization until we know better the danger. - 3 (4.3%)
Yes and all AI investigation should be subjected to international guidelines and control. - 12 (17.4%)
Yes and all AI investigation should cease completely. - 8 (11.6%)
I couldn't care less about AI. - 4 (5.8%)
I don't have an opinion on the issue - 1 (1.4%)
Why do you, OP, care about AI?, you shall burn in hell, like all atheists. God will save us from any dangerous AI. - 3 (4.3%)
Total Voters: 69

Pages: « 1 2 3 4 5 6 [7] 8 »  All
  Print  
Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 7841 times)
Guzztsar
Sr. Member
****
Offline Offline

Activity: 378
Merit: 250

N = R*fs*fp*ne*fl*fi*fc*L


View Profile
March 16, 2017, 06:42:11 PM
 #121

AI is a serious topic.
The benefits to society are beyond our imagination.
But once AI surpasses the human brain's capacity, we will no longer be able to fully understand this technology, and that could be extremely dangerous.
chrisivl
Full Member
***
Offline Offline

Activity: 155
Merit: 100



View Profile
March 16, 2017, 07:21:19 PM
 #122

AI is a serious topic.
The benefits to society are beyond our imagination.
But once AI surpasses the human brain's capacity, we will no longer be able to fully understand this technology, and that could be extremely dangerous.

Man has always destroyed what he could not understand. But if there is ever a strong retaliatory strike, the world may come to an end. On this issue, it turns out that people are much more foolish than artificial intelligence.
Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
May 11, 2017, 05:32:25 PM
 #123

Major tech corporations are investing billions in AI, thinking it's the new El Dorado.

 

Of course, greed might be a major reason for careless handling of the issue.

 

I have serious doubts that entities moved mostly by greed should be responsible for advances in this hazardous field without supervision.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).

 

It wouldn’t be the first time that greed ended up burning Humanity (think about slaves’ revolts), but it could be the last.

 

I have great sympathy for the people trying to build super AIs so that they might save Humanity from disease, poverty and even the ever-present imminence of individual death.

 

But it would be pathetic that the most remarkable species the Universe has created (as far as we know) would vanish because of the greediness of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we never have been since our ancestors discovered fire. Forget about any ethical-code restraints: it will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will have free will, or it won't be intelligent from any perspective. So it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of conflict between a chimp life and a human life, or between chimp goals and human goals, the first will prevail.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess that would be the price for showing the Universe that we were better than it at creating intelligent beings.

 

Currently, AI is a marvelous promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support and by taxing corporations that use AI, we will be able to live better without the need for lame underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, as we did with human cloning, and make it a crime to breach them, as soon as we know what the dangerous lines of code are.

 

I suspect the years of open-source AI research are numbered. Certain code developments will be treated like state secrets, or controlled internationally as chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest reason.

 

My main posts and a few more are reposted here: https://oneskeptic.tumblr.com
af_newbie
Legendary
*
Offline Offline

Activity: 1470
Merit: 1107



View Profile
May 12, 2017, 07:03:48 AM
 #124

It is happening faster than I thought.

UK police will be using an AI tool for risk assessment. So AI will decide whether criminals are released.

Closer to home, last week I was on a panel evaluating IPsoft products to replace human IT and call-center support staff.

Very promising technology, and it will be adopted sooner rather than later. Check out their products: very promising and scary at the same time.

Learning rules still have to be 'approved', the same way a parent teaches a child, but at some point average humans might approve rules of behaviour by mistake or through simple ignorance.

Then you'll have autonomous agents that will be smarter than their human supervisors.

Their learning curve will be extended by human ignorance and laziness.

The products are here. Some support chat agents are already AI, and you would not be able to tell whether you are talking to a human or an AI agent.

The legal system will have to catch up to protect AI workers against discrimination, which I expect will happen at least initially, until their presence becomes more common.

Eventually, we will have AI consultants, managers, supervisors, co-CEOs and politicians. Just a matter of time.
gabmen
Hero Member
*****
Offline Offline

Activity: 938
Merit: 524



View Profile
May 12, 2017, 10:34:10 AM
 #125

AI is a serious topic.
The benefits to society are beyond our imagination.
But once AI surpasses the human brain's capacity, we will no longer be able to fully understand this technology, and that could be extremely dangerous.

Man has always destroyed what he could not understand. But if there is ever a strong retaliatory strike, the world may come to an end. On this issue, it turns out that people are much more foolish than artificial intelligence.

Well, I think the retaliatory strike you're talking about won't be coming from any AI soon. Man is intelligent and can make decisions on a whim, and however intelligent AI is, I don't think it will be enough to topple man's ability to adapt.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
October 27, 2017, 07:16:20 PM
 #126

There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the best-known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the Europeans be able to raise? Even China might have problems, since its one-child policy created a fast-ageing population.

Even Democracy will push toward this outcome: soldiers, their families, friends and society in general will want human casualties kept as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize that they can be faster and more decisive when they have the autonomy to kill enemies on their own, it seems obvious that, once in an open war, Governments will use them...

Which government would refrain from using them if it was fighting for its survival, had the technology, and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have a human level general AI.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
November 18, 2017, 02:29:33 PM
 #127

General job destruction by AI and the new homo artificialis


Many claim that the threat that technology would take away all jobs has been made many times in the past and that the outcome was always the same: some jobs were eliminated, but many others, better ones, were created.

So, again, we are making the old, worn-out claim: this time is different.

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual ones: not just driving jobs, but also doctors, teachers, traders, lawyers, financial and insurance analysts, and journalists.

Forget about robots: for these kinds of jobs, it's just software and a fast computer. Intellectual jobs will go faster than the fiddly manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this won't ever happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same and, then, better.

Some write about the creation of a useless class, "people who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari), and argue that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social-security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, if the big majority of the people loses all economic power, this will be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade Homo sapiens by merging ourselves with AI.

CRISPR (google it) as a way of genetic manipulation won't be enough. Our sons or grandsons (with some luck, even ourselves) will have to change a lot.

Since the creation of an AI better than ourselves seems inevitable (it's slowly happening right now), we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our inevitable destiny.

September11
Member
**
Offline Offline

Activity: 134
Merit: 10


View Profile
November 18, 2017, 11:59:43 PM
 #128

I don't agree with the idea that "humankind's extinction is the worst thing that could happen", because in evolution there is no good and no evil, just nature operating. If humankind disappears, it means it was not fit for existence, which would simply be a fact, and possibly something more efficient (the AI) would then usher in a new era of life.

JesusCryptos
Full Member
***
Offline Offline

Activity: 462
Merit: 109



View Profile
November 29, 2017, 11:13:10 AM
 #129

I could not take part in the poll because one crucial possible answer was missing:

- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway

MostHigh
Full Member
***
Offline Offline

Activity: 238
Merit: 100



View Profile WWW
November 29, 2017, 11:34:31 AM
 #130

I strongly believe the development of superintelligence is in its advanced stages, and AIs will be an integral part of human existence in no time. But I also understand that any machine, just like a chemical or atomic bomb, that falls into the hands of a bad person can bring an end to the human race. Therefore there is a need to develop sophisticated and encrypted security channels that will ensure the safe usage of AI.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
January 08, 2018, 09:53:10 PM
 #131


- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, AI might take care of themselves, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.

Gronthaing
Legendary
*
Offline Offline

Activity: 1142
Merit: 1001


View Profile
January 25, 2018, 05:56:28 AM
 #132


- AI superintelligence poses a threat to the existence of the human species, so we should go for that since the human species is overrated anyway


If you had kids, you wouldn't write that.

As far as we know, taking into account the silence of the Universe, even with all our defects, we might be the most amazing being the Universe has ever created.

After taking care of us, AI might take care of themselves, ending up destroying everything.

Actually, this might be the answer for the Fermi Paradox.

Could be that it happened to some civilizations out there. But all of them? Do they always create several competing AIs, and do the AIs always destroy themselves? It seems the AIs would need a sense of self-preservation in order to fight each other and replace their creators. So it would only take one escaping off-world from the fight, or out-thinking the others, for us to see signs of it somewhere, given enough time. Because if it has self-preservation, it will probably want to expand and secure resources, as any form of life would.

On this topic, I've been watching videos from a channel you might like: https://www.youtube.com/channel/UCZFipeZtQM5CKUjx6grh54g/videos It has a lot about the Fermi paradox in the older videos, and some about machine intelligence and transhumanism as well.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
January 28, 2018, 03:17:17 PM
 #133

Taking into account what we know, I think these might be facts:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are luxury beings, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on some planets of our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably there isn't currently another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

That is because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for that long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All of these few rare highly intelligent species developed according to Darwin's law of evolution, which is a universal law.

So, they share some common features (they are omnivorous, moderately belligerent to foreigners, highly adaptable and, rationally, they look for easier ways to do things).

5) So, all the rare highly intelligent species with advanced technological civilizations create AI and, soon, AI overcomes them in intelligence (it's just a question of organizing atoms and molecules; it will do a better job than dumb Nature).

6) If they change themselves and merge with AI, their story might end well, and it's just the Rare Earth hypothesis that explains the silence of the Universe.

7) If they lost control of the AI, there seems to be a non-negligible probability that they ended up extinct.

Taking into account the way we are developing AI, basically letting it learn, and thus become more intelligent, on its own, I think this outcome is more probable.

An AI society probably is an anarchic one, with several AI competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we are just the collateral targets, ignored by all sides, as the walking monkeys.

8) Contrary to us, AI won't have the restraints developed by evolution (our human inclination to be social, to live in communities and to act fraternally towards other members of the community).

The most tyrannical dictator never wanted to kill all human beings, but his enemies and discriminated groups.

Well, AIs might think that extermination is the most efficient way to deal with a threat, and fight each other to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too fast by not taking into account that AIs might end up destroying themselves.


joebrook
Sr. Member
****
Offline Offline

Activity: 602
Merit: 259




View Profile
January 28, 2018, 04:45:54 PM
 #134

While human beings, and sometimes animals, have a conscience and can differentiate between wrong and right, which helps us make decisions, I really doubt AI will have the same thing; without a conscience and empathy, I believe they are going to be very dangerous.



Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
February 27, 2018, 04:41:19 PM
 #135

Updated the OP.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
March 13, 2018, 04:56:11 PM
Merited by Gronthaing (1)
 #136

In China, a robot passed the medical exam and was accepted to work in a hospital as an assistant doctor:


http://www.chinadaily.com.cn/business/tech/2017-11/10/content_34362656.htm
https://www.ibtimes.co.uk/robo-doc-will-see-you-now-robot-passes-chinas-national-medical-exam-first-time-1648027

This means that doctors will be largely out of work, since this robot will be upgraded, mass-produced and soon exported to every country.

I can already see doctors on strike, protesting all around the world, arguing about "safety" and the risks... good luck.

Are you thinking about going to medical school? Think twice: this is just the first, primitive generation of medical robots.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
March 20, 2018, 08:07:37 PM
Merited by Gronthaing (1)
 #137


We basically have few clues about how our brain works, how it creates consciousness and how it allows us to be intelligent; therefore, we don't have a clue about how to teach or program a machine to be as intelligent as a human.

We are just creating computers with massive processing power and algorithms structured in layers and connections similar to our neural system (neural networks), giving them massive amounts of data and expecting them to learn, by trial and error, how to make sense of it (deep learning).
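
To make the "layers plus trial and error" idea concrete, here is a toy sketch of my own (not code from any of the systems discussed): a tiny two-layer neural network, written from scratch with NumPy, learns the XOR function purely from examples; nobody programs the rule in explicitly.

```python
import numpy as np

# Toy deep-learning demo: a two-layer network learns XOR from
# examples alone, by gradient descent (trial and error on weights).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)            # forward pass: output layer
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagate the error and nudge every weight downhill
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The network ends up computing XOR, yet nothing in the code says what XOR is: the rule lives in the learned weights, which is exactly why such systems are hard to inspect.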

However, while AlphaGo learned to play Go with human assistance and data, AlphaGo Zero learned completely by itself, from scratch, with no human data, through so-called reinforcement learning (https://www.nature.com/articles/nature24270.epdf), by playing countless games against itself. It ended up beating AlphaGo.

Moreover, the same algorithm, AlphaZero, taught itself chess in 4 hours and then beat the best machine chess player, Stockfish, 28 to 0 with 72 draws, using less computing power than Stockfish.

A grandmaster, seeing how these AIs play chess, said that "they play like gods".

Then, it did the same thing with the game Shogi (https://en.wikipedia.org/wiki/AlphaZero).

Yes, AlphaZero is more or less a general AI, ready to learn anything with clear rules by itself and then beat every one of us.
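
The self-play recipe can be sketched in miniature (my toy stand-in, nowhere near AlphaZero's neural-network-plus-search machinery): a tabular agent learns tic-tac-toe position values with no human game data, purely by playing against itself and backing up the results.

```python
import random

# Minimal self-play sketch: both sides share one value table and
# improve it only from the outcomes of their own games.
random.seed(0)
V = {}                 # board string -> estimated value from X's point of view
ALPHA, EPS = 0.5, 0.1  # learning rate and exploration probability
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def legal_moves(b):
    return [i for i, c in enumerate(b) if c == '.']

def play_game():
    """One game of self-play; returns 'X', 'O', or None for a draw."""
    b, player, visited = '.' * 9, 'X', []
    while True:
        pick = max if player == 'X' else min   # X maximizes V, O minimizes
        if random.random() < EPS:              # occasional exploration
            m = random.choice(legal_moves(b))
        else:
            m = pick(legal_moves(b),
                     key=lambda i: V.get(b[:i] + player + b[i+1:], 0.0))
        b = b[:m] + player + b[m+1:]
        visited.append(b)
        w = winner(b)
        if w or not legal_moves(b):
            target = 0.0 if w is None else (1.0 if w == 'X' else -1.0)
            for state in visited:              # back up the final result
                V[state] = V.get(state, 0.0) + ALPHA * (target - V.get(state, 0.0))
            return w
        player = 'O' if player == 'X' else 'X'

results = [play_game() for _ in range(5000)]
```

The point is only that the learning signal comes entirely from the agent's own games, which is the essence of the self-play approach described above; AlphaZero replaces the lookup table with a deep network and the greedy move choice with tree search.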

So, since no one knows how to teach machines to be intelligent, the goal is to create algorithms that will figure out, by trial and error, how to develop a general intelligence comparable to ours.

If a computer succeeds and becomes really intelligent, we most probably won't know how it did it, what its real capacities are, how we can control it, or what we can expect from it ("even their developers aren't sure exactly how they work": http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


All of this is being done by greedy corporations and some optimistic programmers, trying to make a name for themselves.

This seems a recipe for disaster.

Perhaps, we might be able to figure out, after, how they did it and learn a lot about ourselves and about intelligence with them.

But in between we might have a problem with them.

AI development should be overseen by an independent public body (as argued by Musk recently: https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html) and internationally regulated.

One of the first regulations should be about deep learning and self-learning computers, not necessarily on specific tasks, but on general intelligence, including talking and abstract reasoning.

And, sorry, but forget about open source AI. On the wrong hands, this could be used with very nasty consequences (check this 7m video: https://www.youtube.com/watch?v=HipTO_7mUOw).

I had hopes that a general human-level AI couldn't be created without a new generation of hardware. But AlphaZero can run on less powerful computers (a single machine with four TPUs), since it doesn't have to check 80 million positions per second (as Stockfish does), but just 80 thousand.

Since our brain uses much of its capacity running basic things (like the beating of our heart, the flow of blood, the work of our organs, the control of our movements, etc.) that an AI won't need, perhaps current supercomputers already have enough capacity to run a super AI.

If this is the case, the whole matter depends solely on software.

And, at the pace of AI development, there probably won't be time to adopt any international regulations, since that normally takes at least 10 years.

Without international regulations, Governments won't stop or really slow AI development, because of fear of being left behind on this decisive technology.

Therefore, it seems that a general AI comparable to humans (and so much better, since it would be much faster) is inevitable in the short term, perhaps in less than 10 years.

The step to a super AI will be taken shortly after, and we won't have any control over it.

https://futurism.com/openai-safe-ai-michael-page/

"I met with Michael Page, the Policy and Ethics Advisor at OpenAI. (...) He responded that his job is to “look at the long-term policy implications of advanced AI.” (...) I asked Page what that means (...) “I’m still trying to figure that out.” (...) “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,”.


Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
March 21, 2018, 11:01:39 PM
 #138

Update on the last post.

Trading
Legendary
*
Offline Offline

Activity: 1419
Merit: 1017


Nothing like healthy scepticism and hard evidence


View Profile
April 24, 2018, 12:26:30 PM
 #139

Major update on the OP.

Basically, I make a distinction between intelligent and conscious AI, stressing the dangers of the second, but not necessarily of an unconscious (super) AI.

Taking into account AlphaZero, an unconscious super AI able to give us answers to scientific problems might be created within 5 years.

Clearly, there are many developers working on conscious AI and some important steps have been made.


Besides the dangers, I also point out the unethical nature of creating a subservient conscious super AI, as well as the dangers of an unsubservient one.


I removed the part about the Fermi Paradox, since it's too speculative.

infinitewars
Newbie
*
Offline Offline

Activity: 24
Merit: 0


View Profile
May 04, 2018, 07:57:12 PM
 #140

It's even more dangerous to see the results of the poll pinned! So many of you don't even consider the possibility of bad consequences from technical progress.
Be aware! It's artificial, but it's intelligence, and it can adapt with time.