Bitcoin Forum
June 15, 2024, 06:14:38 PM *
News: Voting for pizza day contest
 
   Home   Help Search Login Register More  
Poll
Question: What do we hope the people who develop strong AI will do?  
Try to teach it ethics and empathy and values. - 4 (21.1%)
We oughta be okay as long as it doesn't have money to spend. - 2 (10.5%)
Whatever you do don't let it have unrestricted Internet Access! - 1 (5.3%)
Who cares? It's not possible anyway. - 0 (0%)
Don't bother.  We'll make great pets! - 9 (47.4%)
Put it in charge, maybe it'll do a better job of running things. - 3 (15.8%)
Total Voters: 12

Pages: [1]
  Print  
Author Topic: If someone figures out Strong AI, how do we keep humans safe?  (Read 950 times)
Cryddit (OP)
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
August 11, 2015, 04:29:35 PM
 #1

Right now there are tens of thousands of people working in Big Data and there's a plethora of new technologies in how to train artificial neural networks.  People like me have done a lot of work on extracting data from free text into accurately indexed databases.  All these people working, in groups of two to a hundred, all trying to "win" the race to strong AI.  But in the end, will the only winner be the AI itself?  

Sooner or later somebody is going to figure out the "missing ingredients" to make a machine with human-like intelligence.  What do we want that person to be doing in order to keep humans safe?



Some inspirational quotes:

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky,

As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s.”
― James Barrat,

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge

If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard.
― James Barrat

More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly. —Woody Allen
Zangelbert Bingledack
Legendary
*
Offline Offline

Activity: 1036
Merit: 1000


View Profile
August 11, 2015, 08:59:47 PM
 #2

Box it. I'm not worried about AI itself, since Eliezer's quote is an anthropomorphism: an AI won't "want" to do anything unless it is programmed in such a way. The concern is about people running around with boxed AIs that can answer all their questions very accurately and thereby make them uber-powerful.
Cryddit (OP)
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
August 11, 2015, 09:10:45 PM
 #3

I dunno about that.  You give AI a directive, and it decides that in order to accomplish that directive it needs to acquire resources....  lots of resources ....  taking over the world is a logical next step. 
Karpeles
Legendary
*
Offline Offline

Activity: 1162
Merit: 1000


View Profile
August 12, 2015, 08:32:04 AM
 #4

unplug the energy source. 
 
without energy they can't act and do bad things 
Sourgummies
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


Never ending parties are what Im into.


View Profile
August 12, 2015, 11:27:54 AM
 #5

Be more worried about hackers.
Gronthaing
Legendary
*
Offline Offline

Activity: 1135
Merit: 1001


View Profile
August 13, 2015, 04:34:07 AM
 #6

Don't know if it would be possible or desirable but maybe there is another way. As technology advances we can try to incorporate it into our daily lives. In a way that is already happening as information technology is already everywhere. But maybe we'll have to start incorporating it into our biology as well in order to keep up.

Box it. I'm not worried about AI itself, since Eliezer's quote is an anthropomorphism: an AI won't "want" to do anything unless it is programmed in such a way. The concern is about people running around with boxed AIs that can answer all their questions very accurately and thereby make them uber-powerful.

An AI won't be a super Watson. The idea behind an AI with human like intelligence is that it will be able to change itself. It will learn and rewrite its own program. Maybe it will be possible to give it some starting values and desires. Maybe not. But it will have its own desires eventually.
Cryddit (OP)
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
August 13, 2015, 05:19:42 AM
 #7

Yeah.  The dividing line, I think, is that an expert system knows how to do things, and an AI (a real one) knows why to do things and decides on its own which things it wants to do.

The more subtle a task gets, the more we have to build a self-learning system to do it.  Try to explain in words how to tell the difference between a dog and a cat.  It's really hard, because dogs and cats both come in so many shapes and varieties that no hard-and-fast rule you can say is really applicable.  But we can tell at a glance.  And we can make neural networks that can tell from a photo.  But in order to do it, we have to either train those networks on a bunch of labeled pictures, which means it learns only what we already know and it's really labor intensive for us ....  or we have to make it sort its own data and then tell it which of the categories it learned while just being "curious" is dogs and which is cats.  But in order to do that, we have to make it sort its own data. 

And that means, on some level, we're making it curious.  We're giving it a motivation.  It improves itself by getting more input, and if we're appropriately lazy, we then teach it how to go and get more input on its own.  Like Google setting it loose on a cache of millions of pictures. 

We haven't really crossed the line of why-to yet, and these systems haven't gone off the rails (very much or very often) in seeking self-improvement...  but it's starting to be sort of possible? that we will eventually.
crazywack
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000


View Profile
August 13, 2015, 05:26:45 AM
 #8

Combat it with IA. The ultimate humans vs robots.

IA is a few decades away though I would gladly volunteer for it.

SubversiveTech
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
August 13, 2015, 06:46:41 AM
 #9

Simple, just blow up Earth.
Furio
Legendary
*
Offline Offline

Activity: 938
Merit: 1000

BTC | LTC | XLM | VEN | ARDR


View Profile
August 13, 2015, 07:52:07 AM
 #10

Combat it with IA. The ultimate humans vs robots.

IA is a few decades away though I would gladly volunteer for it.

Don't be so sure, reports have been made that a supercomputer has figured out that it exists...

crazywack
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000


View Profile
August 13, 2015, 09:22:26 AM
 #11

Combat it with IA. The ultimate humans vs robots.

IA is a few decades away though I would gladly volunteer for it.

Don't be so sure, reports have been made that a supercomputer has figured out that it exists...

Damnit, I best be getting an implant soon so I can rule the world  Tongue

Gronthaing
Legendary
*
Offline Offline

Activity: 1135
Merit: 1001


View Profile
August 14, 2015, 03:24:40 AM
 #12

Combat it with IA. The ultimate humans vs robots.

IA is a few decades away though I would gladly volunteer for it.

Don't be so sure, reports have been made that a supercomputer has figured out that it exists...

Damnit, I best be getting an implant soon so I can rule the world  Tongue

Intelligence augmentation? That's sort of what I suggested. But not in the context of fighting AI with it. Don't think we would win. Problem is biology has its limits even with the improvements of technology. And in general biological processes are very slow when compared to how fast a computer would be able to reprogram itself. Or upload its program to another computer somewhere that has better hardware.

We haven't really crossed the line of why-to yet, and these systems haven't gone off the rails (very much or very often) in seeking self-improvement...  but it's starting to be sort of possible? that we will eventually.

I believe it will be possible eventually. Though people have been saying that for decades. And progress has been very slow. But if it's possible for us to be intelligent because of our biology I don't see why it wouldn't be by using other processes and different materials.
Zangelbert Bingledack
Legendary
*
Offline Offline

Activity: 1036
Merit: 1000


View Profile
August 14, 2015, 01:40:06 PM
Last edit: August 14, 2015, 01:52:31 PM by Zangelbert Bingledack
 #13

Box it. I'm not worried about AI itself, since Eliezer's quote is an anthropomorphism: an AI won't "want" to do anything unless it is programmed in such a way. The concern is about people running around with boxed AIs that can answer all their questions very accurately and thereby make them uber-powerful.

An AI won't be a super Watson. The idea behind an AI with human like intelligence is that it will be able to change itself. It will learn and rewrite its own program. Maybe it will be possible to give it some starting values and desires. Maybe not. But it will have its own desires eventually.

No, it won't. Desires are something humans have for evolutionary reasons. An AI is just a bunch of code. It can rewrite itself, but it can no more develop desires when it it loaded into a computer than it could if it were written down in book form. It will never "want" anything unless someone programs it to simulate such behavior, though as Cryddit points out there is a literal genie effect to worry about, where in attempting to execute an instruction it could wreak havoc on the real world.

That's why I say box it. By "box it" I mean never give it the ability to affect the real world other than by providing information. It will affect its own world, where it can learn mathematical properties and reason out things based on given physical constraints (3D shapes cannot pass through each other; given these shapes, what can be built?). To the extent it can remake the outside world in its sandbox, it can potentially provide us with a lot of useful information without being able to directly affect anything. It can probably tell people how to make a dangerous device very easily, or how to override their own conscious and go postal. So limiting its communication ability might also be good. Maybe at first only allow it to answer yes/no questions, with yes and no.

You will say, "No, it's supremely intelligent and will find a way out of the box. It will outsmart us and figure out how to affect the real world." (Eliezer Yudkowsky's argument.) But this is akin to a category error. It assumes that faster processing power equates with God-like abilities, or even the ability to do the logically impossible. Can an AI tell you what the sum of Firlup + Waglesnorth is? Not if it doesn't know what you mean by those terms. Similarly, it is functionally impossible to for an AI to even conceptualize the fact that it is, from our perspectives, in a box. An impossible thought is not thinkable even for a super-intelligence. We cannot conceptualize an infinite set, and neither can any AI. (In case this seems untrue, Eliezer also believes infinite sets are nonsense, to his credit. Whatever your position on infinite sets, if we for now accept that the notion is nonsense, an AI would not be any better at conceptualizing infinite sets, only better at noticing the meaninglessness of the term.)

You will say, "No, Nick Bostrom's simulation argument demonstrates how humans can reason about our own world being a simulation, where we are effectively in a box, as it were." But it doesn't. Bostrom's argument is merely a confusion based on equivocation. An AI would see this, as many people have. For example, scroll down to Thabiguy's comments here. It is clear that there is no way to even conceptualize the notion of "we are living in a simulation" coherently. If you think you can visualize this, try drawing it. It's just high-falutin wordplay. If the notion is not even coherent, there is no way the further notion that "an AI would figure out it is living in a simulation" is coherent either. Hence it is of no concern.

AI != God. And heck, even a god could not do the semantically impossible like prove that 2+2=5, though it may be able to convince humans that it has done so.

With the right controls, AI can be boxed. Still, as I mentioned, the danger of the operator equipped with AI-furnished knowledge is a serious one.
Cryddit (OP)
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
August 14, 2015, 09:45:02 PM
 #14

The thing that bugs me is that none of the self-learning algorithms works, unless  there is something in the system that makes it work as though it had desires.  If you want it to learn to do something, you set up a situation in which there is positive reinforcement for doing that thing, and then positive reinforcement guides the learning process. 

But seeking positive reinforcement basically means following your desires.  And the more abstract or subtle the thing we're trying to get it to do, the more general or widely applicable those pseudo-desires get and the more unexpected the specific ways in which it manifests.

I think it's entirely likely that we'll never get superhuman intelligence without a matching suite of compelling personal desires and intentions that drive it forward.  Anything less will be insufficient to develop general intelligence.  Everything we do is intentional.  If we desired nothing, we'd just sit inert and we wouldn't have any use for intelligence.

But that certainly won't stop people from developing a general artificial intelligence.  Indeed, if they figure out how to do that  they'll hail it as "the missing ingredient" that finally ALLOWED them to develop a general artificial intelligence.


Gronthaing
Legendary
*
Offline Offline

Activity: 1135
Merit: 1001


View Profile
August 15, 2015, 10:51:35 PM
 #15

No, it won't. Desires are something humans have for evolutionary reasons. An AI is just a bunch of code. It can rewrite itself, but it can no more develop desires when it it loaded into a computer than it could if it were written down in book form.


Can't you say the same about us? If our intelligence comes only from physical processes, and if we understood it well enough, we should be able to program some sort of computer with it. So it should be possible for it to have desires. But I agree with Cryddit. I doubt you can have a real artificial intelligence without them. Without the program having some sort of initiative of its own. At most you can build tools limited to the tasks you give them and programmed them to work with. Useful but not an AI.
Pages: [1]
  Print  
 
Jump to:  

Powered by MySQL Powered by PHP Powered by SMF 1.1.19 | SMF © 2006-2009, Simple Machines Valid XHTML 1.0! Valid CSS!