Author Topic: Machines and money  (Read 12755 times)
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 18, 2015, 02:16:12 PM  #161

Quote from: tee-rex
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Of course it didn't change my mind! A calculator is an external tool. Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which are the basis of all desires, goals and so on). Being much more intelligent than we are, they could follow strategies we would probably not even notice (in the beginning), and which would in any case be totally opaque to us.

Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools which are so intelligent that they take on a life of their own. That begs the question, no? The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent. But that would mean that this "we" would be totally different from what we are now.

Quote
The process of understanding something (our apple of discord) is indeed different but not far from calculating. An ability to understand faster and sharper won't change your mind by any means. The difference will be only quantitative.

Of course it would. It is even the essence of our being. You are saying that a fish that could think like a human would still be a fish? A fish that could do philosophy would still be a fish like its fellow fish?

If you are vastly more intelligent, of course your sensations, your desires, your good and bad experiences will be totally different. A fish that can do philosophy will probably be bored to death in an aquarium! It would be a totally different sentient being.

Quote
Have you seen Limitless?

No.
thejaytiesto (Legendary | Activity: 1358, Merit: 1014)
March 18, 2015, 04:04:36 PM  #162

I think a Resource Based Economy is on point and our ultimate fate, but we are still far from it; as a species we are not ready and still need a form of money. Bitcoin is objectively the best we have today as money / a store of value.
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 18, 2015, 05:39:03 PM (last edit: March 18, 2015, 06:51:19 PM)  #163

Quote from: tee-rex
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Quote from: dinofelis
Of course it didn't change my mind! A calculator is an external tool. Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which are the basis of all desires, goals and so on). Being much more intelligent than we are, they could follow strategies we would probably not even notice (in the beginning), and which would in any case be totally opaque to us.

Now you are obviously trying to conflate concepts, that is, the notion of intelligence with the notion of a tool. They are not synonymous.

Your memory (and mine too, for that matter) is also an "external" tool to our mind. Could we say that memory is intelligent or sentient? Indeed, no. I have come to think that our thought processes are also, in a way, external to our mind, that is, to self-awareness as such. I could even go so far as to say that the difference between a human being and the animals thought to have consciousness (dolphins, elephants, primates and other animals that recognize themselves in a mirror) is determined entirely by the level of development of these "external tools", not by the mind itself.
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 18, 2015, 05:39:54 PM (last edit: March 18, 2015, 09:21:35 PM)  #164

Quote from: dinofelis
Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools which are so intelligent that they take on a life of their own. That begs the question, no? The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent. But that would mean that this "we" would be totally different from what we are now.

As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent? If we substitute them with more efficient and faster purely electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

By the way, the answer to these questions is already known.
Zangelbert Bingledack (Legendary | Activity: 1036, Merit: 1000)
March 18, 2015, 07:50:51 PM (last edit: March 18, 2015, 08:45:53 PM)  #165

Quote
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

Quote from: dinofelis
That's a position that is very real :)  It is called strong solipsism.

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter. After all, what matters (for you) are your personal subjective perceptions and sensations. Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis. So postulating that the external world exists is, by itself, a good working hypothesis, because it can help you understand the correlations between your sensations. Whether that external world actually, ontologically exists or not doesn't, in fact, really matter!

Let me explain with an example. If you have the sensations that agree with "I take a hammer in my hand and I strike my toes with it", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination - even if the latter were ontologically true.

So whether that hammer really exists or not does not, in fact, matter. You understand your subjective sensations much better by taking as a working hypothesis that it does. And that is sufficient reason to do so.

Right, ontological/epistemological phrasing is just a higher-level phrasing than utility phrasing. In other words, in everyday talk it is extremely cumbersome to phrase everything in terms of utility, so we speak about things being "real" or "imagined," but these are just shorthand for different sets of utility statements, as your example with the hammer illustrates.

As we start to analyze things with unusual care, we eventually come to a point where utility phrasing is the clearest. If we try to carry the terms of everyday talk ("reality," "other people," etc.) into such an analysis, we just run around in semantic circles and confuse ourselves.
Zangelbert Bingledack (Legendary | Activity: 1036, Merit: 1000)
March 18, 2015, 08:32:22 PM (last edit: March 18, 2015, 08:49:56 PM)  #166

Quote from: dinofelis
You are saying that a fish that could think like a human would still be a fish?

We call something "a fish" or "not a fish" simply based on whether it would be useful, for our communication, to do so. We name things based on utility. If the utility picture changes, as it does with a fish who can think like a human (and therefore might be able to kill you in your sleep by splashing water on your computer and starting an electrical fire), we would likely no longer feel that the word "fish" evokes the most useful imagery for equipping someone to deal with that creature when we communicate about it. We might feel compelled to qualify it as a "superintelligent fish" or even a "human-fish." Whatever is most useful for getting the point across that you don't want to underestimate its intelligence.

Once you understand that we name things based on (methodologically individual) utility, many paradoxes are resolved. Here are two examples.

Paradox of the Heap: How many grains of sand does it take to make a heap?

Utility phrasing makes it easy. A "heap" simply means a point where you yourself find no utility in trying to keep track of individual grains in the set, either because you're unable to easily count them or because it doesn't matter to you. "Meh, it's just a heap." The answer will differ depending on the person and the context. It is no set number; it's simply when you look over and stop caring about the individuated quantity. That is why this has the appearance of a paradox and why Wikipedia doesn't even mention this obvious and fully satisfying resolution. The fundamental error in Wikipedia's presentation is to consider what a heap "really is," rather than what the term "heap" can usefully mean for each person and context, even though it is self-evident that this is how language works.
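
The utility resolution can even be put in code. Below is a minimal, hypothetical sketch (the cost and value numbers are invented for illustration): "heap" as a per-person, per-context utility threshold rather than a fixed count.

Code:
# Hypothetical sketch: "heap" applies once tracking individual grains
# stops being worth the effort. All numbers are invented for illustration.
def is_heap(grains, counting_cost_per_grain=0.5, value_of_exact_count=10.0):
    """True once counting the grains costs more than knowing the exact
    number is worth, i.e. once this speaker stops caring."""
    return grains * counting_cost_per_grain > value_of_exact_count

print(is_heap(5))                                  # False: still worth counting
print(is_heap(500))                                # True: "Meh, it's just a heap."
print(is_heap(500, counting_cost_per_grain=0.01))  # False for a patient counter

The threshold moves with the person and the context, which is exactly why no single number answers the question.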

Ship of Theseus Paradox:

Quote from: Wikipedia
"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece. Centuries later, the philosopher Thomas Hobbes introduced a further puzzle, wondering what would happen if the original planks were gathered up after they were replaced, and used to build a second ship. Hobbes asked which ship, if either, would be considered the original Ship of Theseus.

This is also easily and satisfyingly, though again un-excitingly, resolved by utility phrasing. "Ship of Theseus" is just a name we assign for utility purposes, basically to make life easier in our communications with ourselves and others. The name evokes certain associations for certain people, and based on that we will - in our communication efforts - call something "the Ship of Theseus" or "the original Ship of Theseus" whenever we believe that set of words will call up the most useful associations in the listener, to have them best understand our intent.

There is no such thing as a fully objective definition of the term "Ship of Theseus"; it always depends on what you're attempting to communicate to whom, and what you/they actually care about in the present context.

For example, if it matters to you that the ship was touched by Athenian hands, it wouldn't be useful to you to refer to it as the "Ship of Theseus" if all the parts had been replaced by non-Athenians. But if you simply cared about the way the ship looked and what it could do, because it has a unique shape and navigability compared with other ships, it would be useful in your mind to refer to it as the "Ship of Theseus" even if its parts had all been replaced with functionally and visually identical ones.

Once again it comes down to each person's utility in calling it one thing or another in each context. We will call a second ship built in the image of the first a "replica" if we are speaking in a context of attributing credit for its design and original building, but simply call it "a Ship of Theseus" if we only care about its function and looks in this context. And we'll call it "the Ship of Theseus" even if it is not the original, if the original has been destroyed and all we care about is the form and function, such as to answer a practical question like, "Can the Ship of Theseus sail to Minoa?"

To repeat the above point, the key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context. Even though it is self-evident that this is how language works in the first place, people are nevertheless highly prone to this kind of error (the reasons have to do with tribal instincts).
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 19, 2015, 06:54:04 PM  #167

Quote from: tee-rex
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent?

Yes, of course. They ARE our thoughts. The mystery resides in the fact that they are subjectively experienced. That is unobservable in itself (except by the sentient "being" that emerges from it, which is behaviourally unobservable from the outside).

Maybe an AND gate is also a sentient being. We'll never know, not being an AND gate ourselves. The physical process behind the logical AND function can of course be understood by any student of solid-state electronics. But whether an AND gate has subjective experiences or not is unobservable if you're not that AND gate.

Quote from: tee-rex
If we substitute them with more efficient and faster purely electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

Of course they would.  In the same way as our thoughts are different from those of a fish.
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 19, 2015, 07:09:43 PM  #168

Quote from: Zangelbert Bingledack
We call something "a fish" or "not a fish" simply based on whether it would be useful, for our communication, to do so. We name things based on utility. [...] The key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context.

Brilliant!

But of course, the question matters somewhat if the concept is "ourselves". It is not a matter of pure convenience to consider whether you are "you", of course. That changes, I agree, if it is not just "you" but "us".

The question was the following: assuming that machines became intelligent and sentient and would be a threat to "humanity", the suggestion was to modify humans so that they too would become much more intelligent, and would win the battle of intelligence with the machines.

My point was that these modified "humans" would not be "us" any more, no more than we are still fish. We would simply have replaced ourselves with two entirely different intelligent species: the "improved humans" on one hand, and the "machines" on the other. But we as humans would be gone.
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 19, 2015, 09:36:54 PM  #169

Quote from: tee-rex
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less than that). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent?

Quote from: dinofelis
Yes, of course. They ARE our thoughts. The mystery resides in the fact that they are subjectively experienced. That is unobservable in itself (except by the sentient "being" that emerges from it, which is behaviourally unobservable from the outside).

So, if we emulate them (or, even better, mirror them somehow in some other carrier), we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent. You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are immediately rendered null and void.
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 20, 2015, 05:15:45 AM  #170

Quote from: dinofelis
Yes, of course. They ARE our thoughts. The mystery resides in the fact that they are subjectively experienced. That is unobservable in itself (except by the sentient "being" that emerges from it, which is behaviourally unobservable from the outside).

Quote from: tee-rex
So, if we emulate them (or, even better, mirror them somehow in some other carrier), we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent.

If you emulate them, you of course get exactly the same intelligence, if they run at the same speed. If you go faster, as you claimed, you will get more intelligence, simply because you can put more "thoughts" into resolving a problem. In the same way, a recent i7 processor is more intelligent than a Pentium III, even though they share a similar instruction set. The same problem can be tackled in a much more sophisticated way on an i7 than on a Pentium III, simply because the i7 can afford to execute many more instructions on it.

If, as a child, it takes you 10 minutes to do a 4-digit multiplication by hand, and as an adult you've learned to "see" through relatively complex algebraic expressions in a second, your mathematical intelligence is totally different, right? What you may find exciting as a child is sheer boredom as an adult. You may enjoy playing tic-tac-toe as a child, but as an adult it is boring, or its fun resides elsewhere (in the social contact, for instance, not in the game itself). So at different levels of intelligence, your "good" and "bad" experiences are also totally different. Too much intelligence kills the fun of some "boring" things, if you immediately see the outcome.

Imagine someone intelligent enough to 'see through' 40 moves in a chess game (a casual player sees 1 or 2 moves ahead, and a master player 5 or 6). You'd see the end game already when you start. No fun playing chess any more. So the level of intelligence also changes the perception of "good" and "bad".
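
A rough back-of-the-envelope in Python makes the gap vivid. It assumes a branching factor of about 30 legal moves per chess position (a common textbook estimate); the numbers illustrate the combinatorics only, not how real players or engines actually search.

Code:
# Brute-force look-ahead grows as branching**depth; illustrative only.
branching = 30
for depth in (2, 6, 40):
    print(depth, f"{branching ** depth:.2e} positions")
# prints:
#  2 9.00e+02 positions
#  6 7.29e+08 positions
# 40 1.22e+59 positions  (far beyond any physical computer's reach)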

Quote from: tee-rex
You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are immediately rendered null and void.

You are confusing (as is visibly standard in AI) subjective consciousness and intelligence. Intelligence is observable, objective and so on. Consciousness isn't. We can never know whether an entity is really conscious; but we clearly can observe that an entity is intelligent. Our computers clearly ARE intelligent. We suppose they are not conscious, but there's no way to know. An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle). Whether it is conscious, we don't know (although I suppose we assume it isn't). The only way to assume consciousness is by "similarity to ourselves", and it remains a guess. We assume other people are conscious sentient beings. We probably assume that most mammals are conscious sentient beings. For fish, you can start discussing. For insects, what do you think? I suppose most people assume that jellyfish aren't conscious sentient beings. We base that on the existence of a central nervous system of a "certain complexity" in their bodies.

So in a certain way, we are assuming that a certain level of intelligence is necessary for the possibility of subjective experiences to emerge, or even to exist. But that's sheer guesswork.
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 20, 2015, 08:04:37 AM (last edit: March 20, 2015, 09:59:34 AM)  #171

Quote from: dinofelis
You are confusing (as is visibly standard in AI) subjective consciousness and intelligence. Intelligence is observable, objective and so on. Consciousness isn't. [...] An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle). Whether it is conscious, we don't know.

So you obviously consider an automatic mechanical switch (or an automatic control valve) to be intelligent? You may contrive as many definitions of intelligence (or whatever) as you see fit, but surely this is not what current mainstream thought suggests.
Snipe85 (Sr. Member | Activity: 756, Merit: 250)
March 20, 2015, 06:12:40 PM  #172

Quote from: dinofelis
You are confusing (as is visibly standard in AI) subjective consciousness and intelligence. Intelligence is observable, objective and so on. Consciousness isn't. We can never know whether an entity is really conscious; but we clearly can observe that an entity is intelligent.

You are wrong. Ever heard of a consciousness test? Here is a short explanation I quickly googled for you:

Quote
(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

dinofelis (Hero Member | Activity: 770, Merit: 629)
March 20, 2015, 09:06:44 PM  #173

Quote from: dinofelis
You are confusing (as is visibly standard in AI) subjective consciousness and intelligence. Intelligence is observable, objective and so on. Consciousness isn't. We can never know whether an entity is really conscious; but we clearly can observe that an entity is intelligent.

Quote from: Snipe85
You are wrong. Ever heard of a consciousness test? Here is a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

No, this is what happens when one swaps "consciousness" for some behavioural pattern or other. Neuroscience and AI are full of this, but it simply redefines the concept into something behavioural. However, there is nothing behavioural about conscious experience, as the "philosophical zombie" attests.

If you can train a pattern-recognition algorithm sufficiently (in the style of Google Translate) to do the above, is that algorithm then conscious? These are really very, very naive attempts.

60 years ago, such a definition would probably have included "winning a game of chess against the world champion" or something similar.
40 years ago, we would have said such a thing about voice recognition and Google.
What you are describing is a problem of INTELLIGENCE and visual pattern recognition, in accordance with standard visual experience.

It is in principle not even extremely difficult to set up such a system. In practice it is something else, but it works the same way as Google Translate: from tons and tons and tons of text pairs, find patterns of phrase fragments that always match. If the text to be translated consists of those fragments, put the corresponding translated fragments together according to certain statistical properties. It works better than most grammar-based translation systems!
It is also what our visual system does: we have seen and recorded so many "usual" scenes that the unusual thing jumps out. The elephant on top of the Eiffel Tower would be such a thing.
In fact, many people would FAIL such a test if put in front of scenes at a totally different scale - say, the atomic scale, where, physically, very strange things happen that defy all the standard visual conceptions we are used to.
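
To make the Google Translate comparison concrete, here is a minimal sketch of the "matching phrase fragments" idea in Python. The toy corpus and the translate helper are invented for illustration; a real system adds alignment models and statistical scoring on top.

Code:
# Toy sketch of phrase-based statistical translation: count which target
# fragments co-occur with which source fragments in a parallel corpus,
# then stitch together the most frequent matches.
from collections import Counter

parallel_corpus = [              # (source fragment, target fragment) pairs
    ("the cat", "le chat"), ("the cat", "le chat"),
    ("is black", "est noir"), ("is black", "est noir"),
    ("the dog", "le chien"),
]
pair_counts = Counter(parallel_corpus)

def translate(fragments):
    """Pick the most frequently matched target fragment for each source
    fragment; fall back to copying the source when nothing matches."""
    out = []
    for src in fragments:
        candidates = [(n, tgt) for (s, tgt), n in pair_counts.items() if s == src]
        out.append(max(candidates)[1] if candidates else src)
    return " ".join(out)

print(translate(["the cat", "is black"]))  # -> le chat est noir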

So we have substituted one or other intelligence test for a definition of "consciousness".
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 20, 2015, 09:10:58 PM  #174

Quote from: tee-rex
So you obviously consider an automatic mechanical switch (or an automatic control valve) to be intelligent? You may contrive as many definitions of intelligence (or whatever) as you see fit, but surely this is not what current mainstream thought suggests.

It is a very elementary form of intelligence.  It can solve a logical problem.  A calculator is somewhat smarter: it can do arithmetic operations.

What is the fundamental conceptual difference between

P AND Q, where P and Q are elements of {true, false},

and

X / Y, where X and Y are elements of a subset of the rational numbers (namely those that can be represented by a calculator)?

If you think that being able to do a division of rational numbers has something intelligent to it, then why is doing the logical multiplication (which is AND) over the set {true, false} not a form of intelligence?

Now, if you consider X / Y not intelligent, would you consider being able to do

∫ x^2 dx = x^3/3 + C

a form of intelligence? But that is still a similar form of relationship!
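
To put the three side by side, here is a minimal sketch in Python (sympy is assumed to be installed for the symbolic step). Each solver is just a mapping from inputs to outputs; they differ mainly in the size and structure of the problem space.

Code:
# Three problem-solvers of increasing problem-space size, each simply a
# mapping from inputs to outputs.
from fractions import Fraction
from sympy import symbols, integrate  # sympy assumed installed

def logical_and(p, q):
    return p and q            # AND over the two-element set {True, False}

def divide(x, y):
    return Fraction(x, y)     # division over (representable) rationals

x = symbols("x")

print(logical_and(True, False))  # False
print(divide(3, 4))              # 3/4
print(integrate(x**2, x))        # x**3/3  (the constant C is left implicit)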
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 20, 2015, 09:13:34 PM  #175

Quote from: dinofelis
You are confusing (as is visibly standard in AI) subjective consciousness and intelligence. Intelligence is observable, objective and so on. Consciousness isn't. We can never know whether an entity is really conscious; but we clearly can observe that an entity is intelligent.

Quote from: Snipe85
You are wrong. Ever heard of a consciousness test? Here is a short explanation I quickly googled for you: (...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” (...)

I think that by intelligence he means anything that doesn't correspond to a linear train of events. So any safety shutoff valve (designed to automatically shut off the flow of gas or liquid when the pressure rises above the shut-off limit) would be an intelligent device according to his logic.
tee-rex (Hero Member | Activity: 742, Merit: 526)
March 20, 2015, 09:23:40 PM  #176

Quote from: tee-rex
So you obviously consider an automatic mechanical switch (or an automatic control valve) to be intelligent? You may contrive as many definitions of intelligence (or whatever) as you see fit, but surely this is not what current mainstream thought suggests.

Quote from: dinofelis
It is a very elementary form of intelligence. It can solve a logical problem. A calculator is somewhat smarter: it can do arithmetic operations.

This is not intelligence by any means. It is an interaction of two (or more) different physical processes or forces working against each other. Is there anything intelligent in them as such? I would most likely agree that whoever coupled these processes in a device is intelligent, but then we should also declare nature to be intelligent, since there is a multitude of such "intelligent devices" created by natural forces alone (they say that at one time there was even a working natural nuclear fission reactor somewhere in Africa). As for me, true intelligence requires a conscious effort.

I didn't understand your example. Keep it simple!
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 20, 2015, 09:23:59 PM  #177

Quote from: tee-rex
I think that by intelligence he means anything that doesn't correspond to a linear train of events. So any safety shutoff valve (designed to automatically shut off the flow of gas or liquid when the pressure rises above the shut-off limit) would be an intelligent device according to his logic.

Intelligence is the ability to solve a problem. The greater the problem space, the higher the level of intelligence, of course. An AND gate is really, really the lowest form of intelligence.
Being able to do arithmetic is a higher form of intelligence than being able to do a logical operation, because the problem space is bigger for arithmetic.
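
As a rough, illustrative way to put numbers on "the problem space is bigger", one can simply count the distinct input questions each device can answer (a back-of-the-envelope, not a formal measure of intelligence):

Code:
# Back-of-the-envelope problem-space sizes; illustrative only.
and_gate_questions = 2 ** 2             # 4 possible (P, Q) pairs for an AND gate
adder_8bit_questions = 2 ** 8 * 2 ** 8  # 65536 possible (X, Y) pairs for 8-bit addition
print(and_gate_questions, adder_8bit_questions)  # 4 65536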

Being conscious or sentient is something totally different: it means that the being experiences subjective sensations which are "good" or "bad", and which somehow emerge from its behavioural, physical construction.

If it can solve a problem, it is intelligent.  If it can suffer or be happy, it is conscious.
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 20, 2015, 09:25:15 PM  #178

Quote from: tee-rex
but then we should also declare nature to be intelligent, since there is a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent. The universe is probably the most intelligent device in existence. The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient. If we say it is, we enter into totally metaphysical or even theological considerations.

tee-rex (Hero Member | Activity: 742, Merit: 526)
March 20, 2015, 09:43:24 PM  #179

Quote from: tee-rex
but then we should also declare nature to be intelligent, since there is a multitude of such "intelligent devices" created by natural forces.

Quote from: dinofelis
Of course nature is intelligent. The universe is probably the most intelligent device in existence. The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient. If we say it is, we enter into totally metaphysical or even theological considerations.

As I said above, true intelligence is not possible without consciousness, though these are indeed different notions (as are thought and mind). If we assume the existence of intelligence without consciousness, we inevitably expose ourselves to the issue of purpose. That is, what is the purpose of this intelligence? The purpose of intelligence cannot stem from intelligence per se. In this way, purposeless intelligence is an oxymoron, and it is the mind that provides purpose to intelligence. In other words, intelligence is a device of the mind for reaching its ends. That, simply put, sums it up.
dinofelis (Hero Member | Activity: 770, Merit: 629)
March 21, 2015, 05:08:44 AM  #180

Quote from: tee-rex
As I said above, true intelligence is not possible without consciousness, though these are indeed different notions (as are thought and mind). If we assume the existence of intelligence without consciousness, we inevitably expose ourselves to the issue of purpose. That is, what is the purpose of this intelligence? The purpose of intelligence cannot stem from intelligence per se. In this way, purposeless intelligence is an oxymoron, and it is the mind that provides purpose to intelligence. In other words, intelligence is a device of the mind for reaching its ends. That, simply put, sums it up.

You are right that in order for a problem, and a solution, even to be declared, a purpose needs to be defined, and purpose implies consciousness (because of "good" versus "bad" experiences). However, consciousness is only necessary to DEFINE the problem, not to solve it.

As such, you need a sentient being to RECOGNIZE intelligence.

I call something intelligent if it can SOLVE a problem (as DEFINED by a consciousness).

That is: a purpose is necessary to define a problem and its solution, for instance "the addition of two numbers". In order to define that, you need to say that there is a purpose in the notion of "addition".

However, a thing that can PERFORM the addition is intelligent in my view. A hand calculator has a certain amount of intelligence (but probably no form of consciousness, although we can never know).

Once, as a conscious being, you have recognized a problem with a purpose, you can recognize any system that can solve it, and as such, declare it to be intelligent.

Once, as a sentient being, you have recognized a system that is intelligent, you can just as well ASSIGN IT A HYPOTHETICAL consciousness, for which its good feelings are "solving the problem" and its bad feelings are "not solving the problem". Because you can never know, you can arbitrarily assign subjective experience to just about any physical system.

This is why you can, if you want to, assign subjective experience to a calculator, which has "good experiences" whenever a calculation is performed correctly, and 'suffers' when it is not. Whether these experiences are really subjectively lived or not is impossible to know. Most people would think that a hand calculator doesn't really "experience feelings", but there's no way to know.
