Bitcoin Forum
Author Topic: Artificial Intelligence  (Read 957 times)
herzmeister (OP)
Legendary
Activity: 1764
Merit: 1007
March 14, 2013, 01:29:27 PM
#1

Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.

Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines do not have any intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess if it works at all, it would essentially have to be a chaos-theoretical system, and the consequences of such an experiment would be unpredictable.

https://localbitcoins.com/?ch=80k | BTC: 1LJvmd1iLi199eY7EVKtNQRW3LqZi8ZmmB
yogi
Legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 01:39:45 PM
#2

You are making the false assumption that evolution cannot occur inside a computer.

dree12
Legendary
Activity: 1246
Merit: 1077
March 14, 2013, 01:42:43 PM
#3

Quote from: herzmeister
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.

Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines do not have any intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess if it works at all, it would essentially have to be a chaos-theoretical system, and the consequences of such an experiment would be unpredictable.

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.
benjamindees
Legendary
Activity: 1330
Merit: 1000
March 14, 2013, 02:07:04 PM
#4

Quote from: herzmeister
Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.

Not just preservation: growth. Crystals grow, yet they are not generally regarded as having "instincts".

Civil Liberty Through Complex Mathematics
herzmeister (OP)
Legendary
Activity: 1764
Merit: 1007
March 14, 2013, 04:19:41 PM
#5

Quote from: yogi
You are making the false assumption that evolution cannot occur inside a computer.

You mean running simulations?

Quote from: dree12
Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.

Which "AI software"? My premise is that none exists yet which deserves that name. Also, it seems to me that this kind of "artificial selection" is detrimental: if the poodle were released into the wild again, it would be less likely to survive than the wolf.

Quote from: benjamindees
Not just preservation: growth. Crystals grow, yet they are not generally regarded as having "instincts".

Desire comes from growth?

https://localbitcoins.com/?ch=80k | BTC: 1LJvmd1iLi199eY7EVKtNQRW3LqZi8ZmmB
yogi
Legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 05:11:58 PM
#6

Quote from: herzmeister
Quote from: yogi
You are making the false assumption that evolution cannot occur inside a computer.
You mean running simulations?

Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.
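For readers unfamiliar with simulated evolution, a minimal sketch of the kind of program being described here might look like the following. This is not yogi's actual code (which is not posted in this thread); the bitstring genome, the toy fitness function, and all parameters are assumptions for illustration only.

```python
import random

GENOME_LEN = 32        # bits per individual (assumed toy genome)
POP_SIZE = 50          # individuals per generation
MUTATION_RATE = 0.02   # per-bit flip probability


def fitness(genome):
    """Toy fitness: number of 1-bits (a stand-in for 'how well it survives')."""
    return sum(genome)


def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]


def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]


def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Selection: the fitter half become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Reproduction: offspring are crossed-over, mutated copies of parents.
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best), "out of", GENOME_LEN)
```

Even in this toy form, the point yogi makes later in the thread holds: selection needs many generations and many fitness evaluations before anything interesting emerges.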

DarkHyudrA
Legendary
Activity: 1386
Merit: 1000
English <-> Portuguese translations
March 14, 2013, 06:21:43 PM
#7

Well, the army has some bots with interesting AI, I hope.

English <-> Brazilian Portuguese translations
herzmeister (OP)
Legendary
Activity: 1764
Merit: 1007
March 14, 2013, 06:37:00 PM
#8


Quote from: yogi
Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.

I'm not exactly unfamiliar.  Wink

However, I'm talking more about our own plane of existence. Did you release your artificial intelligences into the wild? Have they learnt enough about our contemporary culture not to be mistaken for "artificial" by fellow human beings?

https://localbitcoins.com/?ch=80k | BTC: 1LJvmd1iLi199eY7EVKtNQRW3LqZi8ZmmB
yogi
Legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 14, 2013, 08:31:33 PM
Last edit: March 18, 2013, 04:48:01 AM by yogi
#9

If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  Grin

dree12
Legendary
Activity: 1246
Merit: 1077
March 14, 2013, 09:31:09 PM
#10


Quote from: herzmeister
Quote from: dree12
Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.
Which "AI software"? My premise is that none exists yet which deserves that name. Also, it seems to me that this kind of "artificial selection" is detrimental: if the poodle were released into the wild again, it would be less likely to survive than the wolf.

Any software that has an internal state which can differentiate its actions (compared to a copy of the same software with a different state), and that can modify its own state, is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software cannot reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine open-source speech-to-text software that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there has been some limited evolution through artificial selection.

Fast-forward 10 years and visualize the far descendants of this software. Compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means they use relatively few aspects of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to improved research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests these traits and behaviours may include marketing one's own species, maximizing income for one's developers, detecting and reporting one's own deficiencies, and so on.
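A toy model of the selection loop dree12 describes might look like this. The speech-to-text example, the single "usefulness" score, and every parameter below are assumptions for illustration; nothing here is real software.

```python
import random

MUTATION_STD = 0.1   # how much a fork changes usefulness, on average (assumed)


class SoftwareVariant:
    """A fork of some open-source tool, judged only by how useful humans find it."""

    def __init__(self, name, usefulness):
        self.name = name
        self.usefulness = usefulness

    def fork(self, suffix):
        """A developer forks the project; the change ('mutation') may help or hurt."""
        delta = random.gauss(0, MUTATION_STD)
        return SoftwareVariant(f"{self.name}-{suffix}", self.usefulness + delta)


def artificial_selection(generations=10, forks_per_gen=5, survivors=3):
    # Start from one hypothetical speech-to-text tool.
    population = [SoftwareVariant("stt", 1.0)]
    for gen in range(generations):
        # Mutation: humans fork existing projects.
        population += [random.choice(population).fork(f"g{gen}f{i}")
                       for i in range(forks_per_gen)]
        # Selection: users adopt, and thereby preserve, only the most useful variants.
        population.sort(key=lambda v: v.usefulness, reverse=True)
        population = population[:survivors]
    return population


if __name__ == "__main__":
    for variant in artificial_selection():
        print(f"{variant.name}: usefulness {variant.usefulness:.2f}")
```

Note that the "fitness function" is nothing inside the software itself; it is the users' choice of what to keep using, which is the point of the post.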
herzmeister (OP)
Legendary
Activity: 1764
Merit: 1007
March 18, 2013, 02:12:25 AM
#11

Quote from: yogi
If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  Grin

If we take the simulation argument into account (which is, after all, not a commonly accepted premise), then I have to put things a little differently.

My premise was also about evolution, so in this context I'd say that in this simulation we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part of "artificial intelligence" actually means then. (I didn't invent the term  Smiley)

What I meant is contemporary AI, i.e. the current state of research: that AI (think chat bots) is not convincing, simply because it lacks inherent motivation and a direction for its self-learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes of running such simulations is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

Quote from: dree12
Any software that has an internal state which can differentiate its actions (compared to a copy of the same software with a different state), and that can modify its own state, is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software cannot reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine open-source speech-to-text software that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there has been some limited evolution through artificial selection.

Fast-forward 10 years and visualize the far descendants of this software. Compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means they use relatively few aspects of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to improved research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests these traits and behaviours may include marketing one's own species, maximizing income for one's developers, detecting and reporting one's own deficiencies, and so on.

Thanks for the interesting case, but I guess AI for such a specific use case is not what I meant. If you mean it would develop over time into something much richer in expression, I'm not sure, because it will long remain very dependent on the environment we feed it. And that is not an optimal or sufficiently neutral condition for the thought experiment about AI that I intended in the OP.

https://localbitcoins.com/?ch=80k | BTC: 1LJvmd1iLi199eY7EVKtNQRW3LqZi8ZmmB
yogi
Legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 18, 2013, 04:45:37 AM
#12

Quote from: herzmeister
If we take the simulation argument into account (which is, after all, not a commonly accepted premise), then I have to put things a little differently.

My premise was also about evolution, so in this context I'd say that in this simulation we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part of "artificial intelligence" actually means then. (I didn't invent the term  Smiley)

What I meant is contemporary AI, i.e. the current state of research: that AI (think chat bots) is not convincing, simply because it lacks inherent motivation and a direction for its self-learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes of running such simulations is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

I agree that it would be simpler to give an artificial intelligence human experiences in order to get it to behave in a more human way, as opposed to manually programming those human traits in.

My simulated evolution program was an experiment in artificial intelligence. I discovered that, whilst evolution works, it is very slow.

The reason AI is still very basic is that we don't yet understand exactly how intelligence works in nature. Also, running an intelligence comparable to a human's is going to take a lot of computational power (a rough sense of the scale is sketched below).

I do not believe an evolutionary history is a fundamental prerequisite for intelligence.
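As a rough sense of scale for the computational-power point above, here is a back-of-envelope sketch. The neuron and synapse counts are commonly cited orders of magnitude; the firing rate and the per-synapse operation cost are outright assumptions, so treat the result as an order of magnitude at best.

```python
# Back-of-envelope estimate, not a measurement.
NEURONS = 8.6e10               # ~86 billion neurons in a human brain (context only)
SYNAPSES = 1e14                # ~100 trillion synapses, order of magnitude
AVG_FIRING_RATE_HZ = 10        # spikes per second, rough assumed average
OPS_PER_SYNAPTIC_EVENT = 10    # assumed arithmetic cost of modelling one synaptic event

ops_per_second = SYNAPSES * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT
print(f"~{ops_per_second:.0e} operations per second")   # ~1e16 ops/s
```

That lands around 10^16 operations per second, i.e. tens of petaFLOPS, before accounting for learning, memory bandwidth, or the body and environment the brain is embedded in.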

Mike Christ
aka snapsunny
Legendary
Activity: 1078
Merit: 1003
March 18, 2013, 06:00:20 AM
#13

You would have to program the machine to have basic desires. But I think that before A.I. ever becomes a thing, we first need to fully understand how our own minds function. We're still working on that bit.

What kind of desires would a machine have? That depends on how advanced you expect the A.I. to be. If you're aiming for a machine which can fully emulate a human, with needs and wants, emotions, and limited thinking power (on top of the processes for everything else), you'll need to allow the machine complete freedom to do as it wants, and make it so inefficient that it requires food three times a day, lots of water, and sleep; otherwise it's only pretending by doing these things. The desire ceases to be genuine if it's programmed to act this way. If, on the other hand, you're only aiming to create a machine intelligent enough to perform tasks from verbal instruction, then you don't have to worry about any of that. A machine advanced enough to see, hear, and react to both, whether inside a physical body or just as software, is very much within our reach. Once we've accomplished this feat, it's only a matter of time until we want to give the machine thoughts and emotions. AFAIC, a machine which can invent is the last stop in mirroring humankind: a machine that is aware it is a machine without prior programming to let it know that.

yogi
Legendary
Activity: 947
Merit: 1042
Hamster ate my bitcoin
March 18, 2013, 06:55:58 PM
#14

Designed by nature or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"

Mike Christ
aka snapsunny
Legendary
Activity: 1078
Merit: 1003
March 18, 2013, 07:49:59 PM
#15

Quote from: yogi
Designed by nature or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"

Good point Grin  Metal machines, organic machines, it's the same difference: a complex system of individual parts which come together to make one identifiable whole. We're made from the matter and energy of the universe, and A.I. will come from that same matter.

If we designed an organic brain from the ground up, would it still be A.I.?

avin_lex
Newbie
Activity: 1
Merit: 0
April 18, 2022, 03:27:16 PM
#16

I think the definition of, and distinction between, real and artificial intelligence is not so important. The ability to use it is much more important. No matter how limited AI is, we have still found plenty of areas to apply it. Several articles clearly show the benefits of applying AI in everyday life. https://scilifestyle.com/category/artificial-intelligence/