Bitcoin Forum

Other => Off-topic => Topic started by: herzmeister on March 14, 2013, 01:29:27 PM



Title: Artificial Intelligence
Post by: herzmeister on March 14, 2013, 01:29:27 PM
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.
 
Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines have no intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess that if it works at all, it would essentially have to be a chaos-theoretical system, and the consequences of such an experiment would be unpredictable.


Title: Re: Artificial Intelligence
Post by: yogi on March 14, 2013, 01:39:45 PM
You are making the false assumption that evolution cannot occur inside a computer.


Title: Re: Artificial Intelligence
Post by: dree12 on March 14, 2013, 01:42:43 PM
Our intelligence comes from learning. Learning comes from motivation. Motivation comes from desire. Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.

Machines have none of these. That's why the concept of "Artificial Intelligence" is questionable: machines have no intrinsic desire to learn anything. They never experienced evolutionary pressure and never had to go through natural selection. The instinct of self-preservation would have to be programmed into them. Artificially. Fine, artificial self-preservation then. But I guess that if it works at all, it would essentially have to be a chaos-theoretical system, and the consequences of such an experiment would be unpredictable.

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.


Title: Re: Artificial Intelligence
Post by: benjamindees on March 14, 2013, 02:07:04 PM
Desire comes from the instinct of survival and self-preservation. This instinct comes from evolution.

Not just preservation, but also growth (http://www.youtube.com/watch?v=U6QYDdgP9eg).  Crystals grow, yet are not generally regarded as having "instincts".


Title: Re: Artificial Intelligence
Post by: herzmeister on March 14, 2013, 04:19:41 PM
You are making the false assumption that evolution cannot occur inside a computer.

You mean running simulations?

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.

Which "AI software"? My premise is that there doesn't exist any yet which deserves that name. Also it seems to me that this kind of "artifical selection" is detrimental. If the poodle is to be released into the wild again, it's less likely to survive than the wolf.

Not just preservation, but also growth (http://www.youtube.com/watch?v=U6QYDdgP9eg).  Crystals grow, yet are not generally regarded as having "instincts".

Desire comes from growth?


Title: Re: Artificial Intelligence
Post by: yogi on March 14, 2013, 05:11:58 PM
You are making the false assumption that evolution cannot occur inside a computer.

You mean running simulations?

Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.
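
To give a rough idea of what such a simulation can look like, here is a minimal sketch in Python (a toy illustration with made-up parameters, not the actual program I wrote): bit strings stand in for organisms, the fitter half of each generation reproduces, and offspring are mutated copies.

Code:
import random

# Toy genetic algorithm: "organisms" are bit strings, fitness is the
# number of 1-bits, and the fitter half of each generation reproduces.
GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction: children are mutated crossovers of random parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if fitness(max(population, key=fitness)) == GENOME_LEN:
        print("optimal genome reached in generation", generation)
        break

Even a toy like this converges, but with any more interesting fitness function the number of generations and evaluations needed grows very quickly.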


Title: Re: Artificial Intelligence
Post by: DarkHyudrA on March 14, 2013, 06:21:43 PM
Well, the army has some bots with interesting AI, I hope.


Title: Re: Artificial Intelligence
Post by: herzmeister on March 14, 2013, 06:37:00 PM

Yes, I've written a program myself that simulates evolution. You can simulate anything. There are some who say we may be living in a simulation.


I'm not exactly unfamiliar (https://bitcointalk.org/index.php?topic=76315.0).  ;)

However, I'm talking more about our own plane of existence. Did you release your artificial intelligences into the wild? Have they learnt enough about our contemporary culture not to be mistaken for "artificial" by fellow human beings?


Title: Re: Artificial Intelligence
Post by: yogi on March 14, 2013, 08:31:33 PM
If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  ;D


Title: Re: Artificial Intelligence
Post by: dree12 on March 14, 2013, 09:31:09 PM

Artificial selection can have the same effect as natural selection. Most of our current breeds of dogs and cats came from artificial selection. If people use AI software that is better than other AI software, that is evolution.

Which "AI software"? My premise is that there doesn't exist any yet which deserves that name. Also it seems to me that this kind of "artifical selection" is detrimental. If the poodle is to be released into the wild again, it's less likely to survive than the wolf.


Any software which has an internal state that can differentiate its actions (when compared to a copy of the same software with a different state) and modify its own state is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software does not have the capability to reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine an open-source speech-to-text software that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there will have been some limited evolution through artificial selection.
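
To make "internal state that differentiates actions" concrete, here is a toy sketch (hypothetical Python, not any real speech-to-text package): a transcriber whose per-user correction history changes what it outputs, so two copies of identical code diverge in behaviour once trained differently.

Code:
from collections import defaultdict

class AdaptiveTranscriber:
    """Toy 'living' software: its behaviour depends on per-user training."""

    def __init__(self):
        # Internal state: how often this user corrected each output word.
        self.corrections = defaultdict(dict)

    def train(self, heard_word, corrected_word):
        # The user modifies the internal state by correcting output.
        counts = self.corrections[heard_word]
        counts[corrected_word] = counts.get(corrected_word, 0) + 1

    def transcribe(self, raw_guess):
        # Two copies with different internal states behave differently here.
        words = []
        for word in raw_guess.split():
            learned = self.corrections.get(word)
            if learned:
                # Use the correction this user has applied most often.
                word = max(learned, key=learned.get)
            words.append(word)
        return " ".join(words)

# Two copies of the same software diverge once they are trained differently.
a = AdaptiveTranscriber()
b = AdaptiveTranscriber()
a.train("weather", "whether")
print(a.transcribe("I wonder weather it works"))  # I wonder whether it works
print(b.transcribe("I wonder weather it works"))  # I wonder weather it works

Forking the code and changing its heuristics would be the "mutation" step; users keeping whichever copy serves them better is the artificial selection.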

Fast-forward 10 years into the future and visualize the far descendants of this software. When compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means that they use relatively few aspects of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to improved research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests that these traits and behaviours may include marketing one's species, maximizing income for one's developers, detecting and reporting one's own deficiencies, etc.


Title: Re: Artificial Intelligence
Post by: herzmeister on March 18, 2013, 02:12:25 AM
If you believe it's possible we are living in a simulation, then you disagree with your own premise. And passing a Turing test does not indicate intelligence any more than quacking indicates that you're a duck.

Quack!  ;D

If we take the simulation argument into account (which is not a common premise, after all), then I have to put things a little differently.

My premise was also about evolution. So in this context I'd say that, within this simulation, we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part of "intelligence" actually means then. (I didn't invent that term  :)).

What I meant is contemporary AI, i.e. the current state of research, and that such AI (think chatbots) is not convincing precisely because it lacks inherent motivation and a direction for self-learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes of running simulations is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

Any software which has an internal state that can differentiate its actions (when compared to a copy of the same software with a different state) and modify its own state is "living" software. "Living" software differs from inert software in that its purpose can be tailored to the user, in this case a human. Although most current "living" software does not have the capability to reproduce or mutate of its own accord, it can do so with the assistance of humans.

For example, imagine an open-source speech-to-text software that can be trained for a specific person. If this software is more useful to humans than previous speech-to-text software, it will displace that software. In doing so, it attracts developers, who are humans that assist its mutation, and users, who are humans that assist its reproduction. Different copies of this software will naturally have different internal states. If some humans modify ("mutate") the software by forking it, and the mutation is favourable (the software becomes more useful to humans), there will have been some limited evolution through artificial selection.

Fast-forward 10 years into the future and visualize the far descendants of this software. When compared to today's software, these descendants (which originally shared its codebase) are more useful to humans and more differentiated from each other. Although their specialization means that they use relatively few aspects of "intelligence", the software does not really need any more (and, indeed, software that becomes excessively intelligent is simply bloated and will be artificially selected against).

This software is neither reproducing of its own accord nor actively attempting to preserve or improve itself beyond a basic level of machine learning. Even so, it has become more intelligent, effectively undergoing evolution. It is a gradual process, but thanks to improved research in artificial intelligence, software will eventually acquire multiple facets of intelligence, including certain traits that resemble "self-preservation". Speculating about the future is difficult, but a cursory guess suggests that these traits and behaviours may include marketing one's species, maximizing income for one's developers, detecting and reporting one's own deficiencies, etc.

Thanks for the interesting case, but I guess AI for such a specific use case is not what I meant. If you mean it would develop over time into something much richer in expression, I'm not sure, because it will remain very dependent, for a long time, on the environment we feed it. And that is not an optimal or sufficiently neutral condition for the thought model about AI that I intended in the OP.


Title: Re: Artificial Intelligence
Post by: yogi on March 18, 2013, 04:45:37 AM
If we take the simulation argument into account (which is not a common premise, after all), then I have to put things a little differently.

My premise was also about evolution. So in this context I'd say that, within this simulation, we apparently had plenty of time to go through evolution.

It's admittedly hard to define what the "artificial" part of "intelligence" actually means then. (I didn't invent that term  :)).

What I meant is contemporary AI, i.e. the current state of research, and that such AI (think chatbots) is not convincing precisely because it lacks inherent motivation and a direction for self-learning. And I assume that's because it lacks culture, which in turn is because it lacks its own evolutionary history.

It may very well be that one of the purposes of running simulations is exactly to "breed" "artificial" intelligence, i.e. to get AI units that are faithful and convincing enough.

I agree that it would be simpler to give an artificial intelligence human experiences in order to get it to behave in a more human way, as opposed to manually programming those human traits.

My simulated evolution program was an experiment in artificial intelligence. I discovered that, whilst evolution works, it is very slow.

The reason AI is still very basic is that we don't yet understand exactly how intelligence works in nature. Also, running an intelligence comparable to a human's is going to take a lot of computational power.

I do not believe an evolutionary history is a fundamental prerequisite for intelligence.


Title: Re: Artificial Intelligence
Post by: Mike Christ on March 18, 2013, 06:00:20 AM
You would have to program the machine to have basic desires.  But I think that before A.I. ever becomes a thing, we first need to fully understand how our own minds function.  We're still working on that bit.

What kind of desires would a machine have?  That depends on how advanced you expect the A.I. to be.  If you're aiming for a machine which can fully emulate a human, with needs and wants, emotion, and limited thinking power (on top of the processes for everything else), you'll need to allow the machine complete freedom to do as it wants, and make it so inefficient that it requires food three times a day, lots of water, and sleep; otherwise it's only pretending by doing these things.  The desire ceases to be genuine if it's programmed to act this way.  If, on the other hand, you're only aiming to create a machine intelligent enough to perform tasks from verbal instruction, then you don't have to worry about any of the above.  A machine advanced enough to see, hear, and react to both, whether inside a physical body or just as software, is very much within our reach.  Once we've accomplished this feat, it's only a matter of time until we want to give the machine thoughts and emotion.  AFAIC, a machine which can invent is the last stop in mirroring humankind: a machine that is aware it is a machine, without prior programming to let it know that.


Title: Re: Artificial Intelligence
Post by: yogi on March 18, 2013, 06:55:58 PM
Designed by nature or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"


Title: Re: Artificial Intelligence
Post by: Mike Christ on March 18, 2013, 07:49:59 PM
Designed by nature or by man's hand, we are all just machines. So why should one be called artificial and the other real? Once computer intelligence is mastered, maybe the conversation *they* will have is "Will humans ever be intelligent?"


Good point ;D  Metal machines, organic machines, it's the same difference: a complex system of individual parts which come together to make one identifiable whole.  We're made from the matter and energy of the universe, and A.I. will come from this same matter.

If we designed an organic brain from the ground up, would it still be A.I.?


Title: Re: Artificial Intelligence
Post by: avin_lex on April 18, 2022, 03:27:16 PM
I think the definition of, and the distinction between, real and artificial intelligence are not so important. The ability to use it is much more important. No matter how limited AI has been, we have still found enough areas for its application. Several articles clearly show the benefits of applying AI in everyday life: https://scilifestyle.com/category/artificial-intelligence/