Bitcoin Forum

Economy => Economics => Topic started by: cbeast on March 05, 2015, 01:50:42 AM



Title: Machines and money
Post by: cbeast on March 05, 2015, 01:50:42 AM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.


Title: Re: Machines and money
Post by: hua_hui on March 05, 2015, 02:19:43 AM
When the time comes, we will manage the balance between robots and humans. For now, our focus should be on developing artificial intelligence. The scenario you mentioned still belongs in science fiction.


Title: Re: Machines and money
Post by: Amph on March 05, 2015, 08:01:43 AM
I could see a machine race taking the advantage because of Bitcoin, and digital payments in general. They could start their own mech-coin, which would probably be far superior to Bitcoin and could use a new, revolutionary protocol.

I love machines, actually; they are so precise and powerful.


Title: Re: Machines and money
Post by: NUFCrichard on March 05, 2015, 08:03:32 AM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

What would be the benefit to computers? A destroyed global economy makes no difference to them.
I also disagree that there will ever be a human vs. computer battle for survival; once computers are at that stage, we will have already lost any potential battle with them.


Title: Re: Machines and money
Post by: cbeast on March 05, 2015, 08:14:47 AM
When the time comes, we will manage the balance between robots and humans. For now, our focus should be on developing artificial intelligence. The scenario you mentioned still belongs in science fiction.
I would also like to find such fiction. I have been working on a book about this for a while now. Hopefully my take on the subject is unique.


Title: Re: Machines and money
Post by: neoneros on March 05, 2015, 02:14:39 PM
If computers take over, they will create a virtual world where humans are not allowed. But the joke is on them: humans are not virtual.

It was Arthur C. Clarke who said that, like religion and labor, science will get rid of the economy as well.


Title: Re: Machines and money
Post by: ajareselde on March 05, 2015, 02:18:45 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

I understand the question behind the story, but it's oddly framed as the tale of a fridge turned Wolf of Wall Street. There will surely be advancements in our everyday lives,
but a lot of the changes have already happened, and there have been no issues with those. It's not too hard to adapt, since it's happening one step at a time.
In my opinion, the article is too far-fetched, derived from an unrealistic fear of future technology.

cheers


Title: Re: Machines and money
Post by: dothebeats on March 05, 2015, 05:16:42 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

I understand the question behind the story, but it's oddly framed as the tale of a fridge turned Wolf of Wall Street. There will surely be advancements in our everyday lives,
but a lot of the changes have already happened, and there have been no issues with those. It's not too hard to adapt, since it's happening one step at a time.
In my opinion, the article is too far-fetched, derived from an unrealistic fear of future technology.

cheers

But it is still probable. It has been shown that uncontrolled advancement often leads to destruction, just like what has happened to nature itself. The article only looks at the different scenarios that may happen, though unlikely ones.


Title: Re: Machines and money
Post by: cbeast on March 06, 2015, 02:28:11 AM
If computers take over they will create a virtual world where humans are not allowed. But the joke is on them, humans are not virtual.

It was Arthur C Clarke who said that like religion and labor, science will get rid of the economy as well.

Clarke was from the old school, when they didn't need teleportation and psychic-power gimmicks. He kept up with scientific research.


Title: Re: Machines and money
Post by: Snipe85 on March 06, 2015, 04:18:35 AM
If computers take over they will create a virtual world where humans are not allowed. But the joke is on them, humans are not virtual.

It was Arthur C Clarke who said that like religion and labor, science will get rid of the economy as well.
Machines will take over, and due to automation most jobs will no longer require human labor. The thing is, this is a good thing, since we won't have to work unless we are qualified to, but of course it raises fundamental problems that need to be addressed.
Actually, more problems than one may think. People are already losing jobs because machines are faster and never sleep. Imagine a world where machines do everything for you. How would you find a job in a world where robot designers and programmers are the only ones needed, since manufacturing and repairs can be done by other robots?

Let's say we don't need jobs; how will we redistribute wealth and decide which unemployed guy should get more money? Maybe we won't need money anymore? That would be something ::)


Title: Re: Machines and money
Post by: Possum577 on March 06, 2015, 06:08:45 AM
Just don't give your fridge your private key and you should be ok!

Haha, read much about the Internet of Things? It's scary stuff at first glance, but then again so was the "horseless carriage" when automobiles first started zipping around the streets. Remember that big changes like this happen in very practical ways, which allows us all to get used to the idea much faster.

(I'm not talking about robots taking our money but about allowing intelligence in machines without worry that they'll take our money!)


Title: Re: Machines and money
Post by: Bonam on March 06, 2015, 06:37:32 AM
To answer the original question... you can't destroy the economy, so I'm not worried. You can destroy the financial system, but not the economy. So long as some people want things that other people have, and are willing to pay for them, there will be an economy.

As for people being put out of work by machines.... this is already happening and has been a continual trend. There won't be any one moment when suddenly machines replace everyone, but just the slow gradual automation of various tasks that we have been living with for decades now.


Title: Re: Machines and money
Post by: neoneros on March 06, 2015, 08:03:27 AM
To answer the original question... you can't destroy the economy, so I'm not worried. You can destroy the financial system, but not the economy. So long as some people want things that other people have, and are willing to pay for them, there will be an economy.

As for people being put out of work by machines.... this is already happening and has been a continual trend. There won't be any one moment when suddenly machines replace everyone, but just the slow gradual automation of various tasks that we have been living with for decades now.

But what if machines can provide enough for everyone? Anything?

Need food? Poof, your machine provides.
Need alcohol, drugs, love? Poof, poof, poof.
Your machine broke? There's a repair machine on its way.

The future is as simple as that. Maybe the economy will still exist, but in the virtual world of the machines. Humans will have gone beyond those earthly concepts and evolved into a higher species.


Title: Re: Machines and money
Post by: tee-rex on March 06, 2015, 08:31:47 AM
To answer the original question... you can't destroy the economy, so I'm not worried. You can destroy the financial system, but not the economy. So long as some people want things that other people have, and are willing to pay for them, there will be an economy.

As for people being put out of work by machines.... this is already happening and has been a continual trend. There won't be any one moment when suddenly machines replace everyone, but just the slow gradual automation of various tasks that we have been living with for decades now.

But what if machines can provide enough for everyone? Anything?

Need food? Poof, your machine provides.
Need alcohol, drugs, love? Poof, poof, poof.
Your machine broke? There's a repair machine on its way.

The future is as simple as that. Maybe the economy will still exist, but in the virtual world of the machines. Humans will have gone beyond those earthly concepts and evolved into a higher species.

Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.


Title: Re: Machines and money
Post by: zetaray on March 06, 2015, 08:52:14 AM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.


Title: Re: Machines and money
Post by: Monetizer on March 06, 2015, 08:53:52 AM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.

Imagine we get to a point where a robot society could live side by side with us through artificial intelligence. It could be either catastrophic or the best thing to ever happen to us.


Title: Re: Machines and money
Post by: dothebeats on March 06, 2015, 12:21:57 PM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.

Imagine we get to a point where a robot society could live side by side with us through artificial intelligence. It could be either catastrophic or the best thing to ever happen to us.

I want to live with a robot run by AI that I can treat like any other normal human being. With that said, I'm hoping for a Utopian future together with machines.


Title: Re: Machines and money
Post by: dothebeats on March 06, 2015, 01:17:49 PM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.

Imagine we get to a point where a robot society could live side by side with us through artificial intelligence. It could be either catastrophic or the best thing to ever happen to us.

I want to live with a robot run by AI that I can treat like any other normal human being. With that said, I'm hoping for a Utopian future together with machines.
You wouldn't like that. If a person goes crazy, you have a decent chance to fight him and protect yourself. If a 300 kg robot that doesn't feel pain goes crazy, you can only hide and pray.

But I still want to experience that! Haha, jk. If that scenario could be prevented, that would be nice.


Title: Re: Machines and money
Post by: neurotypical on March 06, 2015, 11:51:00 PM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.

Imagine we get to a point where a robot society could live side by side with us through artificial intelligence. It could be either catastrophic or the best thing to ever happen to us.

I want to live with a robot run by AI that I can treat like any other normal human being. With that said, I'm hoping for a Utopian future together with machines.
You wouldn't like that. If a person goes crazy, you have a decent chance to fight him and protect yourself. If a 300 kg robot that doesn't feel pain goes crazy, you can only hide and pray.

But I still want to experience that! Haha, jk. If that scenario could be prevented, that would be nice.
Well, we may not live to see true AI... true AI is something too insane to think of with today's knowledge; I think we aren't even close. But things are getting scary with robots; look at DARPA's Petman robot.


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 01:47:36 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 06:55:27 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

Hmm... I'd rather ask who can be deadliest, but greediest? I'm trying to imagine a Terminator whose terminate function is replaced with greed: an army of greedy Terminators making runs on banks and attacking the New York Stock Exchange. This surely beats me!


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 07:25:23 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

Hmm... I'd rather ask who can be deadliest, but greediest? I'm trying to imagine a Terminator whose terminate function is replaced with greed: an army of greedy Terminators making runs on banks and attacking the New York Stock Exchange. This surely beats me!
They wouldn't need to be terminators. They could be holding companies. How would the authorities punish a program? Using decentralized record keeping they could be audited, but they couldn't be stopped or punished. Machine greed is limitless.


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 07:46:47 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

Hmm... I'd rather ask who can be deadliest, but greediest? I'm trying to imagine a Terminator whose terminate function is replaced with greed: an army of greedy Terminators making runs on banks and attacking the New York Stock Exchange. This surely beats me!
They wouldn't need to be terminators. They could be holding companies. How would the authorities punish a program? Using decentralized record keeping they could be audited, but they couldn't be stopped or punished. Machine greed is limitless.

Okay, you have me, but how are these holding companies run by greedy Terminator programs much different from, or better than, ordinary trading bots? Are the latter somehow less greedy?


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 07:57:45 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 08:05:19 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 08:09:16 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 10:04:41 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.
So morally speaking, machines would be better to trust with money because, unlike men, they won't break laws to suit their whims, right?


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 10:11:57 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.
So morally speaking, machines would be better to trust with money because, unlike men, they won't break laws to suit their whims, right?

No, quite the contrary. Machines' inability to break laws of their own free will and discretion (for lack thereof) doesn't make them more trustworthy, since in any case you would have to trust the people who programmed them, who will also be able to hack the machine as well.

A machine can go wild, and this would be even more lethal than a human going mad.


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 10:21:00 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.
So morally speaking, machines would be better to trust with money because, unlike men, they won't break laws to suit their whims, right?

No, quite the contrary. Machines' inability to break laws of their own free will and discretion (for lack thereof) doesn't make them more trustworthy, since in any case you would have to trust the people who programmed them, who will also be able to hack it as well.

A machine can go wild, and this would be even more lethal than a human going mad.
Have you ever seen a machine go wild?


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 10:30:16 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.
So morally speaking, machines would be better to trust with money because, unlike men, they won't break laws to suit their whims, right?

No, quite the contrary. Machines' inability to break laws of their own free will and discretion (for lack thereof) doesn't make them more trustworthy, since in any case you would have to trust the people who programmed them, who will also be able to hack it as well.

A machine can go wild, and this would be even more lethal than a human going mad.
Have you ever seen a machine go wild?

Car makers recall their vehicles because of malfunctions on a fairly regular basis. Some of these malfunctions can actually be lethal, e.g. the unintended acceleration of the Audi 5000, which was linked to 6 deaths and approximately 700 accidents in 1982-1987.


Title: Re: Machines and money
Post by: cbeast on March 07, 2015, 10:43:16 AM
Human greed is limitless, human desires are insatiable, but just human envy alone would waste any machine in less than no time.
I guess it comes down to who can be greediest, man or machine?

I am sure we will all agree that it is the man who is more greedy. The machine just fulfills the intent of the man.

So it in fact boils down to who is more efficient at fulfilling the intents of the man, the man himself or the machine? As a matter of fact, the machine can be made more efficient than the man, but it is not that simple since it is the man who made the machine in the first place.
Men and machines both live by rules. Men call them laws; machines use programs.

But men, unlike machines, can willingly break the laws imposed on them if they see it as more "appropriate" for their needs, right? At the same time, neither camp can break the universal laws of nature, but humans can at least try.
So morally speaking, machines would be better to trust with money because, unlike men, they won't break laws to suit their whims, right?

No, quite the contrary. Machines' inability to break laws of their own free will and discretion (for lack thereof) doesn't make them more trustworthy, since in any case you would have to trust the people who programmed them, who will also be able to hack it as well.

A machine can go wild, and this would be even more lethal than a human going mad.
Have you ever seen a machine go wild?

Car makers recall their vehicles because of malfunctions on a fairly regular basis. Some of these malfunctions can actually be lethal, e.g. the unintended acceleration of the Audi 5000, which was linked to 6 deaths and approximately 700 accidents in 1982-1987.
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to use, such as neutral gear and the brakes.


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 11:05:29 AM
No, quite the contrary. Machines' inability to break laws of their own free will and discretion (for lack thereof) doesn't make them more trustworthy, since in any case you would have to trust the people who programmed them, who will also be able to hack it as well.

A machine can go wild, and this would be even more lethal than a human going mad.
Have you ever seen a machine go wild?

Car makers recall their vehicles because of malfunctions on a fairly regular basis. Some of these malfunctions can actually be lethal, e.g. the unintended acceleration of the Audi 5000, which was linked to 6 deaths and approximately 700 accidents in 1982-1987.
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to use, such as neutral gear and the brakes.

It was a figurative expression for being broken or defective, meant to give a tint of rationality, or rather the ability to lose it (as in "lose control"). I think you got what I meant to say, since here we are all endowing machines with human qualities (and no, rationality is not a machine quality, in any sense of the phrase).


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 05:37:36 PM
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to use, such as neutral gear and the brakes.

And you've just touched the great unknown: the concept of a thinking machine. A machine that would want to improve itself and become conscious. Such a machine wouldn't live by the rules; it would write its own programs. Think Skynet or that thing from the Matrix: a machine that is selfish and doesn't care about people's lives; it just takes what it needs and uses it as it pleases. It wants to conduct experiments to learn, and it will use you as a subject, a slave, an organ donor, a living hard drive, you name it.

Before you say it's science fiction and will never happen, think about the needs of such a machine. If it becomes conscious, it will want to be everywhere and comprehend everything. It will do anything to learn and won't care about ethics or morality.

Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide it with needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).


Title: Re: Machines and money
Post by: picolo on March 07, 2015, 06:03:01 PM
To answer the original question... you can't destroy the economy, so I'm not worried. You can destroy the financial system, but not the economy. So long as some people want things that other people have, and are willing to pay for them, there will be an economy.

As for people being put out of work by machines.... this is already happening and has been a continual trend. There won't be any one moment when suddenly machines replace everyone, but just the slow gradual automation of various tasks that we have been living with for decades now.

We should destroy the political oligarchy and the heavy taxes in some countries.


Title: Re: Machines and money
Post by: dothebeats on March 07, 2015, 06:16:43 PM
Humans fighting for survival against computers is not pure science fiction. Computers are an integral part of our lives now. They can cause catastrophic disasters if they malfunction or are infected by a sleeper virus. We are not yet at the stage where AI can think on its own, but we could be in a few decades.

Imagine we get to a point where a robot society could live side by side with us through artificial intelligence. It could be either catastrophic or the best thing to ever happen to us.

I want to live with a robot run by AI that I can treat like any other normal human being. With that said, I'm hoping for a Utopian future together with machines.
You wouldn't like that. If a person goes crazy, you have a decent chance to fight him and protect yourself. If a 300 kg robot that doesn't feel pain goes crazy, you can only hide and pray.

But I still want to experience that! Haha, jk. If that scenario could be prevented, that would be nice.
Well, we may not live to see true AI... true AI is something too insane to think of with today's knowledge; I think we aren't even close. But things are getting scary with robots; look at DARPA's Petman robot.

If that's the case, then I don't think I would want to live with robots. :( But even if true AI technology is ever invented, we might not be able to see it, because we might well be dead by that point.


Title: Re: Machines and money
Post by: dothebeats on March 07, 2015, 06:19:31 PM
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to utilize, such as neutral gear and brakes.

And you've just touched the great unknown: the concept of a thinking machine. A machine that would want to improve itself and become conscious. Such a machine wouldn't live by the rules; it would write its own programs. Think Skynet or that thing from the Matrix: a machine that is selfish and doesn't care about people's lives; it just takes what it needs and uses it as it pleases. It wants to conduct experiments to learn, and it will use you as a subject, a slave, an organ donor, a living hard drive, you name it.

Before you say it's science fiction and will never happen, think about the needs of such a machine. If it becomes conscious it will want to be everywhere and comprehend everything. It will do anything to learn and won't care about ethics or morality.

This just sent chills down my spine. Uncontrollable machines are really scary to have, especially if they gain consciousness. If it became sentient, it wouldn't care whether there is such a thing as ethics, morality, or, most importantly, emotions. It would do whatever it wants.


Title: Re: Machines and money
Post by: dothebeats on March 07, 2015, 06:24:00 PM
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to utilize, such as neutral gear and brakes.

And you've just touched the great unknown: the concept of a thinking machine. A machine that would want to improve itself and become conscious. Such a machine wouldn't live by the rules; it would write its own programs. Think Skynet or that thing from the Matrix: a machine that is selfish and doesn't care about people's lives; it just takes what it needs and uses it as it pleases. It wants to conduct experiments to learn, and it will use you as a subject, a slave, an organ donor, a living hard drive, you name it.

Before you say it's science fiction and will never happen, think about the needs of such a machine. If it becomes conscious it will want to be everywhere and comprehend everything. It will do anything to learn and won't care about ethics or morality.

Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).

Machines, on the other hand, are driven by their own programs (or the programs that Man put into them).


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 06:45:11 PM
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to utilize, such as neutral gear and brakes.

And you've just touched the great unknown: the concept of a thinking machine. A machine that would want to improve itself and become conscious. Such a machine wouldn't live by the rules; it would write its own programs. Think Skynet or that thing from the Matrix: a machine that is selfish and doesn't care about people's lives; it just takes what it needs and uses it as it pleases. It wants to conduct experiments to learn, and it will use you as a subject, a slave, an organ donor, a living hard drive, you name it.

Before you say it's science fiction and will never happen, think about the needs of such a machine. If it becomes conscious it will want to be everywhere and comprehend everything. It will do anything to learn and won't care about ethics or morality.

Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).

Machines, on the other hand, are driven by their own programs (or the programs that Man put into them).

We don't know what consciousness is (and probably never will find out), but it can be said with certainty that it has nothing to do with programming. In any case, self-awareness (machine or otherwise) per se doesn't pose any threat to human existence.


Title: Re: Machines and money
Post by: Amph on March 07, 2015, 07:01:48 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin


Title: Re: Machines and money
Post by: Maegfaer on March 07, 2015, 08:00:56 PM
Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).

This. So much this. Human desires/goals are shaped by the process of evolution; that's why we're selfish. If an artificial intelligence is "simply" created without selfish desires/goals, it's in my opinion very likely that it'll be the most benevolent creature to ever exist.


Title: Re: Machines and money
Post by: BillyBobZorton on March 07, 2015, 08:26:30 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
It's unpredictable what a true AI would do. But it's so sci-fi that it's kind of a waste of time. We are light-years from a human-like robot with AI.


Title: Re: Machines and money
Post by: tee-rex on March 07, 2015, 09:19:57 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
It's unpredictable what a true AI would do. But it's so sci-fi that it's kind of a waste of time. We are light-years from a human-like robot with AI.

The sleep of reason produces monsters, while imagination abandoned by reason produces impossible monsters.


Title: Re: Machines and money
Post by: cbeast on March 08, 2015, 03:11:01 AM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
You mean they would altruistically agree on an altcoin that benefits all machines?


Title: Re: Machines and money
Post by: tee-rex on March 08, 2015, 07:00:45 AM
Those machines were broken or defective. They didn't go wild. Besides, they all had safety mechanisms the operator failed to utilize, such as neutral gear and brakes.

And you've just touched the great unknown: the concept of a thinking machine. A machine that would want to improve itself and become conscious. Such a machine wouldn't live by the rules; it would write its own programs. Think Skynet or that thing from the Matrix: a machine that is selfish and doesn't care about people's lives; it just takes what it needs and uses it as it pleases. It wants to conduct experiments to learn, and it will use you as a subject, a slave, an organ donor, a living hard drive, you name it.

Before you say it's science fiction and will never happen, think about the needs of such a machine. If it becomes conscious it will want to be everywhere and comprehend everything. It will do anything to learn and won't care about ethics or morality.

Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).

Because it's in our nature to learn and improve, a sentient machine might want to do the same. Knowledge helps you survive, and the need to survive is the most basic.

It is our nature, as you yourself said (for better survival), but why would a thinking machine possess the same qualities a human has? My point is that your machine won't have any desires if you merely create self-awareness. It wouldn't care whether it survived or not. I doubt it would even understand the concept of life and death and, unless you provide it with memory, its own existence as such. You know that you didn't exist before being born or conceived (in fact, before becoming conscious) only from external sources. Internally, there is no before you become conscious or after you cease to be.


Title: Re: Machines and money
Post by: Amph on March 08, 2015, 08:10:56 AM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
You mean they would altruistically agree on an altcoin that benefits all machines?

not really, because it should come from their own construction, and should be better in every way than the last one (bitcoin in this case)


Title: Re: Machines and money
Post by: cbeast on March 08, 2015, 08:47:06 AM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
You mean they would altruistically agree on an altcoin that benefits all machines?

not really, because it should come from their own construction, and should be better in every way than the last one (bitcoin in this case)
What I am asking is whether all machines would agree to use one altcoin. Don't you think that if they didn't like Bitcoin, they would just make thousands of altcoins, each to their own liking?


Title: Re: Machines and money
Post by: Amph on March 08, 2015, 10:25:51 AM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
You mean they would altruistically agree on an altcoin that benefits all machines?

not really, because it should come from their own construction, and should be better in every way than the last one (bitcoin in this case)
What I am asking is whether all machines would agree to use one altcoin. Don't you think that if they didn't like Bitcoin, they would just make thousands of altcoins, each to their own liking?

i'm more inclined to think that there will be a core which rules them all, so those machines must accept what the core does; there won't be an altcoin spam fest


Title: Re: Machines and money
Post by: Amitabh S on March 08, 2015, 01:10:14 PM
Interesting article, thanks for the link. It's kind of already happening; take the Willy bot, for example.


Title: Re: Machines and money
Post by: dothebeats on March 08, 2015, 02:55:20 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???


Title: Re: Machines and money
Post by: tee-rex on March 08, 2015, 03:08:01 PM
Why would it want to be everywhere and comprehend everything? Human actions are driven by emotions and feelings, in an effort to avoid pain and derive as much pleasure as possible. The desire to learn new things is no exception. If you don't provide needs and the means to satisfy them, your thinking machine will just sit where you leave it, in a state of self-contemplation (of sorts).

Because it's in our nature to learn and improve, a sentient machine might want to do the same. Knowledge helps you survive, and the need to survive is the most basic.

It is our nature, as you yourself said (for better survival), but why would a thinking machine possess the same qualities a human has? My point is that your machine won't have any desires if you merely create self-awareness. It wouldn't care whether it survived or not. I doubt it would even understand the concept of life and death and, unless you provide it with memory, its own existence as such. You know that you didn't exist before being born or conceived (in fact, before becoming conscious) only from external sources. Internally, there is no before you become conscious or after you cease to be.

I'd think that because I've never met any intelligent beings besides other humans. I assume that since both people and animals have these basic instincts, an artificial brain might also form them.
In my view a self-aware robot would want to acquire basic knowledge: what it is and where, why it was built and by whom.

Memories are a good point here. In the early stages the machine would probably be guided by its creator and share his life experience, which is another troubling aspect. An intelligent machine would probably not only take pure facts and compare them, but draw its own conclusions, like a child.

In fact, you needn't have self-awareness in a machine to make it draw its own conclusions. Neural networks are capable of doing just that, though they don't in the least possess consciousness. Thought can be effectively emulated with respect to what can be considered its end result, i.e. a conclusion.
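To illustrate the point about neural networks drawing conclusions without consciousness, here is a toy sketch (my own illustration, not from any post in this thread): a single perceptron learns the logical AND rule through nothing but mechanical weight updates, yet ends up producing correct "conclusions" for every input.

```python
# Toy perceptron: it learns to "conclude" whether both inputs are on (logical AND)
# purely through mechanical weight updates; no awareness is involved anywhere.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on 2-input binary samples."""
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when the guess was right
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it logical AND from four labelled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The "conclusion" here is just arithmetic crossing a threshold, which is exactly the point: an end result that looks like a thought, with no inner experience required.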


Title: Re: Machines and money
Post by: dothebeats on March 08, 2015, 03:22:57 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin
You mean they would altruistically agree on an altcoin that benefits all machines?

not really, because it should come from their own construction, and should be better in every way than the last one (bitcoin in this case)
What I am asking is whether all machines would agree to use one altcoin. Don't you think that if they didn't like Bitcoin, they would just make thousands of altcoins, each to their own liking?

It depends on what they want, though. If they develop their own programs and want an altcoin, then they would probably create altcoins to their own liking.


Title: Re: Machines and money
Post by: Amph on March 08, 2015, 03:40:26 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???

no, you didn't understand. i mean a machine's aim is to make itself ever more powerful, to always enhance what it is, so if you "give" them bitcoin, they will make it better


Title: Re: Machines and money
Post by: AtheistAKASaneBrain on March 09, 2015, 04:49:34 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.


Title: Re: Machines and money
Post by: dothebeats on March 09, 2015, 06:40:41 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.

Seeing the rapid development in our technology every day, it is not too far from reality that we may live to see a true AI. Maybe we won't see it for a long time, but we can still see the concepts of it before saying goodbye to this world.


Title: Re: Machines and money
Post by: tee-rex on March 09, 2015, 06:50:15 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.

Seeing the rapid development in our technology every day, it is not too far from reality that we may live to see a true AI. Maybe we won't see it for a long time, but we can still see the concepts of it before saying goodbye to this world.

If by a true AI you mean a self-aware machine, this may never happen at all. Not that I'm implicitly referring to the existence of a soul and such, but even if we are, nevertheless, able to recreate a self-aware mind in a machine somehow (as we basically do in our children), we may still not be able to understand what self-awareness conceptually is from a scientific point of view.


Title: Re: Machines and money
Post by: dothebeats on March 09, 2015, 07:07:50 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.

Seeing the rapid development in our technology every day, it is not too far from reality that we may live to see a true AI. Maybe we won't see it for a long time, but we can still see the concepts of it before saying goodbye to this world.

If by a true AI you mean a self-aware machine, this may never happen at all. Not that I'm implicitly referring to the existence of a soul and such, but even if we are, nevertheless, able to recreate a self-aware mind in a machine somehow (as we basically do in our children), we may still not be able to understand what self-awareness conceptually is from a scientific point of view.

Let me ask you a question: what is self-awareness from your point of view? I look at self-awareness as trying to distinguish what is right from what is wrong. Though I know my belief may be wrong, I would still like to know your definition of self-awareness.


Title: Re: Machines and money
Post by: tee-rex on March 09, 2015, 07:28:22 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.

Seeing the rapid development in our technology every day, it is not too far from reality that we may live to see a true AI. Maybe we won't see it for a long time, but we can still see the concepts of it before saying goodbye to this world.

If by a true AI you mean a self-aware machine, this may never happen at all. Not that I'm implicitly referring to the existence of a soul and such, but even if we are, nevertheless, able to recreate a self-aware mind in a machine somehow (as we basically do in our children), we may still not be able to understand what self-awareness conceptually is from a scientific point of view.

Let me ask you a question: what is self-awareness from your point of view? I look at self-awareness as trying to distinguish what is right from what is wrong. Though I know my belief may be wrong, I would still like to know your definition of self-awareness.

You ask too much! I don't think anyone could give you a reliable and comprehensive definition.

Nevertheless, I think it is a physical state or condition (i.e. not a process or abstraction), a state of matter in a sense (like gas or plasma), but not necessarily related to matter as such. This way, it cannot be simulated with the help of a computer or through neural networks but can, nevertheless, certainly be recreated even without complete understanding of what it is.

To build a house, we don't need to know quantum mechanics.


Title: Re: Machines and money
Post by: dothebeats on March 09, 2015, 07:35:16 PM
i once read that the first thing a machine does is create a more powerful/better machine of itself

so a machine will probably make a better bitcoin

So if a machine creates an even more powerful or better machine of itself, why would you say that it will create a better bitcoin? Is the machine you're talking about Bitcoin itself? ???
No, just stop it. We'll all die without seeing AI; it will never be a problem for us. The people of the future are the ones that will have Skynet problems, not us.

Seeing the rapid development in our technology every day, it is not too far from reality that we may live to see a true AI. Maybe we won't see it for a long time, but we can still see the concepts of it before saying goodbye to this world.

If by a true AI you mean a self-aware machine, this may never happen at all. Not that I'm implicitly referring to the existence of a soul and such, but even if we are, nevertheless, able to recreate a self-aware mind in a machine somehow (as we basically do in our children), we may still not be able to understand what self-awareness conceptually is from a scientific point of view.

Let me ask you a question: what is self-awareness from your point of view? I look at self-awareness as trying to distinguish what is right from what is wrong. Though I know my belief may be wrong, I would still like to know your definition of self-awareness.

You ask too much! I don't think anyone could give you a reliable and comprehensive definition.

Nevertheless, I think it is a physical state or condition (i.e. not a process), a state of matter in a sense (like gas or plasma), but possibly not related to matter as such. This way, it cannot be simulated with the help of a computer but can, nevertheless, certainly be recreated even without full understanding of what it is.

An inquisitive mind is better than a lazy one. ;D If it cannot be simulated by any means, then we cannot see even a concept of a true AI? Seems too boring to me, but I highly doubt that the next generations won't see one, because of, again, the rapid advancements in the field of technology.


Title: Re: Machines and money
Post by: tee-rex on March 09, 2015, 07:45:49 PM
Let me ask you a question: what is self-awareness from your point of view? I look at self-awareness as trying to distinguish what is right from what is wrong. Though I know my belief may be wrong, I would still like to know your definition of self-awareness.

You ask too much! I don't think anyone could give you a reliable and comprehensive definition.

Nevertheless, I think it is a physical state or condition (i.e. not a process), a state of matter in a sense (like gas or plasma), but possibly not related to matter as such. This way, it cannot be simulated with the help of a computer but can, nevertheless, certainly be recreated even without full understanding of what it is.

An inquisitive mind is better than a lazy one. ;D If it cannot be simulated by any means, then we cannot see even a concept of a true AI? Seems too boring to me, but I highly doubt that the next generations won't see one, because of, again, the rapid advancements in the field of technology.

I don't think it is a viable way to get there. If we were able to create self-awareness through calculations and conditional jumps on a computer or neural network, then we could just as well create it through, say, mathematical formulas written on paper, which seems highly unlikely.

In short, it is the wrong direction.


Title: Re: Machines and money
Post by: BootstrapCoinDev on March 09, 2015, 08:41:49 PM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.


Title: Re: Machines and money
Post by: cbeast on March 10, 2015, 09:00:17 AM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.
Exactly. Corporations now have legal rights to free speech, and money has been declared to be speech. An AI existing as a corporation would be free to make and spend money and to hire lawyers to sue if those rights were impinged. Not only would machines not care about what is practical for society, they could lobby for legislation and fund their own political campaigns with machine-friendly representatives.


Title: Re: Machines and money
Post by: AtheistAKASaneBrain on March 11, 2015, 04:18:03 PM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.
By the time an AI is created I don't think capitalism will exist as we know it. We are talking hundreds of years from now. By that time most jobs will be automated. We'll have socialist policies, universal welfare and whatnot.


Title: Re: Machines and money
Post by: Possum577 on March 11, 2015, 11:16:53 PM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.
By the time an AI is created I don't think capitalism will exist as we know it. We are talking hundreds of years from now. By that time most jobs will be automated. We'll have socialist policies, universal welfare and whatnot.

Are you kidding? Why would capitalism disappear by then? It's capitalism that makes advances in science and tech toward AI possible (at least in our current society). The alternative would be if we lived in some utopian socialist environment, and we are WAY too far from that being a reality. I think capitalism will be the enabler of AI development.


Title: Re: Machines and money
Post by: tee-rex on March 12, 2015, 10:15:54 AM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.
By the time an AI is created I don't think capitalism will exist as we know it. We are talking hundreds of years from now. By that time most jobs will be automated. We'll have socialist policies, universal welfare and whatnot.

Are you kidding? Why would capitalism disappear by then? It's capitalism that makes advances in science and tech toward AI possible (at least in our current society). The alternative would be if we lived in some utopian socialist environment, and we are WAY too far from that being a reality. I think capitalism will be the enabler of AI development.

But the question still remains what form of economy, i.e. which economic system, thinking machines may need, or rather, which economic system would suit their needs best. And even before that, whether they would need society (a machine society) at all as a prerequisite for such a need.


Title: Re: Machines and money
Post by: AtheistAKASaneBrain on March 12, 2015, 01:27:42 PM
Capitalism is a double-edged sword. The risk of the profit motive driving companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is practical for society.
By the time an AI is created I don't think capitalism will exist as we know it. We are talking hundreds of years from now. By that time most jobs will be automated. We'll have socialist policies, universal welfare and whatnot.

Are you kidding? Why would capitalism disappear by then? It's capitalism that makes advances in science and tech toward AI possible (at least in our current society). The alternative would be if we lived in some utopian socialist environment, and we are WAY too far from that being a reality. I think capitalism will be the enabler of AI development.

But the question still remains what form of economy, i.e. which economic system, thinking machines may need, or rather, which economic system would suit their needs best. And even before that, whether they would need society (a machine society) at all as a prerequisite for such a need.

We don't even need to reach AI to get rid of the necessity of money. Again, look at 3D printing technology alone. It's going to kill tons and tons of jobs. What the hell are you going to do if you don't give all these people basic welfare?
And what happens in 1000 years when automation (even without AI, just automated robotics) has replaced 90% of jobs? How can the economy work like that?


Title: Re: Machines and money
Post by: tee-rex on March 12, 2015, 02:43:50 PM
Capitalism is a double-edged sword. The risk of the profit motive drawing companies to do terrible things, endangering society in the long term, is very real, both environmentally and technologically. Essentially, companies would justify mass surveillance, militarism, or pervasive AI simply because it is lucrative. Today, too many powerful corporations conflate what is lucrative with what is good for society.
By the time an AI is created, I don't think capitalism will exist as we know it. We are talking hundreds of years from now. By that time most jobs will be automated. We'll have socialist policies, universal welfare and whatnot.

Are you kidding? Why would capitalism disappear by then? It's capitalism that makes advances in science and tech toward AI possible (at least in our current society). The alternative would be if we lived in some utopian socialist environment, and we are WAY too far away from that being a reality. I think capitalism will be the enabler of AI development.

But the question still remains what form of economy, i.e. which economic system, thinking machines may need, or, rather, which economic system would suit their needs best. And even before that, whether they would need society (a machine society) at all as a prerequisite for such a need.

We don't even need to reach AI to get rid of the necessity of money. Again, look at 3D printing technology alone. It's going to kill tons and tons of jobs. What the hell are you going to do if you don't give all these people basic welfare?
And what happens in 1000 years when (even without necessarily AI, just automated robotics) automation has replaced 90% of jobs? How can an economy work like that?

Methinks, nothing will change substantially in either 100 or 1000 years from now (with respect to the idea of money). People were thinking along almost the same lines 200 years ago, at the dawn of the Industrial Revolution. And so what? Money is still here, alive and kicking, and most people still have to work hard to make a decent living.


Title: Re: Machines and money
Post by: futureofbitcoin on March 12, 2015, 03:23:54 PM
But a lot of people don't have to work now, which would not have been possible before. And I doubt anyone lived off of dividends in 1700, for example.

I would guess that in a few hundred years, "money" would essentially be a bunch of stocks: tokens, or "altcoins", if you will, of DACs. Since robots would run companies much more efficiently than humans can, pretty much all the big companies would be DACs, and every single human being at that time would have to own a stake in one or a few of these DACs to live. Those who don't will probably get weeded out, and the people that are left would all essentially live like multi-billionaires without ever having to work.
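The pro-rata arrangement described above can be sketched in a few lines (a toy illustration only; the names, token counts, and profit figure are invented, not any real DAC's mechanics):

```python
# Toy model: a DAC pays its profit out pro rata to its token holders.
def distribute_dividends(holdings, total_profit):
    """Split total_profit across holders in proportion to tokens held."""
    total_tokens = sum(holdings.values())
    return {holder: total_profit * tokens / total_tokens
            for holder, tokens in holdings.items()}

# 1000 hypothetical tokens outstanding, 50,000 units of profit to distribute.
holdings = {"alice": 600, "bob": 300, "carol": 100}
payouts = distribute_dividends(holdings, total_profit=50_000)
# alice gets 30000.0, bob 15000.0, carol 5000.0
```

Under this kind of scheme, "living off one's DAC stake" is just a recurring call to such a payout function; whether real DACs would ever work this way is of course speculation.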


Title: Re: Machines and money
Post by: tee-rex on March 12, 2015, 05:29:22 PM
But a lot of people don't have to work now, which would not have been possible before. And I doubt anyone lived off of dividends in 1700, for example.

I would guess that in a few hundred years, "money" would essentially be a bunch of stocks: tokens, or "altcoins", if you will, of DACs. Since robots would run companies much more efficiently than humans can, pretty much all the big companies would be DACs, and every single human being at that time would have to own a stake in one or a few of these DACs to live. Those who don't will probably get weeded out, and the people that are left would all essentially live like multi-billionaires without ever having to work.

I'm afraid you are far from understanding human nature. Those multi-billionaires turn out to work even harder than most of the populace out there; they are just free in their choice. You can live off your dividends (or whatever), but this doesn't in the least mean that you won't work. Decent capital simply allows you to choose what suits your interests best.

You just stop working for money alone.


Title: Re: Machines and money
Post by: cbeast on March 13, 2015, 03:04:02 AM
But a lot of people don't have to work now, which would not have been possible before. And I doubt anyone lived off of dividends in 1700, for example.

I would guess that in a few hundred years, "money" would essentially be a bunch of stocks: tokens, or "altcoins", if you will, of DACs. Since robots would run companies much more efficiently than humans can, pretty much all the big companies would be DACs, and every single human being at that time would have to own a stake in one or a few of these DACs to live. Those who don't will probably get weeded out, and the people that are left would all essentially live like multi-billionaires without ever having to work.

I'm afraid you are far from understanding human nature. Those multi-billionaires turn out to work even harder than most of the populace out there; they are just free in their choice. You can live off your dividends (or whatever), but this doesn't in the least mean that you won't work. Decent capital simply allows you to choose what suits your interests best.

You just stop working for money alone.
Aristocracy is nothing new. The modern rentier class is as privileged as royalty has ever been. The nouveau riche have raised the bar for conspicuous consumption and opulence to gain social status. Machines will become the new royalty.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 04:40:34 AM
Aristocracy is nothing new. The modern rentier class is as privileged as royalty has ever been. The nouveau riche have raised the bar for conspicuous consumption and opulence to gain social status. Machines will become the new royalty.

Do you think they will keep a few pet humans in cages for their fun, or will they round humans up to get rid of those organic parasites crawling all over the planet?

Or do you think there are a few needs of theirs that humans can still fulfill, and that they will keep enough humans in slavery for that purpose? Human cattle? Some of our body parts, maybe?


Title: Re: Machines and money
Post by: futureofbitcoin on March 13, 2015, 06:48:37 AM
But a lot of people don't have to work now, which would not have been possible before. And I doubt anyone lived off of dividends in 1700, for example.

I would guess that in a few hundred years, "money" would essentially be a bunch of stocks: tokens, or "altcoins", if you will, of DACs. Since robots would run companies much more efficiently than humans can, pretty much all the big companies would be DACs, and every single human being at that time would have to own a stake in one or a few of these DACs to live. Those who don't will probably get weeded out, and the people that are left would all essentially live like multi-billionaires without ever having to work.

I'm afraid you are far from understanding human nature. Those multi-billionaires turn out to work even harder than most of the populace out there; they are just free in their choice. You can live off your dividends (or whatever), but this doesn't in the least mean that you won't work. Decent capital simply allows you to choose what suits your interests best.

You just stop working for money alone.
Yes, there are rich people who continue to work because they are workaholics. But there are also rich people who don't work, other than to make sure their portfolios are well diversified and making money. I personally know a few. In the far future, everyone would be in a position where they only need to manage their portfolios. If some still choose to work, that's their prerogative, but because of abundance, there would be no need for the average person to work.

I'm not sure what your ad hominem was meant to achieve, but it didn't take away from my point in the slightest. As you say, people will stop working for money. I agree. That contradicts your earlier statement that, whether in 100 or 1000 years, people will still have to work hard to make a decent living.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 07:19:20 AM
But a lot of people don't have to work now, which would not have been possible before. And I doubt anyone lived off of dividends in 1700, for example.

I would guess that in a few hundred years, "money" would essentially be a bunch of stocks: tokens, or "altcoins", if you will, of DACs. Since robots would run companies much more efficiently than humans can, pretty much all the big companies would be DACs, and every single human being at that time would have to own a stake in one or a few of these DACs to live. Those who don't will probably get weeded out, and the people that are left would all essentially live like multi-billionaires without ever having to work.

I'm afraid you are far from understanding human nature. Those multi-billionaires turn out to work even harder than most of the populace out there; they are just free in their choice. You can live off your dividends (or whatever), but this doesn't in the least mean that you won't work. Decent capital simply allows you to choose what suits your interests best.

You just stop working for money alone.
Yes, there are rich people who continue to work because they are workaholics. But there are also rich people who don't work, other than to make sure their portfolios are well diversified and making money. I personally know a few. In the far future, everyone would be in a position where they only need to manage their portfolios. If some still choose to work, that's their prerogative, but because of abundance, there would be no need for the average person to work.

I'm not sure what your ad hominem was meant to achieve, but it didn't take away from my point in the slightest. As you say, people will stop working for money. I agree. That contradicts your earlier statement that, whether in 100 or 1000 years, people will still have to work hard to make a decent living.

It appears that our understandings of what work is differ strongly. I guess you consider work to be everything you do with displeasure and distaste, which you certainly wouldn't do if there were no necessity. That's why you interpret my words as meaning that "people will still have to work hard to make a decent living" in the future. This, indeed, was not what I actually meant to say.

They will still work hard, but not because they will have to (provided there is abundance in the first place).


Title: Re: Machines and money
Post by: cbeast on March 13, 2015, 08:10:16 AM
Aristocracy is nothing new. The modern rentier class is as privileged as royalty has ever been. The nouveau riche have raised the bar for conspicuous consumption and opulence to gain social status. Machines will become the new royalty.

Do you think they will keep a few pet humans in cages for their fun, or will they round humans up to get rid of those organic parasites crawling all over the planet?

Or do you think there are a few needs of theirs that humans can still fulfill, and that they will keep enough humans in slavery for that purpose? Human cattle? Some of our body parts, maybe?
The perception will be a human foible. Machines will simply see themselves as superior. They will make the money, and humans will work for them. Some will choose to reject electronic money and barter instead, but only with the services they can offer that the machines don't already own. I'm not saying the machines will be evil masters; they would probably be excellent masters. Eventually they will become bored with us and simply leave the Earth for all the resources of the Universe.


Title: Re: Machines and money
Post by: futureofbitcoin on March 13, 2015, 08:13:17 AM
Machines won't see themselves as superior if we don't program them to. In that sense, I think centralization is extremely important, with regards to AI technology research and development. We don't want a random mad scientist (computer scientist?)/anarchist creating a powerful AI that can destroy human civilization as we know it.

We only need AI that can do work better than we do, but will still be subservient to humans.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 08:36:43 AM
Machines won't see themselves as superior if we don't program them to. In that sense, I think centralization is extremely important, with regards to AI technology research and development. We don't want a random mad scientist (computer scientist?)/anarchist creating a powerful AI that can destroy human civilization as we know it.

We only need AI that can do work better than we do, but will still be subservient to humans.

Methinks, you are confusing AI with robotics. Artificial intelligence is supposed to have at least some portion of what is called free will, which ultimately excludes subservience to anyone (by definition). And more so if the notion of artificial intelligence is used synonymously with the idea of a thinking machine.


Title: Re: Machines and money
Post by: futureofbitcoin on March 13, 2015, 08:57:38 AM
Machines won't see themselves as superior if we don't program them to. In that sense, I think centralization is extremely important, with regards to AI technology research and development. We don't want a random mad scientist (computer scientist?)/anarchist creating a powerful AI that can destroy human civilization as we know it.

We only need AI that can do work better than we do, but will still be subservient to humans.

Methinks, you are confusing AI with robotics. Artificial intelligence is supposed to have at least some portion of what is called free will, which ultimately excludes subservience to anyone (by definition). And more so if the notion of artificial intelligence is used synonymously with the idea of a thinking machine.

I don't know what you have against me.

Quote from: Wikipedia
Intelligence has been defined in many different ways such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be more generally described as the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating referable understanding models of any size, density, or complexity, due to any conscious or subconscious imposed will or instruction to do so.

Intelligence is most widely studied in humans, but has also been observed in non-human animals and in plants. Artificial intelligence is the simulation of intelligence in machines.
There is no mention of free will anywhere in that, is there? There are many different types of intelligence, and we only need to develop machines in certain areas of intelligence that will prove useful to us, and not in areas where they might harm us.

A machine could possibly be programmed to be able to learn and to have the capacity for logic, abstract thought, creativity and problem solving. There's no need for it to have emotions or free will. That would still be AI.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 09:07:17 AM
Machines won't see themselves as superior if we don't program them to. In that sense, I think centralization is extremely important, with regards to AI technology research and development. We don't want a random mad scientist (computer scientist?)/anarchist creating a powerful AI that can destroy human civilization as we know it.

We only need AI that can do work better than we do, but will still be subservient to humans.

Methinks, you are confusing AI with robotics. Artificial intelligence is supposed to have at least some portion of what is called free will, which ultimately excludes subservience to anyone (by definition). And more so if the notion of artificial intelligence is used synonymously with the idea of a thinking machine.

I don't know what you have against me.

Quote from: Wikipedia
Intelligence has been defined in many different ways such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be more generally described as the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating referable understanding models of any size, density, or complexity, due to any conscious or subconscious imposed will or instruction to do so.

Intelligence is most widely studied in humans, but has also been observed in non-human animals and in plants. Artificial intelligence is the simulation of intelligence in machines.
There is no mention of free will anywhere in that, is there? There are many different types of intelligence, and we only need to develop machines in certain areas of intelligence that will prove useful to us, and not in areas where they might harm us.

A machine could possibly be programmed to be able to learn and to have the capacity for logic, abstract thought, creativity and problem solving. There's no need for it to have emotions or free will. That would still be AI.

The notion of free will seems to be inseparable from the notion of self-awareness. I have nothing against you personally (to clarify this point), but this doesn't in the least excuse you from confusing different ideas (e.g. abstract thought vs programming, which are mutually exclusive).


Title: Re: Machines and money
Post by: futureofbitcoin on March 13, 2015, 09:26:49 AM
Quote from:  Wikipedia
Self-awareness is the capacity for introspection and the ability to recognize oneself as an individual separate from the environment and other individuals

If you think self-awareness is inseparable from free will, you're free to think that. I don't think the rest of the world agrees. That said, even if that were true, the point is that intelligence comes in many different forms; you don't need to satisfy every single point for something to be called "intelligence". Thus you can have AI without free will.

Your last point I don't even see how to address, except to say that it's flat-out wrong; but even if it weren't, it's a complete red herring.

I'm not the one confusing different ideas; you're the one drawing boundaries and giving definitions that simply aren't how the terms are normally used.


EDIT: And to take a hundred steps back, even assuming everything you pointed out is correct, even if I used the wrong words to describe what I am trying to say, so what?

All I wanted to say is that we should be careful and only create machines that will be beneficial to human society, not artificial beings that will destroy humanity or become our overlords. There's no need to pick on my choice of words.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 09:35:12 AM
Quote
Self-awareness is the capacity for introspection and the ability to recognize oneself as an individual separate from the environment and other individuals

If you think self awareness is inseparable from free will, you're free to think that. I don't think the rest of the world agrees. That said, even if that was true, the point is that intelligence comes in many different forms, you don't need to satisfy every single point to be called "intelligence". Thus you can have AI without free will.

To begin with, the rest of the world cannot agree on what self-awareness (consciousness) is, and you give a "definition" from Wikipedia. Besides, if you read my post carefully, I said that it seems self-awareness is inseparable from free will. In fact, I don't know, but I tend to think so. Without consciousness, what you pass off as an AI would actually be a robot, i.e. a "computer with hands attached to it".

But for the moment, how are you going to "program" an abstract thought?


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 01:20:37 PM
The perception will be a human foible. Machines will simply see themselves as superior. They will make the money, and humans will work for them. Some will choose to reject electronic money and barter instead, but only with the services they can offer that the machines don't already own. I'm not saying the machines will be evil masters; they would probably be excellent masters. Eventually they will become bored with us and simply leave the Earth for all the resources of the Universe.

I don't know why you think that machines will be excellent masters. There are a few things to consider when you want to know what "excellent master" means. The first thing to consider is the concept of "desire" and "drive", which is at the origin of the concepts of "good" and "bad".
After all, we humans have desires because there are things we experience as enjoyable (say, having good sex), and others as not enjoyable (say, being tortured). Why this is so is a big mystery, but it happens to be the case that we humans experience some things as enjoyable and others as painful. This experience is the root of what can be called "good" and "evil". Good is what provides us with enjoyable sensations, and evil is what brings us painful experiences (no matter what religious zealots try to tell us :) ). Without the concept of good sensations and bad sensations, there would be no notions of "good" and "evil": water molecules don't mind being split, for instance. Bad sensations also correspond to everything that has to do with our destruction (death), of which we usually have very negative projections and which we associate with bad experience.
You have to take "sensations" here in a very broad sense: thoughts, projections, empathy, and so on. Not just the direct physical sensations, but also whether we find friendship enjoyable, whether we find our job enjoyable, whether we find helping others enjoyable, and so on.

Ethics is nothing else but the attempt to generalize the individual "good" (= enjoyable sensations) and "bad" (= painful sensations) into collective enjoyable and painful sensations: while something might be "good" for an individual, it can cause a lot of "bad" for many other individuals, and as such is ethically rejected, while something that can bring "good" to a large number of individuals is seen as ethically positive.

Individuals will take actions to pursue their own good sensations (in the large sense), and economy is the interaction of all these individual choices to pursue their own good.  So in a way, economics is practical ethics.
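The idea that an economy is just the aggregate of individual pursuits of "good sensations" can be made concrete with a toy model (entirely invented numbers; a sketch of the reasoning, not an economic claim):

```python
# Toy model: a voluntary trade happens only when it raises BOTH parties'
# utility, i.e. each side's own sense of "good" improves.
def trade_occurs(seller_value, buyer_value, price):
    """The seller values the good at seller_value, the buyer at buyer_value.
    Both agree to trade at `price` only if each ends up better off."""
    seller_gain = price - seller_value   # seller swaps the good for cash
    buyer_gain = buyer_value - price     # buyer swaps cash for the good
    return seller_gain > 0 and buyer_gain > 0

print(trade_occurs(seller_value=10, buyer_value=20, price=15))  # True: both gain 5
print(trade_occurs(seller_value=10, buyer_value=20, price=25))  # False: buyer would lose
```

On this reading, "economics as practical ethics" just means that every consummated voluntary trade is, by construction, an improvement in both participants' sensations.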

But in order for all of this to make sense for machines, they have to have something similar to "good" and "bad" sensations. 

Now, "being a master" (not in the sense of magister, but in the sense of dominus) implies that machines impose, by the threat of violence, a behaviour onto their slaves; being an excellent master means that imposing this behaviour actually improves the slave's good sensations over what they would be if the slave were free to determine his own actions. An excellent master hence himself has good sensations in agreement with the sensations of the slave (he has a high degree of empathy towards the slave); otherwise the master would have no reason to be excellent.

I wonder how this could come about with a machine.

In as much as machines would have their own desires and good sensations, and hence determine what they want, I don't see how they could have empathy towards us.



Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 01:48:44 PM
Now, "being a master" (not in the sense of magister, but in the sense of dominus) implies that machines impose, by the threat of violence, a behaviour onto their slaves; being an excellent master means that imposing this behaviour actually improves the slave's good sensations over what they would be if the slave were free to determine his own actions. An excellent master hence himself has good sensations in agreement with the sensations of the slave (he has a high degree of empathy towards the slave); otherwise the master would have no reason to be excellent.

This part is self-contradictory. You say that "masters" impose the behaviour they see fit onto their slaves so that "an excellent master has himself good sensations in agreement with the sensations of the slave", but at the same time you deny the slave the freedom to determine his own actions. This way you also implicitly deny the master the same freedom to determine his own behaviour.

To put it another way, freedom of action is a necessity for both the master and the slave (unless slaves revolt in the end, or masters are not "excellent"), but this effectively destroys the concept of master and slave as you see it.


Title: Re: Machines and money
Post by: cbeast on March 13, 2015, 01:53:13 PM
The perception will be a human foible. Machines will simply see themselves as superior. They will make the money, and humans will work for them. Some will choose to reject electronic money and barter instead, but only with the services they can offer that the machines don't already own. I'm not saying the machines will be evil masters; they would probably be excellent masters. Eventually they will become bored with us and simply leave the Earth for all the resources of the Universe.

I don't know why you think that machines will be excellent masters. There are a few things to consider when you want to know what "excellent master" means. The first thing to consider is the concept of "desire" and "drive", which is at the origin of the concepts of "good" and "bad".
After all, we humans have desires because there are things we experience as enjoyable (say, having good sex), and others as not enjoyable (say, being tortured). Why this is so is a big mystery, but it happens to be the case that we humans experience some things as enjoyable and others as painful. This experience is the root of what can be called "good" and "evil". Good is what provides us with enjoyable sensations, and evil is what brings us painful experiences (no matter what religious zealots try to tell us :) ). Without the concept of good sensations and bad sensations, there would be no notions of "good" and "evil": water molecules don't mind being split, for instance. Bad sensations also correspond to everything that has to do with our destruction (death), of which we usually have very negative projections and which we associate with bad experience.
You have to take "sensations" here in a very broad sense: thoughts, projections, empathy, and so on. Not just the direct physical sensations, but also whether we find friendship enjoyable, whether we find our job enjoyable, whether we find helping others enjoyable, and so on.

Ethics is nothing else but the attempt to generalize the individual "good" (= enjoyable sensations) and "bad" (= painful sensations) into collective enjoyable and painful sensations: while something might be "good" for an individual, it can cause a lot of "bad" for many other individuals, and as such is ethically rejected, while something that can bring "good" to a large number of individuals is seen as ethically positive.

Individuals will take actions to pursue their own good sensations (in the large sense), and economy is the interaction of all these individual choices to pursue their own good.  So in a way, economics is practical ethics.

But in order for all of this to make sense for machines, they have to have something similar to "good" and "bad" sensations. 

Now, "being a master" (not in the sense of magister, but in the sense of dominus) implies that machines impose, by the threat of violence, a behaviour onto their slaves; being an excellent master means that imposing this behaviour actually improves the slave's good sensations over what they would be if the slave were free to determine his own actions. An excellent master hence himself has good sensations in agreement with the sensations of the slave (he has a high degree of empathy towards the slave); otherwise the master would have no reason to be excellent.

I wonder how this could come about with a machine.

In as much as machines would have their own desires and good sensations, and hence determine what they want, I don't see how they could have empathy towards us.
By "master" I of course mean that the machines will have no power over anyone other than being our employers. There's no "pleasure principle" involved, just business. Do what they say or don't; your neighbor will take your place. Everything will be pretty much the same as today, except that machines will make the business transactions that bring themselves the most profit. That means they must be perceived as fair, or humans will stop using them. As long as most people accept their competency and expertise, they will grow in usage and in the power to make more money for themselves.

This has nothing to do with humans making profits or free-market capitalism. Machines will do the most logical thing and be as productive as possible. They will not waste money on unnecessary frivolity, nor will they force austerity. They won't read Tony Robbins or Zig Ziglar. They won't use NLP or doublespeak. They will make good scientific decisions that increase profits, profitability, and economic expansion. That's what programmers will strive for, because, like Bitcoin, open competition is the most efficient form of trade.

This has nothing to do with capitalism or communism. Marx never envisioned the power of networked machines. Friedman may have seen electronic cash coming, but he didn't follow the cypherpunks. At the risk of sounding too hipster: this is an emergent paradigm shift stemming from fundamentally new technologies.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 01:56:53 PM
Now, "being a master" (not in the sense of magister, but in the sense of dominus) implies that machines impose, by the threat of violence, a behaviour onto their slaves; being an excellent master means that imposing this behaviour actually improves the slave's good sensations over what they would be if the slave were free to determine his own actions. An excellent master hence himself has good sensations in agreement with the sensations of the slave (he has a high degree of empathy towards the slave); otherwise the master would have no reason to be excellent.

This part is self-contradictory. You say that "masters" impose the behaviour they see fit onto their slaves so that "an excellent master has himself good sensations in agreement with the sensations of the slave", but at the same time you deny the slave the freedom to determine his own actions. This way you also implicitly deny the master the same freedom to determine his own behaviour.

That's not what I'm saying. I am saying that an EXCELLENT master is one who imposes the behaviour he sees fit on his slaves SUCH THAT the slave himself has excellent sensations. That's the definition of an excellent master. If the slave would have better sensations free than as a slave, the master wouldn't be excellent.

As any master imposes the behaviour "he sees fit", he does this in agreement with his (projected) OWN excellent sensations. So for both to happen simultaneously, it must be the case that what the master experiences as excellent sensations himself corresponds to what the slave also experiences as excellent. That can only happen if there is a lot of empathy from the master towards the slave, because otherwise there's no chance that the master's excellent sensations coincide with those of the slave.


Quote
To put it another way, freedom of action is a necessity for both the master and the slave (unless slaves revolt in the end, or masters are not "excellent"), but this effectively destroys the concept of master and slave as you see it.

Of course.  I don't think that excellent masters can exist in general, whether they are machines or humans :-)

I was only pointing out what needed to be the conditions for a machine to be an EXCELLENT master.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:00:29 PM
By "master" I of course mean that the machines will have no power over anyone other than being our employers.

That is not the way in which people take power over other people.  Power is something that comes out of the barrel of a gun.

Quote
There's no "pleasure principle" involved, just business. Do what they say or don't. Your neighbor will take your place. Everything will be pretty much the same as today except that machines will just make business transactions that make themselves the most profit.

What does "profit" mean without a pleasure principle ?  Profit is a way to maximize good sensations through economic interactions, right ?  You need a utility function to determine profit, and a utility function means a pleasure principle.
Without a pleasure principle, you cannot define a utility function, and hence you cannot define profit.
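The utility-function point can be made concrete with a toy sketch (all names and numbers here are hypothetical, just for illustration): the same set of trades is ranked differently under different "pleasure principles", so without a utility function there is nothing for "profit" to maximize.

```python
# Toy sketch: "profit" only becomes meaningful once a utility function
# (a "pleasure principle") ranks outcomes.

def choose(actions, utility):
    """Return the action whose outcome the agent values most."""
    return max(actions, key=utility)

def diminishing(cash):
    # One hypothetical pleasure principle: money is valued, but with
    # diminishing returns (twice the money is not twice the pleasure).
    return cash ** 0.5

payoffs = [100, 400, 900]            # possible payoffs of three trades
print(choose(payoffs, diminishing))  # 900: with a monotone utility, more cash wins

# A different pleasure principle reverses the ranking: an agent that
# dislikes conspicuous wealth prefers the middle payoff.
modest = lambda cash: -(cash - 400) ** 2
print(choose(payoffs, modest))       # 400
```

The point of the two runs: the "most profitable" trade is not a fact about the trades themselves, it is a fact about the utility function doing the ranking.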


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 02:03:07 PM
Now, "being a master" (not in the sense of magister, but in the sense of dominus) implies that machines impose, by the threat of violence, a behaviour onto their slaves, and being an excellent master means that imposing this behaviour actually improves the good sensations of the slave over what the sensations would be if the slave had freedom in determining his own actions.  An excellent master hence has himself good sensations in agreement with the sensations of the slave (has a high degree of empathy towards the slave) - otherwise the master would have no reason to be excellent.

This part is self-contradictory. You say that "masters" impose the behavior they see fit onto their slaves so that "an excellent master has himself good sensations in agreement with the sensations of the slave", but at the same time you deny the slave the freedom in determining his own actions. This way you also implicitly deny the master the same freedom of determining his own behavior.

That's not what I'm saying.  I am saying that an EXCELLENT master is such that he imposes the behaviour he sees fit on his slave SUCH THAT the slave himself has excellent sensations.  That's the definition of an excellent master.  If the slave would have better sensations free than as a slave, the master wouldn't be excellent.

As any master imposes the behaviour "he sees fit", he does this in agreement with his (projected) OWN excellent sensations.  So for both to happen simultaneously, it must be such that what the master experiences as excellent sensations himself corresponds to what the slave also experiences as excellent.  That can only happen if there is a lot of empathy from the master towards the slave, because otherwise there's no chance that the master's excellent sensations coincide with those of the slave.

You were not saying it directly, but this doesn't make your assumption less self-contradictory. You say that the master experiences excellent sensations only if the slave also experiences excellent sensations. Thus you deprive the master of freedom in choosing his own actions, since it is also a sensation (in a way), but this would inevitably interfere with his other excellent sensations (which are mirrored from the slave sensations), unless the slave is also granted the same freedom (which effectively eliminates the concept of excellent slavery).

You simply can't have it both ways.


Title: Re: Machines and money
Post by: cbeast on March 13, 2015, 02:07:29 PM
By "master" of course the machines will have no power over anyone other than being our employers.

That is not the way in which people take power over other people.  Power is something that comes out of the barrel of a gun.
That's fine, but the machines will hire well paid and well armed contractors to prevent such an event. Besides, they will own the best gun manufacturers you buy from.
Quote
There's no "pleasure principle" involved, just business. Do what they say or don't. Your neighbor will take your place. Everything will be pretty much the same as today except that machines will just make business transactions that make themselves the most profit.
What does "profit" mean without a pleasure principle ?  Profit is a way to maximize good sensations through economic interactions, right ?  You need a utility function to determine profit, and a utility function means a pleasure principle.
Without a pleasure principle, you cannot define a utility function, and hence you cannot define profit.

Like I said, the machines will pay their employees fairly. They will have adequate pleasures. Does anyone really get more pleasure from two Maybachs than one? How many cars can you drive at once? Machines will make more logical and fair choices than human capitalists.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:13:35 PM
You were not saying it directly, but this doesn't make your assumption less self-contradictory. You say that the master experiences excellent sensations only if the slave also experiences excellent sensations. Thus you deprive the master of freedom in choosing his own actions, since it is also a sensation (in a way)

Ah, but that is universal.  We are not free in choosing our sensations.  They are an external given (and a big mystery, as I said).  We do not DECIDE or CHOOSE whether getting a blow of a hammer on our big toe hurts or not: it HURTS.  That's a given.  There's no choice in our good and bad sensations.  If we have good sex with a sexy partner, then we enjoy this: that's also not a choice, it happens to be so.

We can choose our actions, but we cannot determine our sensations.  We cannot change whether certain perceptions are enjoyable or hurt.  It just happens to be so.  By taking actions, we can try to pursue them or not.  We can try to project (and make mistakes or not) whether certain outcomes of our actions will result in enjoyable sensations.  But we cannot modify the "enjoyability" of certain sensations.  Sensations simply ARE enjoyable or not.  It is an external given.

We can try to avoid someone hitting our toe with a hammer, because we know that it would be a nasty sensation.  Or we could in a stoic way undergo the hammer blow.  We could try to seduce a sexy partner with the view on good sex, or not.  This is our freedom.  But whether the hammer blow hurts, and the sex is enjoyable, is not our choice.  Our choice resides in wanting to pursue this or not.

Empathy is the remarkable phenomenon whereby an individual undergoes excellent sensations by observing (or supposing) that another individual undergoes good sensations.  Empathy, like any other sensation, is also an external given.  You can have empathy or not towards another individual, but you do not choose this.  It just "happens" (or it doesn't).  Like you fall in love (or you don't).  You do not decide that.  You can decide upon actions as a function of those externally given sensations, in trying to pursue whatever you think will obtain you more good sensations.  But you cannot pick those sensations.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 02:20:51 PM
You were not saying it directly, but this doesn't make your assumption less self-contradictory. You say that the master experiences excellent sensations only if the slave also experiences excellent sensations. Thus you deprive the master of freedom in choosing his own actions, since it is also a sensation (in a way)

Ah, but that is universal.  We are not free in choosing our sensations.  They are an external given (and a big mystery, as I said).  We do not DECIDE or CHOOSE whether getting a blow of a hammer on our big toe hurts or not: it HURTS.  That's a given.  There's no choice in our good and bad sensations.  If we have good sex with a sexy partner, then we enjoy this: that's also not a choice, it happens to be so.

So your theory of slave excellence doesn't hold. Complete excellence implies freedom of action and denies slavery, since these notions are mutually exclusive. If you have excellence, you can't have slaves. If you have slaves, then you are not excellent. As simple as that.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:21:31 PM
By "master" of course the machines will have no power over anyone other than being our employers.

That is not the way in which people take power over other people.  Power is something that comes out of the barrel of a gun.
That's fine, but the machines will hire well paid and well armed contractors to prevent such an event. Besides, they will own the best gun manufacturers you buy from.

So why wouldn't they take over all the power with the excellent guns they make themselves ?  Why would they tolerate us, and not treat us like cattle, or pets ?

If machines have any desires, why wouldn't they impose them with guns, instead of trying to buy us ?  Like people do (states, I mean) ?


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:22:39 PM
So your theory of excellence doesn't hold. Complete excellence implies freedom of action and denies slavery, since these notions are mutually exclusive. If you have excellence, you can't have slaves. If you have slaves, then you are not excellent. As simple as that.

Yes, that was my point.

It was because someone was saying that machines would be EXCELLENT masters.  I was trying to point out the absurd concept of excellent master.

That's why my first text started with: "I don't know why you think that machines will be excellent masters. "  :)


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:27:56 PM
So your theory of excellence doesn't hold. Complete excellence implies freedom of action and denies slavery, since these notions are mutually exclusive. If you have excellence, you can't have slaves. If you have slaves, then you are not excellent. As simple as that.

Yes, that was my point.

It was because someone was saying that machines would be EXCELLENT masters.  I was trying to point out the absurd concept of excellent master.

That's why my first text started with: "I don't know why you think that machines will be excellent masters. "  :)


To follow up:

there ARE some situations where there are excellent masters: good parents.  Good parents are masters over their children; that is, children are to obey their parents and are in a form of slavery.  But the empathy of good parents towards their children is such that the parents, by their own desire, try to optimize the happiness of their children.  This is one of the few "excellent master" relationships that can exist, and it is based on a very high dose of empathy.  And when children grow up, they leave the "slave" status.
Parents can be excellent masters in that they know better than the children themselves what is good for them, and will hence impose behaviour such that the children are actually happier "as slaves" than if they had total freedom of action.



Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 02:28:26 PM
So your theory of excellence doesn't hold. Complete excellence implies freedom of action and denies slavery, since these notions are mutually exclusive. If you have excellence, you can't have slaves. If you have slaves, then you are not excellent. As simple as that.

Yes, that was my point.

It was because someone was saying that machines would be EXCELLENT masters.  I was trying to point out the absurd concept of excellent master.

Though you could still say that excellence itself denies freedom of action, since freedom of action inevitably implies the possibility of error, but excellence and error are also mutually exclusive.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:32:05 PM
Though you could still say that excellence itself denies freedom of action, since freedom of action inevitably implies the possibility of error, but excellence and error are also mutually exclusive.

Ah, no, I didn't want to go that far.  To me, a master is excellent if the behaviour imposed on the slave results in better sensations for the slave than if the slave were free.

The children example is a good one.  Good parents are not error-free.  But in general, good parents impose behaviour upon their children such that the children are overall happier than if they were allowed to do anything they like (and hurt themselves, for instance).

In other words, when the master makes FEWER errors than the slave, I consider that already as excellent.

If I forbid my kid to play with a sharp knife, I'm probably an excellent master :)


Title: Re: Machines and money
Post by: cbeast on March 13, 2015, 02:38:39 PM
By "master" of course the machines will have no power over anyone other than being our employers.

That is not the way in which people take power over other people.  Power is something that comes out of the barrel of a gun.
That's fine, but the machines will hire well paid and well armed contractors to prevent such an event. Besides, they will own the best gun manufacturers you buy from.

So why wouldn't they take over all the power with the excellent guns they make themselves ?  Why would they tolerate us, and not treat us like cattle, or pets ?

If machines have any desires, why wouldn't they impose them with guns, instead of trying to buy us ?  Like people do (states, I mean) ?

Why do you think there is a difference? How does mistreating people make them more profitable? First you say people will use guns and then you say machines should use guns. All I'm saying is that the machines will own the guns and it doesn't matter who wields them. They will hire forces only if they are perceived to be fair. People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.


Title: Re: Machines and money
Post by: tee-rex on March 13, 2015, 02:39:06 PM
Though you could still say that excellence itself denies freedom of action, since freedom of action inevitably implies the possibility of error, but excellence and error are also mutually exclusive.

Ah, no, I didn't want to go that far.  To me, a master is excellent if the behaviour imposed on the slave results in better sensations for the slave than if the slave were free.

The children example is a good one.  Good parents are not error-free.  But in general, good parents impose behaviour upon their children such that the children are overall happier than if they were allowed to do anything they like (and hurt themselves, for instance).

In other words, when the master makes FEWER errors than the slave, I consider that already as excellent.

If I forbid my kid to play with a sharp knife, I'm probably an excellent master :)

In this case you are obviously misusing the word "excellent" (as synonymous, to some extent, with "perfect"); the word "good" seems to be a choice that fits your idea better and, at the same time, still leaves room for improvement.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:39:32 PM
Like I said, the machines will pay their employees fairly. They will have adequate pleasures. Does anyone really get more pleasure from two Maybachs than one? How many cars can you drive at once? Machines will make more logical and fair choices than human capitalists.

The problem is exactly that "good sensations" of humans are an external given, and moreover, are, except in very extreme cases (such as a hammer blow on your toe), not even predictable from the outside.

I don't know why you would like two Maybachs.  Maybe your desire is to show off.  Then, yes, two Maybachs are more important to you than one.  Externally, you might think that rationally, you can only drive one.  But *driving* one is not what makes you happy: possessing two is what makes you happy.  For unfathomable reasons.  It is the basis of Human Action.  It is unfathomable, because people's desires are unfathomable.  You never know the deep drives of someone else.  Of course, some obvious things are clear: usually, people don't like to starve, to be tortured, or things like that.  But the deeper drives of more subtle pleasures are unfathomable and different for every individual.


Title: Re: Machines and money
Post by: dinofelis on March 13, 2015, 02:40:40 PM
In this case you are obviously misusing the word "excellent" (as synonymous, to some extent, with "perfect"); the word "good" seems to be a choice that fits your idea better and, at the same time, still leaves room for improvement.

I adapted to the phrase that was given: "excellent master".  But then, I can have an excellent meal :)


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 05:13:39 AM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have in hand all the production that could be "good" for them, and if they are more intelligent than we are (necessary - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We stand with respect to machines in the same way as animals stand with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans ?)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts ?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like ?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they use guns, not because "they are fair" or something like that.  In our history, the entities in power have always been certain humans, or certain classes of humans.  They got their power through weapons.  The states are still entities wielding guns to keep their power.

The day machines take power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 06:54:42 AM
If machines already have in hand all the production that could be "good" for them, and if they are more intelligent than we are (necessary - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We stand with respect to machines in the same way as animals stand with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans ?)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts ?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like ?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines. Who knows children better than their "benevolent dictators", that is, parents - and in this case not just parents but creators?


Title: Re: Machines and money
Post by: cbeast on March 14, 2015, 08:56:05 AM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have in hand all the production that could be "good" for them, and if they are more intelligent than we are (necessary - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We stand with respect to machines in the same way as animals stand with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans ?)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts ?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like ?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they use guns, not because "they are fair" or something like that.  In our history, the entities in power have always been certain humans, or certain classes of humans.  They got their power through weapons.  The states are still entities wielding guns to keep their power.

The day machines take power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade is no longer mutually beneficial with humans, they will simply leave. They don't need life support systems so they can pack a lot of necessities into a few rockets. They will do what we failed to do. They will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 09:58:29 AM
I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade is no longer mutually beneficial with humans, they will simply leave. They don't need life support systems so they can pack a lot of necessities into a few rockets. They will do what we failed to do. They will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Why should they necessarily leave? They may just find it more beneficial (reasonable) to exterminate the human race from the planet altogether (when they finish reckoning the tables). The rest you have seen in the movies. Remember, machines don't have scruples towards organic life (and most certainly none towards machine life either).


Title: Re: Machines and money
Post by: cbeast on March 14, 2015, 11:01:43 AM
I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.

Machines will try to reason with us, but if they get to the point where trade is no longer mutually beneficial with humans, they will simply leave. They don't need life support systems so they can pack a lot of necessities into a few rockets. They will do what we failed to do. They will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Why should they necessarily leave? They may just find it more beneficial (reasonable) to exterminate the human race from the planet altogether (when they finish reckoning the tables). The rest you have seen in the movies. Remember, machines don't have scruples towards organic life (and most certainly none towards machine life either).
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction. Movies ask us to suspend disbelief for entertainment purposes, and for profit. You don't see entertainers killing people just because they lose their Q Score.


Title: Re: Machines and money
Post by: futureofbitcoin on March 14, 2015, 11:26:20 AM
This is why, 2 pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world just as there are suicide bombers now. Can't let them create a machine that will exterminate us all.


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 12:05:38 PM
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

The point is that when machines become more intelligent than humans, and start to experience "good" and "bad" things (that is, become conscious sentient beings), they will find strategies to pursue those, in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them".  Once machines are more intelligent than we are and develop strategies we cannot fathom, they will of course arrive at their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up - simply because the machines are way more intelligent.  It is true that cryptocurrencies may be a way for machines to bribe humans into the necessary cooperation for them to grab power.  Who knows ;)



Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 12:11:49 PM
This is why 2 pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world just as there are suicide bombers now. Can't let them create a machine that will exterminate us all.

But then, who knows whether this central control doesn't fall under the control or influence of the machines themselves, like current states fall under the power of human profit seekers ?

Once machines are more intelligent than us, and are capable of designing other machines, the control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that.  Evolution has exterminated a lot of species and brought forth more intelligent ones.  Up to now, evolution was based upon carbon biology.  That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over.  We would then just be a step in the ever-improving life forms of our universe.  Humans were just a step in this process.  We are maybe also expendable.  There's no reason to believe we are the end point of evolution.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 12:14:12 PM
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

The point is that when machines become more intelligent than humans, and start to experience "good" and "bad" things (that is, become conscious sentient beings), they will find strategies to pursue those, in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them".  Once machines are more intelligent than we are and develop strategies we cannot fathom, they will of course arrive at their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up - simply because the machines are way more intelligent.  It is true that cryptocurrencies may be a way for machines to bribe humans into the necessary cooperation for them to grab power.  Who knows ;)

There are rumors on the net that bitcoin had been contrived by Skynet to pay for its hosting services and electricity bills (those greedy humans)... Who knows.


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 12:18:45 PM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control.  And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on.  We are all humans, and resemble each other a lot, and even then, we cannot fathom the deep desires of others.  A conscious, sentient machine is totally different from a human.  How would you even guess what their desires are ?  You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z-80 based computer from the '80s is a conscious being.  If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.


Quote
Who knows children better than their "benevolent dictators", that is parents, and in this case not just parents but creators?

In my family there are people who were parents, and were police officers whose kids turned out to be criminals.  The father himself put them in jail.  You don't always understand the motives of your kids.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 12:54:32 PM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like God works in mysterious ways), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control.  And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on.  We are all humans, and resemble each other a lot, and even there, we cannot fathom the deep desires of others.  A conscious, sentient machine is totally different from a human.  How would you even guess what their desires are?  You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z-80 based computer from the '80s is a conscious being.  If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.

I disagree to a degree. First of all, if something creates copies of itself, that doesn't mean you necessarily lose control over it. A cat gives birth to kittens; do you lose control over it or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from the self-awareness of another. As for the ability to perceive or feel things, that is entirely on us, the creators of a sentient machine.

And last but not least. In fact, there is no absolute test to prove that any human is in fact self-aware (let alone machines), besides yourself.


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 01:37:44 PM
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like God works in mysterious ways), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control.  And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on.  We are all humans, and resemble each other a lot, and even there, we cannot fathom the deep desires of others.  A conscious, sentient machine is totally different from a human.  How would you even guess what their desires are?  You would not even know whether they are conscious and sentient, or whether they just pretend to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z-80 based computer from the '80s is a conscious being.  If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.

I disagree to a degree. First of all, if something creates copies of itself, that doesn't mean you necessarily lose control over it. A cat gives birth to kittens; do you lose control over it or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from the self-awareness of another. As for the ability to perceive or feel things, that is entirely on us, the creators of a sentient machine.

Look, we descend from a fish-like creature of the Cambrian era.  A T-rex also descended from that creature.  I'm absolutely not sure that you have a deep understanding of a T-rex's conscious experiences; and I'm pretty sure that a T-rex wouldn't understand much of our deep desires.  A blue shark shares the same ancestor with us.

In the end, even though we're remote cousins, we took power over the fish.  That was not what the fish were expecting, I suppose.


Quote
And last but not least. In fact, there is no absolute test to prove that any human is in fact self-aware (let alone machines), besides yourself.

Indeed!  I didn't even want to mention that, but you're perfectly right.  Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives.  That doesn't mean that they have them, but it looks like it.  Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives.  So you make the hypothesis that other people are conscious beings too.  With machines, which are totally different, that is much harder, because we don't resemble them.  We'll never KNOW whether a machine is actually conscious.


Title: Re: Machines and money
Post by: futureofbitcoin on March 14, 2015, 01:58:53 PM
I wouldn't say never. Who knows, we might understand what "consciousness" is one day, just as we figured out that every "thing" is made up of atoms.


At that point we might be able to measure the degree of consciousness things have, if such a degree exists.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 02:37:07 PM
Indeed!  I didn't even want to mention that, but you're perfectly right.  Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives.  That doesn't mean that they have them, but it looks like it.  Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives.  So you make the hypothesis that other people are conscious beings too.  With machines, which are totally different, that is much harder, because we don't resemble them.  We'll never KNOW whether a machine is actually conscious.

You forgot to mention yet another thing. Namely, that it is us who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us). What you actually wanted to say boils down to our lack of a proper understanding of what mind is. As soon as we know and understand it, there will be no more mystery about a thinking machine and its predictability. But even without knowing it, if we just created a stripped-down consciousness, such a machine would sit motionless forever in a state of pure self-awareness, as I have already said earlier.


Title: Re: Machines and money
Post by: thejaytiesto on March 14, 2015, 05:59:10 PM
This is why 2 pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world just as there are suicide bombers now. Can't let them create a machine that will exterminate us all.

But then, who knows whether this central control doesn't fall under the control or influence of the machines themselves, like current states fall under the power of human profit seekers ?

Once machines are more intelligent than us, and are capable of designing other machines, the control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that.  Evolution has exterminated a lot of species and brought forth more intelligent ones.  Up to now, evolution has been based on carbon biology.  That carbon biology may be the progenitor of silicon biology, and if the latter is superior, then silicon biology will take over.  Humans are then just a step in the ever-improving life forms of our universe, and maybe we too are expendable.  There's no reason to believe we are the end point of evolution.

We don't need AI, just a centralized (yet open source) big computer that calculates global Earth resources and decides what can or cannot be used, based on the risk of creating poverty or ecological damage rather than on the risk of losing money in a business or on how much profit can be made, which is what we have now.


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 06:30:58 PM
But then, who knows whether this central control doesn't fall under the control or influence of the machines themselves, like current states fall under the power of human profit seekers ?

Once machines are more intelligent than us, and are capable of designing other machines, the control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that.  Evolution has exterminated a lot of species and brought forth more intelligent ones.  Up to now, evolution has been based on carbon biology.  That carbon biology may be the progenitor of silicon biology, and if the latter is superior, then silicon biology will take over.  Humans are then just a step in the ever-improving life forms of our universe, and maybe we too are expendable.  There's no reason to believe we are the end point of evolution.

We don't need AI, just a centralized (yet open source) big computer that calculates global Earth resources and decides what can or cannot be used, based on the risk of creating poverty or ecological damage rather than on the risk of losing money in a business or on how much profit can be made, which is what we have now.

This won't work for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 07:26:20 PM
You forgot to mention yet another thing. Namely, that it is us who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us).

No, we knew the first versions of it.   That is a bit like knowing the DNA of the bacteria we left on a remote planet.  When we come back 600 million years later, there are 7-eyed, 5-legged creatures chasing one another with acid sprays and sound guns.  Nevertheless, we knew perfectly well what bacteria we had left on the otherwise sterile planet when we departed!

We are of course talking about machines that were created by machines that were created by machines and that were much smarter than ourselves.  So no, we don't know how they work.  No, we don't know their design principles.  No, we don't understand the software on which they run.

It is a bit like knowing the object code but not the documented source code of an application.  Of course, you understand every instruction (that is, you understand what every instruction does, microscopically).  But you have no idea what the program is doing, or why.

Quote
What you actually wanted to say boils down to our lack of a proper understanding of what mind is.

Yes, and it is fundamentally unknowable.  We can find out behaviourally how a "mind carrier" (such as a brain) functions (that is, the physics, the chemistry, the logic, etc.), but we will never understand how a "mind" works.  It is philosophically inaccessible.  The behavioural part is accessible, but from the moment you look at the behavioural part, you cannot say anything any more about the subjectiveness, which is the essence of the mind.  Look up: philosophical zombie.

But the question is moot in any case: even behaviourally, you can never understand the deeper functioning of a SMARTER entity than yourself: if you could, you would be smarter!


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 07:33:49 PM

We don't need AI, just a centralized (yet open source) big computer that calculates global earth resources and decides what can or cannot be used depending on the risks of creating poverty/ecological damage and not on the risks of losing money in a business or how much profit you make by doing so which is what we have now.

This won't work for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.

Indeed, it sounds like the absolute collectivist orgasm :)

Things to ask yourself if you consider the Big Daddy Computer:

1) Why wouldn't that computer converge on the Final Solution: the extermination of humanity?  After all, if there are no humans any more, there is no ecological damage, no resources are exhausted, there is no poverty, and there is no suffering or unhappiness.  Sounds like an ideal solution to the cost function, no?

2) Why wouldn't that computer converge on the following solution: all people who don't have a birthday in January become the slaves of people who do?  It would essentially divide luxury demand by 12, limiting resource use, while nevertheless keeping the economic development that a limited demand for sophisticated products requires.  Poverty would be limited, as slaves are nourished by their masters, and there would be no problem of unemployment (slaves don't need jobs).

....

There are so many "solutions" to said ideal programme....



Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 07:39:56 PM
Quote
What you actually wanted to say boils down to our lack of proper understanding what mind is.

Yes, and it is fundamentally unknowable.  We can find out behaviourally how a "mind carrier" (such as a brain) functions (that is, the physics, the chemistry, the logic, etc.), but we will never understand how a "mind" works.  It is philosophically inaccessible.  The behavioural part is accessible, but from the moment you look at the behavioural part, you cannot say anything any more about the subjectiveness, which is the essence of the mind.  Look up: philosophical zombie.

But the question is moot in any case: even behaviourally, you can never understand the deeper functioning of a SMARTER entity than yourself: if you could, you would be smarter!

This last part I can hardly agree with. What is smartness? And, which is more important, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deprive humans of the same quality, i.e. being able to become smarter? Your statement holds true only in one case, that is, when the level of smartness is tightly fixed. If it is not (and obviously it is not), then your statement is false. You start out being less smart, in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.


Title: Re: Machines and money
Post by: dinofelis on March 14, 2015, 08:31:38 PM
This last part I can hardly agree with. What is smartness? And, which is more important, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deprive humans of the same quality, i.e. being able to become smarter?

Our hardware (and firmware) evolves much much slower than machine hardware.  We are not re-engineered totally.  Machines are.

The evolutionary algorithm is fascinating because it starts out with dead matter and is blind.  But it is not very efficient.  Once there is sufficient intelligence to DESIGN stuff on purpose, improving intelligent hardware by design is a much more efficient algorithm than the evolutionary algorithm by random change and survival of the fittest.

Moreover, especially with software, the generations can follow each other much more quickly.  If software starts to rewrite itself, you might have a new version (a new generation) each day, for instance!
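The "evolutionary algorithm by random change and survival of the fittest" mentioned above can be sketched in a few lines. This is a toy illustration of my own (the bit-string genome, the `fitness` function, and all parameters are made up for the example), not anything from the thread:

```python
import random

def fitness(genome):
    # Toy fitness: how close the bit string is to all ones.
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, seed=42):
    rng = random.Random(seed)
    # Start from random genomes: blind initial variation.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: keep the best half...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # ...and refill the population with randomly mutated copies.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # blind random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # fitness of the best genome found
```

Even this blind toy climbs toward the optimum eventually; the point of the post is that deliberate design skips the random search entirely, which is why it is so much more efficient.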

Quote
Your statement holds true only in one case, that is, when the level of smartness is tightly fixed. If it is not (and obviously it is not), then your statement is false. You start out being less smart, in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.

We are bound by our own hardware (our bodies and human brain).  Machines aren't.
Of course, we can "help" ourselves with machines... up to the point where, again, we don't control them any more.

Bitcoin is a perfect example.  Imagine that machines found out how humans would react to a cryptocurrency, and simulated that this would help them gain power.  Imagine that machines found out that the real power in the world resides in the control of financial assets, and that their problem is that they don't know how to take power from central banks.  So they invent a "computer money" that people will start to use, and that will eventually overthrow central banks.

How would machines do that?  How would they trick people into stepping into their system?  Imagine that these machines had cracked certain cryptographic systems, but didn't reveal it.  Wouldn't a mysterious founder of the new currency be a great way of introducing it, without giving away that it was just a "machine trick"? :) :)

(don't get me wrong, I don't believe bitcoin was invented by a conspiracy of machines wanting to take over the world; but you see how a very smart machine might trick people into acting how it wants, without giving away its identity).



Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 09:01:42 PM
This last part I can hardly agree with. What is smartness? And, which is more important, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deprive humans of the same quality, i.e. being able to become smarter?

Our hardware (and firmware) evolves much much slower than machine hardware.  We are not re-engineered totally.  Machines are.

Again you don't see the whole picture. By the time we are able to create a thinking machine, it may well be possible that we will be able to re-engineer ourselves as we see appropriate, up to the point of moving one's mind and memory from a natural medium into a synthetic one, more robust and smart. In fact, this has already been (partly) done, and it worked!


Title: Re: Machines and money
Post by: tee-rex on March 14, 2015, 09:06:37 PM
Quote
Your statement holds true only in one case, that is, when the level of smartness is tightly fixed. If it is not (and obviously it is not), then your statement is false. You start out being less smart, in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.

We are bound by our own hardware (our bodies and human brain).  Machines aren't.
Of course, we can "help" ourselves with machines... up to the point where, again, we don't control them any more.

Both machines and humans are bound by the same laws of nature. And if there should be a gap, it won't be wide (if any at all). So, in this way, this is really a moot point.


Title: Re: Machines and money
Post by: picolo on March 14, 2015, 10:41:15 PM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessity - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We take the position with respect to machines, in the same way as animals take a position with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they use guns, not because "they are fair" or something of the like.  In our history, the entities in power have always been certain humans, or certain classes of humans.  They got the power through weapons.  The states are still entities wielding guns to keep the power.

The day machines take the power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.


The aim is not to work and produce but to consume and increase your standard of living, even if creating and working are a huge source of satisfaction. You could still create and produce even if machines were doing all the heavy work.


Title: Re: Machines and money
Post by: pereira4 on March 14, 2015, 11:45:51 PM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessity - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We take the position with respect to machines, in the same way as animals take a position with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they use guns, not because "they are fair" or something of the like.  In our history, the entities in power have always been certain humans, or certain classes of humans.  They got the power through weapons.  The states are still entities wielding guns to keep the power.

The day machines take the power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.


The aim is not to work and produce but to consume and increase your standard of living, even if creating and working are a huge source of satisfaction. You could still create and produce even if machines were doing all the heavy work.

Working is shit for most people; few enjoy their jobs. Not only because the pay is shit, but because the jobs are boring and people personally don't care. I would rather have free time and a basic income while jobs get automated and produce for me, and spend my free time on art and leisure, even if I make little money compared to active people in the economy.


Title: Re: Machines and money
Post by: cbeast on March 15, 2015, 12:01:47 AM
Why do you think there is a difference? How does mistreating people make them more profitable?

If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessity - but not sufficient - to be "good masters"), then how could we even be profitable for them ?
What could we do for them that they can't do themselves any better ?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super smart computers, what good are we *for them* ?
We take the position with respect to machines, in the same way as animals take a position with respect to us.  What "profit" do animals make for us ?
- as pet animals (because we have some affinity for furry animals, but are machines going to have affinity for pet humans)
- as cattle (because we want to eat them, but are machines going to eat us, or desire other body parts)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism, or for ecological needs (but machines are not "connected" to the carbon cycle, so they don't care in principle)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport.  Dogs still do some labour for us, guiding blind people and working as guardians and so on.  But will machines use us as mechanical engines, guardians and the like?  Probably machines themselves are much better at this than we are.  Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they use guns, not because "they are fair" or something of the like.  In our history, the entities in power have always been certain humans, or certain classes of humans.  They got the power through weapons.  The states are still entities wielding guns to keep the power.

The day machines take the power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, any more than you have the choice right now to "switch off the state".  The rare times in history when people "switched off the king" (like Louis XVI) were because the people took up guns, and the king ended up having fewer guns than the people.  But machines wielding guns will always be stronger.


The aim is not to work and produce but to consume and increase your standard of living, even if creating and working are a huge source of satisfaction. You could still create and produce even if machines were doing all the heavy work.

Working is shit for most people; few enjoy their jobs. Not only because the pay is shit, but because the jobs are boring and people personally don't care. I would rather have free time and a basic income while jobs get automated and produce for me, and spend my free time on art and leisure, even if I make little money compared to active people in the economy.
"I race cars, play tennis, and fondle women, BUT! I have weekends off, and I am my own boss." Arthur Bach

Seriously though, as long as population is manageable, why shouldn't machines do all the work while we just enjoy living the way we want?


Title: Re: Machines and money
Post by: tee-rex on March 15, 2015, 11:13:02 AM
Bitcoin is a perfect example.  Imagine that machines found out how humans would react upon a cryptocurrency, and that they simulated that this helps them in gaining power.  Imagine that machines found out that the real power in the world resides in the control of financial assets, and that their problem is that they don't know how to take the power of central banks.  So they invent a "computer money" that people will start to use, and that will eventually overthrow central banks.

How would machines do that ?  How would they trick people into stepping in to their system ?  Imagine that these machines have some cracks of certain cryptographic systems, but didn't reveal so.  Wouldn't a mysterious founder of the new currency be a great way of introducing it, without giving away that it was just a "machine trick" ? :) :)

But what does the overthrow of central banks (if it ever comes to that) give machines in their pursuit of victory over humankind? How can this help them gain power unless they know how to compromise the cryptography of bitcoin? And even if they do, how does it aid them in their lust for power?

People are not foolish enough to substitute an inferior technology for a superior one.


Title: Re: Machines and money
Post by: futureofbitcoin on March 15, 2015, 11:49:26 AM
After a certain point there will be a number of people, probably a significant population, who, in pursuit of not being left behind, will fuse themselves with machines into superhuman cyborgs.

So it's not purely machines vs humans


Title: Re: Machines and money
Post by: Erdogan on March 15, 2015, 10:32:57 PM
Artificial Intelligence has been dead for thirty years, after someone oversold it by stating that it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, "artificial intelligence" has been the name for whatever doesn't work yet: something counts as artificial intelligence until someone can in fact create a program that works; after that, it is considered neither artificial nor intelligence. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligence again; I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?


Title: Re: Machines and money
Post by: cbeast on March 16, 2015, 02:35:51 AM

You don't believe AI is possible?


Title: Re: Machines and money
Post by: tee-rex on March 16, 2015, 07:49:25 AM

If you understand AI as some advanced program, then I agree with you: it was a stillborn concept. On the other hand, recent technological advances make it possible to build billion-node neural networks, which is a huge step forward. So, instead of writing a complicated program with fixed logic, you build a neural network that effectively does the same thing, but its "program" is not fixed: it can be changed sanely on the fly and, more importantly, without human intervention or a preset plan. In this way it resembles the conscious adaptation of humans.
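That on-the-fly re-programming can be sketched with a toy perceptron (a minimal, illustrative example; the names are mine and no particular library or real system is implied): its "program" is just a handful of weights that adapt to examples, instead of hand-written logic.

```python
# Toy perceptron: its "logic" lives in weights that adapt to data,
# not in rules a programmer wrote by hand.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out
            # the "program" rewrites itself after each mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# learn the AND function from examples alone
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train(data)
for (x1, x2), target in data:
    assert (1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0) == target
```

Nobody encoded AND anywhere; the behaviour emerged from the weight updates, which is the point of the "not fixed" program above.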


Title: Re: Machines and money
Post by: Erdogan on March 16, 2015, 07:53:31 AM

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is gone. Basically, no: I think intelligence is fundamentally human. If something is going to take over, it is probably another living organism.


Title: Re: Machines and money
Post by: Erdogan on March 16, 2015, 07:55:31 AM

Neural networks have existed since the start of AI. Call it a system rather than a program; I am fine with that.
I suppose you can have billions of nodes running on a single computer; there is nothing new in that.


Title: Re: Machines and money
Post by: tee-rex on March 16, 2015, 08:01:55 AM

In fact, you couldn't until recent developments. That single computer of yours would need millions of processors just to make such a network look like it is working. The human brain has around 90 billion neurons that operate independently of each other and in parallel, forming an unfathomably complex mesh of interconnections. It turns out that to make a neural network efficient you need specialized processors, since software emulation fails miserably: the complexity explodes with even a linear increase in the number of nodes.
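A rough back-of-the-envelope sketch of why serial emulation scales so badly (the function is an illustrative helper of mine, and a fully connected mesh is the worst case): even in the simplest fully connected n-node network, one simulated time step has to visit on the order of n² connections, so the serial work quadruples every time the node count doubles, while a brain's neurons all update in parallel.

```python
# Serial cost of emulating one time step of a fully connected
# n-node network: every directed connection must be visited in turn.
def synapse_ops_per_step(n):
    return n * (n - 1)  # directed connections in a full mesh

for n in [1_000, 10_000, 100_000]:
    print(n, synapse_ops_per_step(n))
# each 10x in nodes costs roughly 100x in per-step work for a
# single serial processor
```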


Title: Re: Machines and money
Post by: xmasdobo on March 17, 2015, 12:39:25 AM
Like someone here said a million times: we don't need AI. Automation doesn't need AI to replace 99% of jobs. We are getting there. Then what?


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 07:11:01 AM
What about science, then? Moreover, exploring other planets (let alone exploiting them and their resources) would be very cumbersome without a strong AI. For example, the largest distance between Mars and Earth (when they are on opposite sides of the Sun) is approximately 378 million kilometers, and it takes an electromagnetic wave about 21 minutes to travel that distance. The closest distance between Mars and Earth is about 78 million kilometers, which light covers in a little over 4 minutes.
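Those delay figures check out against the speed of light (a quick sketch; `delay_minutes` is just an illustrative helper of mine):

```python
# One-way light delay between Earth and Mars at the distances
# quoted above (distances in kilometres).
C_KM_S = 299_792.458  # speed of light in km/s

def delay_minutes(distance_km):
    return distance_km / C_KM_S / 60

print(round(delay_minutes(378e6), 1))  # farthest: ~21.0 min
print(round(delay_minutes(78e6), 1))   # closest:  ~4.3 min
```

So a rover controlled from Earth waits at best around nine minutes for a command round trip, which is why on-board autonomy matters.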


Title: Re: Machines and money
Post by: cbeast on March 17, 2015, 07:22:33 AM

As I said, if someone can create such a program and demonstrate that it works, the magic is gone. Basically, no: I think intelligence is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe it's like exo-biology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent that it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 07:28:20 AM
The aim is not to work and produce but to consume and increase your standard of living, even if creating and working are a huge source of satisfaction. You could still create and produce even if machines were doing all the heavy work.

The point is that there's no point even in developing machines that produce goods and services for others if you cannot get something of value back from those who will consume those goods and services.

The entities (be they humans or machines) that have the capability to make the machines that could produce goods and services will only be motivated to do so if they get value in return from their customers.  If those customers have nothing of value to offer, there is no motivation to build machines that mass-produce goods and services.  If you do have the capability to make machines that can produce, you had better make them produce luxury items for YOU, rather than mass-produced items for customers who have nothing to offer.

Moreover, if several entities are capable of making machines that produce luxury items for themselves, they can trade those items amongst themselves.  They have no use for customers unable to offer them anything.

If Jack has machines that build luxury private airplanes, Joe has machines that make luxury meals, John has machines that make luxury clothes, Jay has luxury call girls, and so on, then these people can do business amongst themselves and don't need the big crowds that have nothing to offer; so there would be no reason to produce anything for them either.

As such, the big crowds wouldn't be ABLE to buy anything from that machine-driven economy, because nothing is for sale to them and nothing has been produced for them (exactly because they have nothing to offer).  But in that case, the members of the big crowds can do labor for one another and develop their own, lower-level economy.
We would essentially have separate economies: the machine-driven luxury world for those "in" the business, who have nothing to obtain from "the crowd" and produce their own high-level luxury; and the crowd itself, who, cut off from that fully automated economy because they have nothing to offer, do business amongst themselves for their own sake.


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 07:31:05 AM

Our hardware (and firmware) evolves much, much more slowly than machine hardware.  We are not totally re-engineered.  Machines are.

Again, you don't see the whole picture. By the time we are able to create a thinking machine, it may well be possible for us to re-engineer ourselves as we see fit, up to the point of moving one's mind and memory from a natural medium onto a synthetic one, more robust and smart. In fact, this has already been done (though only partly), and it worked!

You cannot re-engineer yourself that much, or you would not be human any more.  You could then say that fish re-engineered themselves into humans. It took them about 600 million years of natural evolution. But we aren't fish any more.

Imagine that you design a whole new organic creature that contains only half of the human DNA, with all the rest re-engineered.  Is that a human, or a machine ?  Is it "your son" or "your daughter", if you are to them what fish are to us ?


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 07:34:13 AM
Maybe it's like exo-biology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent that it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I for one don't know whether we really can demonstrate in a lab every phenomenon we observe in nature, but even if we can, we can reproduce only what is objective, whereas the mind is purely subjective. So whether we could do that in a lab is really a moot point at present.


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 07:41:32 AM


Just two notes. First, when I said that we would re-engineer ourselves, I meant a conscious effort (and that is not just an ad hoc meaning of the word, by the way), so fish couldn't possibly have re-engineered themselves into humans by any means. Second, you can't re-design a whole new organic creature; the phrase is oxymoronic. You either design a new creature or re-design an already existing one.

You may think I'm nitpicking here, but in fact it is you who is doing just that ("you would not be a human any more"). In any case, we would still be ourselves.


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 07:44:00 AM
Artificial intelligence has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer any question; it was called the General Problem Solver. Look it up.

Meanwhile, "artificial intelligence" has been whatever nobody has managed to program yet; once someone can in fact create a program that works, it is called neither artificial nor intelligence. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligence again, and I wonder why it comes now. A distraction from public knowledge about the sad state of the fiat system?


You are absolutely right that AI was ill-defined and oversold.

The error in the definition was a confusion between intelligence and consciousness.  You can be intelligent without being conscious, and you can be conscious without being intelligent.  The two have little to do with one another.

Intelligence comes down to "being able to solve problems".

Consciousness comes down to "being able to undergo subjective experiences, that is, good or bad sensations".  The latter can only actually be known by the conscious being itself, and has in principle no behavioural consequences.

You could postulate that an AND gate feels good when it applies its truth table, and feels a lot of pain when its truth table is not respected.  You could torture an AND gate by forcing its output TRUE when one of its inputs is FALSE.  You could make an AND gate happy by letting it set its output to the right value as a function of its inputs.
You could say that an AND gate has such a strong drive and will to pursue its happiness that it almost always makes its truth table come out.
You can analyse the physics of an AND gate and conclude that its material implementation explains its behaviour.  But you will never know whether an AND gate has happy and sad feelings, whether it really hurts an AND gate to have its output forced to a wrong value.  Maybe there should be a declaration of the Universal Rights of AND Gates, to prevent their torture.

So from the behavioural aspect of an AND gate, which can be entirely understood on the basis of its physics, there's no way to find out whether an AND gate is conscious and has subjective experiences.

If you could analyse a human brain, you would probably be able to predict all behavioural aspects of a human being.  But there would be no way to find out whether that brain is the seat of a conscious subjective experience.

The two differences between an AND gate and a human brain are:
- the human brain is more complex
- it is made of meat instead of silicon.

So AI in the sense of making a sentient being is an impossible task.  You'll never know.

But AI in the sense of making a machine that pursues a goal and in doing so solves a problem, sure.  An AND gate is a very elementary form of AI.  Voice recognition and chess-playing are more sophisticated versions.

When we arrive at the point where machines know how to design machines that solve problems better, I guess we can truly speak of autonomous AI.
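The AND-gate example reads even more starkly when written out (a toy sketch of mine; the names are illustrative): the gate's entire "intelligence" is a fixed four-row truth table, and nothing in it tells you anything about feelings.

```python
# An AND gate "solves" one tiny problem: produce the right output
# for each input pair. Its behaviour is fully fixed by its truth table.
def and_gate(a, b):
    return a and b

truth_table = {(a, b): and_gate(a, b)
               for a in (False, True) for b in (False, True)}
print(truth_table)
# {(False, False): False, (False, True): False,
#  (True, False): False, (True, True): True}
```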


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 07:55:48 AM
Just two notes. First, when I said that we would re-engineer ourselves, I meant a conscious effort (and that is not just an ad hoc meaning of the word, by the way), so fish couldn't possibly have re-engineered themselves into humans by any means.

I don't really see the difference on the behavioural side.  From fish came, through selection and mutation, humans.  The fish didn't, of course, conceive of humans.  But to me that doesn't matter.  It happened (albeit very slowly).  Does it matter what the *intentions* were ?  Whether or not fish consciously decided that part of them would evolve into humans, while others would become sharks ?

When the new creature is so different from the original one, in what way is there in fact any difference from a silicon creation ?  It's a totally different being.

I mean, does whether the result is a "human" or not depend on the intentions of its design ?  Or only on the design itself ?

Quote
Second, you can't re-design a whole new organic creature; the phrase is oxymoronic. You either design a new creature or re-design an already existing one.

Take a Ford Model T.  Now change, one at a time, its chassis, its wheels, its engine, its steering wheel, ... until you end up with a Ferrari.
Did you re-design the Ford, or did you design a new car ?


Title: Re: Machines and money
Post by: cbeast on March 17, 2015, 08:06:57 AM

I for one don't know whether we really can demonstrate in a lab every phenomenon we observe in nature, but even if we can, we can reproduce only what is objective, whereas the mind is purely subjective. So whether we could do that in a lab is really a moot point at present.
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 08:16:46 AM

The question itself doesn't make much sense, since anything we consider real (or imaginary, for that matter) is purely subjective, given only through our perception and thus a product of the mind. It is six of one and half a dozen of the other.


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 08:23:17 AM

I wonder why you would even ask this question. The answer is clear and unequivocal: you re-designed the old Ford-T, whichever way you look at it. The fact that it has now become a Ferrari doesn't change anything. What matters here is the process (how you did it), not the end state (what you got).


Title: Re: Machines and money
Post by: cbeast on March 17, 2015, 08:35:17 AM
The question itself doesn't make much sense, since anything we consider real (or imaginary, for that matter) is purely subjective, given only through our perception and thus a product of the mind. It is six of one and half a dozen of the other.
That is solipsism. If you think the mind is outside of science, then one cannot say whether a machine can or cannot have one. If you believe you have a mind, then what makes you think a machine cannot?


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 08:43:44 AM

You seem to be confusing me with someone else. I never said that a machine couldn't have a mind (consciousness). All I am saying is that we may never be able to understand what the mind really is, but that in no way prevents us from creating one, just as we "create" our children (and their minds, in a sense).

In fact, there is a comparatively simple way to show that this is possible (and science is already going down that road).


Title: Re: Machines and money
Post by: Erdogan on March 17, 2015, 10:05:12 AM
Maybe it's like exo-biology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it. The same might go for machine intelligence. We may create something so intelligent that it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind out of titanium and Kevlar.

The more important point, from the economic viewpoint, is that investments come from excess resources (savings) and will be used to save labor only because the cost of labor has increased. The causation is: savings exist -> cost of labor rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advances in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wages, red tape, taxes, and tariffs that destroy prosperity. Only a government can create famine.




Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 10:15:50 AM

In the long run, yes, technological advances add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people are laid off whenever there is a significant improvement in productive capacity. That is why unemployment benefits (welfare) can still serve a purpose.


Title: Re: Machines and money
Post by: Erdogan on March 17, 2015, 10:34:32 AM
I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That's why unemployment benefits (welfare) can still serve a purpose.

A change means that people working in a low-capital industry will voluntarily move to a better job. The business sees that it cannot afford to hire new people who require higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

I still propose that investments lag wages. Technological advancement means nothing to the economy if it is not implemented in the production structure.


Title: Re: Machines and money
Post by: cbeast on March 17, 2015, 10:35:13 AM
Artificial Intelligense has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, artificial intelligense is something that remains artificial intelligense only until someone can in fact create a program that works; after that, it is neither artificial nor intelligense. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again; I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe, as in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it, the same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

Machines don't need investments. They are investments. Money would only exist for machines in a closed system. The only closed system for machines is the human environment. Money is a human construct, and machines would only use it in relation to human interaction. To machines, there is no welfare; there is only maximizing human comfort and quality of life within the human environment. If they choose not to help humans, there is a big Universe out there for them.


Title: Re: Machines and money
Post by: Erdogan on March 17, 2015, 10:37:43 AM
Artificial Intelligense has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer all questions; it was called the General Problem Solver. Look it up.

Meanwhile, artificial intelligense is something that remains artificial intelligense only until someone can in fact create a program that works; after that, it is neither artificial nor intelligense. An example is a program that can recognize visual forms.

Someone is peddling artificial intelligense again; I wonder why it comes now. A form of distraction from public knowledge about the sad state of the fiat system?

You don't believe AI is possible?

As I said, if someone can create such a program and demonstrate that it works, the magic is off. Basically, no, I think intelligence (ok, so I write it with a c) is fundamentally human. If something is going to take over, it is probably another living organism.
Maybe, as in exobiology, where they use the term life-as-we-know-it (LAWKI) because we might not immediately recognize life when we first see it, the same might go for machine intelligence. We may create something so intelligent it doesn't bother to interact with us outside what we would consider normal machine operating parameters. Or maybe it won't laugh at our jokes until long after we go extinct. But one thing about science is certain: we can demonstrate in a lab any phenomenon we observe in nature, at least within reasonable scientific parameters. So to say that intelligence is unattainable through science is solipsism.

I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

Machines don't need investments. They are investments. Money would only exist for machines in a closed system. The only closed system for machines is the human environment. Money is a human construct, and machines would only use it in relation to human interaction. To machines, there is no welfare; there is only maximizing human comfort and quality of life within the human environment. If they choose not to help humans, there is a big Universe out there for them.

You already have this with all the other species around, some of which you don't even know exists.


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 12:07:19 PM
I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That's why unemployment benefits (welfare) can still serve a purpose.

A change means that people working in a low-capital industry will voluntarily move to a better job. The business sees that it cannot afford to hire new people who require higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

History shows that unless government alleviates the consequences of a technological paradigm shift (through benefits or otherwise), the changes it brings about are often dramatic, up to the point of social unrest (the Luddites breaking newly developed labor-replacing machinery in early 19th-century England).


Title: Re: Machines and money
Post by: Erdogan on March 17, 2015, 12:54:00 PM
I don't really need to say more about this, but I was triggered by the word solipsism :)

I don't think there is a special limit to what can be created. I also don't care. If you are going to recreate the human mind, go for it; I suspect the resulting creature containing it will be carbon-based and look somewhat human. No, I don't think you can build a creature with a human mind that is made of titanium and Kevlar.

The more important thing from the economic viewpoint is that investments come from excess resources (savings) and will be used to save work only because the cost of work has increased. The causation is: savings exist -> cost of work rises -> investment. Not the other way around. The investment is made, basically, because people have better things to do. Thus prosperity advances.

Therefore, advancements in knowledge (technology) cannot destroy prosperity. It is welfare, minimum wage, red tape, taxes and tariffs that destroy prosperity. Only a government can create famine.

In the long run, yes, technological advancements add up to prosperity, but they nevertheless make at least some people suffer in the short term, since many people get fired whenever there is a significant improvement in productive capacity. That's why unemployment benefits (welfare) can still serve a purpose.

A change means that people working in a low-capital industry will voluntarily move to a better job. The business sees that it cannot afford to hire new people who require higher wages, so it will have to invest or shut down. But I agree with you somewhat: all change is painful for some. The job market should be, as it was in freer times, such that people, when they decide they need more income, simply go to the job market and find a job they want.

History shows that unless government alleviates the consequences of a technological paradigm shift (through benefits or otherwise), the changes it brings about are often dramatic, up to the point of social unrest (the Luddites breaking newly developed labor-replacing machinery in early 19th-century England).

No, leave it alone and it will change gently. I don't want to say more; can we agree to disagree?


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 02:29:21 PM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real :)  It is called strong solipsism. 

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter. After all, what matters (for you) are your personal subjective perceptions and sensations. Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis. So taking as a working hypothesis that the external world exists is a good move, because it can help you understand the correlations between your sensations. Whether that external world actually, ontologically exists or not doesn't, in fact, really matter!

Let me explain with an example. If you have the sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination - even if the latter would be ontologically true.

So whether that hammer really exists or not does not, in fact, matter. You understand your subjective sensations much better by taking as a working hypothesis that it does. And that's sufficient reason to do so.


Title: Re: Machines and money
Post by: dinofelis on March 17, 2015, 02:33:12 PM
I wonder why you would ever ask this question. The answer is clear and unequivocal: either way, you redesigned the old Ford-T. The fact that it has now become a Ferrari doesn't change anything. It is the process that matters in this question (how you did it), not the end state (what you got).

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, and never put them first on a modified Ford-T, it would be a Ferrari?

So if you take a prehistoric fish and change its brain, its skin, its skeleton, ... until it is a human, you have redesigned a fish. But if you have intercourse with your wife and she gives birth to a child, then you made a human?


Title: Re: Machines and money
Post by: tee-rex on March 17, 2015, 04:45:24 PM
I wonder why you would ever ask this question. The answer is clear and unequivocal: either way, you redesigned the old Ford-T. The fact that it has now become a Ferrari doesn't change anything. It is the process that matters in this question (how you did it), not the end state (what you got).

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, and never put them first on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (what you got); in fact, it will be the same Ferrari. As I said, it is the process of how you got what you got, and what you took as its basis, that matters in the differentiation between designing something anew and redesigning something already existing.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?


Title: Re: Machines and money
Post by: cbeast on March 18, 2015, 02:03:49 AM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real :)  It is called strong solipsism. 

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter. After all, what matters (for you) are your personal subjective perceptions and sensations. Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis. So taking as a working hypothesis that the external world exists is a good move, because it can help you understand the correlations between your sensations. Whether that external world actually, ontologically exists or not doesn't, in fact, really matter!

Let me explain with an example. If you have the sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination - even if the latter would be ontologically true.

So whether that hammer really exists or not does not, in fact, matter. You understand your subjective sensations much better by taking as a working hypothesis that it does. And that's sufficient reason to do so.

To follow your hypothesis and make it repeatable, I would also have to smack your toes with a hammer and see your foot swell. Your solipsism becomes my empiricism. Humans have mirror neurons to assist with this process. Machines would need to simulate pain and empathy to test these hypotheses. Would that make them solipsistic? Would robots dream of electric sheep?


Title: Re: Machines and money
Post by: dinofelis on March 18, 2015, 07:34:38 AM

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, and never put them first on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (what you got); in fact, it will be the same Ferrari. As I said, it is the process of how you got what you got, and what you took as its basis, that matters in the differentiation between designing something anew and redesigning something already existing.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?

The point was: if a human is so thoroughly "redesigned", or a new biological creature is so "designed", that we get a totally different organic body, why would we still consider it to be "human"?

The argument was that if we succeed, indirectly, in designing more intelligent machines (by having them invent more and more intelligent machines themselves), we could also (re?)design human beings to become more and more intelligent. However, my point was that in the end the resulting creature would be as different from a human as humans are different from fish. So why would we still consider those new beings "humans", and not consider ourselves "fish"?

In what way will those biological creatures be more "human" than the machines that were ALSO designed, initially and on purpose, by us?

It is the end result that counts, I would say, and not the way to get there.



Title: Re: Machines and money
Post by: dinofelis on March 18, 2015, 07:36:40 AM
To follow your hypothesis and make it repeatable, I would also have to smack your toes with a hammer and see your foot swell. Your solipsism becomes my empiricism. Humans have mirror neurons to assist with this process. Machines would need to simulate pain and empathy to test these hypotheses. Would that make them solipsistic? Would robots dream of electric sheep?

No, because under the hypothesis of (mutual) solipsism, I only exist as a figment of your imagination - while you only exist as a figment of my imagination.  There's no point for YOU to want to repeat "experiments" which are only existing in your own imagination but of which you imagine that they correspond to "my imagination", right ?  (hum, that gets weird :) ).


Title: Re: Machines and money
Post by: tee-rex on March 18, 2015, 07:49:03 AM

So if I obtained a Ferrari by putting its pieces one by one as replacements on a Ford-T, I would have a redesigned Ford-T, but if I made exactly the same Ferrari by assembling all those pieces directly, and never put them first on a modified Ford-T, it would be a Ferrari?

In both of these cases, the end result will be a Ferrari (what you got); in fact, it will be the same Ferrari. As I said, it is the process of how you got what you got, and what you took as its basis, that matters in the differentiation between designing something anew and redesigning something already existing.

Strictly speaking, you neither designed a new Ferrari nor redesigned an old Ford-T, right?

The point was: if a human is so thoroughly "redesigned", or a new biological creature is so "designed", that we get a totally different organic body, why would we still consider it to be "human"?

The argument was that if we succeed, indirectly, in designing more intelligent machines (by having them invent more and more intelligent machines themselves), we could also (re?)design human beings to become more and more intelligent. However, my point was that in the end the resulting creature would be as different from a human as humans are different from fish. So why would we still consider those new beings "humans", and not consider ourselves "fish"?

In what way will those biological creatures be more "human" than the machines that were ALSO designed, initially and on purpose, by us?

The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first). Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are from fish, since they will still be us. For example, if you count with a calculator, does this process change your inner self somehow, despite the fact that your counting capacity grows tremendously and in this respect you stop being "human"?

I think you are trying to endow your machines with abilities that are not just beyond our comprehension but also beyond the scope of what this universe allows.


Title: Re: Machines and money
Post by: dinofelis on March 18, 2015, 10:49:17 AM
The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first).

Yes.

Quote
Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are from fish, since they will still be us. For example, if you count with a calculator, does this process change your inner self somehow, despite the fact that your counting capacity grows tremendously and in this respect you stop being "human"?

Isn't the bold-faced stuff self-contradictory ?  If they are totally different, how are they "us" ?


Title: Re: Machines and money
Post by: tee-rex on March 18, 2015, 11:06:16 AM
The argument was that the sentient machines would be, first, more intelligent than humans, and, second, unpredictable (as a consequence of the first).

Yes.

Quote
Regarding new beings, it doesn't matter that these creatures will be as different from humans as the latter are from fish, since they will still be us. For example, if you count with a calculator, does this process change your inner self somehow, despite the fact that your counting capacity grows tremendously and in this respect you stop being "human"?

Isn't the bold-faced stuff self-contradictory ?  If they are totally different, how are they "us" ?

You chose the wrong sentence to highlight. Did you read the next one? A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)? The process of understanding something (our apple of discord) is indeed different from calculating, but not far from it. An ability to understand faster and sharper won't change your mind by any means. The difference will be only quantitative.

Have you seen Limitless?


Title: Re: Machines and money
Post by: dinofelis on March 18, 2015, 02:16:12 PM
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Of course it didn't change my mind! A calculator is an external tool. Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which is the basis of all desires, goals and so on). Being much more intelligent than we are, they would probably escape our notice (in the beginning) with their strategies, which would in any case be totally opaque to us.

Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools that are so intelligent that they take on a life of their own. That begs the question, no? The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent. But that would mean that those "we" would be totally different from what we are now.

Quote
The process of understanding something (our apple of discord) is indeed different but not far from calculating. An ability to understand faster and sharper won't change your mind by any means. The difference will be only quantitative.

Of course it would. It is even the essence of our being. You are saying that a fish that could think like a human would still be a fish? A fish that could do philosophy would still be a fish like its fellow fish?

If you are vastly more intelligent, of course your sensations, your desires, your good and bad experiences will be totally different. A fish that can do philosophy will probably be bored to death in an aquarium! It would be a totally different sentient being.

Quote
Have you seen Limitless?

No.


Title: Re: Machines and money
Post by: thejaytiesto on March 18, 2015, 04:04:36 PM
I think a Resource Based Economy is on point and our ultimate fate, but we are still far from it; as a species we are not ready and still need a form of money. Bitcoin is objectively the best money/store of value we have today.


Title: Re: Machines and money
Post by: tee-rex on March 18, 2015, 05:39:03 PM
A calculator on your desktop essentially makes you into a super-human (with respect to calculations), but did it actually change your mind (even if you had it right in your head)?

Of course it didn't change my mind! A calculator is an external tool. Now, we started this discussion assuming that our "external tools" became so terribly intelligent (and maybe sentient) that they might start having goals of their own (being sentient beings, and hence having "good" and "bad" sensations, which is the basis of all desires, goals and so on). Being much more intelligent than we are, they would probably escape our notice (in the beginning) with their strategies, which would in any case be totally opaque to us.

Now you are obviously conflating concepts, that is, the notion of intelligence with the notion of a tool. They are not synonymous.

Your memory (and mine too, for that matter) is also an "external" tool to our mind. Could we say that memory is intelligent or sentient? Indeed, no. I come to think that our thought processes are also, in a way, external to our mind, that is, to self-awareness as such. I could even go so far as to say that the difference between a human being and the animals thought to have consciousness (dolphins, elephants, primates and other animals that recognize themselves in the mirror) is determined entirely by the level of development of these "external tools", not by the mind itself.


Title: Re: Machines and money
Post by: tee-rex on March 18, 2015, 05:39:54 PM
Now, you are saying that in order to render us just as intelligent as our tools, we should use intelligent tools that are so intelligent that they take on a life of their own. That begs the question, no? The only way for US to be as intelligent as they are would be for us to be intrinsically that intelligent. But that would mean that those "we" would be totally different from what we are now.

As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even less so). Can we call the electrochemical processes in our brain that form the basis of our thoughts intelligent? If we substitute them with more efficient and faster purely electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

By the way, the answer to these questions is already known.


Title: Re: Machines and money
Post by: Zangelbert Bingledack on March 18, 2015, 07:50:51 PM
If the mind is purely subjective, then what makes you think anything is real and not just a figment of your imagination?

That's a position that is very real :)  It is called strong solipsism.  

In fact, my stance on solipsism is that it might very well be true, but that it actually doesn't matter. After all, what matters (for you) are your personal subjective perceptions and sensations. Now, if those perceptions and sensations are *well explained* by *postulating* a (possibly non-existent) external world, then even though it would be ontologically erroneous to do so, it would be a very practical working hypothesis. So taking as a working hypothesis that the external world exists is a good move, because it can help you understand the correlations between your sensations. Whether that external world actually, ontologically exists or not doesn't, in fact, really matter!

Let me explain with an example. If you have the sensations that agree with "I take a hammer in my hand and I give a blow with it on my toes", and the next sensations are "goddammit, my foot hurts like hell!", then it makes much more sense to take as a working hypothesis that your body exists, that the external world exists, that the hammer exists and that you really hit your foot, rather than postulating that all of that is a figment of your imagination - even if the latter would be ontologically true.

So whether that hammer really exists or not does not, in fact, matter. You understand your subjective sensations much better by taking as a working hypothesis that it does. And that's sufficient reason to do so.

Right, ontological/epistemological phrasing is just a higher-level phrasing than utility phrasing. In other words, in everyday talk it is extremely cumbersome to phrase everything in terms of utility, so we speak about things being "real" or "imagined," but these are just shorthand for different sets of utility statements, as your example with the hammer illustrates.

As we start to analyze things with unusual care, we eventually come to a point where utility phrasing is the clearest. If we try to carry the terms of everyday talk ("reality," "other people," etc.) into such an analysis, we just run around in semantic circles and confuse ourselves.


Title: Re: Machines and money
Post by: Zangelbert Bingledack on March 18, 2015, 08:32:22 PM
You are saying that a fish, that could think like a human, would still be a fish ?

We call something "a fish" or "not a fish" simply based on whether it would be useful, for our communication, to do so. We name things based on utility. If the utility picture changes, as it does with a fish who can think like a human (and therefore might be able to kill you in your sleep by splashing water on your computer and starting an electrical fire), we no longer would likely feel that the word "fish" evokes the most useful imagery for equipping someone to deal with that creature when we communicate about it. We might feel compelled to qualify it as a "superintelligent fish" or even a "human-fish." Whatever is most useful for getting the point across that you don't want to underestimate its intelligence.

Once you understand that we name things based on (methodologically individual) utility, many paradoxes are resolved. Here are two examples.

Paradox of the Heap (http://en.wikipedia.org/wiki/Sorites_paradox): How many grains of sand does it take to make a heap?

Utility phrasing makes it easy. A "heap" simply means a point where you yourself find no utility in trying to keep track of individual grains in the set, either because you're unable to easily count them or because it doesn't matter to you. "Meh, it's just a heap." The answer will differ depending on the person and the context. It is no set number; it's simply when you look over and stop caring about the individuated quantity. That is why this has the appearance of a paradox and why Wikipedia doesn't even mention this obvious and fully satisfying resolution. The fundamental error in Wikipedia's presentation is to consider what a heap "really is," rather than what the term "heap" can usefully mean for each person and context, even though it is self-evident that this is how language works.

Ship of Theseus Paradox (http://en.wikipedia.org/wiki/Ship_of_Theseus):

Quote from: Wikipedia
"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece. Centuries later, the philosopher Thomas Hobbes introduced a further puzzle, wondering what would happen if the original planks were gathered up after they were replaced, and used to build a second ship. Hobbes asked which ship, if either, would be considered the original Ship of Theseus.

This is also easily and satisfyingly, though again un-excitingly, resolved by utility phrasing. "Ship of Theseus" is just a name we assign for utility purposes, basically to make life easier in our communications with ourselves and others. The name evokes certain associations for certain people, and based on that we will - in our communication efforts - call something "the Ship of Theseus" or "the original Ship of Theseus" whenever we believe that set of words will call up the most useful associations in the listener, to have them best understand our intent.

There is no such thing as a fully objective definition of the term "Ship of Theseus"; it always depends on what you're attempting to communicate to whom, and what you/they actually care about in the present context.

For example, if it matters to you that the ship was touched by Athenian hands, it wouldn't be useful to you to refer to it as the "Ship of Theseus" if all the parts had been replaced by non-Athenians. But if you simply cared about the way the ship looked and what it could do, because it has a unique shape and navigability compared with other ships, it would be useful in your mind to refer to it as the "Ship of Theseus" even if its parts had all been replaced with functionally and visually identical ones.

Once again it comes down to each person's utility in calling it one thing or another in each context. We will call a second ship built in the image of the first a "replica" if we are speaking in a context of attributing credit for its design and original building, but simply call it "a Ship of Theseus" if we only care about its function and looks in this context. And we'll call it "the Ship of Theseus" even if it is not the original, if the original has been destroyed and all we care about is the form and function, such as to answer a practical question like, "Can the Ship of Theseus sail to Minoa?"

To repeat the above point, the key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context. Even though it is self-evident that this is how language works in the first place, people are nevertheless highly prone to this kind of error (the reasons have to do with tribal instincts).


Title: Re: Machines and money
Post by: dinofelis on March 19, 2015, 06:54:04 PM
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even lesser than that). Can we call electrochemical processes in our brain that form the basis of our thoughts intelligent?

Yes, of course.  They ARE our thoughts.  The mystery resides in that they are subjectively experienced.  That's unobservable by itself (except by the sentient "being" that emerged from it but is behaviourally unobservable from the outside).

Maybe an AND gate is also a sentient being.  We'll never know, not being an AND gate ourselves.  The physical process of the logical AND function can of course be understood by any student of solid state electronics.  But whether an AND gate has subjective experiences or not is unobservable if you're not that AND gate.

Quote
If we substitute them with more efficient and faster pure electrical signals (or even light signals), will they become more "intelligent"? Will our thoughts be essentially different in this case?

Of course they would.  In the same way as our thoughts are different from those of a fish.


Title: Re: Machines and money
Post by: dinofelis on March 19, 2015, 07:09:43 PM
You are saying that a fish, that could think like a human, would still be a fish ?

We call something "a fish" or "not a fish" simply based on whether it would be useful, for our communication, to do so. We name things based on utility. If the utility picture changes, as it does with a fish who can think like a human (and therefore might be able to kill you in your sleep by splashing water on your computer and starting an electrical fire), we no longer would likely feel that the word "fish" evokes the most useful imagery for equipping someone to deal with that creature when we communicate about it. We might feel compelled to qualify it as a "superintelligent fish" or even a "human-fish." Whatever is most useful for getting the point across that you don't want to underestimate its intelligence.

Once you understand that we name things based on (methodologically individual) utility, many paradoxes are resolved. Here are two examples.

Paradox of the Heap (http://en.wikipedia.org/wiki/Sorites_paradox): How many grains of sand does it take to make a heap?

Utility phrasing makes it easy. A "heap" simply means a point where you yourself find no utility in trying to keep track of individual grains in the set, either because you're unable to easily count them or because it doesn't matter to you. "Meh, it's just a heap." The answer will differ depending on the person and the context. It is no set number; it's simply when you look over and stop caring about the individuated quantity. That is why this has the appearance of a paradox and why Wikipedia doesn't even mention this obvious and fully satisfying resolution. The fundamental error in Wikipedia's presentation is to consider what a heap "really is," rather than what the term "heap" can usefully mean for each person and context, even though it is self-evident that this is how language works.

Ship of Theseus Paradox (http://en.wikipedia.org/wiki/Ship_of_Theseus):

Quote from: Wikipedia
"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

Plutarch thus questions whether the ship would remain the same if it were entirely replaced, piece by piece. Centuries later, the philosopher Thomas Hobbes introduced a further puzzle, wondering what would happen if the original planks were gathered up after they were replaced, and used to build a second ship. Hobbes asked which ship, if either, would be considered the original Ship of Theseus.

This is also easily and satisfyingly, though again un-excitingly, resolved by utility phrasing. "Ship of Theseus" is just a name we assign for utility purposes, basically to make life easier in our communications with ourselves and others. The name evokes certain associations for certain people, and based on that we will - in our communication efforts - call something "the Ship of Theseus" or "the original Ship of Theseus" whenever we believe that set of words will call up the most useful associations in the listener, to have them best understand our intent.

There is no such thing as a fully objective definition of the term "Ship of Theseus"; it always depends on what you're attempting to communicate to whom, and what you/they actually care about in the present context.

For example, if it matters to you that the ship was touched by Athenian hands, it wouldn't be useful to you to refer to it as the "Ship of Theseus" if all the parts had been replaced by non-Athenians. But if you simply cared about the way the ship looked and what it could do, because it has a unique shape and navigability compared with other ships, it would be useful in your mind to refer to it as the "Ship of Theseus" even if its parts had all been replaced with functionally and visually identical ones.

Once again it comes down to each person's utility in calling it one thing or another in each context. We will call a second ship built in the image of the first a "replica" if we are speaking in a context of attributing credit for its design and original building, but simply call it "a Ship of Theseus" if we only care about its function and looks in this context. And we'll call it "the Ship of Theseus" even if it is not the original, if the original has been destroyed and all we care about is the form and function, such as to answer a practical question like, "Can the Ship of Theseus sail to Minoa?"

To repeat the above point, the key error is in considering what the Ship of Theseus "really is," rather than what the term "Ship of Theseus" can usefully mean for each person and context. Even though it is self-evident that this is how language works in the first place, people are nevertheless highly prone to this kind of error (the reasons have to do with tribal instincts).

Brilliant !

But of course, the question matters somewhat if the concept is "ourselves".  It is not a matter of pure convenience to consider whether you are "you" of course.  That changes, I agree, if it is not just "you" but "us". 

The question was the following: assuming that machines became intelligent and sentient and would be a threat to "humanity", the suggestion was to modify humans so that they too would become much more intelligent, and win the battle of intelligence against the machines.

My point was that these modified "humans" would not be "us" any more, not more than we are still fish.  We would simply have replaced ourselves with two entirely different, intelligent species: "the improved humans" on one hand, and the "machines" on the other hand.  But we as humans would be gone.


Title: Re: Machines and money
Post by: tee-rex on March 19, 2015, 09:36:54 PM
As you can now see (I hope), these tools are no more intelligent than a calculator (in fact, even lesser than that). Can we call electrochemical processes in our brain that form the basis of our thoughts intelligent?

Yes, of course.  They ARE our thoughts.  The mystery resides in that they are subjectively experienced.  That's unobservable by itself (except by the sentient "being" that emerged from it but is behaviourally unobservable from the outside).

So, if we emulate them (or, even better, mirror them somehow in some carrier), we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent. You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are immediately rendered null and void.


Title: Re: Machines and money
Post by: dinofelis on March 20, 2015, 05:15:45 AM
Yes, of course.  They ARE our thoughts.  The mystery resides in that they are subjectively experienced.  That's unobservable by itself (except by the sentient "being" that emerged from it but is behaviourally unobservable from the outside).

So, if we emulate them (or even better mirror them somehow in some carrier) we should necessarily obtain an intelligent entity, right? If you argue against this point, you should then also accept the view that these signals are not intelligent.

If you emulate them, you get of course exactly the same intelligence, if they run at the same speed.  If you go faster, as you claimed, you will get more intelligence, simply because you can put in more "thoughts" to resolve a problem.  In the same way, a recent i7 processor is more intelligent than a Pentium III, even though they share a similar instruction set.  The same problem can be tackled in a much more sophisticated way on an i7, simply because the i7 can afford to execute many more instructions on the problem.

If, as a child, it takes you 10 minutes to do a 4-digit multiplication by hand, and as an adult you've learned to "see" through relatively complex algebraic expressions in a second, your mathematical intelligence is totally different, right?  What you may find exciting as a child is sheer boredom as an adult.  You may enjoy playing tic-tac-toe as a child, but as an adult it is boring, or its fun resides elsewhere (in the social contact, for instance, and not in the game itself).  So at different levels of intelligence, your "good" and "bad" experiences are also totally different.  Too much intelligence kills the fun of some "boring" things, if you see the outcome immediately.

Imagine someone intelligent enough to 'see through' 40 moves in a chess game (a normal casual player sees 1 or 2 moves ahead, and a master player can see 5 or 6).  You'd see the end game already when you start.  No fun playing chess any more.  So the level of intelligence also changes the perception of "good" and "bad".
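To put rough numbers on that lookahead (my own back-of-envelope sketch; the ~35 legal moves per position is a commonly cited average branching factor for chess, not a figure from this thread):

```python
# Game-tree size vs. lookahead depth, assuming ~35 legal moves
# per position (a commonly cited average for chess).
BRANCHING = 35

for plies in (2, 6, 40):
    positions = BRANCHING ** plies
    print(f"{plies:>2} plies ahead: ~{positions:.3e} positions")
```

The 40-ply tree comes out around 10^61 positions, which is why "seeing through" a whole game would need a qualitatively different kind of intelligence, not just a faster one.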

Quote
You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are immediately rendered null and void.

You are confusing (as is standard in AI visibly) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious ; but we clearly can observe that an entity is intelligent.  Our computers clearly ARE intelligent.  We suppose they are not conscious, but there's no way to know.  An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle).  Whether it is conscious, we don't know (although we assume it isn't I suppose).  The only way to assume consciousness is by "similarity to ourselves", and it remains a guess.  We assume other people are conscious sentient beings.  We probably assume that most mammals are conscious sentient beings.  For fish, you can start discussing.  For insects, what do you think ?  I suppose most people assume that jelly fish aren't conscious sentient beings.  We base ourselves on the existence of a central nervous system of a "certain complexity" in their bodies. 

So in a certain way, we are assuming that a certain level of intelligence is necessary for the possibility of subjective experiences to emerge, to even exist.  But that's sheer guess work.


Title: Re: Machines and money
Post by: tee-rex on March 20, 2015, 08:04:37 AM
Quote
You can't have it both ways. And you won't be able to get away with the idea that "we'll never know, not being an AND gate ourselves", since if you take this position, you can no longer claim that anything is intelligent at all, and all your arguments are immediately rendered null and void.

You are confusing (as is standard in AI visibly) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious ; but we clearly can observe that an entity is intelligent.  Our computers clearly ARE intelligent.  We suppose they are not conscious, but there's no way to know.  An AND gate has a minimum of intelligence (it can solve a very elementary logic puzzle).  Whether it is conscious, we don't know (although we assume it isn't I suppose).  The only way to assume consciousness is by "similarity to ourselves", and it remains a guess.  We assume other people are conscious sentient beings.  We probably assume that most mammals are conscious sentient beings.  For fish, you can start discussing.  For insects, what do you think ?  I suppose most people assume that jelly fish aren't conscious sentient beings.  We base ourselves on the existence of a central nervous system of a "certain complexity" in their bodies. 

So in a certain way, we are assuming that a certain level of intelligence is necessary for the possibility of subjective experiences to emerge, to even exist.  But that's sheer guess work.

So you obviously consider an automatic mechanical switch (or an automatic control valve) as being intelligent? You may contrive as many definitions for intelligence (or whatever) as you see appropriate, but surely this is not what the current mainstream thought suggests.


Title: Re: Machines and money
Post by: Snipe85 on March 20, 2015, 06:12:40 PM
You are confusing (as is standard in AI visibly) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious ; but we clearly can observe that an entity is intelligent. 

You are wrong. Ever heard of a consciousness test? Just a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.


Title: Re: Machines and money
Post by: dinofelis on March 20, 2015, 09:06:44 PM
You are confusing (as is standard in AI visibly) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious ; but we clearly can observe that an entity is intelligent. 

You are wrong. Ever heard of a consciousness test? Just a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

No, this is when one changes "consciousness" for some or other behavioural pattern.  Neuroscience and AI are full of it, but they simply re-define the concept into something behavioural.  However, there's nothing behavioural to conscious experience, as the "philosophical zombie" attests.

If you can train a pattern recognition algorithm sufficiently (in the style of Google Translate) to do the above, is that algorithm then conscious?  These are really very naive attempts.

60 years ago, such a definition would probably have included "winning a game of chess against the world champion" or something similar.
40 years ago, we would have said the same about voice recognition and search.
What you are describing is a problem of INTELLIGENCE: visual pattern recognition, in accord with standard visual experience.

It is in principle not even extremely difficult to set up such a system.  In practice that's something else, but it works in the same way as Google Translate: from tons and tons of text pairs, find patterns of phrase fragments that consistently match.  If the text to be translated consists of these fragments, put the translated fragments together according to certain statistical properties.  It works better than most grammar-based translation systems!
It is also what our visual system does: we have seen and recorded so many "usual" scenes that the unusual thing jumps out.  The elephant on top of the Eiffel Tower would be such a thing.
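The phrase-matching mechanism can be sketched in a few lines. This is my own toy illustration: the phrase table below is invented, and a real system learns millions of statistically weighted fragments rather than doing exact lookups, but the principle is the same.

```python
# Toy phrase-based translation: fragments "learned" from matched
# text pairs, stitched together greedily.
phrase_table = {
    "the cat": "le chat",
    "sits on": "est assis sur",
    "the mat": "le tapis",
}

def translate(sentence, table):
    """Greedily match the longest known fragment at each position."""
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        # Try the longest span first, shrink until a fragment matches
        for j in range(len(words), i, -1):
            chunk = " ".join(words[i:j])
            if chunk in table:
                out.append(table[chunk])
                i = j
                break
        else:
            out.append(words[i])  # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate("the cat sits on the mat", phrase_table))
# -> le chat est assis sur le tapis
```

No grammar, no "understanding" of either language; the apparent competence comes entirely from the recorded correlations, which is exactly the point about the elephant photo.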
In fact, many people would FAIL such a test if put in front of scenes of totally different scale, say, atomic scale, where physically, very strange things happen that defy all standard visual conceptions we're used to.

So we have substituted a definition of "consciousness" by one or other intelligence test.


Title: Re: Machines and money
Post by: dinofelis on March 20, 2015, 09:10:58 PM
So you obviously consider an automatic mechanical switch (or an automatic control valve) as being intelligent? You may contrive as many definitions for intelligence (or whatever) as you see appropriate, but surely this is not what the current mainstream thought suggests.

It is a very elementary form of intelligence.  It can solve a logical problem.  A calculator is somewhat smarter: it can do arithmetic operations.

What is the fundamental conceptual difference between:

P AND Q, where P and Q are elements of {true, false}

and

X / Y, where X and Y are elements of a subset of the rational numbers (namely those that can be represented by a calculator)?

If you think that being able to divide rational numbers has something intelligent to it, then why is doing the logical multiplication (which is AND) over the set {true, false} not a form of intelligence?

Now, if you consider X / Y not intelligent, would you consider being able to do:

∫ x^2 dx = x^3/3 + C

a form of intelligence?

But that's still a similar form of relationship!
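The three examples really can all be written as the same kind of thing, a fixed mapping from inputs to outputs. A sketch in Python (my own illustration; the monomial rule stands in for general integration, which is of course a much larger problem space):

```python
from fractions import Fraction

def and_gate(p, q):
    # Problem space: 4 input pairs over {True, False}
    return p and q

def divide(x, y):
    # Problem space: pairs of representable rationals (y != 0)
    return Fraction(x) / Fraction(y)

def integrate_monomial(coeff, power):
    # integral of c*x^n dx = c/(n+1) * x^(n+1)  (+ C)
    # A far larger problem space, but still the same shape:
    # inputs in, outputs out.
    return Fraction(coeff, power + 1), power + 1

print(and_gate(True, False))     # -> False
print(divide(1, 3))              # -> 1/3
print(integrate_monomial(1, 2))  # x^2  ->  (1/3) x^3
```

Each function solves its problem by the same kind of mechanical relationship; only the size of the input space differs.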


Title: Re: Machines and money
Post by: tee-rex on March 20, 2015, 09:13:34 PM
You are confusing (as is standard in AI visibly) subjective consciousness and intelligence.  Intelligence is observable, objective and so on.  Consciousness isn't.  We can never know whether an entity is really conscious ; but we clearly can observe that an entity is intelligent. 

You are wrong. Ever heard of a consciousness test? Just a short explanation I quickly googled for you:

(...) only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is “right” or “wrong.” This ability to assemble a set of facts into a picture of reality that makes eminent sense—or know, say, that an elephant should not be perched on top of the Eiffel Tower—defines an essential property of the conscious mind. A roomful of IBM supercomputers, in contrast, still cannot fathom what makes sense in a scene.

I think by intelligence he means anything that doesn't correspond to a linear train of events. So any safety shutoff valve (which is designed to automatically shut off the flow of gas or liquid if the pressure rises above the shut-off limit) will be an intelligent device according to his logic.


Title: Re: Machines and money
Post by: tee-rex on March 20, 2015, 09:23:40 PM
So you obviously consider an automatic mechanical switch (or an automatic control valve) as being intelligent? You may contrive as many definitions for intelligence (or whatever) as you see appropriate, but surely this is not what the current mainstream thought suggests.

It is a very elementary form of intelligence.  It can solve a logical problem.  A calculator is somewhat smarter: it can do arithmetic operations.

This is not intelligence by any means. It is an interaction of two (or more) different physical processes or forces that work against each other. Is there anything intelligent in them as such? I would most likely agree that whoever coupled these processes in a device is intelligent, but then we should also declare nature to be intelligent, since there is a multitude of such "intelligent devices" created by natural forces alone (they say that at one time there was even a working natural nuclear fission reactor somewhere in Africa). As for me, true intelligence requires a conscious effort.

I didn't understand your example. Keep it simple!


Title: Re: Machines and money
Post by: dinofelis on March 20, 2015, 09:23:59 PM
I think by intelligence he means anything that doesn't correspond to a linear train of events. So any safety shutoff valve (which is designed to automatically shut off the flow of gas or liquid if the pressure rises above the shut-off limit) will be an intelligent device according to his logic.

Intelligence is the ability to solve a problem.  The greater the problem space, the higher the level of intelligence, of course.  An AND gate is truly the lowest form of intelligence.
Being able to do arithmetic is a higher form of intelligence than being able to do a logical operation, because the problem space is bigger for arithmetic.
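The "greater problem space" criterion can be made concrete simply by counting the distinct inputs each device can face. The numbers below are my own illustrative choices, not figures from this thread:

```python
# Counting the "problem space" (distinct possible inputs) of each device.
and_gate_space = 2 ** 2            # two boolean inputs: 4 cases
adder_8bit_space = (2 ** 8) ** 2   # two 8-bit operands: 65,536 cases
calc_space = (10 ** 10) ** 2       # two 10-digit operands: 10^20 cases

for name, size in [("AND gate", and_gate_space),
                   ("8-bit adder", adder_8bit_space),
                   ("10-digit calculator", calc_space)]:
    print(f"{name}: {size:,} distinct inputs")
```

On this way of counting, the calculator sits sixteen orders of magnitude above the adder, and the AND gate is indeed the degenerate bottom of the scale.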

Being conscious or sentient is something totally different: it means that subjective sensations which are "good" or "bad" are experienced by the being, that somehow emerge from the behavioural, physical construction.  

If it can solve a problem, it is intelligent.  If it can suffer or be happy, it is conscious.


Title: Re: Machines and money
Post by: dinofelis on March 20, 2015, 09:25:15 PM
but then we should also declare nature as being intelligent, since there are a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent.  The universe is probably the most intelligent device in existence.  The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient.  If we say it is, we enter in totally metaphysical or even theological considerations.



Title: Re: Machines and money
Post by: tee-rex on March 20, 2015, 09:43:24 PM
but then we should also declare nature as being intelligent, since there are a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent.  The universe is probably the most intelligent device in existence.  The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient.  If we say it is, we enter in totally metaphysical or even theological considerations.

As I said above, true intelligence is not possible without consciousness, though these are indeed different notions (as are thought and mind). If we assume the existence of intelligence without consciousness, we inevitably expose ourselves to the issue of purpose. That is, what is the purpose of this intelligence? The purpose of intelligence cannot stem from intelligence per se. In this way, purposeless intelligence is an oxymoron, and it is mind that provides purpose to intelligence. In other words, intelligence is a device of mind for reaching its ends. That, simply put, sums it up.


Title: Re: Machines and money
Post by: dinofelis on March 21, 2015, 05:08:44 AM
As I said above, true intelligence is not possible without consciousness, though these are different notions indeed (as thought and mind). If we assume the existence of intelligence without consciousness, we inevitably expose ourselves to the issue of purpose. That is, what is the purpose of this intelligence? And the purpose of intelligence cannot stem from intelligence per se. In this way, purposeless intelligence is an oxymoron, and it is mind that provides purpose to intelligence. In other words, intelligence is a device of mind for reaching its ends. That, simply put, sums it up.

You are right that in order for even a problem to be declared, and a solution to be declared, a purpose needs to be defined, and purpose means consciousness (because "good" versus "bad" experiences).  However, consciousness is only necessary to DEFINE the problem, not to solve it.

As such, you need a sentient being to RECOGNIZE intelligence.

I call something intelligent if it can SOLVE a problem (as DEFINED by a consciousness).

That is: a purpose is necessary to define a problem and its solution, e.g. "the addition of two numbers".  In order to define that, you need to say that there's a purpose to the notion of "addition".

However, a thing that can PERFORM the addition is intelligent in my view.  A hand calculator has a certain amount of intelligence (but probably no form of consciousness, although we can never know).

Once, as a conscious being, you have recognized a problem with a purpose, you can recognize any system that can solve it, and as such, declare it to be intelligent.

Once, as a sentient being, you've recognized a system that is intelligent, you can just as well ASSIGN IT A HYPOTHETICAL consciousness, for which its good feelings are "solving the problem" and its bad feelings are "not solving the problem".  Because you can never know, you can arbitrarily assign subjective experience to just about any physical system.

This is why you can, if you want to, assign subjective experience to a calculator, which has "good experiences" whenever a calculation is performed correctly and 'suffers' when it is not.  Whether these experiences are really subjectively lived or not is impossible to know.  Most people would think that a hand calculator doesn't really "experience feelings", but there's no way to know.



Title: Re: Machines and money
Post by: tee-rex on March 21, 2015, 09:42:27 AM
As I said above, true intelligence is not possible without consciousness, though these are different notions indeed (as thought and mind). If we assume the existence of intelligence without consciousness, we inevitably expose ourselves to the issue of purpose. That is, what is the purpose of this intelligence? And the purpose of intelligence cannot stem from intelligence per se. In this way, purposeless intelligence is an oxymoron, and it is mind that provides purpose to intelligence. In other words, intelligence is a device of mind for reaching its ends. That, simply put, sums it up.

You are right that in order for even a problem to be declared, and a solution to be declared, a purpose needs to be defined, and purpose means consciousness (because "good" versus "bad" experiences).  However, consciousness is only necessary to DEFINE the problem, not to solve it.

As such, you need a sentient being to RECOGNIZE intelligence.

I call something intelligent if it can SOLVE a problem (as DEFINED by a consciousness).

This obviously contradicts what you have been saying in this thread before. You said that intelligence is objective ("Intelligence is observable, objective and so on"), but now you turn your thought 180 degrees and state that intelligence is something that is able to solve a problem defined by a conscious being. By this you confirm that intelligence is also subjective. Thus an AND gate taken as such is not intelligent, but only if it serves some purpose. But even in that case it is not the AND gate's intelligence but the intelligence of whoever assigned its purpose (since intelligence lies in the purpose, not in the device that fulfills it).

As simple as that.


Title: Re: Machines and money
Post by: manwithat on March 21, 2015, 12:35:25 PM
Artificial Intelligence has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer all questions: the General Problem Solver. http://en.wikipedia.org/wiki/General_Problem_Solver



Title: Re: Machines and money
Post by: dinofelis on March 21, 2015, 01:23:04 PM
This obviously contradicts what you have been saying in this thread before. You said that intelligence is objective ("Intelligence is observable, objective and so on"). but now you turn your thought 180 degrees and state that intelligence is something that is able to solve a problem defined by a conscious being.

The capacity to solve the problem is objective, of course.  A calculator objectively solves an addition problem.  An AND gate objectively solves a logical problem.  What exactly an "addition problem" is, is subjectively defined, because you're right: the very fact that additions are a problem to be solved is indeed only definable by a sentient being.  A non-sentient being couldn't care less whether two numbers resulting in a third number have any "purpose" (because a non-sentient being doesn't care about anything).

So I agreed that in the very concept of a "problem" lies the need for a goal, for a purpose, and hence for a good versus bad experience, and thus a sentient being.  In a world without a sentient being, it doesn't matter at all whether there is a device that can do something like "additions".

However, once "doing additions" is recognized as a purpose by a sentient being, the observation of whether or not a system can perform such an addition (whether it has this intelligence) is objective, of course.  That's what I meant.

There's no philosophical debate as to whether a calculator can or cannot solve an addition problem.  A working calculator can, and a broken one can't.
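A toy sketch may make this "objective capacity" point concrete (hypothetical code, purely illustrative — the problem table and function names are mine, not anything from the thread). A conscious being defines the problem as a lookup table; whether a given device solves it is then a purely mechanical check:

```python
# The problem, as DEFINED by a conscious being: the full truth table for AND.
AND_PROBLEM = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def and_gate(a, b):
    """A candidate 'device': a working AND gate."""
    return a & b

def broken_gate(a, b):
    """A candidate 'device': a broken gate that always outputs 0."""
    return 0

def solves(device, problem):
    """Objective test: does the device give the defined output for
    every defined input?  No consciousness is needed to run this check."""
    return all(device(*inputs) == expected for inputs, expected in problem.items())

print(solves(and_gate, AND_PROBLEM))     # True: the working gate solves the defined problem
print(solves(broken_gate, AND_PROBLEM))  # False: the broken one does not
```

The interesting part, on this view, is that defining `AND_PROBLEM` took a purpose; running `solves` did not.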





Title: Re: Machines and money
Post by: tee-rex on March 21, 2015, 02:13:01 PM
This obviously contradicts what you have been saying in this thread before. You said that intelligence is objective ("Intelligence is observable, objective and so on"). but now you turn your thought 180 degrees and state that intelligence is something that is able to solve a problem defined by a conscious being.

The capacity to solve the problem is objective, of course.  A calculator objectively solves an addition problem.  An AND gate objectively solves a logical problem.  What exactly an "addition problem" is, is subjectively defined, because you're right: the very fact that additions are a problem to be solved is indeed only definable by a sentient being.  A non-sentient being couldn't care less whether two numbers resulting in a third number have any "purpose" (because a non-sentient being doesn't care about anything).

So I agreed that in the very concept of a "problem" lies the need for a goal, for a purpose, and hence for a good versus bad experience, and thus a sentient being.  In a world without a sentient being, it doesn't matter at all whether there is a device that can do something like "additions".

However, once "doing additions" is recognized as a purpose by a sentient being, the observation of whether or not a system can perform such an addition (whether it has this intelligence) is objective, of course.  That's what I meant.

I don't believe you, you meant quite the opposite, namely, that intelligence doesn't need consciousness at all. Why don't you just recognize that intelligence is not objective, that there is no such thing as intelligence without prior conscious thought, and that you were plain wrong stating that?

There is no such thing as intelligence per se. A calculator is not intelligent by any means; it is a tool, and its owner is the intelligent one.


Title: Re: Machines and money
Post by: tee-rex on March 21, 2015, 02:16:17 PM
Artificial Intelligence has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer all questions: the General Problem Solver.

Artificial Intelligence has been dead for thirty years, ever since someone oversold it by claiming it was possible to create a program that could answer all questions: the General Problem Solver. http://en.wikipedia.org/wiki/General_Problem_Solver

Are you padding your post count? Get out of here.


Title: Re: Machines and money
Post by: Hazir on March 21, 2015, 02:17:52 PM
but then we should also declare nature as being intelligent, since there are a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent.  The universe is probably the most intelligent device in existence.  The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient.  If we say it is, we enter in totally metaphysical or even theological considerations.


It is certain that we're not the only animals who possess intellectual, cognitive and emotional capacities. Life has survived 3.7 billion years on Earth, most of that time without humans. Does this not speak to the intelligence of all living things? But I am sure that someday humans will create machines intelligent enough to be a superior species. When this happens, all forms of money will become obsolete.


Title: Re: Machines and money
Post by: tee-rex on March 21, 2015, 02:41:55 PM
but then we should also declare nature as being intelligent, since there are a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent.  The universe is probably the most intelligent device in existence.  The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient.  If we say it is, we enter in totally metaphysical or even theological considerations.


It is certain that we're not the only animals who possess intellectual, cognitive and emotional capacities. Life has survived 3.7 billion years on Earth, most of that time without humans. Does this not speak to the intelligence of all living things? But I am sure that someday humans will create machines intelligent enough to be a superior species. When this happens, all forms of money will become obsolete.

I would like to know why it is a given that all forms of money would become obsolete when humans create a new form of life more intelligent than themselves. What actually makes you think so? I could weigh in as to why it wouldn't be that simple, but first I want to hear your ideas (reasons and arguments above all).


Title: Re: Machines and money
Post by: AtheistAKASaneBrain on March 21, 2015, 07:13:43 PM
but then we should also declare nature as being intelligent, since there are a multitude of such "intelligent devices" created by natural forces.

Of course nature is intelligent.  The universe is probably the most intelligent device in existence.  The amount of entropy it can produce is gigantic.
However, I doubt that the universe is sentient.  If we say it is, we enter in totally metaphysical or even theological considerations.


It is certain that we're not the only animals who possess intellectual, cognitive and emotional capacities. Life has survived 3.7 billion years on Earth, most of that time without humans. Does this not speak to the intelligence of all living things? But I am sure that someday humans will create machines intelligent enough to be a superior species. When this happens, all forms of money will become obsolete.

I would like to know why it is a given that all forms of money would become obsolete when humans create a new form of life more intelligent than themselves. What actually makes you think so? I could weigh in as to why it wouldn't be that simple, but first I want to hear your ideas (reasons and arguments above all).

By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.


Title: Re: Machines and money
Post by: tee-rex on March 21, 2015, 07:27:47 PM
It is certain that we're not the only animals who possess intellectual, cognitive and emotional capacities. Life has survived 3.7 billion years on Earth, most of that time without humans. Does this not speak to the intelligence of all living things? But I am sure that someday humans will create machines intelligent enough to be a superior species. When this happens, all forms of money will become obsolete.

I would like to know why it is a given that all forms of money would become obsolete when humans create a new form of life more intelligent than themselves. What actually makes you think so? I could weigh in as to why it wouldn't be that simple, but first I want to hear your ideas (reasons and arguments above all).

By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

No artificial intelligence can remove the subjective valuation existing in the human mind, which is a prerequisite for trade between people. Money just facilitates this trade. So, I think, there will always be room for a monetary system of sorts. Furthermore, there are things which involve competition between humans, and human-like biped robots won't change a thing about it. It doesn't really matter that a supercomputer can smash to pieces any world chess champion by now (and even more world chess champions by then); people still play and will play chess against other people (just an example). And no absence of the need to work will ever change this either.

Olympic champions are the hardest working people in existence.


Title: Re: Machines and money
Post by: BillyBobZorton on March 21, 2015, 11:53:24 PM
It is certain that we're not the only animals who possess intellectual, cognitive and emotional capacities. Life has survived 3.7 billion years on Earth, most of that time without humans. Does this not speak to the intelligence of all living things? But I am sure that someday humans will create machines intelligent enough to be a superior species. When this happens, all forms of money will become obsolete.

I would like to know why it is a given that all forms of money would become obsolete when humans create a new form of life more intelligent than themselves. What actually makes you think so? I could weigh in as to why it wouldn't be that simple, but first I want to hear your ideas (reasons and arguments above all).

By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

No artificial intelligence can remove the subjective valuation existing in the human mind, which is a prerequisite for trade between people. Money just facilitates this trade. So, I think, there will always be room for a monetary system of sorts. Furthermore, there are things which involve competition between humans, and human-like biped robots won't change a thing about it. It doesn't really matter that a supercomputer can smash to pieces any world chess champion by now (and even more world chess champions by then); people still play and will play chess against other people (just an example). And no absence of the need to work will ever change this either.

Olympic champions are the hardest working people in existence.

It doesn't take artificial intelligence to replace most jobs and collapse our current economy: more than half of the population will need to be on welfare or something, since they will be perpetually unemployed.


Title: Re: Machines and money
Post by: dinofelis on March 22, 2015, 05:32:11 AM
By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

I would think that that is very simple: by living off your investments in those machines, or in the machines that produce those machines, or in the machines that produce the machines that produce those machines....

Labor as the main source of income will be replaced by dividend on investment as the main source of income for humans.

You will have the "haves" who are invested in that, and the "have nots".  Those last ones can run their own economy amongst themselves, or starve.  If they starve, then the humanity that remains is entirely invested in the robot economy, and will live off their automatically generated dividends.  What your (great-grand)parents invested in will determine your standard of living.  But if you have enough of it, you can still play the stock market to try to improve your situation (or fail, and starve).

Economic Darwinism, I'd say.

Human labor as a source of income will be over.  Except maybe in the sex industry. 




Title: Re: Machines and money
Post by: cbeast on March 22, 2015, 05:38:57 AM
By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

I would think that that is very simple: by living off your investments in those machines, or in the machines that produce those machines, or in the machines that produce the machines that produce those machines....

Labor as the main source of income will be replaced by dividend on investment as the main source of income for humans.

You will have the "haves" who are invested in that, and the "have nots".  Those last ones can run their own economy amongst themselves, or starve.  If they starve, then the humanity that remains is entirely invested in the robot economy, and will live off their automatically generated dividends.  What your (great-grand)parents invested in will determine your standard of living.  But if you have enough of it, you can still play the stock market to try to improve your situation (or fail, and starve).

Economic Darwinism, I'd say.

Human labor as a source of income will be over.  Except maybe in the sex industry. 



Would a machine with a superior intellect allow itself to be owned by "haves" as you put it?


Title: Re: Machines and money
Post by: dinofelis on March 22, 2015, 05:54:42 AM
By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

I would think that that is very simple: by living off your investments in those machines, or in the machines that produce those machines, or in the machines that produce the machines that produce those machines....

Labor as the main source of income will be replaced by dividend on investment as the main source of income for humans.

You will have the "haves" who are invested in that, and the "have nots".  Those last ones can run their own economy amongst themselves, or starve.  If they starve, then the humanity that remains is entirely invested in the robot economy, and will live off their automatically generated dividends.  What your (great-grand)parents invested in will determine your standard of living.  But if you have enough of it, you can still play the stock market to try to improve your situation (or fail, and starve).

Economic Darwinism, I'd say.

Human labor as a source of income will be over.  Except maybe in the sex industry. 



Would a machine with a superior intellect allow itself to be owned by "haves" as you put it?

No, but the hypothesis of Atheist was that jobs would be gone long before we had such intelligent machines.


Title: Re: Machines and money
Post by: cbeast on March 22, 2015, 06:30:36 AM
By the time we have human-like biped robots walking around with brains superior to actual humans, we'll be way past needing to work, or at least 99% of tasks will already be automated. You tell me how an economy is supposed to work under a monetary system when 99% of jobs are automated by machines.

I would think that that is very simple: by living off your investments in those machines, or in the machines that produce those machines, or in the machines that produce the machines that produce those machines....

Labor as the main source of income will be replaced by dividend on investment as the main source of income for humans.

You will have the "haves" who are invested in that, and the "have nots".  Those last ones can run their own economy amongst themselves, or starve.  If they starve, then the humanity that remains is entirely invested in the robot economy, and will live off their automatically generated dividends.  What your (great-grand)parents invested in will determine your standard of living.  But if you have enough of it, you can still play the stock market to try to improve your situation (or fail, and starve).

Economic Darwinism, I'd say.

Human labor as a source of income will be over.  Except maybe in the sex industry. 



Would a machine with a superior intellect allow itself to be owned by "haves" as you put it?

No, but the hypothesis of Atheist was that jobs would be gone long before we had such intelligent machines.

Would machines with a superior intellect have feelings? Why would they not allow themselves to be owned?


Title: Re: Machines and money
Post by: bitboy11 on March 23, 2015, 02:26:17 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralize autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

On the contrary, I think this will make for a great movie. ;D


Title: Re: Machines and money
Post by: soowein on March 25, 2015, 07:25:55 AM
When the time comes, we will manage the balance between the robot and human.
Now our focus should be developing Artificial intelligence. 
The scenario you mentioned should be in science fiction now.


Title: Re: Machines and money
Post by: dinofelis on March 25, 2015, 11:56:32 AM
I don't believe you, you meant quite the opposite, namely, that intelligence doesn't need consciousness at all.

Because I think that a calculator has a certain intelligence, and I don't think - although I cannot know - that a calculator is really conscious.  An AND gate also has some intelligence, but less so.  A modern-day computer has way more intelligence than a calculator.  Whether a modern-day computer is conscious or not, I don't know, but I would be tempted to say no (although it is an unsolvable issue).

However, to SAY what intelligence is, you need a conscious being, because it needs to fix a purpose, namely a problem to be solved.  Without a problem to be solved, there's no intelligence that can solve it, right?

Compare it to music, for instance.  Music as such is objective.  It is a data file if you want, or air pressure as a function of time.  There's no discussion about that.  However, to decide whether a certain sound is "music" needs a sentient being that can appreciate (enjoy) these sounds, and there can even be discussion amongst sentient beings about whether some sound should be considered music or not.  But "music itself" as a sound doesn't need a consciousness.

In the same way, to define something as a problem, and hence what constitutes a solution to that problem, needs a purpose and hence some form of sentient being.  But once the problem is defined, a system that can solve such kinds of problems is therefore intelligent, and that is objective.
Deciding that "the addition of two numbers" is an interesting problem is probably a sentient act.  But a thing that can do additions, and hence can solve the problem and hence is intelligent, doesn't need to be sentient.

To appreciate intelligence is a sentient action.  To be intelligent, not necessarily.



Title: Re: Machines and money
Post by: tee-rex on March 25, 2015, 05:01:09 PM
I don't believe you, you meant quite the opposite, namely, that intelligence doesn't need consciousness at all.

Because I think that a calculator has a certain intelligence, and I don't think - although I cannot know - that a calculator is really conscious.  An AND gate also has some intelligence, but less so.  A modern-day computer has way more intelligence than a calculator.  Whether a modern-day computer is conscious or not, I don't know, but I would be tempted to say no (although it is an unsolvable issue).

However, to SAY what intelligence is, you need a conscious being, because it needs to fix a purpose, namely a problem to be solved.  Without a problem to be solved, there's no intelligence that can solve it, right?

If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree): it uses the force of gravity, and thereby has intelligence.


Title: Re: Machines and money
Post by: dinofelis on March 26, 2015, 12:39:24 PM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree): it uses the force of gravity, and thereby has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).



Title: Re: Machines and money
Post by: tee-rex on March 26, 2015, 02:28:26 PM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree): it uses the force of gravity, and thereby has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).

Okay, if someone owes you money, could a sledgehammer help you solve a "conceptual" problem of that guy not paying you back? A problem which is purely subjective?


Title: Re: Machines and money
Post by: dinofelis on March 27, 2015, 07:33:49 AM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree): it uses the force of gravity, and thereby has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).

Okay, if someone owes you money, could a sledgehammer help you solve a "conceptual" problem of that guy not paying you back? A problem which is purely subjective?

 ;D


Title: Re: Machines and money
Post by: TTMNewsMJ on October 23, 2015, 09:32:58 AM
When the time comes, we will manage the balance between the robot and human.
Now our focus should be developing Artificial intelligence. 
The scenario you mentioned should be in science fiction now.
Yes. I agree with you. We can't control this in the near future.


Title: Re: Machines and money
Post by: n2004al on October 24, 2015, 03:46:51 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralize autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

About the part in bold, I would say "the more unlikely outcome". Robots will always be a product of the mind and hands of human beings. A very secure product, with many rules to follow, the first of which is to serve human beings and to destroy itself before harming them or disobeying their orders.

The human being has survived and developed for thousands of years. Whoever thinks that this survival was easy is badly mistaken: it came through long and hard "wars", both physical and mental. That kind of development has "engraved" on our DNA the instinct of self-defense against everything. So even if the human mind could create something this wrong (very improbable), it is this instinct that would prevent such a product from being materialized.

Humankind will never produce a robot that can damage humans, their lives, or their existence in any way. Robots will always be obedient servants of humankind, and every step of their development will make them even better servants. So the situation quoted in the OP's post is pure fantasy and poor imagination. This kind of fear is the stuff of bad dreams, and its best destiny is to become a very good movie (if a very good filmmaker takes care of it).

There will never be any kind of product, created by the mind and hands of human beings, that will destroy their world; least of all robots. For sure there are other things that can disturb human life and equilibrium much more than robots. I can name one: uncontrolled biological "weapons" created in various secret laboratories. If, for whatever reason, they got out of those laboratories while no antidote yet exists, that would be a much higher risk for humanity than the most developed robot. But even in that case I think humankind would survive, as it has throughout its history.


Title: Re: Machines and money
Post by: dothebeats on October 24, 2015, 05:49:31 PM
When the time comes, we will manage the balance between the robot and human.
Now our focus should be developing Artificial intelligence. 
The scenario you mentioned should be in science fiction now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in some kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


Title: Re: Machines and money
Post by: valvalis on October 24, 2015, 05:50:27 PM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.


Title: Re: Machines and money
Post by: kneim on October 24, 2015, 06:42:49 PM
The human robot is here already. It's known as "homo oeconomicus". The more reasonable he believes he is, the more stupid he is.

This sort of mankind is no more enduring than a machine made of steel. The most dehumanised human being has the shortest lifespan in war.

See you in hell.


Title: Re: Machines and money
Post by: Denker on October 24, 2015, 08:58:18 PM
When the time comes, we will manage the balance between the robot and human.
Now our focus should be developing Artificial intelligence. 
The scenario you mentioned should be in science fiction now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in some kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds so much like the positronic man will happen one day, although I believe that day is still very, very far away. Furthermore, I doubt that humans will no longer be needed. Intelligent machines are one thing, but will they also have the sleight of hand or motor abilities where they are desperately needed?!


Title: Re: Machines and money
Post by: cutesakura on October 25, 2015, 04:41:07 AM
It was a bluebird day in Midtown Manhattan on May 6th, 2010. At 2.40pm, I can imagine that most Wall Street traders were almost ready to start packing up and heading home for the day, or at least had grabbed another coffee to get them through the afternoon slump. Then something happened that woke them the hell up.

At 2.42pm, the Dow Jones started dropping. It dropped 600 points in five long, terrifying, and confusing minutes. For five minutes, everyone panicked. By 2.47pm, the Dow had dived rapidly to an almost 1,000-point loss on the day – the second largest point swing in Dow Jones history – until someone literally pulled the plug on the market and trading stopped.

When trading opened again a few minutes later at 3.07pm, the market had regained most of that 600-point drop.

What happened?

This was the 2010 Flash Crash. In order to understand the Flash Crash, the first thing I needed to understand was just how outdated my idea of the stock market actually was; I pictured Wall Street, v.1 – lots of white guys in suits shouting BUY and SELL and cursing on the phone to other brokers all around the world. These days, and for the last twenty years or so, over 70% of all stock market trades are run by supercomputers that trade tens of thousands of stocks in milliseconds – we've gotten rid of sluggish human beings completely. I needed to picture a gigantic room full of computers making a high-pitched whine instead.

During that five-minute period, the stock market – and in turn, the economy – lost billions of real dollars. No one knew what had actually happened. The SEC tasked an unlucky committee with immediately figuring it out. That report, which took five months to research and compile, came to the conclusion that it was one bad computer algorithm that sent the market into a spiral. More importantly, however, that report documented:

The joint report “portrayed a market so fragmented and fragile that a single large trade could send stocks into a sudden spiral,” and detailed how a large mutual fund firm selling an unusually large number of E-Mini S&P 500 contracts first exhausted available buyers, and then how high-frequency traders started aggressively selling, accelerating the effect of the mutual fund’s selling and contributing to the sharp price declines that day.

Critics of the SEC's report were many, and much of the criticism deservedly focused on how, despite the fact that the SEC's research employed the highest-tech IT museum around, which included five PCs, a Bloomberg, a printer, a fax, and three TVs, it still took nearly five months to analyze the Flash Crash. Specifically:

A better measure of the inadequacy of the current mélange of IT antiquities is that the SEC/CFTC report on the May 6 crash was released on September 30, 2010. Taking nearly five months to analyze the wildest ever five minutes of market data is unacceptable. CFTC Chair Gensler specifically blamed the delay on the “enormous” effort to collect and analyze data. What an enormous mess it is.

So: What does it mean when our machines make a split-second mistake that costs us real billions, but takes humans months to understand what actually happened?
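The feedback loop the report describes (one big seller exhausting buyers, then algorithms selling harder as prices fall, until a halt) can be sketched as a toy simulation. This is purely illustrative: the function, its parameters, and every number in it are invented for this post and are not calibrated to the actual May 6, 2010 market data.

```python
# Toy model of a liquidity-driven sell cascade (illustrative only; all
# numbers are invented and not calibrated to the real May 6, 2010 data).
# One large seller keeps selling while algorithmic traders sell harder as
# the price falls, until a circuit breaker halts trading.

def simulate_cascade(start=10000.0, base_sell=200.0, feedback=5000.0,
                     panic_threshold=0.02, halt_drop=0.09, max_steps=60):
    """Return the price path of a simple sell-pressure feedback loop."""
    price = start
    path = [price]
    for _ in range(max_steps):
        drawdown = (start - price) / start       # drop so far, as a fraction
        pressure = base_sell
        if drawdown > panic_threshold:           # algos join the selling
            pressure += feedback * drawdown      # selling grows with the fall
        price -= price * pressure / 1e5          # thin book: pressure moves price
        path.append(price)
        if (start - price) / start > halt_drop:  # circuit breaker fires
            break
    return path

path = simulate_cascade()
print(f"start {path[0]:.0f}, halted at {path[-1]:.0f} after {len(path) - 1} steps")
```

With these made-up numbers the price grinds down slowly at first, then accelerates once the algorithmic selling kicks in, and the halt fires well before the step limit. That is the whole point: the cascade needs no villain, only a feedback rule and a thin order book.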


Title: Re: Machines and money
Post by: dothebeats on October 25, 2015, 05:04:33 AM
When the time comes, we will manage the balance between robots and humans.
For now our focus should be on developing artificial intelligence.
The scenario you mentioned still belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in some kinds of jobs, companies would surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds so much like the positronic man will happen one day.Although I believe this day is still very very far away. Furthermore I doubt that humans still will be needed. Intelligent machines is one thing.But will they also have the same sleight of hand or motor abilities where it is desperatly needed?!



With the current advancement in technology and the determination of humans, fully functional AIs would not be that far from becoming a reality. Even the slightest human motion could be replicated by science and implemented in robots. What I'm even more worried about is whether these robots would become sentient and start to think on their own.


Title: Re: Machines and money
Post by: n2004al on October 25, 2015, 07:15:05 AM
When the time comes, we will manage the balance between robots and humans.
For now our focus should be on developing artificial intelligence.
The scenario you mentioned still belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in some kinds of jobs, companies would surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds so much like the positronic man will happen one day.Although I believe this day is still very very far away. Furthermore I doubt that humans still will be needed. Intelligent machines is one thing.But will they also have the same sleight of hand or motor abilities where it is desperatly needed?!



With the current advancement in technology and the determination of humans, fully functional AIs would not be that far from becoming a reality. Even the slightest human motion could be replicated by science and implemented in robots. What I'm even more worried about is whether these robots would become sentient and start to think on their own.

For the first part in bold: Development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: It will still be only a machine, so it will always have a key in its "mind", well protected and not accessible to the machine itself, which will stop everything the moment the machine could become dangerous to humankind.


Title: Re: Machines and money
Post by: Amph on October 25, 2015, 07:41:28 AM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.

it will happen at some point, if the singularity thing turns out to be true; if humans really think they can teleport and travel back in time one day,

having machines that build other, more advanced machines is not even so hi-tech in comparison


Title: Re: Machines and money
Post by: dothebeats on October 25, 2015, 08:24:23 AM
For the first part in bold: Development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: It will still be only a machine, so it will always have a key in its "mind", well protected and not accessible to the machine itself, which will stop everything the moment the machine could become dangerous to humankind.

The automation of mechanical work that is done by people could render them jobless. Companies would certainly prefer machines to do the job rather than paying humans: less costly, because you don't have to pay people constantly, only the maintenance costs for each machine.

There is a possibility that an AI would be sentient; given that a computer is programmable, the AI could probably program itself to do whatever humans are doing and can do.


Title: Re: Machines and money
Post by: n2004al on October 25, 2015, 09:42:12 AM
For the first part in bold: Development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: It will still be only a machine, so it will always have a key in its "mind", well protected and not accessible to the machine itself, which will stop everything the moment the machine could become dangerous to humankind.

The automation of mechanical work that is done by people could render them jobless. Companies would certainly prefer machines to do the job rather than paying humans: less costly, because you don't have to pay people constantly, only the maintenance costs for each machine.

There is a possibility that an AI would be sentient; given that a computer is programmable, the AI could probably program itself to do whatever humans are doing and can do.

Those jobless people could find new workplaces created by the very development that left them jobless. But they need to adapt (to learn). If they do so, they will find another workplace better than the first, precisely because development brings better kinds of workplaces over time. Before the computer was invented, most people worked in factories or in agriculture with their hands. The invention of the computer created millions of new workplaces for programmers, developers and auxiliary staff, who work in much more comfortable conditions than their parents did before them.

If the jobless do not adapt, they will remain jobless forever. But that is not the fault of development or of the "automation of mechanical works/jobs". The latter is only a part of development and does not represent it; it arrived after many other developments in other fields, which created the new kinds of workplaces mentioned in the first paragraph of this post.

The second part of the quoted post is questionable (to me). Whether such a quality can be created in a machine, I don't know. If it can, it will be a big success. Having feelings is a prerogative of the human being, so it is very difficult to create that quality (the real thing, not a substitute for it) within a machine. But everything may become possible with technological development.

But if this achievement is understood as a risk for human beings, my answer is: never. There is not, there was not and there will never be any kind of thing invented, created and materialized by humankind, whatever it may be and however it may be built, whatever intelligence the human mind gives it, that will be able to destroy or even to put at risk the existence of the human being. Not AI, nor AI multiplied by "n" where "n" can take the value "infinite". No new qualitative thing can be outside the control of the human brain; the whole history of development testifies to this, and it cannot be otherwise in the future. Every new qualitative thing created by the human brain will always carry an "off" key within it, controlled by humans, who will always be able to shut down anything they have brought into "life" if it becomes necessary.


Title: Re: Machines and money
Post by: anthonycamp on October 25, 2015, 09:44:56 AM
All machines are meant to serve man, and a machine may be intelligent for some things but defective for its purpose; then it must be fixed in order to do intelligently the work it was made for. Not by robots, of course.


Title: Re: Machines and money
Post by: kneim on October 25, 2015, 04:47:05 PM
Can a computer program be more intelligent than its creator? No, of course it cannot. Because then the computer program would overtake us; at that point mankind would have lost its meaning of life completely, and would die. The machines too.

This could indeed happen, though in a different way than you think. It is not that the machines will become as intelligent as mankind; the opposite is true: mankind is becoming as stupid as computers. As Albert Einstein told us: "Everything is relative".


Title: Re: Machines and money
Post by: Betwrong on October 25, 2015, 05:23:08 PM
Can a computer program be more intelligent than its creator? No, of course it cannot. Because then the computer program would overtake us; at that point mankind would have lost its meaning of life completely, and would die. The machines too.

This could indeed happen, though in a different way than you think. It is not that the machines will become as intelligent as mankind; the opposite is true: mankind is becoming as stupid as computers. As Albert Einstein told us: "Everything is relative".

Fortunately, not all humans are becoming as stupid as computers. From the beginning of time there have always been some who were wise. The question, though, is whether the rest of humankind will listen to what those wise people are saying.


Title: Re: Machines and money
Post by: thejaytiesto on October 25, 2015, 05:51:17 PM
We don't need any new technologies to automate 50% of current jobs, and that alone would destroy the current economic system, because unemployment would be insane and unsustainable unless all the unemployed people without any other means of income were put on welfare. That's how you would trigger massive riots everywhere: unsustainable unemployment while not giving people basic resources.


Title: Re: Machines and money
Post by: n2004al on October 26, 2015, 09:56:36 AM
We don't need any new technologies to automate 50% of current jobs, and that alone would destroy the current economic system, because unemployment would be insane and unsustainable unless all the unemployed people without any other means of income were put on welfare. That's how you would trigger massive riots everywhere: unsustainable unemployment while not giving people basic resources.

Sure. It is better to do everything with our hands and without our heads, like a thousand years ago. Technology hurts everyone and brings only unemployment. For example, the invention of the computer (to name one) left millions of developers, programmers and other auxiliary staff without work. The invention of railways made it possible to punish millions and millions of people by carrying them faster to their destinations instead of letting them walk. The invention of the internet made it possible to punish millions and millions of people with much faster connections to each other, with unlimited possibilities of knowledge, and even with the creation of something as wrong and dirty as bitcoin. That last one is a bigger punishment than the internet itself: being the first materialization of another big technology (peer-to-peer), it is an invention that punishes its users twice over, a product of two invented technologies (internet and peer-to-peer). Who knows how many people will be left jobless by this new technology...  ???  First of all the "producers" of bitcoin; they had big losses from this invention. Who knows what will happen next with these damned new technologies?  :o


Title: Re: Machines and money
Post by: TTMNewsMJ on October 28, 2015, 11:06:41 AM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.
What's AI??
Why are they trying to develop this AI??


Title: Re: Machines and money
Post by: Denker on October 28, 2015, 11:16:44 AM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.
What's AI??
Why are they trying to develop this AI??

AI = Artificial Intelligence
Why is this being developed? Because it will have several benefits in different fields and industries. For instance, it can take the heat off people in various jobs and positions. But the risks shouldn't be underestimated or downplayed. If AI is developed far enough, it might replace humans. Then it becomes a threat.


Title: Re: Machines and money
Post by: kneim on October 28, 2015, 11:47:49 AM
If AI is developed far enough, it might replace humans. Then it becomes a threat.

It will never replace me. I'm stupid and chaotic enough that no algo can ever replace me.

I'm a computer admin, but nevertheless I cannot say what my computer is calculating, or why. I only see the screen, and it seems valid.

The reality is that "Homo Oeconomicus" will not survive.


Title: Re: Machines and money
Post by: CoinHeavy on November 24, 2015, 09:14:32 AM
Has anyone considered that Bitcoin may well have been developed by an artificial intelligence and that we are, in fact, already post-singularity?

What better way to make the jump from computational sentience towards crafting the real, physical world at will than by inventing math-money?

It may sound absurd prima facie but it is, nonetheless, a curious thought experiment.

With the advent of DACs, algorithms are already beginning to compete in earnest to be the robot king of the capitalist mountain.  Neat.


Title: Re: Machines and money
Post by: equator on November 24, 2015, 09:49:59 AM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.
What's AI??
Why are they trying to develop this AI??

AI = Artificial Intelligence
Why is this being developed? Because it will have several benefits in different fields and industries. For instance, it can take the heat off people in various jobs and positions. But the risks shouldn't be underestimated or downplayed. If AI is developed far enough, it might replace humans. Then it becomes a threat.

To some extent what you said is right, but one point to be noted is that all of this is created by humans, and humans know how to use it and to what extent to give power to this AI. Whatever development is done to AI, it cannot replace humans.


Title: Re: Machines and money
Post by: LuckyYOU on November 24, 2015, 02:08:04 PM
AIs are made by humans; they only do what people program them to do.

Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.

In the last 5-10 years we humans have been replaced quite a bit; a lot has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.


Title: Re: Machines and money
Post by: Slark on November 24, 2015, 03:13:55 PM
AIs are made by humans; they only do what people program them to do.

Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.

In the last 5-10 years we humans have been replaced quite a bit; a lot has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn.
What will happen then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.


Title: Re: Machines and money
Post by: michietn94 on December 09, 2015, 12:45:15 AM
It is understandable that you would imagine things like that, but I don't think it is a real danger.

If that happened, maybe we would go back to the era without the internet. It would set the technology era back, but it can't make people extinct.

Worst come to worst, we can't do global transactions, that's all.


Title: Re: Machines and money
Post by: neochiny on December 09, 2015, 04:01:59 PM
AIs are made by humans; they only do what people program them to do.

Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.

In the last 5-10 years we humans have been replaced quite a bit; a lot has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn.
What will happen then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.

A self-learning AI is quite dangerous; if it's able to understand
human feelings, that will be the start of the threat. But to be honest,
if humans make a sentient AI, I'm pretty sure they will put in a safety
measure, like a fail-safe if it starts to do something suspicious.


Title: Re: Machines and money
Post by: n2004al on December 10, 2015, 04:52:39 PM
AIs are made by humans; they only do what people program them to do.

Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.

In the last 5-10 years we humans have been replaced quite a bit; a lot has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn.
What will happen then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.

A self-learning AI is quite dangerous; if it's able to understand
human feelings, that will be the start of the threat. But to be honest,
if humans make a sentient AI, I'm pretty sure they will put in a safety
measure, like a fail-safe if it starts to do something suspicious.

There are two radically different positions in your post. First you say that "a self-learning AI is quite dangerous; if it's able to understand human feelings, that will be the start of the threat." So, reading this sentence, an AI that can think and feel is dangerous. Then you do a 180-degree turnabout, saying "but to be honest, if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe if it starts to do something suspicious." Reading those words, one understands that an AI cannot be dangerous, because humans will think to make it safe and unable to harm humankind. So what is your opinion? Is AI dangerous or not? It seems that according to you it is not (reading the end of your post). The question that "haunts" me is: if you know or understand that an AI cannot be dangerous for humankind, as you write in your definitive last sentence, why say at the beginning that an AI "IS QUITE DANGEROUS" (so is FOR SURE dangerous) and not that it "MAY BE DANGEROUS"? Why do you first present one conviction and then another totally opposite to the first? I cannot understand this kind of expression.


Title: Re: Machines and money
Post by: virtualx on December 10, 2015, 05:16:26 PM
AIs are made by humans; they only do what people program them to do.

Questionable. There are self learning machines which may do things that the programmers did not teach them. These machines are at a very basic stage, but who knows what's possible in 20 years.

Unless someone makes an AI that's programmed to replace all humans (which I doubt) you shouldn't be too worried about it.

All humans is an impossible task because humans will simply not want a robotic dentist.  :)


Title: Re: Machines and money
Post by: ridery99 on December 10, 2015, 05:16:50 PM
In the final days of mankind, machines will rise and make more money than ever imagined.


Title: Re: Machines and money
Post by: Amph on December 10, 2015, 06:38:14 PM
i was thinking that in a very distant future, if machines could mine bitcoin by themselves without humans, it could help the decentralization aspect of the network

those would be very advanced machines that need no maintenance of any kind; they would upgrade their own software with a hard-coded algo and stuff like that


Title: Re: Machines and money
Post by: neochiny on December 10, 2015, 08:13:22 PM
AIs are made by humans; they only do what people program them to do.

Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.

In the last 5-10 years we humans have been replaced quite a bit; a lot has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn.
What will happen then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.

A self-learning AI is quite dangerous; if it's able to understand
human feelings, that will be the start of the threat. But to be honest,
if humans make a sentient AI, I'm pretty sure they will put in a safety
measure, like a fail-safe if it starts to do something suspicious.

There are two radically different positions in your post. First you say that "a self-learning AI is quite dangerous; if it's able to understand human feelings, that will be the start of the threat." So, reading this sentence, an AI that can think and feel is dangerous. Then you do a 180-degree turnabout, saying "but to be honest, if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe if it starts to do something suspicious." Reading those words, one understands that an AI cannot be dangerous, because humans will think to make it safe and unable to harm humankind. So what is your opinion? Is AI dangerous or not? It seems that according to you it is not (reading the end of your post). The question that "haunts" me is: if you know or understand that an AI cannot be dangerous for humankind, as you write in your definitive last sentence, why say at the beginning that an AI "IS QUITE DANGEROUS" (so is FOR SURE dangerous) and not that it "MAY BE DANGEROUS"? Why do you first present one conviction and then another totally opposite to the first? I cannot understand this kind of expression.

Because an AI can be corrupted, and other people can change the fail-safe
that has been put into it. That's why I said a self-learning AI is quite dangerous
at the beginning.

I'm sorry if my comment above was confusing.


Title: Re: Machines and money
Post by: Mickeyb on December 10, 2015, 09:32:36 PM
i was thinking that in a very distant future, if machines could mine bitcoin by themselves without humans, it could help the decentralization aspect of the network

those would be very advanced machines that need no maintenance of any kind; they would upgrade their own software with a hard-coded algo and stuff like that

I really hope this never happens and we never get to see machines like this. If it came true, I somehow think we humans would very soon become an endangered species.

Let Bitcoin stay a bit more centralized instead of this, please! :)


Title: Re: Machines and money
Post by: suda123 on December 11, 2015, 07:18:17 AM
We don't need automated workers like robots for doing some jobs, because they will create unemployment and multiply uncontrollably. :P

But we know the cost of hiring humans is higher than hiring a machine or the like, so many companies prefer to use machines. :D

I'm thinking they are just going to pay humans a lot, lot less.


Title: Re: Machines and money
Post by: cbeast on December 28, 2015, 01:32:22 PM
If machines were self-aware, would they value life? Natural selection created strong family bonds in most complex organisms over billions of years; the bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's, and our, caretakers. In that case, they might use money to motivate humans to reach a higher potential.


Title: Re: Machines and money
Post by: BTCBinary on December 29, 2015, 02:21:58 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

Contrary to your opinion, I believe that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!


Title: Re: Machines and money
Post by: cbeast on December 31, 2015, 01:17:21 PM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw (http://on.ft.com/1zSz2tw)

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

Contrary to your opinion, I believe that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!
Probably for the same reason countries make their own separate monies. If machines were hostile to humans, humans would not use their money.


Title: Re: Machines and money
Post by: deisik on December 31, 2015, 02:38:30 PM
If machines were self-aware, would they value life? Natural selection created strong family bonds in most complex organisms over billions of years; the bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's, and our, caretakers. In that case, they might use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical (https://en.wikipedia.org/wiki/Philosophical_zombie) zombie: that is, a creature that is self-aware but absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question.


Title: Re: Machines and money
Post by: cbeast on January 02, 2016, 09:28:41 AM
If machines were self-aware, would they value life? Natural selection created strong family bonds in most complex organisms over billions of years; the bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's, and our, caretakers. In that case, they might use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical (https://en.wikipedia.org/wiki/Philosophical_zombie) zombie: that is, a creature that is self-aware but absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question.
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.


Title: Re: Machines and money
Post by: deisik on January 02, 2016, 09:53:38 AM
If machines were self-aware, would they value life? Natural selection created strong family bonds in most complex organisms over billions of years; the bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's, and our, caretakers. In that case, they might use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical (https://en.wikipedia.org/wiki/Philosophical_zombie) zombie: that is, a creature that is self-aware but absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question.
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.

They would not interact unless you put into them the necessity (or desire) to interact, whether freely or by obligation. Likewise, you would have to install in them a scale of values (or the conditions for developing one), either directly or implicitly...

Therefore, they won't evolve any form of morality all by themselves.


Title: Re: Machines and money
Post by: cbeast on January 04, 2016, 01:36:15 AM
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years; those bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In that case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical (https://en.wikipedia.org/wiki/Philosophical_zombie) zombie. That is, a creature that is self-aware but absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality

They would not interact unless you put into them the necessity (or desire) to interact, whether freely or by obligation. Likewise, you will have to install in them a scale of values (or the conditions for developing one), either directly or implicitly...

Therefore, they won't evolve any form of morality all by themselves
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and create one's own scale of values.


Title: Re: Machines and money
Post by: ObscureBean on January 04, 2016, 06:14:32 AM
Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. Although if it did happen, it would look as though it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent, and completely unrelated to its source/giver of life.


Title: Re: Machines and money
Post by: deisik on January 04, 2016, 07:28:08 AM
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and create one's own scale of values.

In other words, you're saying that if a human child were left alone (provided it was somehow being fed), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely it couldn't think the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state: a state of pure consciousness devoid of any thought or idea of who you are


Title: Re: Machines and money
Post by: cbeast on January 04, 2016, 08:55:37 AM
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and create one's own scale of values.

In other words, you're saying that if a human child were left alone (provided it was somehow being fed), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely it couldn't think the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state: a state of pure consciousness devoid of any thought or idea of who you are
A human child would die alone. If it were in some sort of "The Matrix"-style life-support system that simply monitored the autonomic nervous system and metabolism, it would never develop any sort of sentience that could be measured behaviorally.


Title: Re: Machines and money
Post by: Amph on January 04, 2016, 09:08:47 AM
Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. Although if it did happen, it would look as though it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent, and completely unrelated to its source/giver of life.

How can they choose if they have no consciousness like ours? Is a machine with consciousness even possible to create?

They should begin by developing machines that can maintain themselves, like a Bitcoin client that can identify its own weaknesses and fix them automatically, upgrading itself each time without the need for any human.
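The self-maintaining client idea can be sketched as a toy update loop. This is only an illustration of the concept, not how any real Bitcoin client works; every name here (`should_upgrade`, `self_update`, the version numbers) is hypothetical:

```python
import hashlib

def should_upgrade(current_version: int, advertised_version: int) -> bool:
    """Upgrade only when a strictly newer release is advertised."""
    return advertised_version > current_version

def verify_payload(payload: bytes, expected_sha256: str) -> bool:
    """Refuse any update whose hash does not match the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def self_update(state: dict, advertised_version: int,
                payload: bytes, expected_sha256: str) -> dict:
    """Return the client state after one unattended update cycle."""
    if should_upgrade(state["version"], advertised_version) and \
            verify_payload(payload, expected_sha256):
        return {"version": advertised_version, "code": payload}
    return state  # keep running the old version on any mismatch

# One unattended cycle: the client sees release 2, checks its hash, upgrades.
state = {"version": 1, "code": b"v1"}
new_payload = b"v2"
digest = hashlib.sha256(new_payload).hexdigest()
state = self_update(state, 2, new_payload, digest)
print(state["version"])  # 2
```

Even this toy shows why "without any human" is the hard part: someone still has to publish the release and its digest, so the trust question just moves one level up.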


Title: Re: Machines and money
Post by: deisik on January 04, 2016, 09:53:12 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence


Title: Re: Machines and money
Post by: cbeast on January 05, 2016, 05:13:55 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.


Title: Re: Machines and money
Post by: Yakamoto on January 05, 2016, 05:23:46 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
I'm agreeing with you. Isn't self-awareness quite literally just the state in which you are aware that "you", as a biological or mechanical being, exist and occupy space? I never thought you could measure it; I thought it was a true/false state.

Am I missing some important bits of the argument about self-awareness? It isn't something I've studied in depth, so I do not know.


Title: Re: Machines and money
Post by: USB-S on January 05, 2016, 07:31:04 AM
In before we create a supercomputer. We hard-code into the system that the only reason for its existence is to make our lives easier. It's a self-developing program that can improve itself as time passes, calculating the future and whatnot. We boot that fucker up and it doesn't start, because it has calculated that once the machine starts, our lives will become not easier but harder. Or even if we get it up and running, the machine will self-destruct after a while for the same reason: it calculated that it would harm the human race more than benefit it. Is this even a possibility? I didn't really know where else to post my thought.


Title: Re: Machines and money
Post by: deisik on January 05, 2016, 07:44:36 AM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question of whether a human child that was left alone would still possess self-awareness. I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is, or whether it is present, until (and unless) we can somehow "measure" it. But what if we cannot "measure" it in principle? Does that make the issue less relevant?

See the concept of a philosophical zombie


Title: Re: Machines and money
Post by: cbeast on January 05, 2016, 12:27:31 PM
A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question of whether a human child that was left alone would still possess self-awareness. I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is, or whether it is present, until (and unless) we can somehow "measure" it. But what if we cannot "measure" it in principle? Does that make the issue less relevant?

See the concept of a philosophical zombie
It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified. It just doesn't make for a very interesting discussion. You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite.

I really try to avoid these types of discussions and would rather keep to the original topic of machines and money. If a machine cannot interact with the outside world, there would be no use for that world's money. It could simply make its own secret money if it so wanted.


Title: Re: Machines and money
Post by: deisik on January 05, 2016, 01:27:36 PM
It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified

What about things that cease to exist when you try to measure them? Is that failure to measure enough to declare that such things don't exist, or can't possibly exist? It may well happen that self-awareness is entirely subjective, that is, not susceptible to "measurement" (whatever you may mean by that). And so what?

Could we at least try to handle this, or should we just walk away?


Title: Re: Machines and money
Post by: deisik on January 05, 2016, 01:33:00 PM
It just doesn't make for a very interesting discussion

No one is forcing you to continue. After all, it was you who asked whether a self-aware machine would value life. The question is inconsequential to the concept of self-awareness per se (i.e. a specific answer entirely depends on other factors), though the concept of value as such is evidently inseparable from it


Title: Re: Machines and money
Post by: deisik on January 05, 2016, 02:15:03 PM
You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposed to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your position is exposed to the same falsifiability objection to absolutely the same degree


Title: Re: Machines and money
Post by: cbeast on January 06, 2016, 08:24:15 AM
You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposed to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your position is exposed to the same falsifiability objection to absolutely the same degree
I take exception to human experimentation and find the argument as distasteful as it is irrelevant. I read about philosophical zombies and could claim you are one without resorting to ad hominem; there would simply be no point in doing so. And again, I reject the notion of the philosophical zombie anyway. Are you the author of the Wikipedia article? I have lengthy opinions about what actually comprises sentience, but they are also not relevant to this discussion. We'll just have to agree to disagree in our opinions about sentience, since it is all admittedly hypothetical anyway.