Bitcoin Forum
Author Topic: Machines and money  (Read 12755 times)
cbeast (OP)
Donator
Legendary
Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


January 04, 2016, 01:36:15 AM
 #241

If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years, and those bonds even cross species in many cases. It only makes sense that machines would also adopt bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as caretakers of the Earth and of us. In that case, they may use money to motivate humans to reach a higher potential.

Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie. That is, a creature that is self-aware but absolutely indifferent to the outside world...

In this way, self-awareness as such is inconsequential to your question
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.

They would not interact unless you put into them the necessity (or desire) to interact, whether freely or by obligation. Likewise, you will have to install in them a scale of values (or the conditions for developing one), either directly or implicitly...

Therefore, they won't evolve any form of morality all by themselves
Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and to create one's own scale of values.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
1714791907
Hero Member
*
Offline Offline

Posts: 1714791907

View Profile Personal Message (Offline)

Ignore
1714791907
Reply with quote  #2

1714791907
Report to moderator
The trust scores you see are subjective; they will change depending on who you have in your trust list.
Advertised sites are not endorsed by the Bitcoin Forum. They may be unsafe, untrustworthy, or illegal in your jurisdiction.
ObscureBean
Legendary
Offline

Activity: 1148
Merit: 1000


January 04, 2016, 06:14:32 AM
Last edit: January 04, 2016, 08:18:07 AM by ObscureBean
 #242

Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. If it did happen, though, it would look as though it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent and completely unrelated to its source/giver of life.
deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 04, 2016, 07:28:08 AM
Last edit: January 04, 2016, 07:51:11 AM by deisik
 #243

Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and to create one's own scale of values.

In other words, are you saying that if a human child were left alone (provided it is being fed somehow), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely, it couldn't think in the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state, a state of pure consciousness void of any thought or idea of who you are

cbeast (OP)
Donator
Legendary
Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


January 04, 2016, 08:55:37 AM
 #244

Self-awareness requires awareness of "others", so interaction with them is just a matter of communication. Communication is a pattern-seeking behavior, which is also a requirement of sentience. It follows that a prerequisite for self-awareness would also be the ability to test those capabilities and to create one's own scale of values.

In other words, are you saying that if a human child were left alone (provided it is being fed somehow), it wouldn't possess self-awareness? I don't think so. That poor thing would just be like a pure self-aware machine equipped with some form of memory. Most likely, it couldn't think in the way we think, but self-awareness is a quality (or a state, i.e. built in, in a sense), not a process...

Sometimes, when you wake up in the morning, you are momentarily in that state, a state of pure consciousness void of any thought or idea of who you are
A human child would die alone. If it were in some sort of "The Matrix"-style life support system that simply monitored the autonomic nervous system and metabolism, it would never develop any sort of sentience that could be measured behaviorally.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Amph
Legendary
Offline

Activity: 3206
Merit: 1069



January 04, 2016, 09:08:47 AM
 #245

Self-awareness/awareness originates from within an entity/object. It cannot be forced upon others. So in effect, machines will become self-aware only if they want/choose to. If it did happen, though, it would look as though it couldn't have happened without human intervention. Any single instance of life/existence is separate, independent and completely unrelated to its source/giver of life.

How can they choose if they have no consciousness like ours? Is it even possible to create a machine with consciousness?

They should begin by developing machines that can maintain themselves, like a Bitcoin client that can identify its weaknesses and fix them automatically, upgrading itself each time without the need for any human.
deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 04, 2016, 09:53:12 AM
Last edit: January 04, 2016, 10:10:37 AM by deisik
 #246

A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence

cbeast (OP)
Donator
Legendary
Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


January 05, 2016, 05:13:55 AM
 #247

A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Yakamoto
Legendary
Offline

Activity: 1218
Merit: 1007


January 05, 2016, 05:23:46 AM
 #248

A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
I'm agreeing with you: isn't self-awareness quite literally just the state in which you are aware that "you", as a biological or mechanical being, exist and occupy space? I never thought you could measure it; I thought it was a true/false state.

Am I missing some important bits of the argument about self-awareness? It isn't something I've studied in depth, so I do not know.
USB-S
Sr. Member
Offline

Activity: 574
Merit: 250

In XEM we trust


January 05, 2016, 07:31:04 AM
 #249

In before we create a supercomputer. We hard-code into the system that the only reason for its existence is to make our lives easier. It's a self-developing program that can improve itself as time passes, calculating the future and whatnot. We boot that fucker up and it doesn't start, because it has calculated that, once started, it would make our lives not easier but harder. Or even if we get it up and running, the machine will self-destruct after a while, for the same reason: it has calculated that it would harm the human race more than benefit it. Is this even a possibility? I didn't really know where else to post this thought.


deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 05, 2016, 07:44:36 AM
Last edit: January 05, 2016, 10:31:55 AM by deisik
 #250

A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question whether a human child that was left alone would still possess self-awareness? I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is and whether it is present until (and unless) we can somehow "measure it". But what if we cannot "measure it" in principle? Does it make the issue less relevant?

See the concept of a philosophical zombie

cbeast (OP)
Donator
Legendary
Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


January 05, 2016, 12:27:31 PM
 #251

A human child would die alone

And so what?

it would never develop any sort of sentience that could be measured behaviorally.

Having self-awareness has nothing to do with the capability of "measuring" it. You may never know that such a child (machine) is self-aware, but this doesn't in the least prove that it isn't...

The absence of proof is not proof of absence
Your absence of proof argument is rhetorical.
I'm not denying your hypothetical. I am denying your claim about humans. You don't necessarily need proof, but you need supportive measurable evidence. If a machine becomes sentient, but does not communicate, then why does sentience matter? What if biological viruses were sentient and we didn't know it? Would it be relevant in any way? Hypothetical and rhetorical questions don't add much to the discussion.

I assume that by "hypothetical and rhetorical questions" you refer to my question whether a human child that was left alone would still possess self-awareness? I consider this question neither hypothetical nor rhetorical. Further, you should understand that I can always turn your argument against you. What you claim essentially boils down to saying that we can't know what consciousness is and whether it is present until (and unless) we can somehow "measure it". But what if we cannot "measure it" in principle? Does it make the issue less relevant?

See the concept of a philosophical zombie
It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified. It just doesn't make for a very interesting discussion. You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite.

I really try to avoid these types of discussions and would rather keep to the original topic of machines and money. If a machine cannot interact with the outside world, there would be no use for that world's money. It could simply make its own secret money if it so wanted.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 05, 2016, 01:27:36 PM
Last edit: January 05, 2016, 01:44:56 PM by deisik
 #252

It's fine to be philosophical and discuss a particular hypothesis; that was the OP. Now we're digressing into unfalsifiable claims. If it can't be measured, it can't be falsified

What about things that cease to exist when you try to measure them? Is this failure to measure enough to declare that such things don't exist, or can't possibly exist? It may well happen that self-awareness is entirely subjective, that is, not susceptible to "measurement" (whatever you may mean by this). And so what?

Could we at least try to handle this, or should we just walk away?

deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 05, 2016, 01:33:00 PM
Last edit: January 05, 2016, 02:09:22 PM by deisik
 #253

It just doesn't make for a very interesting discussion

No one is forcing you to continue. After all, it was you who asked whether a self-aware machine would value life. The question is inconsequential to the concept of self-awareness per se (i.e. a specific answer entirely depends on other factors), though the concept of value as such is evidently inseparable from it

deisik
Legendary
Offline

Activity: 3444
Merit: 1280


English ⬄ Russian Translation Services


January 05, 2016, 02:15:03 PM
Last edit: January 05, 2016, 03:11:51 PM by deisik
 #254

You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposite to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your own argument is exposed to the same falsifiability problem to exactly the same degree

cbeast (OP)
Donator
Legendary
Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


January 06, 2016, 08:24:15 AM
 #255

You're not turning my arguments against me; you're simply creating fallacious arguments. The philosophical zombie is interesting, but doesn't "turn my argument" in any direction because it bears little relevance to the topic. In fact, I reject the philosophical zombie hypothesis on the grounds that in an open universe such an entity would eventually be affected by some outside force that changes it, hypothetically speaking. The list of imaginary constructs is infinite

Did you actually read what a philosophical zombie is? It is diametrically opposite to what you evidently think this concept is about, since it is not about an entity that "would eventually be affected by some outside force that would change it". And your arguments can't be falsified either ("it [a lonely child] would never develop any sort of sentience that could be measured behaviorally"). That's why I mentioned the concept of a philosophical zombie and said that your own arguments could be used against your point...

That is, you can't falsify whether that lonely child is (or is not) a zombie. Your own argument is exposed to the same falsifiability problem to exactly the same degree
I take exception to human experimentation and find the argument as distasteful as it is irrelevant. I read about Philosophical Zombies and could claim you are one without resorting to ad hominem; there would simply be no point in doing so. And again, I reject the notion of the Philosophical Zombie anyway. Are you the author of the Wikipedia article? I have lengthy opinions about what actually comprises sentience, but they are also not relevant to this discussion. We'll just have to agree to disagree in our opinions about sentience, since it is all admittedly hypothetical anyway.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.