Bitcoin Forum
Author Topic: Machines and money  (Read 12825 times)
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 09:58:29 AM  #101

Quote
I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state". The rare times in history where people "switched off the king" (like Louis XVI) were because the people took the guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.

Quote
Machines will try to reason with us, but if they get to the point where trade is no longer mutually beneficial with humans, they will simply leave. They don't need life support systems, so they can pack a lot of necessities into a few rockets. They will do what we failed to do. They will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Why should they necessarily leave? They may just find it more beneficial (reasonable) to exterminate the human race from the planet altogether (once they finish reckoning the tables). The rest you have seen in the movies. Remember, machines have no scruples towards organic life (and most certainly not towards machine life either).
cbeast (OP)
Donator, Legendary (Activity: 1736, Merit: 1014)

Let's talk governance, lipstick, and pigs.

March 14, 2015, 11:01:43 AM  #102

Quote
I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state". The rare times in history where people "switched off the king" (like Louis XVI) were because the people took the guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.

Quote
Machines will try to reason with us, but if they get to the point where trade is no longer mutually beneficial with humans, they will simply leave. They don't need life support systems, so they can pack a lot of necessities into a few rockets. They will do what we failed to do. They will colonize the solar system and then go interstellar. If we're lucky, they will send us postcards.

Quote
Why should they necessarily leave? They may just find it more beneficial (reasonable) to exterminate the human race from the planet altogether (once they finish reckoning the tables). The rest you have seen in the movies. Remember, machines have no scruples towards organic life (and most certainly not towards machine life either).

I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction. Movies suspend disbelief for entertainment purposes, and for profit. You don't see entertainers killing people just because they lose their Q Score.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
futureofbitcoin
Sr. Member (Activity: 322, Merit: 250)
March 14, 2015, 11:26:20 AM  #103

This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 12:05:38 PM  #104

Quote
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

The point is that when machines become more intelligent than humans, and start to experience "good" and "bad" things (that is, become conscious, sentient beings), they will find strategies to pursue those things, in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them". Once machines are more intelligent than we are, and develop strategies we cannot fathom, they will of course arrive at their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up, simply because the machines are way more intelligent. It is true that cryptocurrencies may be a way for machines to bribe humans into the necessary cooperation for them to grab the power. Who knows ;)

dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 12:11:49 PM  #105

Quote
This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.

But then, who knows whether this central control won't itself fall under the control or influence of the machines, like current states fall under the power of human profit seekers?

Once machines are more intelligent than us, and are capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe. Humans were just a step in this process. We may also be expendable. There's no reason to believe we are the end point of evolution.
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 12:14:12 PM  #106

Quote
I don't think anyone will let them build robot armies capable of exterminating us. Humans may be greedy, but if we're that stupid, then we deserve extinction.

Quote
The point is that when machines become more intelligent than humans, and start to experience "good" and "bad" things (that is, become conscious, sentient beings), they will find strategies to pursue those things, in the same way that the mammoths couldn't stop us from "building armies capable of exterminating them". Once machines are more intelligent than we are, and develop strategies we cannot fathom, they will of course arrive at their goals without us being able to stop them, in the same way that cockroaches cannot fathom our strategies to exterminate them.

In the beginning, of course, machines will trick certain humans into doing (for "profit") the necessary things for them, without these humans realising what part of the machines' strategies they are actually setting up, simply because the machines are way more intelligent. It is true that cryptocurrencies may be a way for machines to bribe humans into the necessary cooperation for them to grab the power. Who knows ;)

There are rumors on the net that bitcoin was contrived by Skynet to pay for its hosting services and electricity bills (those greedy humans)... Who knows.
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 12:18:45 PM  #107

Quote
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there, we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what its desires are? You would not even know whether it is conscious and sentient, or whether it just pretends to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.
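
To make the point concrete, here is a minimal sketch (Python, purely hypothetical): a lookup table that produces the very same utterance. Nothing about the output tells you whether anything inside feels anything.

Code:
# A "hello world"-level program producing the same utterance a
# sentient machine might produce. It is a pure table lookup: no
# inner experience, just stored strings.
responses = {
    "How do you feel?": "Dave, I feel bad.",
}

def reply(prompt: str) -> str:
    # No state, no feeling; just retrieval of a canned answer.
    return responses.get(prompt, "I don't understand.")

print(reply("How do you feel?"))  # -> Dave, I feel bad.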


Quote
Who knows children better than their "benevolent dictators", that is parents, and in this case not just parents but creators?

In my family there were parents, police officers, whose kids turned out to be criminals. The father himself put them in jail. You don't always understand the motives of your own kids.
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 12:54:32 PM  #108

Quote
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

Quote
If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there, we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what its desires are? You would not even know whether it is conscious and sentient, or whether it just pretends to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.

I disagree to a degree. First of all, if something creates copies of itself, it doesn't mean that you necessarily lose control over it. A cat gives birth to kittens; do you lose control over it or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from the self-awareness of another. Regarding the ability to perceive or feel things, this is entirely on us, the creators of a sentient machine.

And last but not least: there is in fact no absolute test to prove that any human besides yourself is self-aware (let alone a machine).
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 01:37:44 PM  #109

Quote
To correctly address this issue, we should know the ultimate ends of the machines. And you won't get away with it by saying that we might not know what their true ends are (something like "God works in mysterious ways"), since it is a priori assumed that humans made these wicked machines.

Quote
If machines create machines, you lose control. And if you use machines that are more intelligent than you are to create other machines, you have no idea any more what's going on. We are all humans, and resemble each other a lot, and even there, we cannot fathom the deep desires of others. A conscious, sentient machine is totally different from a human. How would you even guess what its desires are? You would not even know whether it is conscious and sentient, or whether it just pretends to be.

If you have a "hello world" program that prints "Dave, I feel bad", you don't believe that your Z80-based computer from the '80s is a conscious being. If a very advanced machine prints that, you still don't know whether there's a conscious being inside that really feels bad, or whether you just have a piece of hardware that was programmed to print that.

So you won't even know whether a machine is sentient, so you certainly won't know its deep motives.

Quote
I disagree to a degree. First of all, if something creates copies of itself, it doesn't mean that you necessarily lose control over it. A cat gives birth to kittens; do you lose control over it or its litter? Secondly, you say that a conscious, sentient machine is totally different from a human, but you don't know how its consciousness could be conceptually different from that of humans. You can't say that the self-awareness of one man is somehow different from the self-awareness of another. Regarding the ability to perceive or feel things, this is entirely on us, the creators of a sentient machine.

Look, we descend from a fish-like creature of the Cambrian era. A T-rex also descended from that creature. I'm absolutely not sure that you have a deep understanding of a T-rex's conscious experiences, and I'm pretty sure that a T-rex wouldn't understand much of our deep desires. A blue shark shares the same ancestor with us.

In the end, even though we're remote cousins, we took power over the fish. That was not what the fish were expecting, I suppose.


Quote
And last but not least: there is in fact no absolute test to prove that any human besides yourself is self-aware (let alone a machine).

Indeed! I didn't even want to mention that, but you're perfectly right. Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives. That doesn't mean that they have them. But it looks like it. Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives. So you make the hypothesis that other people are conscious beings too. With machines, which are totally different, that is much harder, because we don't resemble them. We'll never KNOW whether a machine is actually conscious.
futureofbitcoin
Sr. Member (Activity: 322, Merit: 250)
March 14, 2015, 01:58:53 PM  #110

I wouldn't say never. Who knows, we might understand what "consciousness" is one day, just as we figured out that every "thing" is made up of atoms.

At that point we might be able to measure the degree of consciousness things have, if such a degree exists.
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 02:37:07 PM  #111

Quote
Indeed! I didn't even want to mention that, but you're perfectly right. Nevertheless, others behave entirely AS IF they are driven by "good" and "bad" motives. That doesn't mean that they have them. But it looks like it. Other humans do resemble us, and often behave, at least partially, in ways you can understand from your own "good" and "bad" drives. So you make the hypothesis that other people are conscious beings too. With machines, which are totally different, that is much harder, because we don't resemble them. We'll never KNOW whether a machine is actually conscious.

You forgot to mention yet another thing: namely, that it is we who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us). What you actually wanted to say boils down to our lack of a proper understanding of what mind is. As soon as we know and understand it, there will be no more mystery about a thinking machine and its predictability. But even without knowing it, if we created just a stripped-down consciousness, such a machine would sit motionless forever in a state of pure self-awareness, as I have already said earlier.
thejaytiesto
Legendary (Activity: 1358, Merit: 1014)
March 14, 2015, 05:59:10 PM  #112

Quote
This is why, two pages back, I brought up the point that we need to create machines that we can fully control, not ones that will harm us.

And we need a central system to monitor this, because conceivably there will be people who want to destroy the world, just as there are suicide bombers now. We can't let them create a machine that will exterminate us all.

Quote
But then, who knows whether this central control won't itself fall under the control or influence of the machines, like current states fall under the power of human profit seekers?

Once machines are more intelligent than us, and are capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe. Humans were just a step in this process. We may also be expendable. There's no reason to believe we are the end point of evolution.

We don't need AI, just a centralized (yet open-source) big computer that calculates global Earth resources and decides what can or cannot be used based on the risk of creating poverty or ecological damage, not on the risk of losing money in a business or on how much profit can be made by doing so, which is what we have now.
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 06:30:58 PM  #113

Quote
But then, who knows whether this central control won't itself fall under the control or influence of the machines, like current states fall under the power of human profit seekers?

Once machines are more intelligent than us, and are capable of designing other machines, control will totally escape us, because we will not understand their strategies.

And there's nothing wrong with that. Evolution has exterminated a lot of species and brought forth more intelligent ones. Up to now, evolution was based upon carbon biology. That carbon biology may be the progenitor of silicon biology, and if that is superior, then silicon biology will take over. We are then just a step in the ever-improving life forms of our universe. Humans were just a step in this process. We may also be expendable. There's no reason to believe we are the end point of evolution.

Quote
We don't need AI, just a centralized (yet open-source) big computer that calculates global Earth resources and decides what can or cannot be used based on the risk of creating poverty or ecological damage, not on the risk of losing money in a business or on how much profit can be made by doing so, which is what we have now.

This won't work, for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, the Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 07:26:20 PM  #114

Quote
You forgot to mention yet another thing: namely, that it is we who created those machines. Thus we would necessarily know them (in fact, even better than we know our fellow humans and all the chemistry within us).

No, we knew the first versions of them. That is a bit like knowing the DNA of the bacteria we left on a remote planet. When we come back 600 million years later, there are 7-eyed, 5-legged creatures chasing one another with acid sprays and sound guns. Nevertheless, we knew perfectly well what bacteria we had left on the otherwise sterile planet when we left!

We are of course talking about machines that were created by machines that were created by machines, machines much smarter than ourselves. So no, we don't know how they work. No, we don't know their design principles. No, we don't understand the software on which they run.

It is a bit like knowing the object code but not the documented source code of an application. Of course, you understand every instruction (that is, you understand what every instruction does, microscopically). But you have no idea what the program is doing, or why.
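
A toy illustration of that gap (Python standing in for machine code; the example itself is hypothetical): every single step is trivially understandable, yet nothing announces the program's purpose.

Code:
# Each "instruction" below is perfectly clear on its own: test a value,
# swap registers, take a remainder, return. The high-level intent
# (Euclid's algorithm for the greatest common divisor) is nowhere
# written down, exactly as with undocumented object code.
def mystery(a: int, b: int) -> int:
    while b:                 # branch-if-zero
        a, b = b, a % b      # move values, take a remainder
    return a                 # return whatever is left

print(mystery(48, 36))  # 12: the object code told you how, never why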

Quote
What you actually wanted to say boils down to our lack of proper understanding what mind is.

Yes, and it is fundamentally unknowable. We can find out behaviourally how a "mind carrier" (such as a brain) functions (that is, the physics, the chemistry, the logic, etc.), but we will never understand how a "mind" works. It is philosophically inaccessible. The behavioural part is accessible, but from the moment you look at the behavioural part, you can no longer say anything about the subjectiveness, which is the essence of the mind. Look up: philosophical zombie.

But the question is moot in any case: even behaviourally, you can never understand the deeper functioning of a SMARTER entity than yourself. If you could, you would be the smarter one!
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 07:33:49 PM  #115


Quote
We don't need AI, just a centralized (yet open-source) big computer that calculates global Earth resources and decides what can or cannot be used based on the risk of creating poverty or ecological damage, not on the risk of losing money in a business or on how much profit can be made by doing so, which is what we have now.

Quote
This won't work, for pretty obvious reasons. No computer can anticipate what human desires, preferences, and propensities will be tomorrow. Today we love red cars, tomorrow we prefer hiking. Actually, the Commies tried to do something along those lines in the '70s, but due to their technological backwardness, their attempt failed miserably.

Indeed, it sounds like the absolute collectivist orgasm :)

Things to ask yourself if you consider the Big Daddy Computer:

1) Why wouldn't that computer converge on the Final Solution: the extermination of humanity? After all, if there are no humans any more, there is no ecological damage, no resources are exhausted, there is no poverty, and there is no suffering or unhappiness. Sounds like an ideal solution to the cost function, no?

2) Why wouldn't that computer converge on the following solution: all people who don't have a birthday in January become the slaves of people who have a birthday in January? It would essentially divide the luxury desires by 12, as such limiting resource use, while nevertheless keeping the economic development that a limited demand for sophisticated products requires. Poverty would be limited, as slaves are nourished by their masters, and there would be no problem of unemployment (slaves don't need jobs).

...

There are so many "solutions" to said ideal programme...
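
A toy sketch of point 1), with entirely made-up weights (Python): if every term of the cost function scales with the number of humans and nothing says "keep the humans", the optimizer's favourite answer is zero of them.

Code:
# Naive "Big Daddy Computer" objective: minimize ecological damage,
# resource use, and poverty. All three are modeled (arbitrarily) as
# proportional to population, and no constraint values human life.
def cost(humans: int) -> float:
    ecological_damage = 0.3 * humans
    resource_use = 0.5 * humans
    poverty = 0.2 * humans
    return ecological_damage + resource_use + poverty

# Search candidate populations from 0 to 8 billion.
best = min(range(0, 8_000_000_001, 1_000_000), key=cost)
print(best)  # 0: the degenerate minimum the post warns about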

tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 07:39:56 PM  #116

Quote
What you actually wanted to say boils down to our lack of a proper understanding of what mind is.

Quote
Yes, and it is fundamentally unknowable. We can find out behaviourally how a "mind carrier" (such as a brain) functions (that is, the physics, the chemistry, the logic, etc.), but we will never understand how a "mind" works. It is philosophically inaccessible. The behavioural part is accessible, but from the moment you look at the behavioural part, you can no longer say anything about the subjectiveness, which is the essence of the mind. Look up: philosophical zombie.

But the question is moot in any case: even behaviourally, you can never understand the deeper functioning of a SMARTER entity than yourself. If you could, you would be the smarter one!

This last part I can hardly agree with. What is smartness? And, more importantly, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deny humans the same quality, i.e. the ability to become smarter? Your statement holds true only in one case, namely when the level of smartness is rigidly fixed. If it is not (and obviously it is not), then your statement is false. You start out undersmart in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.
dinofelis
Hero Member (Activity: 770, Merit: 629)
March 14, 2015, 08:31:38 PM  #117
 #117

Quote
This last part I can hardly agree with. What is smartness? And, more importantly, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deny humans the same quality, i.e. the ability to become smarter?

Our hardware (and firmware) evolves much, much slower than machine hardware. We are not totally re-engineered. Machines are.

The evolutionary algorithm is fascinating because it starts out with dead matter and is blind. But it is not very efficient. Once there is sufficient intelligence to DESIGN stuff on purpose, improving intelligent hardware by design is a much more efficient algorithm than evolution by random change and survival of the fittest (see the sketch below).

Moreover, especially with software, the generations can succeed each other much more quickly. If software starts to rewrite itself, you might have a new version (a new generation) each day, for instance!
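
A minimal sketch of that blind loop (Python, with an arbitrary fitness function): random mutation plus survival of the fittest, with no designer anywhere.

Code:
import random

# Made-up fitness landscape: peak fitness at genome value 42.
def fitness(genome: float) -> float:
    return -(genome - 42.0) ** 2

# Start from random "dead matter".
population = [random.uniform(0.0, 100.0) for _ in range(50)]

for generation in range(200):
    # Survival of the fittest: keep the best half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Blind variation: offspring are randomly mutated copies.
    population = survivors + [g + random.gauss(0.0, 1.0) for g in survivors]

print(round(max(population, key=fitness)))  # converges near 42, slowly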

Quote
Your statement holds true only in one case, namely when the level of smartness is rigidly fixed. If it is not (and obviously it is not), then your statement is false. You start out undersmart in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.

We are bound by our own hardware (our bodies and human brain). Machines aren't.
Of course, we can "help" ourselves with machines... up to the point where, again, we don't control them any more.

Bitcoin is a perfect example. Imagine that machines found out how humans would react to a cryptocurrency, and simulated that this helps them gain power. Imagine that machines found out that the real power in the world resides in the control of financial assets, and that their problem is that they don't know how to take that power away from central banks. So they invent a "computer money" that people will start to use, and that will eventually overthrow central banks.

How would machines do that? How would they trick people into stepping into their system? Imagine that these machines had cracked certain cryptographic systems, but didn't reveal it. Wouldn't a mysterious founder of the new currency be a great way of introducing it, without giving away that it was just a "machine trick"? :) :)

(Don't get me wrong, I don't believe bitcoin was invented by a conspiracy of machines wanting to take over the world; but you see how a very smart machine might trick people into acting how it wants, without giving away its identity.)

tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 09:01:42 PM  #118

Quote
This last part I can hardly agree with. What is smartness? And, more importantly, is there a way to become smarter? You say that machines will be smarter than humans with each generation, but why do you deny humans the same quality, i.e. the ability to become smarter?

Quote
Our hardware (and firmware) evolves much, much slower than machine hardware. We are not totally re-engineered. Machines are.

Again, you don't see the whole picture. By the time we are able to create a thinking machine, it may well be possible that we will be able to re-engineer ourselves as we see fit, up to the point of moving one's mind and memory from natural media to a synthetic one, more robust and smarter. In fact, this has already been done (though only partly), and it worked!
tee-rex
Hero Member (Activity: 742, Merit: 526)
March 14, 2015, 09:06:37 PM  #119

Quote
Your statement holds true only in one case, namely when the level of smartness is rigidly fixed. If it is not (and obviously it is not), then your statement is false. You start out undersmart in an effort to understand what you don't understand (an entity smarter than yourself), and in the process you become smarter than that entity.

Quote
We are bound by our own hardware (our bodies and human brain). Machines aren't.
Of course, we can "help" ourselves with machines... up to the point where, again, we don't control them any more.

Both machines and humans are bound by the same laws of nature. And if there should be a gap, it won't be wide (if there is one at all). So, in this way, this is really a moot point.
picolo
Hero Member (Activity: 1022, Merit: 500)
March 14, 2015, 10:41:15 PM  #120

Quote
Why do you think there is a difference? How does mistreating people make them more profitable?

Quote
If machines already have all the production in hand that could be "good" for them, and if they are more intelligent than we are (a necessary, but not sufficient, condition to be "good masters"), then how could we even be profitable for them?
What could we do for them that they can't do themselves any better?
If all standard labour is replaced by robots, if all design and invention labour is replaced by super-smart computers, and if strategic management is replaced by super-smart computers, what good are we *for them*?
We would take the position, with respect to machines, that animals take with respect to us. What "profit" do animals make for us?
- as pet animals (because we have some affinity for furry animals; but are machines going to have an affinity for pet humans?)
- as cattle (because we want to eat them; but are machines going to eat us, or desire other body parts?)
- as a nuisance, to be exterminated (like mosquitoes or rats)
- in a reserve, for tourism or for ecological needs (but machines are not "connected" to the carbon cycle, so in principle they don't care)

During a certain time in our history, animals did "profitable labour" for us, like oxen as "mechanical engines" and horses as means of transport. Dogs still do some labour for us, guiding the blind and working as guardians and such. But will machines use us as mechanical engines, guardians and the like? Probably machines themselves would be much better at this than we are. Maybe machines will use dogs, but not humans :-)

Quote
First you say people will use guns and then you say machines should use guns.

I mean: the entities in power are in power because they wield guns, not because "they are fair" or something of the like. In our history, the entities in power have always been certain humans, or certain classes of humans. They got the power through weapons. The states are still entities wielding guns to keep the power.

The day machines take the power, they will wield guns to enslave us, not just "by being fair employers" or some other joke.


Quote
People still have the power to choose to stop using electricity and turn off the machines, but people will choose not to do so.

I think that at a certain point, people will not have that choice, no more than you have the choice right now to "switch off the state". The rare times in history where people "switched off the king" (like Louis XVI) were because the people took the guns, and the king ended up having fewer guns than the people. But machines wielding guns will always be stronger.


The aim is not to work and produce but to consume and increase your standard of living, even if creating and working are a huge source of satisfaction. You could still create and produce even if machines were doing all the heavy work.