kneim
Legendary
Offline
Activity: 1666
Merit: 1000
|
|
October 28, 2015, 11:47:49 AM |
|
If AI is developed far enough, it might replace humans. Then it becomes a threat.
It will never replace me. I'm too stupid and chaotic; no algorithm could ever replace me. I'm a computer admin, but even so I can't say what my computer is calculating, or why. I only see the screen, and it seems valid. The reality is that the "Homo Oeconomicus" will not survive.
|
|
|
|
CoinHeavy
|
|
November 24, 2015, 09:14:32 AM |
|
Has anyone considered that Bitcoin may well have been developed by an artificial intelligence and that we are, in fact, already post-singularity?
What better way to make the jump from computational sentience towards crafting the real, physical world at will than by inventing math-money?
It may sound absurd prima facie but it is, nonetheless, a curious thought experiment.
With the advent of DACs, algorithms are already beginning to compete in earnest to be the robot king of the capitalist mountain. Neat.
|
|
|
|
equator
Legendary
Offline
Activity: 1190
Merit: 1002
|
|
November 24, 2015, 09:49:59 AM |
|
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.
What is AI, and why are they trying to develop it? AI = Artificial Intelligence. Why is it being, or will it be, developed? Because it will have several benefits in different fields and industries. For instance, it can take the load off people in various jobs and positions. But the risks shouldn't be underestimated or downplayed. If AI is developed far enough, it might replace humans. Then it becomes a threat.
To some extent what you said is right, but one point to note is that all of this is created by humans, and humans know how to use it and how much power to give this AI. Whatever development is done to AI, it cannot replace humans.
|
|
|
|
LuckyYOU
|
|
November 24, 2015, 02:08:04 PM |
|
AIs are made by humans; they only do what people program them to do.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
In the last 5-10 years we humans have been replaced quite a bit; a lot of work has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
|
|
|
|
Slark
Legendary
Offline
Activity: 1862
Merit: 1004
|
|
November 24, 2015, 03:13:55 PM |
|
AIs are made by humans; they only do what people program them to do.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
In the last 5-10 years we humans have been replaced quite a bit; a lot of work has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take only 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn. What happens then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.
|
|
|
|
michietn94
Legendary
Offline
Activity: 1274
Merit: 1001
|
|
December 09, 2015, 12:45:15 AM |
|
It is understandable that you would imagine things like that, but I don't think it is a real danger.
If that happened, maybe we would go back to the era without the internet. It would set the technological era back, but it can't make people extinct.
Worst comes to worst, we can't do global transactions, that's all.
|
|
|
|
neochiny
|
|
December 09, 2015, 04:01:59 PM |
|
AIs are made by humans; they only do what people program them to do.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
In the last 5-10 years we humans have been replaced quite a bit; a lot of work has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take only 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn. What happens then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix.
A self-learning AI is quite dangerous; if it is able to understand human feelings, that will be the start of the threat. But to be honest, if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe in case it starts to do something suspicious.
|
|
|
|
n2004al
Legendary
Offline
Activity: 1134
Merit: 1000
|
|
December 10, 2015, 04:52:39 PM |
|
AIs are made by humans; they only do what people program them to do.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
In the last 5-10 years we humans have been replaced quite a bit; a lot of work has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take only 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn. What happens then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix. A self-learning AI is quite dangerous; if it is able to understand human feelings, that will be the start of the threat. But to be honest, if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe in case it starts to do something suspicious.
There are two radically different positions in your post. First you say that "a self-learning AI is quite dangerous; if it is able to understand human feelings, that will be the start of the threat." So, reading this sentence, an AI that can think and feel is dangerous. Then you do a 180-degree turnabout, saying that "if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe in case it starts to do something suspicious." From those words one can understand that an AI cannot be dangerous, because humans will think to make it safe and unable to harm humankind. So what is your opinion? Is AI dangerous or not? It seems that according to you it is not (reading the end of your post). The question that "haunts" me is: if you know or understand that an AI cannot be dangerous to humankind, as you write in your definitive last sentence, why say at the beginning that an AI "IS QUITE DANGEROUS" (so it is dangerous FOR SURE) and not that it "MAY BE DANGEROUS"? For what reason do you first present one conviction and then another totally opposed to the first? I cannot understand this kind of expression.
|
|
|
|
virtualx
|
|
December 10, 2015, 05:16:26 PM |
|
AIs are made by humans; they only do what people program them to do.
Questionable. There are self-learning machines which may do things that their programmers did not teach them. These machines are at a very basic stage, but who knows what's possible in 20 years.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
Replacing all humans is an impossible task, because humans will simply not want a robotic dentist.
|
|
|
|
ridery99
|
|
December 10, 2015, 05:16:50 PM |
|
In the final days of mankind, machines will rise and make more money than ever imagined.
|
|
|
|
Amph
Legendary
Offline
Activity: 3248
Merit: 1070
|
|
December 10, 2015, 06:38:14 PM |
|
I was thinking that, in a very distant future, if machines could mine bitcoin by themselves without humans, it could help the decentralization aspect of the network.
Those would be very advanced machines that don't need maintenance of any kind; they would upgrade their own software with a hard-coded algorithm and things like that.
|
|
|
|
neochiny
|
|
December 10, 2015, 08:13:22 PM |
|
AIs are made by humans; they only do what people program them to do.
Unless someone makes an AI that's programmed to replace all humans (which I doubt), you shouldn't be too worried about it.
In the last 5-10 years we humans have been replaced quite a bit; a lot of work has been automated by machines and robots. Investors in these machines and robots believe that the machines are cheaper to buy than humans are to pay.
I don't think it will take only 5-10 years to achieve something significant in the artificial intelligence field. But it will eventually happen, and people will create sentient AI with the ability to learn. What happens then is yet to be seen; opinions may vary. We could end up with our own SkyNet or Matrix. A self-learning AI is quite dangerous; if it is able to understand human feelings, that will be the start of the threat. But to be honest, if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe in case it starts to do something suspicious. There are two radically different positions in your post. First you say that "a self-learning AI is quite dangerous; if it is able to understand human feelings, that will be the start of the threat." So, reading this sentence, an AI that can think and feel is dangerous. Then you do a 180-degree turnabout, saying that "if humans make a sentient AI, I'm pretty sure they will put in a safety measure, like a fail-safe in case it starts to do something suspicious." From those words one can understand that an AI cannot be dangerous, because humans will think to make it safe and unable to harm humankind. So what is your opinion? Is AI dangerous or not? It seems that according to you it is not (reading the end of your post). The question that "haunts" me is: if you know or understand that an AI cannot be dangerous to humankind, as you write in your definitive last sentence, why say at the beginning that an AI "IS QUITE DANGEROUS" (so it is dangerous FOR SURE) and not that it "MAY BE DANGEROUS"? For what reason do you first present one conviction and then another totally opposed to the first? I cannot understand this kind of expression.
Because AI can be corrupted, and other people can change the fail-safe that has been put into it.
That's why I said at the beginning that a self-learning AI is quite dangerous. I'm sorry if my comment above was confusing.
|
|
|
|
Mickeyb
|
|
December 10, 2015, 09:32:36 PM |
|
I was thinking that, in a very distant future, if machines could mine bitcoin by themselves without humans, it could help the decentralization aspect of the network.
Those would be very advanced machines that don't need maintenance of any kind; they would upgrade their own software with a hard-coded algorithm and things like that.
I really hope this never happens and we never get to see machines like this. If it came true, I think we humans would become an endangered species very soon. Let Bitcoin stay a bit more centralized instead of this, please!
|
|
|
|
suda123
|
|
December 11, 2015, 07:18:17 AM |
|
We don't need automated workers like robots for every job, because that will create unemployment and multiply uncontrollably. But we know that hiring humans costs more than using a machine or the like, so many companies prefer to use machines. :D
I'm thinking they are just going to pay humans a lot, lot less.
|
|
|
|
cbeast (OP)
Donator
Legendary
Offline
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
|
|
December 28, 2015, 01:32:22 PM |
|
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years; those bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's caretakers, and ours. In that case, they may use money to motivate humans to reach a higher potential.
|
Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
|
|
|
BTCBinary
|
|
December 29, 2015, 02:21:58 PM |
|
Artificial intelligence and the fridge: http://on.ft.com/1zSz2tw "In science fiction, this scenario — called 'singularity' or 'transcendence' — usually leads to robot-versus-human war and a contest for world domination. But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome."
With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.
Contrary to your opinion, IMO that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!
|
|
|
|
cbeast (OP)
Donator
Legendary
Offline
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
|
|
December 31, 2015, 01:17:21 PM |
|
Artificial intelligence and the fridge: http://on.ft.com/1zSz2tw "In science fiction, this scenario — called 'singularity' or 'transcendence' — usually leads to robot-versus-human war and a contest for world domination. But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome."
With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money. Contrary to your opinion, IMO that scenario would be the perfect plot for a science fiction movie. I wonder why science fiction writers haven't used this idea yet!
Probably for the same reason countries make their own separate monies. If machines were hostile to humans, humans would not use their money.
|
Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
|
|
|
deisik
Legendary
Offline
Activity: 3542
Merit: 1280
English ⬄ Russian Translation Services
|
|
December 31, 2015, 02:38:30 PM |
|
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years; those bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's caretakers, and ours. In that case, they may use money to motivate humans to reach a higher potential.
Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie: a creature that is self-aware but absolutely indifferent to the outside world... In this way, self-awareness as such is inconsequential to your question.
|
|
|
|
cbeast (OP)
Donator
Legendary
Offline
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
|
|
January 02, 2016, 09:28:41 AM |
|
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years; those bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's caretakers, and ours. In that case, they may use money to motivate humans to reach a higher potential.
Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie: a creature that is self-aware but absolutely indifferent to the outside world... In this way, self-awareness as such is inconsequential to your question.
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.
|
Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
|
|
|
deisik
Legendary
Offline
Activity: 3542
Merit: 1280
English ⬄ Russian Translation Services
|
|
January 02, 2016, 09:53:38 AM |
|
If a machine were self-aware, would it value life? Natural selection created strong family bonds in most complex organisms over billions of years; those bonds even cross species in many cases. Somehow it only makes sense that machines would also adopt a bonding behavior. They may even develop a dominion-based philosophy in which they see themselves as the Earth's caretakers, and ours. In that case, they may use money to motivate humans to reach a higher potential.
Just being sentient is not enough. Given only that (i.e. self-awareness), we would most certainly get the exact opposite of what is called a philosophical zombie: a creature that is self-aware but absolutely indifferent to the outside world... In this way, self-awareness as such is inconsequential to your question.
In the second part of the hypothesis, I posit that if multiple self-aware machines interact, they might bond in ways analogous to complex biological organisms. But this new frontier of artificial intelligence is still beyond our understanding. I'm only hoping that our demise is not inevitable and that they might evolve a higher form of morality.
They would not interact unless you build into them the necessity (or desire) to interact, whether freely chosen or obligatory. Likewise, you would have to install in them a scale of values (or the conditions for developing one), either directly or implicitly... Therefore, they won't evolve any form of morality all by themselves.
|
|
|
|
|