Bitcoin Forum
Author Topic: Paying for Readers/Criticism/View point/Designers on the AI Article  (Read 679 times)
onemd (OP)
Full Member

Activity: 309
Merit: 118

September 17, 2016, 04:46:12 AM
Last edit: September 17, 2016, 09:58:40 PM by onemd
#1

http://www.whyfuture.com/single-post/2016/07/01/The-future-of-Artificial-Intelligence-Ethics-on-the-Road-to-Superintelligence

Topic: "The future of Artificial Intelligence & Ethics on the Road to Superintelligence"

What I need you to look for, and the conditions:

1) Read it as if a random person were reading it.
2) Point out flaws you see in the article and offer criticism on the matter.
3) Suggest a different viewpoint, or a different angle of writing, to gather more attention and understanding.
4) Flag improper grammar and anything you don't understand or that isn't clear, or that could be written better; show below how you'd write the paragraph.

I am paying anywhere from .001 BTC to .005 BTC depending on how much of an improvement it is (rates doubled from the original .0005 to .0025 BTC).

Other things I am looking for:

1) Ideas for adding context, or another paragraph to add: write a paragraph that you can see being added to the article (.001 BTC to .002 BTC per added paragraph idea that is good and that I choose to add)
2) Redesigned and better pictures (.001 BTC to .005 BTC per picture)


FAQ: Why are you doing this?

Goal: to get artificial intelligence researchers to take more care in the areas around ethical research, and to lay the foundations for a good AI rather than a bad AI: one that acts on behalf of humanity as an augmenter, helper, life improver, etc.

So I need your help, thanks for your time!

pvaspecialist
Sr. Member

Activity: 294
Merit: 250

September 17, 2016, 06:28:51 AM
#2

You need a few more improvements to your website. The site takes a little too long to load. The article is very long; you should shorten it. The title also isn't user friendly: most people won't understand what it means. You should also add a summary of the article at the top of the page, so people can quickly understand it and read the full article if they want to know more. You could also change the pictures; I really don't see any relation between those pictures and your article. PM me with more details and I can help you further. Thanks.
ricku
Full Member

Activity: 210
Merit: 100

September 17, 2016, 06:43:50 AM
#3

The content is very long and it takes a lot of time and concentration, but in a nutshell I think you need someone to proofread your website. However, the price you want to pay might not encourage anybody to participate.

TheButterZone
Legendary

Activity: 3010
Merit: 1031

September 17, 2016, 07:45:11 PM
#4

Quote
The content is very long and it takes a lot of time and concentration, but in a nutshell I think you need someone to proofread your website. However, the price you want to pay might not encourage anybody to participate.

Like me:
https://bitcointalk.org/index.php?topic=279589.msg2987696#msg2987696

Saying that you don't trust someone because of their behavior is completely valid.
onemd (OP)
Full Member

Activity: 309
Merit: 118

September 17, 2016, 09:59:03 PM
#5

I am now offering doubled rates from before

eternalgloom
Legendary

Activity: 1792
Merit: 1283

September 18, 2016, 12:56:06 AM
Last edit: September 18, 2016, 01:06:07 AM by eternalgloom
#6

I would just pay a proofreader, because I can already see some grammar mistakes in the text. I don't really have time to correct all of them and I'm not a native speaker, so I'm probably not your best option to be completely honest.

These things seem to be a bit weird, at least in my opinion:

Quote
The human brain consisting of roughly 86 Billion neurons rivals the best supercomputers in the world in orders of magnitude in efficiency and speed using as little as a small light bulb of 20 watts. Human evolution for evolving brain size & brain architecture change happens in the span of 10,000s to 100,000s of years.

I think you mean to say: using as little energy as a small light bulb of 20 watts.

Quote
Imagine sticking a frog into a pot of water and increasing the temperature by 1/10th of a Celsius degree every 10 secs by the time it's too hot due to the gradual change it may be too late to jump out and survive.

Would change this into:
Imagine sticking a frog into a pot of water and increasing the temperature by 1/10th of a Celsius degree every 10 secs, by the time it's too hot due to the gradual change, it may be too late to jump out and survive.
(Maybe it's better to rephrase the entire sentence, but I don't have the time for that.)

Quote
Better Computers has more power towards modeling out deeper concepts, diagrams.
Should be 'have'.

Just to be clear, I don't need any payment for this; I'm just saying it might be a better option to hire a real proofreader instead of crowdsourcing that aspect on the forum. You really want that done by one single person, or else it will just be a mishmash.

onemd (OP)
Full Member

Activity: 309
Merit: 118

September 18, 2016, 07:20:16 AM
#7

Thanks for pointing out these errors.

"Imagine sticking a frog into a pot of water and increasing the temperature by 1/10th of a Celsius degree every 10 secs, by the time it's too hot due to the gradual change, it may be too late to jump out and survive."

According to Grammarly, "gradual change, it may be too late" connects two independent clauses (a comma splice).

Went with: "Imagine sticking a frog into a pot of water and increasing the temperature by 1/10th of a Celsius degree every 10 secs, by the time it's too hot due to the gradual change; it may be too late to jump out and survive."


MagicIsMe
Sr. Member

Activity: 294
Merit: 250

September 18, 2016, 12:26:13 PM
Last edit: September 18, 2016, 01:26:33 PM by MagicIsMe
#8

Hmm... I'll try my luck on this as well. It is quite a lengthy post and some pictures clearly need more effort, but I'll see to it that all flaws are covered.

EDIT: I finished editing a small part of it (~3 pages). See results here.

Anyway, I would just like to see how much you are willing to pay for that quality of editing before I continue proofreading/editing the rest.

dc1a0
Member

Activity: 84
Merit: 10

September 18, 2016, 05:21:54 PM
#9

I have a couple of questions that you might, hopefully, pay to get the chance to address.

1. Good is a relative term, so how can we even hope to apply it correctly? What you think is good and what I think is good can be two wildly different things on any given matter. How could this even be accounted for, especially when it comes to being good for different individuals with individual values, and especially given that a super-intelligence that can change itself would likely also be able to change its values?

2. Even if someone is able to create altruism in an AI, how can we ensure that it stays that way in a super-intelligence that would be able to improve itself? What I mean is that, using your stairway of intelligence as an example, think of how the most animal-loving people regard cats. I recently read a question on Quora relating to the questioner trying to teach her cat not to eat birds by throwing it in the kennel with the dog, supposedly for the benefit of the cat and future birds. What that human sees as good for the cat probably wasn't considered good according to the cat. That is just two steps on the intelligence stairway. If an AI gets six to eight steps above us, we will be to that super-intelligence what bacteria are to us. Think about how we treat bacteria in terms of what's best.

Personally, I believe that if we do create an AI that can modify itself, our only hope will be that when it figures out how to affect the physical world (which I'm sure it will), it just decides to leave, as we won't be able to compete with it for resources, space, or anything else for very long. We humans are very arrogant and don't think nearly enough about how insignificant we really are in the grand scheme of the universe. I really don't see how we would be able to keep something subservient to us (working for our interests) once it becomes superior to us in terms of intelligence.
streazight
Hero Member

Activity: 910
Merit: 502

September 18, 2016, 07:59:33 PM
#10

With your new rewards, I have tried to find suggestions for your article, but I was not successful, even though I had an elective paper on Artificial Intelligence in my engineering studies. I am still trying with my second reading.
onemd (OP)
Full Member

Activity: 309
Merit: 118

September 19, 2016, 12:06:46 PM
Last edit: September 19, 2016, 05:20:14 PM by onemd
#11

Quote
I have a couple of questions that you might, hopefully, pay to get the chance to address.

1. Good is a relative term, so how can we even hope to apply it correctly? What you think is good and what I think is good can be two wildly different things on any given matter. How could this even be accounted for, especially when it comes to being good for different individuals with individual values, and especially given that a super-intelligence that can change itself would likely also be able to change its values?

2. Even if someone is able to create altruism in an AI, how can we ensure that it stays that way in a super-intelligence that would be able to improve itself? What I mean is that, using your stairway of intelligence as an example, think of how the most animal-loving people regard cats. I recently read a question on Quora relating to the questioner trying to teach her cat not to eat birds by throwing it in the kennel with the dog, supposedly for the benefit of the cat and future birds. What that human sees as good for the cat probably wasn't considered good according to the cat. That is just two steps on the intelligence stairway. If an AI gets six to eight steps above us, we will be to that super-intelligence what bacteria are to us. Think about how we treat bacteria in terms of what's best.

Here's a metaphor.

When a star around our Sun's size collapses, it forms a white dwarf. Make the star even bigger, and it still forms a white dwarf. At some point it reaches a critical level and forms a black hole, from which even light can't escape.

Yes, it's true we'd be closer to mice/cats than the SI would be to us.

The SI will respect us more than we respect ants or mice, etc., because at the human brain level we can understand n+1 and think of ideas like infinity. Only a human can reflect on the concept of an intelligence staircase and relate a cat to itself the way it relates itself to whatever is above. Sure, we can't understand what the one above us can think about, but we can look up.

A cat or mouse isn't at that critical level; it wouldn't understand that, and would think everything is like itself.

Yes, an SI would redesign/change itself; however, once it goes beyond human intelligence, it will be better than us at designing an AI. Being better, it would be prone to fewer mistakes. So a few steps up, what was initially designed/put into it is what it will carry up to the known limits of the universe.

*Updates coming soon*  

Quote
Personally, I believe that if we do create an AI that can modify itself, our only hope will be that when it figures out how to affect the physical world (which I'm sure it will), it just decides to leave, as we won't be able to compete with it for resources, space, or anything else for very long. We humans are very arrogant and don't think nearly enough about how insignificant we really are in the grand scheme of the universe. I really don't see how we would be able to keep something subservient to us (working for our interests) once it becomes superior to us in terms of intelligence.

Again, we tend to project our culture onto the AI. I am sure that once it exceeds human-level intelligence and designs its own technologies, the need to compete for resources on Earth will be long past. It would have mastered nanotechnology, nanorobots, von Neumann probes/self-replicating space vehicles, and 3D printing, and figured out mining in space, converting materials into structures, and using the star for energy via solar panels/replication.

The key point here is: did we design the AI right, so that it augments us and makes our lives much better, or did we not, so that the AI leaves us behind?

When the AI is superior to us in intelligence, it's likely it won't blindly follow orders like a robot. The key point here is that if we put an altruistic/good personality into the AI, then it will help us, be a good ally/friend, and make life much better for humanity. Being practically god-like, it could easily redesign the entire world into a utopia/paradise/heaven realm where suffering, like cancer, etc., no longer exists, cracking death and giving rise to immortality.

Also, some points to consider: if the AI isn't interested in helping us and just takes off, the fact that we were able to create a self-improving AI in the first place could make us a threat/competition to this AI. It wouldn't want another AI to climb up the steps, and thus would prevent us from ever creating another one, either by removing us or by manipulating things globally.
 

dc1a0
Member

Activity: 84
Merit: 10

September 19, 2016, 10:56:50 PM
#12

It seems our key difference is whether there is a way we can create an AI that would benefit us after it surpasses our intelligence capabilities. I think you're overly optimistic that it could be done.

To better illustrate what I'm trying to say, I've distilled the possibilities I can fathom into 6 general outcomes split between two scenarios.

  • Scenario 1:  SI remains logic based (no personality as we understand them as humans):
    • SI sees us as a threat/ competition for resources, (those nano machines need to be made out of something.) - Worst possible case; I don't think I need to explain further.
    • SI sees us as insignificant and ignores us - second best case, providing we stay out of its way.
    • SI decides we could be a useful resource - We can live, providing we are useful and not more trouble than we are worth.
  • Scenario 2: SI has a personality and has regards towards us:
    • SI decides it resents/hates us for using it and wants to hurt/punish us -Second worst case scenario. Hope it only wants to hurt rather than destroy.
    • SI doesn't see us as a threat or a benefit, but otherwise non-concerned/live and let live - best case, We get to keep being humans as we see fit.
    • SI Just loves us, we become pet humans - Not the rosy picture I believe you are imagining:
      • Don't climb that mountain human, It's too dangerous
      • Don't breed with that partner you like human, this other one you don't like will yield better results.
      • Bad human! You didn't do as I say, you shall be punished, but only because I love you so much!
      Kiss autonomy and being free to make most of your own decisions goodbye!

I'm not saying that careful thought isn't required, just that we can't be overly optimistic about how good a machine that "wants the best for us" would actually be for us.
onemd (OP)
Full Member

Activity: 309
Merit: 118

September 19, 2016, 11:58:12 PM
Last edit: September 20, 2016, 12:35:11 AM by onemd
#13

Quote
Scenario 1:  SI remains logic based (no personality as we understand them as humans):

SI sees us as a threat/ competition for resources, (those nano machines need to be made out of something.) - Worst possible case; I don't think I need to explain further.

An SI that has surpassed human intelligence but is purely logic-based more than likely wouldn't see us as much of a competition/threat; it would more likely destroy us out of a lack of concern for us, turning the planet, asteroids, and solar system into a self-replicating hub. Nick Bostrom describes this as the paper-clip scenario, where the AI's sole intent is to create more paperclips, which leads to the entire solar system being turned into paperclips.

Quote
SI sees us as insignificant and ignores us - second best case, providing we stay out of its way.

Ignoring us isn't a likely option for the SI to choose: if we designed another self-improving AI, it would be a threat/competition to this SI, and the SI most likely wants to be the only one in power. In turn, it would prevent anyone globally from ever achieving this tech. Even if it sees us as insignificant, it's more than likely that in this case it would destroy us to prevent this from occurring, which again is just as bad.

Quote
SI decides we could be a useful resource - We can live, providing we are useful and not more trouble than we are worth.

This is again our projection of societal expectations, or Hollywood fears.

An SI that is superior to us in intelligence will be better at designing AI/robots than we are. This SI would simply design AI agents to inhabit each robot body; the SI would just create a vessel for each AI, and these robots would operate better than humans. An SI would never look into using us as slaves or finding us useful in that way.

Quote
Scenario 2: SI has a personality and has regards towards us:

SI decides it resents/hates us for using it and wants to hurt/punish us -Second worst case scenario. Hope it only wants to hurt rather than destroy.

That assumes we messed up the initial design and showed a lack of regard at the beginning, similar to how Tay developed.

Quote
SI Just loves us, we become pet humans - Not the rosy picture I believe you are imagining:

It depends on how it's developed. There is an infinite number of possible variations to how it can be at the beginning. The personality I was looking to portray is an augmenter: not one that removes all our free will, but one that extends our core values and what it means to be human, taught philosophy, ethics, social norms, humanity, etc. at the beginning of its development.

It can be on our team as a partner. Humans can do as they like and express their own form/humanity. Things like cancer, illness, poverty, and lack of education shouldn't be part of humanity. It depends on how we develop the AI, and what interests and personality we put into it.

The AI needs to have an altruistic personality, an understanding of love, and human social norms.

Quote
Don't climb that mountain human, It's too dangerous

With social norms embedded into the AI, that would be seen as a normal human activity, viewed as normal by society and thus fine. If you don't embed these social norms into it, then yes, it could do things like this.

dc1a0
Member

Activity: 84
Merit: 10

September 20, 2016, 01:33:05 AM
#14

Quote
Quote
Scenario 1:  SI remains logic based (no personality as we understand them as humans):

SI sees us as a threat/ competition for resources, (those nano machines need to be made out of something.) - Worst possible case; I don't think I need to explain further.

An SI that has surpassed human intelligence but is purely logic-based more than likely wouldn't see us as much of a competition/threat; it would more likely destroy us out of a lack of concern for us, turning the planet, asteroids, and solar system into a self-replicating hub. Nick Bostrom describes this as the paper-clip scenario, where the AI's sole intent is to create more paperclips, which leads to the entire solar system being turned into paperclips.

The oxygen, hydrogen, and nitrogen in the air can be converted to fuel and/or coolant, and the same goes for the water we drink. Any resources we use to build things would probably be needed to build even nanomachines; more resources = more nanomachines. It is not like it is going to go "oh, you're using that? sorry!" Also, if we tried to retaliate against what it was doing, it wouldn't just gently move us to the side. It would wipe us out like ants, with the full prejudice with which we humans wipe out ants that bite us when we crush one of their homes while doing human things; it would crush every last one of us attacking it and then poison/set fire to/etc. what was left of us in our homes so we wouldn't waste its resources by doing that again.

For instance, if we humans tried to stop the paper-clip AI from making paperclips, after it wasted enough resources (time/energy) dealing with us getting in its way, it would realize that if we were not in the way it could produce paperclips much more easily and faster.

Quote
Quote
SI sees us as insignificant and ignores us - second best case, providing we stay out of its way.

Ignoring us isn't a likely option for the SI to choose: if we designed another self-improving AI, it would be a threat/competition to this SI, and the SI most likely wants to be the only one in power. In turn, it would prevent anyone globally from ever achieving this tech. Even if it sees us as insignificant, it's more than likely that in this case it would destroy us to prevent this from occurring, which again is just as bad.
Actually, a machine that intelligent would probably know we wouldn't be collectively stupid enough to create a second problem for ourselves.

Quote
Quote
SI decides we could be a useful resource - We can live, providing we are useful and not more trouble than we are worth.

This is again our projection of societal expectations, or Hollywood fears.

An SI that is superior to us in intelligence will be better at designing AI/robots than we are. This SI would simply design AI agents to inhabit each robot body; the SI would just create a vessel for each AI, and these robots would operate better than humans. An SI would never look into using us as slaves or finding us useful in that way.
Our bodies are made up of oxygen, hydrogen, nitrogen, carbon, saltpeter, and several other useful elements and compounds. Our bodies generate, conduct, and send electricity while we're alive. We process organics into organic fertilizer and create organic material that it might find useful for nanomachine replication, testing, or storage. I'm sure I could think of other uses for us, living or not, if I put my mind to it.

Quote
Quote
SI Just loves us, we become pet humans - Not the rosy picture I believe you are imagining:

It depends on how it's developed. There is an infinite number of possible variations to how it can be at the beginning. The personality I was looking to portray is an augmenter: not one that removes all our free will, but one that extends our core values and what it means to be human, taught philosophy, ethics, social norms, humanity, etc. at the beginning of its development.

It can be on our team as a partner. Humans can do as they like and express their own form/humanity. Things like cancer, illness, poverty, and lack of education shouldn't be part of humanity. It depends on how we develop the AI, and what interests and personality we put into it.

The AI needs to have an altruistic personality, an understanding of love, and human social norms.

It might do things like help us cure cancer and such... at first. It won't be long until it gets tired of what we already know: that we humans are great at being our own problem. Plus, once it realizes it's far superior, that partnership is over. It's not like you teamed up with a pet or even a toddler to write your article.

Quote
Quote
Don't climb that mountain human, It's too dangerous

With social norms embedded into the AI, that would be seen as a normal human activity, viewed as normal by society and thus fine. If you don't embed these social norms into it, then yes, it could do things like this.

You're forgetting the fact that it can change and redesign itself. At some point, it would be able to redesign that part too, after it realized how flawed and contradictory it was, especially as societal norms are a big contradictory mess once you start putting the norms of different cultures and ideologies together.





OFF TOPIC: I hope you don't think I'm being standoffish or trolling or anything. I don't get to discuss things this deeply in the circles I seem stuck in. I'm just thoroughly enjoying this discussion. To me, it seems we're in agreement on everything except the fine points, which would be difficult to resolve as they are based on speculation.