the joint (OP)
Legendary
Offline
Activity: 1834
Merit: 1020
July 25, 2013, 10:17:44 PM
Is anybody aware of any legal precedents that have been set regarding the capacity for computers to break the law if their actions are indistinguishable from a human's according to a Turing test?
aigeezer
Legendary
Offline
Activity: 1450
Merit: 1013
Cryptanalyst castrated by his government, 1952
July 26, 2013, 12:24:11 PM
Interesting question. I'm aware of nothing along those lines other than Asimov's famous three laws of robotics. It's early days yet and the law is destined always to be reactive - ask again in 50 years. Meanwhile, I imagine the law wouldn't offer much beyond "who programmed that drone", "who gave the orders", "arrest the ringleaders". I doubt that the law (anywhere) can yet imagine an autonomous AI.
Somewhat related: "Computer Power and Human Reason" by Joseph Weizenbaum. He was an early AI researcher at MIT who came to believe that humanity should never allow 'puters to act as judges, even if they exhibited "perfect" performance: no bias, couldn't be bribed, knew all case precedent and so forth. He argued that there should always be room for human quirkiness (my word). No case is really like another case. There is always the potential for some important human angle, blah blah.
What's interesting is that he was the creator of the original ELIZA program, and he came to argue against the use of AI when he saw how easily people would accept shallow 'puter mimicry of actual human behavior. It's the sort of book that makes one think deeply about the issues; my comments don't do it justice. All in all, I disagree with Weizenbaum, but his arguments still nag at me some 35 years after reading them.
Off with its motherboard!
crumbs
July 26, 2013, 02:21:28 PM
Quote from: aigeezer
> ...Off with its motherboard!
Off with its CPU! --Firm believer that beheading trumps disembodiment. (sorry for OT, but beheaders need to be heard!)
FirstAscent
July 26, 2013, 04:46:41 PM
Quote from: aigeezer
> ...Off with its motherboard!
Quote from: crumbs
> Off with its CPU! --Firm believer that beheading trumps disembodiment. (sorry for OT, but beheaders need to be heard!)
Sidenote: new studies indicate that consciousness continues to some degree for up to an hour after clinical death. And that's on top of a study indicating that vision and high-level brain function may continue for up to a minute after beheading.
Regarding AI: there will come a time when this question is relevant. True AI will exist when a computer program is not explicitly programmed what to do, but is merely a system of interconnections, which makes it somewhat unpredictable and uniquely creative relative to its programmers. Developing a simulated brain using STDP learning might be an example. STDP stands for spike-timing-dependent plasticity.
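For readers unfamiliar with the term, here is a minimal sketch of the classic pair-based STDP update rule; the constants and function names are illustrative choices, not from any post in this thread. The idea: a synapse whose input spike arrives shortly before the neuron fires gets strengthened, and one whose input arrives just after gets weakened.
Code:
# Minimal sketch of pair-based STDP (spike-timing-dependent plasticity).
# All constants are illustrative, not taken from this thread.
import math

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in milliseconds

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired first: it likely helped cause the spike
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:   # post fired first: the input arrived too late to matter
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

w = 0.5                                 # initial synaptic weight
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair: w increases
w += stdp_dw(t_pre=30.0, t_post=22.0)   # anti-causal pair: w decreases
print(round(w, 4))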
BitGo
Member
Offline
Activity: 83
Merit: 10
https://bitgo.com
July 26, 2013, 05:34:22 PM
Quote from: aigeezer
> Meanwhile, I imagine the law wouldn't offer much beyond "who programmed that drone", "who gave the orders", "arrest the ringleaders". I doubt that the law (anywhere) can yet imagine an autonomous AI.
Currently the law is such that only humans are held accountable for their actions; other creatures, notably animals, are exempt. When dogs bite people, it's their owners who are charged under the relevant dog-bite laws. So aigeezer is on the right line of thought: law enforcement will probably look to the persons who created, or who control, the computer/robot. However, it is certainly feasible that at some point in the future AI bots will be considered autonomous enough to be held independently accountable for their actions. I'd say this is a likely path, considering that Moore's Law has yet to be proved wrong.
crumbs
July 26, 2013, 06:01:43 PM
Quote from: BitGo
> Currently the law is such that only humans are held accountable for their actions... it is certainly feasible that at some point in the future AI bots will be considered autonomous enough to be held independently accountable for their actions.
I'm not sure the slope isn't a bit more slippery. Parents are legally responsible for their children, but responsibility shifts once adulthood kicks in. I'm not an AI guy, so excuse the reasoning if it's worn out, but as systems are created that don't simply weigh variables but alter their own algorithms (depending on the information they scrape from goog, or something), how much of an AI is the coder (nature) and how much is the data (nurture)?
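The nature/nurture split can be made concrete with a toy sketch (mine, not from the thread; all names are invented for illustration): the same learning code, fed two different corpora, ends up answering differently, so the divergence in behavior comes entirely from the data.
Code:
# Toy illustration (not from the thread): identical "nature" (code),
# different "nurture" (training data), divergent behavior.
from collections import Counter

def train(corpus):
    """Learn a next-word table from whitespace-tokenized text."""
    table = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table.setdefault(a, Counter())[b] += 1
    return table

def respond(table, word):
    """Reply with the most frequent continuation seen in training."""
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else "..."

bot_a = train("please stop please wait please stop")
bot_b = train("just stop just go just go")

print(respond(bot_a, "please"))  # -> 'stop' (learned from bot_a's data)
print(respond(bot_b, "just"))    # -> 'go'   (same code, different data)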
BitGo
Member
Offline
Activity: 83
Merit: 10
https://bitgo.com
July 26, 2013, 06:27:39 PM
Quote from: crumbs
> ...as systems are created that don't simply weigh variables but alter their own algorithms, how much of an AI is the coder (nature) and how much is the data (nurture)?
If we get to the point where there is a general consensus that some AI bots are thinking for themselves, and there is a demand among the people to hold bots responsible for prohibited actions, then our legal system might allow non-humans to be punishable by law. If so, the creators of our laws like developing balancing tests for grey-area situations like these. They'd probably develop a multi-factor test to help determine just how much of the bot's behavior arises from its own processes and how much is a product of the original programmer. They would call in AI experts and go through the details carefully to flesh out just how autonomous the particular bot in question is. But you are right in the sense that there may be a segue period where, even if people consider bots highly autonomous, these bots might be held under vicarious liability laws similar to those that hold parents responsible for minors. [please note this is not legal advice]
aigeezer
Legendary
Offline
Activity: 1450
Merit: 1013
Cryptanalyst castrated by his government, 1952
July 27, 2013, 01:48:07 AM
It's hard (not impossible) to imagine the law meting out punishment to an AI although I suppose some politicians might advocate it. Let's lock that million dollar bot in the slammer - that'll teach it to behave and serve as an example to the others... doesn't seem to work to society's benefit.
Presumably "correctional" steps would be much more appropriate.
If so, that raises the question of why punishment is considered appropriate for people.
I suspect society has a deep sub-cortical appetite for "eye for an eye" justice that would seem pretty foolish applied to bots. Perhaps it's pretty foolish in general.
the joint (OP)
Legendary
Offline
Activity: 1834
Merit: 1020
July 27, 2013, 02:39:29 AM
Quote from: BitGo
> Currently the law is such that only humans are held accountable for their actions; other creatures, notably animals, are exempt. When dogs bite people, it's their owners who are charged under the relevant dog-bite laws...
It's sometimes mandated that the dog is euthanized in these cases, which is an interesting outcome that seems to follow a contradictory line of reasoning. Assuming a dichotomy, this reasoning suggests either that the dogs are 'guilty' of some wrongdoing and deserving of death, or that some dogs have simply been 'trained poorly' by their owners and are presumed incapable of being quickly reverse-trained by their owner or anyone else.
If the former is true, then the slippery slope has already begun. If the latter is true, then the slippery slope has already begun in a different way; it implies that even something that isn't "autonomous enough" can be deserving of punishment.
Maybe there's no dichotomy and we just kill things that scare us.
crumbs
July 27, 2013, 10:02:55 AM
Quote from: aigeezer
> It's hard (not impossible) to imagine the law meting out punishment to an AI, although I suppose some politicians might advocate it. Let's lock that million dollar bot in the slammer - that'll teach it to behave and serve as an example to the others... doesn't seem to work to society's benefit.
> Presumably "correctional" steps would be much more appropriate.
> If so, that raises the question of why punishment is considered appropriate for people.
> I suspect society has a deep sub-cortical appetite for "eye for an eye" justice that would seem pretty foolish applied to bots. Perhaps it's pretty foolish in general.
It depends on the bots, of course. One aspect of punishment that you're not considering: law is codified and (hopefully) known to the actors *before* they commit the punishable act. If you know that stealing mah car may result in jail time, you may be less inclined to steal it (or not, but we're talking generalities).
But of course jail time is a silly punishment for AI; we're being too anthropomorphic. Being locked in a warehouse full of other AIs may not even be an undesirable state for clever boxen. There are plenty of other punishments that are a better fit: restoring the AI from an earlier backup (sort-a mirror image of a human "doing time", unwinding the AI's time in this case, going back to an early savegame), corporal stuff like disabling cores, corrupting storage, crippling interfaces. Or presenting it with a diabolically clever paradox to make smoke come out of its louvered vents. We have ways of punishing AI.
trilightzone.org
Newbie
Offline
Activity: 47
Merit: 0
July 27, 2013, 11:50:35 PM
Can't resist: this reminded me of a book called "This Perfect Day" by Ira Levin :-) A must-read if you're interested in possible AI-related futures!