Bitcoin Forum
Topic: Machine Learning and the Death of Accountability (page 2)
c_atlas (Member) | September 15, 2020, 05:28:23 PM | #21

Quote from: Cnut237
I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

You're missing a key point that suchmoon brought up:

Quote from: suchmoon
...humans can and do make fucked up irrational decisions all the time and good luck explaining those.

I think this is actually a super fair point. Take a car crash, for example: when a person crashes a car, the first thing you have to do is figure out what their intention was, since that largely determines the consequences of their action. If the reason they drove into the side of a building was to cause damage on purpose, they'll be severely punished. On the other hand, if it turns out their brakes stopped working so they chose to hit a wall instead of a crowd of people, you'll have to ask more questions. You probably won't be able to figure out exactly why they chose to hit a specific part of the wall, but the key takeaway is that they were trying to minimize the potential for harm.

When it comes to ML models, you don't need to look into the 'intent', because the model is trained to behave the way we want it to behave. This means you can outright assume* the model believed that hitting the wall at a specific point would result in the best outcome for the passengers & pedestrians. If you train the model to make the best decision out of the set of all rational decisions, you know it'll make one of the best decisions it can (even if all the choices are poor).
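
To make that concrete, here's a toy sketch of what I mean by restricting the model to a vetted set of decisions (everything in it, names and harm numbers included, is invented for illustration):

Code:
# The model only ever picks from actions we pre-approved as rational,
# so even its worst choice is one of ours. Toy stand-ins throughout.

def predicted_harm(state, action):
    # Stand-in for a trained model's harm estimate (lower is better).
    toy_estimates = {"hit_wall_left": 0.30, "hit_wall_right": 0.25, "ditch": 0.40}
    return toy_estimates[action]

def choose_action(state, rational_actions):
    # Pick the vetted action with the lowest predicted harm.
    return min(rational_actions, key=lambda a: predicted_harm(state, a))

# The brake-failure case: every option is bad, but "swerve into the crowd"
# simply isn't in the set, so it can never be chosen.
print(choose_action(None, ["hit_wall_left", "hit_wall_right", "ditch"]))
# -> hit_wall_right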

There are lots of cases where people have made radical decisions in an instant that prevented death or injury; often their response will be "it just felt like the right thing to do". One example of this is Capt. Sully landing his plane on the Hudson.


* Assuming people extensively test their models; at the end of the day, badly written software is badly written software.
suchmoon (Legendary) | September 15, 2020, 05:59:58 PM | #22

Quote from: Cnut237
I don't know. I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

How much understanding do you really need or want? To use a banal example, most of us don't know how cars work, yet we drive them and even occasionally kill other people with them without questioning their basic functions.

OTOH we tend to be so extremely risk-averse these days that I'm quite certain any AI will be held to a much higher standard than humans are, so, for example, if an AI black box kills someone it will likely be crippled/limited/banned until some additional safeguards are built into it, even if it statistically performs better than humans.

Quote from: Cnut237
*I didn't, but then I'm not a fan of superhero movies. The X-Men ones were okay, but steadily deteriorated as the sequels progressed. And I have no idea how Kamala became Sansa Stark. A question for a different thread.

I have zero clue what this is about so I must be way older or way younger than you... I'd bet on the former Grin
Cnut237 (OP, Legendary) | September 16, 2020, 06:21:00 AM | #23

Quote from: suchmoon
I'm quite certain any AI will be held to a much higher standard than humans are, so, for example, if an AI black box kills someone it will likely be crippled/limited/banned
For obvious problems and errors, yes. I'm talking more about the subtle decisions that can have a profound impact, such as the exam grades example - where the unfairness was only identified because it was a human-devised algorithm.

Quote from: suchmoon
How much understanding do you really need or want? To use a banal example, most of us don't know how cars work, yet we drive them and even occasionally kill other people with them without questioning their basic functions.
I think we need the ability to detect unfairness and bias. Racial profiling, for example, can be insidious and difficult to detect. The more control over our lives that we grant to machine-based reasoning, the more we relinquish the ability to question that control. Again, I'm not a Luddite; I think that ML can bring us huge benefits. It's just that I'm also a fan of transparency and accountability.
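
To give a flavour of what I mean, you can at least audit the outputs even without opening the box. A crude sketch of the kind of check I have in mind (the data and group labels are invented):

Code:
# Toy demographic-parity check: compare approval rates across groups.
decisions = [  # (group, approved?) - a made-up audit log
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate("A") - approval_rate("B")
print(f"approval gap A vs B: {gap:.0%}")  # 50% here - a red flag worth investigating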
You have made a series of valid points, yet I'm not entirely convinced.

Quote from: suchmoon
I have zero clue what this is about so I must be way older or way younger than you... I'd bet on the former Grin
It's more my poor sense of humour than anything else. Typically when I make a joke IRL, no-one laughs or even acknowledges it. I then explain that I've made a joke, and explain why it is funny. This is followed by a couple of seconds of slightly tense silence, before someone changes the subject.






squatz1 (Legendary) | September 16, 2020, 11:36:16 PM | #24

Quote from: suchmoon on September 15, 2020, 05:59:58 PM
~

I feel like for a lot of these new uses of technology (think: AI lawyer), a real bar-certified lawyer is going to, obviously, have to govern what happens at the end of the process. You can't just have a virtual lawyer set something up and then trust it not to make a mistake when there are hundreds of thousands of laws that are ever-changing. Plus, for liability purposes, people would still want the 'final look' to be done by a real lawyer.

But slowly these machines will learn that last process and then be able to do everything by themselves. Pretty crazy, but that's the future.
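
Roughly the pipeline I have in mind - the machine drafts, a person signs off (a sketch; every function name here is invented):

Code:
# Human-in-the-loop gate: the AI lawyer drafts, a bar-certified human
# must approve before anything goes out. Toy stand-ins throughout.

def ai_draft_contract(request: str) -> str:
    return f"DRAFT contract for: {request}"  # stand-in for the AI lawyer

def lawyer_approves(draft: str) -> bool:
    # In reality: a certified lawyer reviews; here we just ask at the prompt.
    return input(f"{draft}\nApprove? [y/n] ").strip().lower() == "y"

def produce_contract(request: str):
    draft = ai_draft_contract(request)
    return draft if lawyer_approves(draft) else None  # liability stays human

print(produce_contract("apartment lease, New York"))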

But yes, we don't need to know everything that we interact with -- I don't know how email works and I use it every day. Same sort of thing with a lot of the tech we use daily.




mu_enrico (Copper Member, Legendary) | September 17, 2020, 02:24:55 PM | #25

Quote
This presents other issues. For example, the connection dropping, or someone sitting just out of shot feeding the pupil the correct answers.
Sorry, I forgot to follow this discussion because of too much of a gambling problem Grin
The issues you mention above are execution or implementation problems, so there are ways to reduce the risk until it becomes insignificant. These problems will always be there in any program, no matter how careful the strategic planning was.

I see the problem as more about the formulation phase (call it a fundamental flaw): why they decided to use such a mechanism, i.e., AI to predict the exam results, and to treat the prediction as the real results. Yes, I'm skeptical because I learned assembly and the 8051, so my view of a computer is still just a "calculator."

Let's compare this to a simulation to predict football (soccer) matches. AI can utilize big data (as big as it wants), using historical match results, weather, health reports, training reports, etc., and my belief is that the prediction won't be accurate. The real deal is the actual football match, or in this case, the real exam.
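
To put a number on that gap (a toy simulation with an invented probability):

Code:
# Even a model that is genuinely right about the odds still "loses" often:
# a 70% prediction means the other outcome happens about 3 times in 10.
import random

random.seed(8051)  # a nod to my favourite microcontroller
p_home_win = 0.70  # pretend the AI digested all the big data and says 70%

wins = sum(random.random() < p_home_win for _ in range(100))
print(f"favourite won {wins} of 100 simulated matches")
# ~70 - in the other ~30 the prediction is simply wrong, which is why
# you can't take the prediction as the real result.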

Cnut237 (OP, Legendary) | September 17, 2020, 02:33:56 PM | #26

Quote from: squatz1
~
Thanks for that. The lawyer bit is certainly a thought-provoking example.

Quote from: squatz1
I don't know how email works and I use it every day. Same sort of thing with a lot of the tech we use daily.
I'd probably make a distinction between 'functional' tech and the sort of machine learning that involves classifying and categorising humans. It's this latter type where I'm more worried about a loss of accountability and the difficulty (or impossibility) of determining how it arrives at its decisions. suchmoon's point about ML debuggers is a good one, but I'm not convinced that it doesn't just move the problem one step along.


Quote from: mu_enrico
I see the problem as more about the formulation phase (call it a fundamental flaw): why they decided to use such a mechanism, i.e., AI to predict the exam results, and to treat the prediction as the real results.
Machine learning is being used (and will increasingly be used) simply because it is cheaper and more efficient than having teams of humans doing the work. I think the question of whether it will be used to make decisions previously made by humans has effectively already been settled.
I am talking specifically about ML, rather than all forms of computer decision-making. ML is where the computer itself determines the methods by which it arrives at results. If ML decides that person A shouldn't be entitled to a credit card or a mortgage, person B shouldn't be considered for a job role, person C should be investigated by the police... then we see only the outputs; we have no way of unravelling how the machine reached these decisions, and so no way of determining whether they were justified. That's what I meant by the death of accountability.

Quote from: mu_enrico
too much of a gambling problem Grin
I feed your raw data into an ML black box. The computer decides to block your access to all gambling sites. Because this is ML, there is no human-devised algorithm that we can consult to determine why you were blocked. Perhaps the computer did it to protect you from losing money, perhaps it thought you were too good and wanted to protect the gambling sites from big losses, perhaps it thought you were cheating. We can't know why, and can't get any justification for the decision.






suchmoon (Legendary) | September 17, 2020, 03:32:48 PM | #27

Quote from: Cnut237
I think we need the ability to detect unfairness and bias. Racial profiling, for example, can be insidious and difficult to detect. The more control over our lives that we grant to machine-based reasoning, the more we relinquish the ability to question that control. Again, I'm not a Luddite; I think that ML can bring us huge benefits. It's just that I'm also a fan of transparency and accountability.

I'm gonna fall back on my favorite argument in this thread - ML can't be much worse than humans in this respect. Humans are biased, error-prone, lazy, distracted, physically weak, mentally unstable, etc., and yet we trust each other unconditionally 99% of the time. You could argue we have certain systems built around it to provide controls and accountability (like elections or multisig), but the fact is that most of the time these systems can't actually prevent a human from screwing up, particularly if they do it deliberately, whereas a computer system (ML or not) can have conditions on what it is allowed or not allowed to make a decision on. Granted, those conditions might have flaws too, but it's still a far more controllable situation IMO.
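
To sketch what I mean by "conditions" (all names and thresholds here are invented):

Code:
# A hard-coded gate around the model: it literally cannot act outside
# its lane, and anything uncertain goes to a human.

ALLOWED_ACTIONS = {"approve", "deny", "refer_to_human"}
CONFIDENCE_FLOOR = 0.90

def decide(model_action: str, confidence: float) -> str:
    if model_action not in ALLOWED_ACTIONS:
        return "refer_to_human"  # the model can't invent new powers
    if confidence < CONFIDENCE_FLOOR:
        return "refer_to_human"  # not sure enough -> a person decides
    return model_action

print(decide("deny", 0.80))     # -> refer_to_human
print(decide("approve", 0.97))  # -> approve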

On the flipside, ML can actually be used to help us detect unfairness and bring accountability where we as humans fail to. Not to get too far into the overpoliticized police situations, but keep in mind that a lot of the police departments that are having trouble with racial biases are actually using software that is supposed to help them avoid that. They track officer-citizen interactions, complaints, etc., and try to figure out which officers are outliers, need training, and so on. It's just that the software generally sucks: typically based on some hardcoded vision of what the "correct" conduct should be, developed by the lowest bidder, and for the most part reactive despite being used to "forecast" trouble. Lots of possibilities for well-designed ML models there, I think.
mu_enrico (Copper Member, Legendary) | September 17, 2020, 04:10:05 PM | #28

Quote from: Cnut237
If ML decides that person A shouldn't be entitled to a credit card or a mortgage, person B shouldn't be considered for a job role, person C should be investigated by the police... then we see only the outputs; we have no way of unravelling how the machine reached these decisions, and so no way of determining whether they were justified. That's what I meant by the death of accountability.
If this is your only concern, then simply add code to log every decision path, and you will be able to read back the "accountability."
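
For tree-style models that's literally built in - scikit-learn will hand you the path node by node (a sketch with made-up toy data; deep neural nets are a different story):

Code:
# Log the decision path of a small credit-decision tree.
from sklearn.tree import DecisionTreeClassifier

X = [[25, 1200], [40, 5200], [31, 800], [52, 9100]]  # [age, income] - invented
y = [0, 1, 0, 1]                                     # 0 = deny, 1 = approve
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

applicant = [[33, 3000]]
path = clf.decision_path(applicant)
for node in path.indices:
    feat = clf.tree_.feature[node]
    if feat >= 0:  # negative values mark a leaf
        print(f"node {node}: feature[{feat}] = {applicant[0][feat]} "
              f"vs threshold {clf.tree_.threshold[node]:.1f}")
print("decision:", "approve" if clf.predict(applicant)[0] else "deny")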

Quote from: Cnut237
I feed your raw data into an ML black box. The computer decides to block your access to all gambling sites. Because this is ML, there is no human-devised algorithm that we can consult to determine why you were blocked.
Up to this moment, there is no "computer decides" in the real world. The computer isn't responsible for anything; the company (or the creator) is. Even if the company uses AI to decide, they will still be held accountable and have to answer all the questions. Suppose the algorithm devised by the company harms a wealthy man: he can always sue the company.

Cnut237 (OP, Legendary) | September 18, 2020, 08:26:26 AM | #29 (merited by c_atlas)

Quote from: mu_enrico
If this is your only concern, then simply add code to log every decision path, and you will be able to read back the "accountability."
In theory, sure. In practice I think the numbers involved in reverse-engineering the decisions would be far too daunting for it to be practicable. ML is of course adaptive, so you'd have to go right back to the beginning and understand every input it's ever received, and all subsequent reasoning. Particularly if we think about neural nets as an application of ML.
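
A quick back-of-envelope on "daunting" (toy layer sizes, biases ignored):

Code:
# Even a small fully-connected net has hundreds of thousands of weights,
# and each one is the product of the entire training history.
layers = [100, 512, 512, 512, 10]  # inputs -> 3 hidden layers -> outputs
weights = sum(a * b for a, b in zip(layers, layers[1:]))
print(f"{weights:,} weights")  # 580,608
# "Reading the log" would mean replaying every update that ever touched
# each of them - and that's before we get to nets with billions.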

Quote from: suchmoon
On the flipside, ML can actually be used to help us detect unfairness and bring accountability where we as humans fail to.
Another good point, thanks.



And thanks to everyone who has contributed to this thread so far; it's appreciated. I do still believe that loss of accountability is a valid issue, but you have all made many good points, and have helped me reach a more nuanced perspective. Even where I've disagreed, you've still turned my black-and-white into shades of grey, and I can see now that the situation is perhaps not as bleak as I'd originally envisaged, and that - as with most things - there are positives as well as negatives.






Spendulus (Legendary) | September 21, 2020, 10:14:26 PM | #30

Quote
the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis

Maybe last millennium. Tongue In the early 2000s, computer scientists took what we know about human learning and created Machine Learning (it's in the thread title too). Now, with each exam analyzed anywhere by any system, the model and the computer learn - and get better. By 2050 there should be nothing a computer cannot do better than our best humans.
 
I disagree. We can do stupid far better than any machine. As they progress, we shall race ahead of them in Excellence in Stupid.