Bitcoin Forum

Other => Politics & Society => Topic started by: Cnut237 on September 11, 2020, 07:33:54 AM



Title: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 11, 2020, 07:33:54 AM
Recently in the UK there has been a furore over algorithm-determined school exam results. The pandemic meant that pupils couldn't sit exams, and an algorithm was devised that determined what results each pupil would get. However, many pupils, particularly those from disadvantaged backgrounds, received worse than predicted results, whereas pupils from more affluent backgrounds suffered no ill effects. There were widespread protests at the perceived unfairness, and the algorithm was hauled out into the open and dissected. The formula was quite rudimentary, and the inbuilt bias perfectly clear for anyone with a basic grasp of maths to see. The outcome was that the protests were upheld, and the unfair results overturned.

The reason this could happen is that the algorithm was devised by people. Their assumptions and their methods could be unpicked and understood. However, the trend, now that we are in the era of big data, is towards machine-learning. Computers can devise much more efficient processes than can humans. If the same thing had happened in a few years' time, it is quite likely that the grades would have been determined by machine-learning, with initial data fed in, results coming out, and no human understanding how the processing from input to output works. Indeed, the computer itself would be unable to explain it (because we have not yet reached that level of AI).

This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives. Humans can argue convincingly that they have simply input some initial parameters, and had no part in the decision-making. But those machine-learning decisions can't be pulled into the open, can't be dissected, can't be understood. Machine-learning without sentient computers means that all accountability is thrown away. No-one is responsible for anything.

Now this may change once AI reaches a sufficient level that a computer can explain its reasoning in terms that humans can understand... but that is years or decades away, and until we reach that point, the possibilities look quite scary. We live in a competitive world, and the advantages of machine-learning are too tempting for countries and companies to pass up. ML is pursued fervently, no matter the implications. Will we really throw away accountability for (and understanding of) a lot of really important decisions?


Title: Re: Machine Learning and the Death of Accountability
Post by: mu_enrico on September 11, 2020, 08:11:02 AM
Removing exams and replacing them with trend analysis + teacher's opinion? It doesn't sound right!

An exam is the most objective way of measuring academic knowledge at a particular time. Well, judging from my experience, where I only studied hard in the months or weeks before the assessment (and spent most of my time as a clown), I'd be a pedicab driver by now if that system had existed back then.

The thing is, the teacher's opinion is subjective: garbage in, garbage out, and the data fed into the algorithm can also be garbage and subjective. The coders are humans with limited knowledge and wisdom, and thus the product is far from perfect at determining who gets what. What the algorithm did was forecasting, and reality would likely differ from the forecast.

Just do online exams with video conference apps, no need for this madness.


Title: Re: Machine Learning and the Death of Accountability
Post by: Coyster on September 11, 2020, 09:03:13 AM
No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, the student's social life, level of intelligence, IQ, background, etc.? I've seen good students go into an examination unprepared and fail, and likewise bad students change their mindset, study hard and come out on top to everyone's surprise.

If a student doesn't take an examination themselves, then there's no algorithm or computer that can bring out a result that'll be accurate; the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis, but there's no fixed standard for a student in every exam; they can be bad today and excellent tomorrow.


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 11, 2020, 09:34:46 AM
Just do online exams with video conference apps, no need for this madness.
This presents other issues. For example, the connection dropping, or someone sitting just out of shot feeding the pupil the correct answers.

No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade
In this instance there was bias based on the previous performance of the school, such that good students in badly performing schools were unfairly penalised, and bad students at good schools escaped unscathed. There's a quick overview here (https://www.bbc.co.uk/news/explainers-53807730).



the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it
This really is the whole point of my thread. At the moment, in situations like that of the exam grades in my example, an unfair algorithm can be unpicked and assessed for bias, and corrections can be made or the whole thing can be thrown out. There is a degree of transparency to the process. What I am suggesting is that in the very near future, we are likely to lose that transparency, that our lives and life-chances will in part be determined by machine-learning algorithms where there is no accountability, and no possibility of even understanding how the results were arrived at. In the exams example, it was a simple human algorithm. If instead the outcomes had been determined by machine-learning, there may have been a suspicion of unfairness, but no way to prove if that was actually the case, and no evidence available to challenge the decision.


Title: Re: Machine Learning and the Death of Accountability
Post by: squatz1 on September 13, 2020, 11:24:36 PM
No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, the student's social life, level of intelligence, IQ, background, etc.? I've seen good students go into an examination unprepared and fail, and likewise bad students change their mindset, study hard and come out on top to everyone's surprise.

If a student doesn't take an examination themselves, then there's no algorithm or computer that can bring out a result that'll be accurate; the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis, but there's no fixed standard for a student in every exam; they can be bad today and excellent tomorrow.

+1 to this.

Not exactly sure why this was even allowed in any school setting, but I would LOVE to see a source on this. There's literally no way that parents / children were happy that this was being used. At first it sounds like a good idea, until you notice that the machine learning technology is HEAVILY reliant on teacher input / past performance to come to a conclusion.

But yeah, I had tons of friends who were horrible at regular class and great test takers -- and I've also seen the opposite. Not sure why this would be used though.

This WON'T be the norm in education though; people aren't happy when you give them a bad grade and then you try to blame a computer.....lol


Title: Re: Machine Learning and the Death of Accountability
Post by: c_atlas on September 14, 2020, 02:53:07 AM
Funny how you always hear about these machine learning experiments going wrong. Does anyone remember Microsoft's Tay bot on Twitter? How about all the stories like 'AI can't recognize black faces' or 'AI converts speech-to-text better when the speaker is white (https://www.theregister.com/2020/03/24/speech_to_text_ai_system/) than when they're black', or Amazon's recruiter AI that didn't like women (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G).

I'm not really a fan of machine learning and AI. It isn't that it's not interesting or desirable; I just think it's sad that people think we've reached the limits of human ingenuity and we feel that we have to develop something to think for us. I'm not sure that people will ever be ready for a world where AI can make perfectly unbiased decisions; keep in mind most decisions result in winners and losers. When people make the decision, they can come to a compromise; when a machine designed to make a clear-cut choice has to decide, by design it has to avoid letting the loser down easy, otherwise it would be biased in favour of the loser (in this case, the losers might be kids who 'deserve' [i.e. are predicted] to fail their class).

As a side note, assigning students scores based on machine learning predictions is just ridiculous. Whoever thought this would work should be shamed publicly.


Title: Re: Machine Learning and the Death of Accountability
Post by: squatz1 on September 14, 2020, 02:59:28 AM
Funny how you always hear about these machine learning experiments going wrong. Does anyone remember Microsoft's Tay bot on Twitter? How about all the stories like 'AI can't recognize black faces' or 'AI converts speech-to-text better when the speaker is white (https://www.theregister.com/2020/03/24/speech_to_text_ai_system/) than when they're black', or Amazon's recruiter AI that didn't like women (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G).

I'm not really a fan of machine learning and AI. It isn't that it's not interesting or desirable; I just think it's sad that people think we've reached the limits of human ingenuity and we feel that we have to develop something to think for us. I'm not sure that people will ever be ready for a world where AI can make perfectly unbiased decisions; keep in mind most decisions result in winners and losers. When people make the decision, they can come to a compromise; when a machine designed to make a clear-cut choice has to decide, by design it has to avoid letting the loser down easy, otherwise it would be biased in favour of the loser (in this case, the losers might be kids who 'deserve' [i.e. are predicted] to fail their class).

As a side note, assigning students scores based on machine learning predictions is just ridiculous. Whoever thought this would work should be shamed publicly.

I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed off good grades to people who may not have performed as well when the big test came.

But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant that it was skewed towards men. Here's the quote from the article:

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.


So yeah, garbage in is garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.

There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example though.


Title: Re: Machine Learning and the Death of Accountability
Post by: Vod on September 14, 2020, 03:40:53 AM
the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis

Maybe last millennium. :P In the early 2000s, computer scientists took what we know about human learning, and created Machine Learning (it's in the thread title too). Now, with each exam analyzed anywhere by any system, the model and the computer learn - and get better. By 2050 there should be nothing a computer cannot do better than our best humans.

What Coyster describes in the quote above is not Machine Learning - ML is more than programming, much like we are more than our instincts.  


Title: Re: Machine Learning and the Death of Accountability
Post by: c_atlas on September 14, 2020, 04:03:01 AM
I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed off good grades to people who may not have performed as well when the big test came.
On the other hand, students who cheated on smaller assessments like assignments, graded homework, quizzes, etc may have been projected to finish with an exceptional grade, when in reality they would have been knocked down to a satisfactory one after failing their tests.

But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant that it was skewed towards men. Here's the quote from the article:
Sure, I think there are lots of great applications of AI/ML, but I don't think those fields should receive as much attention as they do. At the end of the day, most of our modern AI is just curve fitting. Neural networks are turning out to be great with computer vision and image recognition, which gives us cool stuff like level 2 autonomous vehicles, some interesting facial recognition technology, and maybe in the short term, better targeted advertising. My worry is that people will start using 'AI' as a default solution when some other statistical or even pure mathematical approach may work better.
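To make the 'curve fitting' point concrete, this is roughly all that a lot of applied 'AI' amounts to under the hood. A toy sketch of my own; the hours/scores numbers are invented and it only needs numpy:

Code:
# What a lot of "AI" boils down to: fit a curve to past data, then extrapolate.
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6])
exam_score    = np.array([35, 48, 55, 63, 70, 74])

coeffs = np.polyfit(hours_studied, exam_score, deg=1)   # "training"
model = np.poly1d(coeffs)                               # the "model"
print(model(8))                                         # "inference" for an unseen pupil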

AI shouldn't be a substitute for thinking.

So yeah, garbage in is garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.
There's still the open-ended question of when your model is good enough. When have you balanced it out? Your model might give you perfect candidates, but since you can't be certain your model gives you the best candidates, you're still going to have to have a human intervene and provide their own input.

Then there's the question of what makes a good candidate. Suppose you decide the fairest way to treat all genders equally is to completely ignore gender altogether. If you select for people who you would like to work with, and you would prefer to hire people who aren't combative (since they'll do what they're told and won't ask for a raise as often), your model may end up preferring women. It's not that the model is 'flawed'; it's simply what it gave you based on your desire to have agreeable coworkers.

Pretend you can somehow build a model that can accurately rank a person by intelligence. Assume the distribution of applicants follows some type of normal distribution with barely any applicants on the tail ends. Now, when you run your model on your applicant dataset, you're surprised because it seems to skew heavily in favour of male applicants. What went wrong? It wasn't the model, it was the applicants themselves. The male and female intelligence distributions are extremely similar: you have lots of people in the middle around the average and barely any really stupid or really smart people. Your applicant pool is a subset of the set of all people, and since technology is known to be dominated by men, you get a lot more male applicants than female applicants. The result is that you're more likely to find one of the really smart people in the pool of men than you are in the pool of women.
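A quick toy simulation of that argument (the 900/100 split and the numbers are made up, and the 'score' is just a stand-in):

Code:
# Same ability distribution for both pools, but the male pool is 9x larger,
# so most of the extreme top scorers come from it purely by head count.
import numpy as np
rng = np.random.default_rng(0)

men   = rng.normal(100, 15, size=900)   # 900 male applicants
women = rng.normal(100, 15, size=100)   # 100 female applicants, identical distribution

cutoff = 130                            # "really smart" threshold
print((men > cutoff).sum(), (women > cutoff).sum())
# expect roughly a 9:1 ratio above the cutoff, e.g. ~20 vs ~2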

I guess the question to ask is then: how can you trust your model to map your inputs to the desired outputs when you aren't even sure what the desired output is?

There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example though.
How long before the lawyers get a law signed saying AI lawyers are illegal  ;)


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 14, 2020, 04:53:52 AM
This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives.

I'm not so certain that it's really that significant. I'm old enough to remember writing assembly code, so that was obviously the most direct responsibility a person could have over giving instructions to a computer, but you know what - it's a great thing that we generally don't do that anymore, outside of spaceships perhaps. Even friggin' dishwasher software is probably written using some sort of toolkit these days, taking care of mundane stuff like memory allocation, which humans would definitely mess up.

Machine learning can definitely create issues if used by incompetent dolts but on the other hand - used responsibly it can solve massive problems that are simply unsolvable otherwise. I think we'll find a balance where we will use proven well-tested models as building blocks for more complex systems and we'll learn to figure out whom to blame... just like we don't blame Microsoft when we write buggy code in C# or feed garbage data to well-intended code.


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 14, 2020, 08:16:02 AM
Machine learning can definitely create issues if used by incompetent dolts but on the other hand - used responsibly it can solve massive problems that are simply unsolvable otherwise. I think we'll find a balance where we will use proven well-tested models as building blocks for more complex systems and we'll learn to figure out whom to blame... just like we don't blame Microsoft when we write buggy code in C# or feed garbage data to well-intended code.

I'm strongly in favour of machine learning, and have dabbled in it myself. I should really have been clearer, but I suppose the point I'm making is all about the distinction between machines that arrive at conclusions based on initial rules and conditions supplied by humans, and machines that arrive - through impenetrable reasoning - at their own conclusions. I'd agree that we will eventually figure out who to blame, but it's an issue that needs addressing sooner rather than later. Similar to the situation with driverless cars.

A decent example is probably the iterations of AlphaGo, Google's Go-playing computer. The initial version was trained on huge numbers of human games, became exceptionally proficient, and beat the world champion. It does to an extent use creative new strategies, but the overall style is explicable:

Quote
Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative".[64] AlphaGo's playing style strongly favours greater probability of winning by fewer points over lesser probability of winning by more points.[17] Its strategy of maximising its probability of winning is distinct from what human players tend to do which is to maximise territorial gains, and explains some of its odd-looking moves.[65] It makes a lot of opening moves that have never or seldom been made by humans, while avoiding many second-line opening moves that human players like to make. It likes to use shoulder hits, especially if the opponent is over concentrated.
https://en.wikipedia.org/wiki/AlphaGo#Style_of_play

But then we look at the later version, AlphaGo Zero. This version was entirely self-taught, with no input from human games at all. Playing against the earlier version, it won 100-0. The removal of training from human matches meant that the machine's creativity was no longer constrained, and it developed some astonishingly effective strategies that were completely unknown to human players.

Quote
AlphaGo Zero’s training involved four TPUs and a single neural network that initially knew nothing about go. The AI learned without supervision—it simply played against itself, and soon was able to anticipate its own moves and how they would affect a game’s outcome. “This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” according to a blog post authored by DeepMind co-founder Demis Hassabis and David Silver, who leads the company’s reinforcement learning research group.
https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/
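To give a flavour of what 'no input from human games' means in practice, here is a crude self-play skeleton I've sketched myself. It is nothing like DeepMind's actual code; the Net class and the rule callbacks are placeholders I've invented, and the 'learning' step is deliberately primitive:

Code:
# Toy self-play loop: the only human input is the rules (legal_moves/apply_move).
# Everything else the "net" picks up by playing itself and being nudged towards
# whatever it did in games it went on to win. States must be hashable (e.g. tuples).
import random

class Net:
    """Stand-in for the policy/value network: scores candidate moves."""
    def __init__(self):
        self.weights = {}

    def score(self, state, move):
        return self.weights.get((state, move), 0.0) + random.random() * 1e-3

    def update(self, history, winner):
        # Crude reinforcement step: winner's moves up, loser's moves down.
        for player, state, move in history:
            sign = 1.0 if player == winner else -1.0
            self.weights[(state, move)] = self.weights.get((state, move), 0.0) + 0.1 * sign

def self_play(net, legal_moves, apply_move, score_final, start, max_moves=200):
    state, player, history = start, 0, []
    for _ in range(max_moves):
        moves = legal_moves(state, player)
        if not moves:
            break
        move = max(moves, key=lambda m: net.score(state, m))  # greedy for simplicity
        history.append((player, state, move))
        state = apply_move(state, player, move)
        player = 1 - player
    return history, score_final(state)     # score_final returns the winning player

def train(net, rules, games=10000):
    for _ in range(games):
        history, winner = self_play(net, *rules)
        net.update(history, winner)

Even in this toy version the point stands: we could log every weight it ends up with, but reading a human-comprehensible strategy out of them is another matter entirely.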

I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.

Once again, I am in favour of ML as it can bring huge advances, and beneath everything it is just more efficient. I'm just looking ahead to some of the problems we may face, and wondering how these might be resolved.


---


At first it sounds like a good idea, until you notice that the machine learning technology is HEAVILY reliant on teacher input / past performance to come to a conclusion.
[...]
This WON'T be the norm in education though; people aren't happy when you give them a bad grade and then you try to blame a computer.....lol

I was trying to say (but not being very clear about it) that in this example, there was no machine-learning, the algorithm was devised entirely by humans, which meant that when people started to question the results, the algorithm was looked at and found to contain inbuilt bias (for info, the algorithm is below). But when, instead of this, we are using machine-devised algorithms and machine-learning, and there is no possibility of unpicking the algorithm (because we won't understand how the computer has reached its conclusions), then how do we determine unfairness or accountability? Machine-learning is black box reasoning, and impenetrable to humans. Whose fault is it when it outputs unfair results? And how do we prove that the results are unfair?

Quote
Synopsis
The examination centre provided a list of teacher predicted grades, called 'centre assessed grades' (CAGs)
The students were listed in rank order with no ties.
For large cohorts (over 15):
For exams with a large cohort, the previous results of the centre were consulted. For each of the three previous years, the number of students getting each grade (A* to U) is noted. A percentage average is taken.
This distribution is then applied to the current year's students, irrespective of their individual CAG.
A further standardisation adjustment could be made on the basis of previous personal historic data: at A level this could be a GCSE result, at GCSE this could be a Key Stage 2 SAT.
For small cohorts and minority-interest exams (under 15):
The individual CAG is used unchanged.

The formulas
For large schools with n >= 15:
P_kj = (1 - r_j) C_kj + r_j (C_kj + q_kj - p_kj)
For small schools with n < 15:
P_kj = CAG

The variables
n is the number of pupils in the subject being assessed
k is a specific grade
j indicates the school
C_kj is the historical distribution of grade k at the school (centre) j over the last three years, 2017-19.
That tells us already that the history of the school is very important to Ofqual. The grades other pupils got in previous years are a huge determinant of the grades this year's pupils were given in 2020. The regulator argues this is a plausible assumption, but for many students it is also an intrinsically unfair one: the grades they are given are decided by the ability of pupils they may have never met.
q_kj is the predicted grade distribution based on the class's prior attainment at GCSEs. A class with mostly 9s (the top grade) at GCSE will get a lot of predicted A*s; a class with mostly 1s at GCSE will get a lot of predicted Us.
p_kj is the predicted grade distribution of the previous years' cohorts, based on their GCSEs. You need to know that because, if previous years were predicted to do poorly and did well, then this year might do the same.

r_j is the fraction of pupils in the class for whom historical data is available. If you can perfectly track down every GCSE result, then it is 1; if you cannot track down any, it is 0.
CAG is the centre assessed grade.
P_kj is the result, which is the grade distribution for each grade k at each school j.
https://en.wikipedia.org/wiki/Ofqual_exam_results_algorithm
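To show how simple (and how school-history-driven) that really is, here's a rough sketch of it in code. This is my own reading of the published formula; the grade list, the toy distributions and the rounding are my assumptions, not Ofqual's actual implementation:

Code:
# P_kj = (1 - r_j)*C_kj + r_j*(C_kj + q_kj - p_kj), applied per grade k.
# Distributions are dicts mapping grade -> fraction of pupils (toy values).

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def predicted_distribution(C_kj, q_kj, p_kj, r_j):
    return {k: (1 - r_j) * C_kj[k] + r_j * (C_kj[k] + q_kj[k] - p_kj[k])
            for k in GRADES}

def grade_cohort(cags_in_rank_order, C_kj, q_kj, p_kj, r_j):
    n = len(cags_in_rank_order)
    if n < 15:                              # small cohort: CAGs used unchanged
        return list(cags_in_rank_order)
    dist = predicted_distribution(C_kj, q_kj, p_kj, r_j)
    counts = {k: round(dist[k] * n) for k in GRADES}
    results = []
    for k in GRADES:                        # hand grades out down the rank order,
        results.extend([k] * counts[k])     # irrespective of each pupil's own CAG
    return (results + ["U"] * n)[:n]

Notice that in the large-cohort branch the pupils' own CAGs never appear at all; only the school's history and the cohort's prior attainment do. That is the bias people spotted, and they could only spot it because the formula was published and human-readable.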


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 14, 2020, 12:24:05 PM
I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.

I'm pretty sure we can still test and debug ML models, so deliberately not doing that and evading responsibility wouldn't be much different from screwing up human-written code.


Title: Re: Machine Learning and the Death of Accountability
Post by: c_atlas on September 14, 2020, 12:56:59 PM

...the point I'm making is all about the distinction between machines that arrive at conclusions based on initial rules and conditions supplied by humans, and machines that arrive - through impenetrable reasoning - at their own conclusions....

...But then we look at the later version, AlphaGo Zero. This version was entirely self-taught, with no input from human games at all.

I would rephrase that to say it was almost entirely self-taught; keep in mind, humans still supplied the rules of the game.

Quote
The neural network initially knew nothing about Go beyond the rules.
https://en.wikipedia.org/wiki/AlphaGo_Zero#Training

To some extent, you can only take advantage of reinforcement learning if you can gamify your problem, which isn't as easy as it may seem. The reason AI experts are using Go, StarCraft, etc. as algorithm training grounds is that gamification of the real world is incredibly challenging. It's super easy to feed a neural net the rules and success criteria for Go because it's well defined. It's a lot harder to define success criteria in problems like speech recognition, self-driving cars, image recognition, and so on.

Side note: everyone should watch the AlphaGo movie.

I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.

I'm not so sure that's the case. While it's true we may not know precisely why the model made a certain decision, we can probably still quantify the incentives behind the decision. For example, AlphaGo Zero knew the game state before and after it made a "1 in 10,000" move, so the programmers knew the success probability before and after that move.

These models always think in terms of probability. If you're analysing the outcome of an autonomous vehicle collision, you can at least piece together the fact that the model made the car turn exactly x amount at y speed because that action was computed to result in the highest survival rate for the passenger out of all actions in the search space.
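In toy form, that's the kind of record you'd expect to be able to reconstruct (the actions and probabilities here are invented, obviously; a real planner would estimate them from its world model):

Code:
# "Pick the action with the highest computed survival probability" -- the
# chosen action plus these numbers are exactly what you'd want logged.
candidate_actions = {
    "brake straight":      0.62,
    "swerve left 10 deg":  0.71,
    "swerve right 20 deg": 0.55,
}
best = max(candidate_actions, key=candidate_actions.get)
print(best, candidate_actions[best])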


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 14, 2020, 01:24:33 PM
AlphaGo Zero knew the game state before and after it made a "1 in 10,000" move, so the programmers knew the success probability before and after that move.
I'm not convinced by this argument. The point with AG Zero is that it learnt by itself; there was no strategy input from human Go experts, and indeed no interaction with human Go matches at all. It learnt by itself, strategised by itself, and was able to trounce humans, 'human-taught' AI, and indeed ML AI that had human Go matches as a primer (previous iterations of the AG machine). AG Zero determines success/failure probabilities that humans can't process.

I'm pretty sure we can still test and debug ML models
In response to this, and as a mild qualifier to my response to c_atlas's point, I'd argue that yes, we can to an extent, and in certain simple situations. But if we are talking about complex neural nets, and taking AG Zero as an example of where the technology is headed, I'd say this isn't always the case now, and most certainly won't be the case in the future. Taking the point to an absurd extreme, if we consider the eventual aim of sentient AI, then there's no way that humans have the ability to simply debug it. I do think that the 'black box' problem of ML is real.


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 14, 2020, 01:56:11 PM
In response to this, and as a mild qualifier to my response to c_atlas's point, I'd argue that yes, we can to an extent, and in certain simple situations. But if we are talking about complex neural nets, and taking AG Zero as an example of where the technology is headed, I'd say this isn't always the case now, and most certainly won't be the case in the future. Taking the point to an absurd extreme, if we consider the eventual aim of sentient AI, then there's no way that humans have the ability to simply debug it. I do think that the 'black box' problem of ML is real.

Then we would have ML models testing and debugging each other perhaps? ;D

Think of it this way: humans can and do make fucked up irrational decisions all the time and good luck explaining those. We spend enormous amounts of effort to train humans for certain tasks and bad stuff still happens, and then we try to figure out how to reduce the probability of that happening again. Is ML going to be really that much worse? I mean it will probably screw up in different ways, but we also have different (and arguably more robust) tools to deal with that than we have for dealing with human decision-making processes.


Title: Re: Machine Learning and the Death of Accountability
Post by: Dorodha on September 14, 2020, 02:40:05 PM
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 14, 2020, 05:46:57 PM
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.

I couldn't agree more. If we can predict the future - fuck accountability, let's just all buy winning lottery tickets.

OTOH, the above post is a good example of how far we still need to go with simple things like language translation or whatever word-salad generator this person was using, so the ML apocalypse is probably still quite far away.


Title: Re: Machine Learning and the Death of Accountability
Post by: c_atlas on September 14, 2020, 06:05:33 PM
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.

I couldn't agree more. If we can predict the future - fuck accountability, let's just all buy winning lottery tickets.

OTOH, the above post is a good example of how far we still need to go with simple things like language translation or whatever word-salad generator this person was using, so the ML apocalypse is probably still quite far away.

I have a feeling the word-salad generator was just their brain. In that case, only Neuralink can help.


Title: Re: Machine Learning and the Death of Accountability
Post by: squatz1 on September 14, 2020, 07:29:43 PM
I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.

I'm pretty sure we can still test and debug ML models, so deliberately not doing that and evading responsibility wouldn't be much different from screwing up human-written code.

We CAN totally test and debug models, but for the example of education, is it really worth it? I feel like they're trying to solve a 'problem' that isn't really a 'problem'. Grading tests and rating performance isn't a large issue in education (at least in my own personal opinion; I guess researchers could disagree with me).

Quote
I was trying to say (but not being very clear about it) that in this example, there was no machine-learning, the algorithm was devised entirely by humans, which meant that when people started to question the results, the algorithm was looked at and found to contain inbuilt bias (for info, the algorithm is below). But when, instead of this, we are using machine-devised algorithms and machine-learning, and there is no possibility of unpicking the algorithm (because we won't understand how the computer has reached its conclusions), then how do we determine unfairness or accountability? Machine-learning is black box reasoning, and impenetrable to humans. Whose fault is it when it outputs unfair results? And how do we prove that the results are unfair?

Ah I see, this wasn't machine learning to begin with. Just an algorithm which was HEAVILY flawed from the beginning. But yes, I would agree with you -- things could get so complex when it comes to the usage of machine learning that people are literally unable to piece together why a particular decision was made and how they can 'mess with' data to fix it.

It does make sense to use this for certain tasks, but predicting grades in education is not something I would think needs to be fixed.


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 15, 2020, 04:47:57 PM
Then we would have ML models testing and debugging each other perhaps? ;D

I don't know. I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.


*I didn't, but then I'm not a fan of superhero movies. The X-Men ones were okay, but steadily deteriorated as the sequels progressed. And I have no idea how Kamala (https://memory-alpha.fandom.com/wiki/Kamala) became Sansa Stark. A question for a different thread.


Title: Re: Machine Learning and the Death of Accountability
Post by: c_atlas on September 15, 2020, 05:28:23 PM
I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

You're missing a key point that suchmoon brought up

...humans can and do make fucked up irrational decisions all the time and good luck explaining those.

I think this is actually a super fair point. Take a car crash for example: when a person crashes a car, the first thing you have to do is figure out what their intention was, since that largely determines the consequences for their action. If the reason they drove into the side of a building was to purposely cause damage, they'll be severely punished. On the other hand, if it turns out their brakes stopped working so they chose to hit a wall instead of a crowd of people, you'll have to ask more questions. You probably won't be able to figure out exactly why they chose to hit a specific part of a wall, but the key takeaway is that they were trying to minimize the potential for harm.

When it comes to ML models, you don't need to look into the 'intent' because the model is trained to behave the way we want it to behave. This means you can outright assume* the model believed that hitting the wall at a specific point would result in the most successful outcome for the passengers & pedestrians. If you train the model to make the best decision out of the set of all rational decisions, you know it'll make one of the best decisions it can (even if all choices are poor).

There are lots of cases where people have made radical decisions in an instant which resulted in the prevention of death or injury, often their response will be "it just felt like the right thing to do". One example of this is Capt. Sully landing his plane in the Hudson.


* assuming people extensively test their models; at the end of the day, badly written software is badly written software.


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 15, 2020, 05:59:58 PM
I don't know. I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

How much understanding do you really need or want? To use a banal example, most of us don't know how cars work yet we drive them and even occasionally kill other people with them without questioning their basic functions.

OTOH we tend to be so extremely risk-averse these days that I'm quite certain any AI will be held to a much higher standard than humans are, so for example if an AI black box kills someone it will likely be crippled/limited/banned until some additional safeguards are built into it, even if statistically it performs better than humans.

*I didn't, but then I'm not a fan of superhero movies. The X-Men ones were okay, but steadily deteriorated as the sequels progressed. And I have no idea how Kamala (https://memory-alpha.fandom.com/wiki/Kamala) became Sansa Stark. A question for a different thread.

I have zero clue what this is about so I must be way older or way younger than you... I'd bet on the former ;D


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 16, 2020, 06:21:00 AM
I'm quite certain any AI will be held to a much higher standard than humans are, so for example if an AI black box kills someone it will likely be crippled/limited/banned
For obvious problems and errors, yes. I'm talking more about the subtle decisions that can have a profound impact, such as the exam grades example - where the unfairness was only identified because it was a human-devised algorithm.

How much understanding do you really need or want? To use a banal example, most of us don't know how cars work yet we drive them and even occasionally kill other people with them without questioning their basic functions.
I think we need the ability to detect unfairness and bias. Racial profiling for example can be insidious, and difficult to determine. The more control over our lives that we grant to machine-based reasoning, the more we relinquish the ability to question that control. Again, I'm not a Luddite, I think that ML can bring us huge benefits. It's just that I'm also a fan of transparency and accountability.
You have made a series of valid points, yet I'm not entirely convinced.

I have zero clue what this is about so I must be way older or way younger than you... I'd bet on the former ;D
It's more my poor sense of humour than anything else. Typically when I make a joke IRL, no-one laughs or even acknowledges it. I then explain that I've made a joke, and explain why it is funny. This is followed by a couple of seconds of slightly tense silence, before someone changes the subject.


Title: Re: Machine Learning and the Death of Accountability
Post by: squatz1 on September 16, 2020, 11:36:16 PM
I don't know. I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think it's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

How much understanding do you really need or want? To use a banal example, most of us don't know how cars work yet we drive them and even occasionally kill other people with them without questioning their basic functions.

OTOH we tend to be so extremely risk-averse these days that I'm quite certain any AI will be held to a much higher standard than humans are, so for example if an AI black box kills someone it will likely be crippled/limited/banned until some additional safeguards are built into it, even if statistically it performs better than humans.

*I didn't, but then I'm not a fan of superhero movies. The X-Men ones were okay, but steadily deteriorated as the sequels progressed. And I have no idea how Kamala (https://memory-alpha.fandom.com/wiki/Kamala) became Sansa Stark. A question for a different thread.

I have zero clue what this is about so I must be way older or way younger than you... I'd bet on the former ;D

I feel like for a lot of these new uses of technology (think, AI Lawyer) a real certified lawyer (certified by the bar) is going to, obviously, have to govern what happens at the end of the process. Can't just have a virtual lawyer set something up and then trust it not to make a mistake when there are hundreds of thousands of laws that are ever-changing. Plus for liability purposes, people would still want the 'final look' to be done by a real lawyer.

But slowly these machines will learn that last process and then be able to do everything by themselves. Pretty crazy, but that's the future.

But yes, we don't need to know everything that we interact with -- I don't know how email works and I use it every day. Same sort of thing with a large amount of the pieces of tech we use daily.


Title: Re: Machine Learning and the Death of Accountability
Post by: mu_enrico on September 17, 2020, 02:24:55 PM
This presents other issues. For example, the connection dropping, or someone sitting just out of shot feeding the pupil the correct answers.
Sorry, I forgot to follow this discussion because of too much gambling problem ;D
The issues you mention above are execution or implementation problems, so there are ways to reduce the risk so that it becomes insignificant. These problems will always be there in any program, no matter how careful your strategic planning (https://en.wikipedia.org/wiki/Strategic_planning) was.

I see the problem as more about the formulation phase (call it a fundamental flaw): why they decided to use such a mechanism, i.e., AI for predicting the exam results, and to treat the prediction like the real results. Yes, I'm skeptical because I learned assembly and the 8051, so my view of a computer is still just a "calculator."

Let's compare this to a simulation to predict football (soccer) matches. AI can utilize big data (as big as it wants), using historical match results, weather, health reports, training reports, etc., and my belief is that the prediction won't be accurate. The real deal is the actual football match, or in this case, the real exam.


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 17, 2020, 02:33:56 PM
~
Thanks for that. The lawyer bit is certainly a thought-provoking example.

I don't know how email works and I use it every day. Same sort of thing with a large amount of the pieces of tech we use daily.
I'd probably make a distinction between 'functional' tech and the sort of machine-learning that involves classifying and categorising humans. It's this latter type where I'm more worried about a loss of accountability and the difficulty (or impossibility) of determining how it arrives at its decisions. SuchMoon's point about ML debuggers is a good one, but I'm not convinced that it doesn't just move the problem one step along.


I see the problem as more about the formulation phase (call it a fundamental flaw): why they decided to use such a mechanism, i.e., AI for predicting the exam results, and to treat the prediction like the real results.
Machine-learning is being used (and will increasingly be used) simply because it is cheaper and more efficient than having teams of humans doing the work. I think the question of whether it will be used to make decisions previously made by humans has effectively already been settled.
I am talking specifically about ML, rather than all forms of computer decision-making. ML is where the computer itself determines the methods by which it arrives at results. If ML decides that person A shouldn't be entitled to a credit card or a mortgage, person B shouldn't be considered for a job role, person C should be investigated by the police... then we only see these outputs, we have no way of unravelling how the machine reached these decisions, and so no way of determining whether the decisions were justified. That's what I meant by the death of accountability.

too much gambling problem ;D
I feed your raw data into an ML black box. The computer decides to block your access to all gambling sites. Because this is ML, there is no human-devised algorithm that we can consult to determine why you were blocked. Perhaps the computer did it to protect you from losing money, perhaps it thought you were too good and wanted to protect the gambling sites from big losses, perhaps it thought you were cheating. We can't know why, can't get any justification for the decision.


Title: Re: Machine Learning and the Death of Accountability
Post by: suchmoon on September 17, 2020, 03:32:48 PM
I think we need the ability to detect unfairness and bias. Racial profiling for example can be insidious, and difficult to determine. The more control over our lives that we grant to machine-based reasoning, the more we relinquish the ability to question that control. Again, I'm not a Luddite, I think that ML can bring us huge benefits. It's just that I'm also a fan of transparency and accountability.

I'm gonna fall back on my favorite argument in this thread - ML can't be much worse than humans in this respect. Humans are biased, error-prone, lazy, distracted, physically weak, mentally unstable, etc and yet we trust each other unconditionally 99% of the time. You could argue we have certain systems built around it to provide controls and accountability (like elections or multisig) but the fact is that most of the time these systems can't actually prevent a human from screwing up, particularly if they do it deliberately, whereas a computer system (ML or not) can have conditions on what it is allowed or not allowed to make a decision on. Granted those conditions might have flaws too etc etc but it's still a far more controllable situation IMO.

On the flipside, ML can actually be used to help us detect unfairness and bring accountability where we as humans fail to. Not to get too far into the overpoliticized police situations, but keep in mind that a lot of those police departments that are having trouble with racial biases are actually using software supposed to help them avoid that. They track officer-citizen interactions, complaints, etc and try to figure out which officers are outliers, need training, or more. It's just that the software generally sucks, typically based on some hardcoded vision of what the "correct" conduct should be, developed by the lowest bidder, and it's all for the most part reactive despite it being used to "forecast" trouble. Lots of possibilities for well-designed ML models there I think.


Title: Re: Machine Learning and the Death of Accountability
Post by: mu_enrico on September 17, 2020, 04:10:05 PM
If ML decides that person A shouldn't be entitled to a credit card or a mortgage, person B shouldn't be considered for a job role, person C should be investigated by the police... then we only see these outputs, we have no way of unravelling how the machine reached these decisions, and so no way of determining whether the decisions were justified. That's what I meant by the death of accountability.
If this is only your concern, then simply put code to log every decision path, and you will be able to read the "accountability."
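Something like this, at least for models that are inspectable (a toy decision tree with made-up features, using scikit-learn):

Code:
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[9, 1], [8, 1], [3, 0], [2, 0]]          # e.g. [prior grade, school band]
y = [1, 1, 0, 0]                              # 1 = predicted to pass
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(export_text(clf, feature_names=["prior_grade", "school_band"]))
print(clf.decision_path([[7, 0]]).toarray())  # nodes visited for this pupil's prediction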

I feed your raw data into an ML black box. The computer decides to block your access to all gambling sites. Because this is ML, there is no human-devised algorithm that we can consult to determine why you were blocked.
Up to this moment, there is no "computer decides" in the real world. The computer isn't responsible for anything, but the company (or the creator) is. Even if the company uses AI to decide, they will still be held accountable and have to answer all the questions. Suppose the algorithm devised by the company harms a wealthy man; he can always sue the company.


Title: Re: Machine Learning and the Death of Accountability
Post by: Cnut237 on September 18, 2020, 08:26:26 AM
If this is only your concern, then simply put code to log every decision path, and you will be able to read the "accountability."
In theory, sure. In practice I think the numbers involved in reverse-engineering the decisions would be far too daunting for it to be practicable. ML is of course adaptive, so you'd have to go right back to the beginning and understand every input it's ever received, and all subsequent reasoning. Particularly if we think about neural nets as an application of ML.

On the flipside, ML can actually be used to help us detect unfairness and bring accountability where we as humans fail to.
Another good point, thanks.



And thanks to everyone who has contributed to this thread so far; it's appreciated. I do still believe that loss of accountability is a valid issue, but you have all made many good points, and have helped me reach a more nuanced perspective. Even where I've disagreed, you've still turned my black-and-white into shades of grey, and I can see now that the situation is perhaps not as bleak as I'd originally envisaged, and that - as with most things - there are positives as well as negatives.


Title: Re: Machine Learning and the Death of Accountability
Post by: Spendulus on September 21, 2020, 10:14:26 PM
the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis

Maybe last millennium. :P In the early 2000s, computer scientists took what we know about human learning, and created Machine Learning (it's in the thread title too). Now, with each exam analyzed anywhere by any system, the model and the computer learn - and get better. By 2050 there should be nothing a computer cannot do better than our best humans.
 
I disagree. We can do stupid far better than any machine. As they progress, we shall race ahead of them in Excellence in Stupid.