Cnut237 (OP)
Legendary
Offline
Activity: 1904
Merit: 1277
|
|
September 11, 2020, 07:33:54 AM Merited by Quickseller (2) |
|
Recently in the UK there has been a furore over algorithm-determined school exam results. The pandemic meant that pupils couldn't sit exams, and an algorithm was devised that determined what results each pupil would get. However, many pupils, particularly those from disadvantaged backgrounds, received worse than predicted results, whereas pupils from more affluent backgrounds suffered no ill effects. There were widespread protests at the perceived unfairness, and the algorithm was hauled out into the open and dissected. The formula was quite rudimentary, and the inbuilt bias perfectly clear for anyone with a basic grasp of maths to see. The outcome was that the protests were upheld, and the unfair results overturned.
The reason this could happen is that the algorithm was devised by people. Their assumptions and their methods could be unpicked and understood. However, the trend, now that we are in the era of big data, is towards machine-learning. Computers can devise much more efficient processes than humans can. If the same thing had happened a few years from now, it is quite likely that the grades would have been determined by machine-learning, with initial data fed in, results coming out, and no human understanding how the processing from input to output works. Indeed, the computer itself would be unable to explain it (because we have not yet reached that level of AI).
This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives. Humans can argue convincingly that they simply input some initial parameters and had no part in the decision-making. But those machine-learning decisions can't be pulled into the open, can't be dissected, can't be understood. Machine-learning without sentient computers means that all accountability is thrown away. No-one is responsible for anything.
Now this may change once AI reaches a sufficient level that a computer can explain its reasoning in terms that humans can understand... but that is years or decades away, and until we reach that point, the possibilities look quite scary. We live in a competitive world, and the advantages of machine-learning are too tempting for countries and companies to pass up. ML is pursued fervently, no matter the implications. Will we really throw away accountability for (and understanding of) a lot of really important decisions?
|
|
|
|
mu_enrico
Copper Member
Legendary
Offline
Activity: 2506
Merit: 2215
Slots Enthusiast & Expert
|
|
September 11, 2020, 08:11:02 AM Merited by Quickseller (2) |
|
Removing exams and replacing them with trend analysis plus the teacher's opinion? It doesn't sound right!
An exam is the most objective way of measuring academic knowledge at a particular time. Well, judging from my experience, where I only studied hard for several months or weeks before the assessment (and spent most of my time as a clown), I'd be a pedicab driver by now had that system existed back then.
The thing is, the teacher's opinion is subjective. Garbage in, garbage out: the data fed into the algorithm can also be garbage and subjective. The coders are humans with limited knowledge and wisdom, and thus the product is far from perfect at determining who gets what. What the algorithm did was forecasting, and reality would likely differ from the forecast.
Just do online exams with video conference apps, no need for this madness.
|
|
|
|
Coyster
Legendary
Offline
Activity: 2198
Merit: 1306
Playbet.io - Crypto Casino and Sportsbook
|
|
September 11, 2020, 09:03:13 AM |
|
No data or algorithm can accurately determine the outcome of an examination that's yet to be written; imo, this should not be an option for grading students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, the student's social life, level of intelligence, IQ, background, etc.? I've seen good students go into an examination unprepared and fail; likewise, bad students change their mindset, study hard, and come out top to everyone's surprise.
If a student doesn't take an examination themselves, then no algorithm or computer can produce an accurate result. The computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardised case-by-case basis, but there's no fixed standard for a student in every exam; they can be bad today and excellent tomorrow.
|
|
|
|
Cnut237 (OP)
Legendary
Offline
Activity: 1904
Merit: 1277
|
|
September 11, 2020, 09:34:46 AM |
|
Quote from mu_enrico: Just do online exams with video conference apps, no need for this madness.
This presents other issues. For example, the connection dropping, or someone sitting just out of shot feeding the pupil the correct answers.
Quote from Coyster: No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written [...] What data would either the teachers or the computer use to determine the grade
In this instance there was bias based on the previous performance of the school, such that good students in badly performing schools were unfairly penalised, and bad students at good schools escaped unscathed. There's a quick overview here.
Quote from Coyster: the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it
This really is the whole point of my thread. At the moment, in situations like that of the exam grades in my example, an unfair algorithm can be unpicked and assessed for bias, and corrections can be made or the whole thing can be thrown out. There is a degree of transparency to the process. What I am suggesting is that in the very near future we are likely to lose that transparency: our lives and life-chances will in part be determined by machine-learning algorithms where there is no accountability, and no possibility of even understanding how the results were arrived at. In the exams example, it was a simple human algorithm. If instead the outcomes had been determined by machine-learning, there may have been a suspicion of unfairness, but no way to prove whether that was actually the case, and no evidence available to challenge the decision.
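The contrast is easy to illustrate in code: a human-devised rule can be read line by line and its bias pointed at directly, whereas a trained model is just numbers with no readable rationale. A toy sketch, with every rule and weight invented purely for illustration:

```python
# A human-devised rule: every step is inspectable and challengeable.
def human_algorithm(cag, school_history_avg):
    """Grade = teacher prediction, capped by the school's historical average."""
    return min(cag, school_history_avg)  # the bias is visible right here

# A 'learned' model: the same inputs go in and a grade comes out,
# but nothing in the weights explains *why*.
learned_weights = [0.3141, -1.5926, 0.5358]  # arbitrary illustrative values

def learned_model(cag, school_history_avg):
    w0, w1, w2 = learned_weights
    return w0 + w1 * cag + w2 * school_history_avg

# A strong pupil (CAG 8) at a weak school (history 6) is capped at 6,
# and we can point at min() to explain exactly why.
print(human_algorithm(8, 6))
```

With the hand-written rule, an appeal can target a specific line; with the learned weights, there is nothing comparable to appeal against.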
|
|
|
|
squatz1
Legendary
Offline
Activity: 1666
Merit: 1285
Flying Hellfish is a Commie
|
|
September 13, 2020, 11:24:36 PM |
|
Quote from Coyster: No data or algorithm can be used to accurately determine the outcome of an examination that's yet to be written, imo, this should not be an option to grade students in schools. What data would either the teachers or the computer use to determine the grade: previous grades, social life of the student, level of intelligence, IQ level, background etc? I've seen good students go into an examination unprepared and fail, likewise bad students change their mindset, study hard and come out top to everyone's surprise. If a student doesn't take an examination themself, then there's no algorithm or computer that can bring out a result that'll be accurate, the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis, but there's no arbitrary standard for a student in every exam, they can be bad today and excellent tomorrow.
+1 to this. Not exactly sure why this was even allowed in any school setting, but I would LOVE to see a source on this. There's literally no way that parents / children were happy that this was being used. At first it sounds like a good idea, until you notice that the machine learning technology is HEAVILY reliant on teacher input / past performance to come to a conclusion. But yeah, I had tons of friends who were horrible in regular class and great test takers -- and I've also seen the opposite. Not sure why this would be used, though. This WON'T be the norm in education though; people aren't happy when you give them a bad grade and then try to blame a computer.....lol
|
|
|
|
c_atlas
Member
Offline
Activity: 140
Merit: 56
|
|
September 14, 2020, 02:53:07 AM Merited by Quickseller (2) |
|
Funny how you always hear about these machine learning experiments going wrong. Does anyone remember Microsoft's Tay bot on Twitter? How about all the stories like 'AI can't recognize black faces' or 'AI converts speech-to-text better when the speaker is white than when they're black', or Amazon's recruiter AI that didn't like women?
I'm not really a fan of machine learning and AI. It isn't that it's not interesting or desirable; I just think it's sad that people think we've reached the limits of human ingenuity and feel we have to develop something to think for us. I'm not sure that people will ever be ready for a world where AI can make perfectly unbiased decisions. Keep in mind that most decisions result in winners and losers. When people make the decision, they can come to a compromise; when a machine designed to make a clear-cut choice has to decide, by design it has to avoid letting the loser down easy, otherwise it would be biased in favour of the loser (in this case, the losers might be kids who 'deserve' [i.e. are predicted] to fail their class).
As a side note, assigning students scores based on machine-learning predictions is just ridiculous. Whoever thought this would work should be shamed publicly.
|
|
|
|
squatz1
Legendary
Offline
Activity: 1666
Merit: 1285
Flying Hellfish is a Commie
|
|
September 14, 2020, 02:59:28 AM |
|
Quote from c_atlas: Funny how you always hear about these machine learning experiments going wrong [...] Whoever thought this would work should be shamed publicly.
I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed off good grades to people who may not have performed as well when the big test came.
But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazon's recruiting AI could've truly been amazing; the problem is that they fed it 'bad data', which meant that it was skewed towards men. Here's the quote from the article:
Quote: But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
So yeah, garbage in, garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem. There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example, though.
|
|
|
|
Vod
Legendary
Offline
Activity: 3878
Merit: 3166
Licking my boob since 1970
|
|
September 14, 2020, 03:40:53 AM |
|
Quote from Coyster: the computer only predicts an outcome on the basis of the data fed in and how the coders programmed it to react on a standardized case by case basis
Maybe last millennium. In the early 2000s, computer scientists took what we know about human learning and created Machine Learning (it's in the thread title too). Now, with each exam analyzed anywhere by any system, the model learns - and gets better. By 2050 there should be nothing a computer cannot do better than our best humans. What Coyster describes in the quote above is not Machine Learning - ML is more than programming, much like we are more than our instincts.
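The "learns with each exam analyzed" idea is the core of online learning: the model's parameters update with every new example it sees. A minimal sketch using a toy perceptron on made-up data (everything here is illustrative, not any real grading system):

```python
import random

def train_online(stream, lr=0.1):
    """Toy online learner: the model updates with every example it sees."""
    w = [0.0, 0.0]
    b = 0.0
    for x, label in stream:
        pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
        err = label - pred            # 0 when the prediction was correct
        w[0] += lr * err * x[0]       # perceptron update rule
        w[1] += lr * err * x[1]
        b += lr * err
    return w, b

def sample():
    # Linearly separable toy data: label = 1 iff x0 + x1 > 1
    x = (random.random(), random.random())
    return x, (1 if x[0] + x[1] > 1 else 0)

random.seed(0)
stream = [sample() for _ in range(500)]
w, b = train_online(stream)

correct = sum(
    (1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) == y
    for x, y in stream
)
print(correct / len(stream))  # high accuracy after a single pass of updates
```

Each additional example nudges the weights a little, which is exactly why such a model keeps improving as more data flows through it.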
|
|
|
|
c_atlas
Member
Offline
Activity: 140
Merit: 56
|
|
September 14, 2020, 04:03:01 AM |
|
Quote from squatz1: I'm assuming (and hoping) that they have been shamed publicly. You didn't even give people the chance to improve (if they had poor performance in class) and just simply handed off good grades to people who may not have performed as well when the big test came.
On the other hand, students who cheated on smaller assessments like assignments, graded homework, quizzes, etc. may have been projected to finish with an exceptional grade, when in reality they would have been knocked down to a satisfactory one after failing their tests.
Quote from squatz1: But onto the part about AI and such. I don't think there is anything WRONG with using AI for certain things. Amazons recruiting AI could've truly been amazing, the problem is that they fed it 'bad data' which meant that it was skewed towards men.
Sure, I think there are lots of great applications of AI/ML, but I don't think those fields should receive as much attention as they do. At the end of the day, most of our modern AI is just curve fitting. Neural networks are turning out to be great at computer vision and image recognition, which gives us cool stuff like level-2 autonomous vehicles, some interesting facial recognition technology, and maybe, in the short term, better targeted advertising. My worry is that people will start using 'AI' as a default solution when some other statistical or even purely mathematical approach may work better. AI shouldn't be a substitute for thinking.
Quote from squatz1: So yeah, garbage in is garbage out. They'll be able to improve on this if they feed the machine the resumes of 'successful' women in the industry to help balance out the problem.
There's still the open-ended question of when your model is good enough. When have you balanced it out? Your model might give you perfect candidates, but since you can't be certain it gives you the best candidates, you're still going to have to have a human intervene and provide their own input.
Then there's the question of what makes a good candidate. Suppose you decide the fairest way to treat all genders equally is to ignore gender altogether. If you select for people you would like to work with, and you would prefer to hire people who aren't combative (since they'll do what they're told and won't ask for a raise as often), your model may end up preferring women. It's not that the model is 'flawed'; it's simply what it gave you based on your desire to have agreeable coworkers.
Pretend you can somehow build a model that can accurately rank a person by intelligence. Assume the distribution of applicants follows some type of normal distribution with barely any applicants on the tail ends. Now, when you run your model on your applicant dataset, you're surprised because it seems to skew heavily in favour of male applicants. What went wrong? It wasn't the model, it was the applicants themselves. The male and female intelligence distributions are extremely similar: lots of people in the middle around the average, and barely any really stupid or really smart people. Your applicant pool is a subset of the set of all people, and since technology is known to be dominated by men, you get many more male applicants than female applicants. The result is that you're more likely to find one of the really smart people in the pool of men than in the pool of women. I guess the question to ask is then: how can you trust your model to map your inputs to the desired outputs when you aren't even sure what the desired output is?
Quote from squatz1: There can be some genuinely good uses of AI in the world. Imagine if a lawyer AI could help handle VERY SIMPLE legal documents that are typically very repetitive and mundane. These would be quickly looked over by a real lawyer and then sent on their way. Just one example though.
How long before the lawyers get a law signed saying AI lawyers are illegal?
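The applicant-pool argument above can be checked with a quick simulation: draw both groups from the same ability distribution, make one pool much larger, and a simple top-k shortlist skews toward the larger pool even though the "model" treats everyone identically. All numbers are invented for illustration:

```python
import random
import statistics

random.seed(42)

# Identical ability distributions for both groups...
men = [random.gauss(100, 15) for _ in range(900)]    # larger applicant pool
women = [random.gauss(100, 15) for _ in range(100)]  # smaller applicant pool

# ...but a top-10 shortlist taken from the combined pool
pool = [(score, "M") for score in men] + [(score, "F") for score in women]
shortlist = sorted(pool, reverse=True)[:10]
n_men = sum(1 for _, group in shortlist if group == "M")

print(statistics.mean(men), statistics.mean(women))  # nearly identical averages
print(n_men)  # typically 8-10 of 10: the skew comes from pool sizes alone
```

Nothing "went wrong" in the model here; the skew is inherited from who applied, which is precisely the point about not knowing what the desired output even is.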
|
|
|
|
suchmoon
Legendary
Offline
Activity: 3850
Merit: 9087
https://bpip.org
|
|
September 14, 2020, 04:53:52 AM |
|
Quote from Cnut237: This combination of factors, machine-learning on the one hand, and the computer being unable to explain its reasoning on the other, leads to an absolute removal of all accountability for decisions that can have a profound impact on people's lives.
I'm not so certain that it's really that significant. I'm old enough to remember writing assembly code, so that was obviously the most direct responsibility a person could have over giving instructions to a computer, but you know what - it's a great thing that we generally don't do that anymore, outside of spaceships perhaps. Even friggin' dishwasher software is probably written using some sort of toolkit these days, taking care of mundane stuff like memory allocation, which humans would definitely mess up. Machine learning can definitely create issues if used by incompetent dolts, but on the other hand, used responsibly it can solve massive problems that are simply unsolvable otherwise. I think we'll find a balance where we use proven, well-tested models as building blocks for more complex systems, and we'll figure out whom to blame... just like we don't blame Microsoft when we write buggy code in C# or feed garbage data to well-intended code.
|
|
|
|
Cnut237 (OP)
Legendary
Offline
Activity: 1904
Merit: 1277
|
|
September 14, 2020, 08:16:02 AM Last edit: September 14, 2020, 08:37:08 AM by Cnut237 |
|
Quote from suchmoon: Machine learning can definitely create issues if used by incompetent dolts but on the other hand - used responsibly it can solve massive problems that are simply unsolvable otherwise. I think we'll find a balance where we will use proven well-tested models as building blocks for more complex systems and we'll figure out whom to blame... just like we don't blame Microsoft when we write buggy code in C# or feed garbage data to well-intended code.
I'm strongly in favour of machine learning, and have dabbled in it myself. I should really have been clearer, but I suppose the point I'm making is all about the distinction between machines that arrive at conclusions based on initial rules and conditions supplied by humans, and machines that arrive - through impenetrable reasoning - at their own conclusions. I'd agree that we will eventually figure out whom to blame, but it's an issue that needs addressing sooner rather than later, similar to the situation with driverless cars.
A decent example is probably the iterations of AlphaGo, Google's Go-playing computer. The initial version was trained on huge numbers of human games, became exceptionally proficient, and beat the world champion. It does to an extent use creative new strategies, but the overall style is explicable:
Quote: Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative". AlphaGo's playing style strongly favours greater probability of winning by fewer points over lesser probability of winning by more points. Its strategy of maximising its probability of winning is distinct from what human players tend to do, which is to maximise territorial gains, and explains some of its odd-looking moves. It makes a lot of opening moves that have never or seldom been made by humans, while avoiding many second-line opening moves that human players like to make. It likes to use shoulder hits, especially if the opponent is over-concentrated.
https://en.wikipedia.org/wiki/AlphaGo#Style_of_play
But then we look at the later version, AlphaGo Zero. This version was entirely self-taught, with no input from human games at all. Playing against the earlier version, it won 100-0. The removal of training from human matches meant that the machine's creativity was no longer constrained, and it developed some astonishingly effective strategies that were completely unknown to human players.
Quote: AlphaGo Zero's training involved four TPUs and a single neural network that initially knew nothing about go. The AI learned without supervision - it simply played against itself, and soon was able to anticipate its own moves and how they would affect a game's outcome. "This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge," according to a blog post authored by DeepMind co-founder Demis Hassabis and David Silver, who leads the company's reinforcement learning research group.
https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/
I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, accountability becomes a huge issue, which the human programmers can convincingly sidestep. Once again, I am in favour of ML, as it can bring huge advances and beneath everything is simply more efficient. I'm just looking ahead to some of the problems we may face, and wondering how these might be resolved.
---
Quote from squatz1: At first it sounds like a good idea, until you notice that the machine learning technology is HEAVILY reliant on teacher input / past performance to come to a conclusion. [...] This WONT be the norm in education though, people aren't happy when you give them a bad grade and then you try to blame a computer.....lol
I was trying to say (but not being very clear about it) that in this example there was no machine-learning: the algorithm was devised entirely by humans, which meant that when people started to question the results, the algorithm was examined and found to contain inbuilt bias (for info, the algorithm is below). But when instead we are using machine-devised algorithms and machine-learning, and there is no possibility of unpicking the algorithm (because we won't understand how the computer has reached its conclusions), how do we determine unfairness or accountability? Machine-learning is black-box reasoning, impenetrable to humans. Whose fault is it when it outputs unfair results? And how do we prove that the results are unfair?
Synopsis
- The examination centre provided a list of teacher-predicted grades, called 'centre assessed grades' (CAGs).
- The students were listed in rank order with no ties.
- For large cohorts (over 15): the previous results of the centre were consulted. For each of the three previous years, the number of students getting each grade (A* to U) was noted, and a percentage average taken. This distribution was then applied to the current year's students, irrespective of their individual CAGs. A further standardisation adjustment could be made on the basis of previous personal historic data: at A level this could be a GCSE result; at GCSE this could be a Key Stage 2 SAT.
- For small cohorts and minority-interest exams (under 15): the individual CAG was used unchanged.
The formulas
For large schools with n >= 15: Pkj = (1 - rj)Ckj + rj(Ckj + qkj - pkj)
For small schools with n < 15: Pkj = CAG
The variables
- n is the number of pupils in the subject being assessed.
- k is a specific grade.
- j indicates the school.
- Ckj is the historical distribution of grade k at school (centre) j over the last three years, 2017-19. That tells us already that the history of the school is very important to Ofqual. The grades other pupils got in previous years were a huge determinant of the grades this year's pupils were given in 2020. The regulator argues this is a plausible assumption, but for many students it is also an intrinsically unfair one: the grades they are given are decided by the ability of pupils they may have never met.
- qkj is the predicted grade distribution based on the class's prior attainment at GCSEs. A class with mostly 9s (the top grade) at GCSE will get a lot of predicted A*s; a class with mostly 1s at GCSE will get a lot of predicted Us.
- pkj is the predicted grade distribution of the previous years, based on their GCSEs. You need to know that because, if previous years were predicted to do poorly and did well, then this year might do the same.
- rj is the fraction of pupils in the class for whom historical data is available. If you can perfectly track down every GCSE result it is 1; if you cannot track down any, it is 0.
- CAG is the centre assessed grade.
- Pkj is the result: the grade distribution for each grade k at each school j.
https://en.wikipedia.org/wiki/Ofqual_exam_results_algorithm
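The published formula is simple enough to sketch directly in code. This is a toy rendering of the equation above, not Ofqual's actual implementation, and the example distributions are made up:

```python
def ofqual_pkj(ckj, qkj, pkj_prev, rj, cag, n):
    """Toy version of the published standardisation formula.

    ckj:      historical share of grade k at school j (2017-19)
    qkj:      predicted share from this class's prior GCSE attainment
    pkj_prev: predicted share for previous years, from their GCSEs
    rj:       fraction of pupils with matchable historical data (0..1)
    cag:      centre assessed grade share (teacher prediction)
    n:        cohort size for the subject
    """
    if n < 15:  # small cohorts: the teacher prediction is used unchanged
        return cag
    # Pkj = (1 - rj)Ckj + rj(Ckj + qkj - pkj)
    return (1 - rj) * ckj + rj * (ckj + qkj - pkj_prev)

# Share of A grades at a hypothetical school with full historical matching:
# the school's weak history (10% As) dominates, the class's strong prior
# attainment (30% predicted As) only nudges it, and the CAG (40%) is ignored.
print(ofqual_pkj(ckj=0.10, qkj=0.30, pkj_prev=0.25, rj=1.0, cag=0.40, n=30))  # ~0.15
```

This makes the complaint concrete: for a large cohort with good historical matching, the individual teacher prediction drops out of the formula entirely.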
|
|
|
|
suchmoon
Legendary
Offline
Activity: 3850
Merit: 9087
https://bpip.org
|
|
September 14, 2020, 12:24:05 PM |
|
Quote from Cnut237: I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.
I'm pretty sure we can still test and debug ML models, so deliberately not doing that and evading responsibility wouldn't be much different from screwing up human-written code.
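One way to test a model you can't read is behavioural auditing: feed it matched inputs that differ only in a sensitive attribute and measure how much the output moves. A minimal sketch, where the "model" is a deliberately biased stand-in and every name is illustrative:

```python
def opaque_model(applicant):
    """Stand-in for a trained model whose internals we can't inspect.
    (Deliberately biased: it secretly penalises school 'B'.)"""
    score = applicant["marks"] / 100
    if applicant["school"] == "B":
        score -= 0.2
    return score

def audit(model, cases, attr, values):
    """Black-box probe: rescore each case with only `attr` flipped."""
    gaps = []
    for case in cases:
        outs = [model(dict(case, **{attr: v})) for v in values]
        gaps.append(max(outs) - min(outs))
    return max(gaps)  # worst-case output change caused by attr alone

cases = [{"marks": m, "school": "A"} for m in (40, 60, 80)]
worst_gap = audit(opaque_model, cases, "school", ["A", "B"])
print(worst_gap)  # ~0.2: the attribute alone moves the score, so bias is detectable
```

The model stays a black box throughout; only its input-output behaviour is interrogated, which is the sense in which "test and debug" remains possible.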
|
|
|
|
c_atlas
Member
Offline
Activity: 140
Merit: 56
|
|
September 14, 2020, 12:56:59 PM |
|
Quote from Cnut237: ...the point I'm making is all about the distinction between machines that arrive at conclusions based on initial rules and conditions supplied by humans, and machines that arrive - through impenetrable reasoning - at their own conclusions... But then we look at the later version, AlphaGo Zero. This version was entirely self-taught, with no input from human games at all.
I would rephrase it to say it was almost entirely self-taught; keep in mind, humans still supplied the rules for the game. The neural network initially knew nothing about Go beyond the rules. https://en.wikipedia.org/wiki/AlphaGo_Zero#Training
To some extent, you can only take advantage of reinforcement learning if you can gamify your problem, which isn't as easy as it may seem. The reason AI experts are using Go, StarCraft, etc. as algorithm training grounds is that gamification of the real world is incredibly challenging. It's super easy to feed a neural net the rules and success criteria for Go because it's well defined. It's a lot harder to define success criteria in problems like speech recognition, self-driving cars, image recognition, and so on. Side note: everyone should watch the AlphaGo movie.
Quote from Cnut237: I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.
I'm not so sure that's the case. While it's true we may not know precisely why the model made a certain decision, we can probably still quantify the incentives behind the decision. For example, AlphaGo Zero knew the game state before and after it made a "1 in 10,000" move, so the programmers knew the success probability before and after that move. These models always think in terms of probability. If you're analysing the outcome of an autonomous-vehicle collision, you can at least piece together the fact that the model made the car turn exactly x amount at y speed because that action was computed to result in the highest survival rate for the passenger out of all actions in the search space.
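The "highest survival rate out of all actions in the search space" idea can be made concrete: even when the value estimates come from an opaque model, logging the estimate for every candidate action leaves an audit trail explaining the choice. A toy sketch with invented numbers:

```python
def choose_action(actions, value_estimate, log):
    """Pick the action with the highest estimated outcome probability,
    recording every candidate's score so the decision can be audited later."""
    scored = [(value_estimate(a), a) for a in actions]
    log.extend((a, v) for v, a in scored)
    return max(scored)[1]

# Toy value function standing in for an opaque learned model:
# estimated passenger-survival probability per candidate manoeuvre.
estimates = {"brake": 0.62, "turn_left_5deg": 0.88, "turn_right_5deg": 0.47}

log = []
best = choose_action(list(estimates), estimates.get, log)
print(best)  # turn_left_5deg
print(log)   # the trail shows the losing options and their estimated values
```

We still can't say *why* the model scored turning left at 0.88, but we can reconstruct that it was chosen because 0.88 beat 0.62 and 0.47.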
|
|
|
|
Cnut237 (OP)
Legendary
Offline
Activity: 1904
Merit: 1277
|
|
September 14, 2020, 01:24:33 PM Merited by Quickseller (2) |
|
Quote from c_atlas: AlphaGo Zero knew the game state before and after it made a "1 in 10,000" move, so the programmers knew the success probability before and after that move.
I'm not convinced by this argument. The point with AG Zero is that it learnt by itself: there was no strategy input from human Go experts, and indeed no interaction with human Go matches at all. AG Zero learnt by itself and strategised by itself, and was able to trounce humans, 'human-taught' AI, and indeed ML AI that had human Go matches as a primer (previous iterations of the AlphaGo machine). AG Zero determines success/failure probabilities that humans can't process.
Quote from suchmoon: I'm pretty sure we can still test and debug ML models
In response to this, and as a mild qualifier to my response to c_atlas's point, I'd argue that yes, we can, to an extent and in certain simple situations. But if we are talking about complex neural nets, and taking AG Zero as an example of where the technology is headed, I'd say this isn't always the case now, and most certainly won't be the case in the future. Taking the point to an absurd extreme: if we consider the eventual aim of sentient AI, there's no way that humans could simply debug it. I do think that the 'black box' problem of ML is real.
|
|
|
|
suchmoon
Legendary
Offline
Activity: 3850
Merit: 9087
https://bpip.org
|
|
September 14, 2020, 01:56:11 PM |
|
Quote from Cnut237: In response to this, and as a mild qualifier to my response to c_atlas's point, I'd argue that yes, we can to an extent, and in certain simple situations. [...] I do think that the 'black box' problem of ML is real.
Then we would have ML models testing and debugging each other, perhaps? Think of it this way: humans can and do make fucked-up irrational decisions all the time, and good luck explaining those. We spend enormous amounts of effort training humans for certain tasks, bad stuff still happens, and then we try to figure out how to reduce the probability of it happening again. Is ML really going to be that much worse? It will probably screw up in different ways, but we also have different (and arguably more robust) tools to deal with that than we have for human decision-making processes.
|
|
|
|
Dorodha
Member
Offline
Activity: 252
Merit: 11
|
|
September 14, 2020, 02:40:05 PM |
|
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.
|
|
|
|
suchmoon
Legendary
Offline
Activity: 3850
Merit: 9087
https://bpip.org
|
|
September 14, 2020, 05:46:57 PM |
|
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.
I couldn't agree more. If we can predict the future - fuck accountability, let's just all buy winning lottery tickets. OTOH, the above post is a good example of how far we still need to go with simple things like language translation or whatever word-salad generator this person was using, so the ML apocalypse is probably still quite far away.
|
|
|
|
c_atlas
Member
Offline
Activity: 140
Merit: 56
|
|
September 14, 2020, 06:05:33 PM |
|
Machine learning or ML-related skills are at the top in terms of IT skills required in the present age. Experts say that in the future there is immense potential for things like machine learning and artificial intelligence. it is important to acquire skills in such technology sector there is no need for simple accountability to make all the decisions if you can predict its future in the right way. At present the demand for machine learning engineers is increasing in big technology institutes as well as general institutes.
I couldn't agree more. If we can predict the future - fuck accountability, let's just all buy winning lottery tickets. OTOH, the above post is a good example of how far we still need to go with simple things like language translation or whatever word-salad generator this person was using, so the ML apocalypse is probably still quite far away.

I have a feeling the word-salad generator was just their brain. In that case, only Neuralink can help.
|
|
|
|
squatz1
Legendary
Offline
Activity: 1666
Merit: 1285
Flying Hellfish is a Commie
|
|
September 14, 2020, 07:29:43 PM |
|
I'm just saying that we reach a point with ML where it is impossible for humans to understand how the conclusions were arrived at... and that once pure-ML (rather than human-devised) algorithms start to have a serious impact on human lives, then accountability becomes a huge issue, which the human programmers can convincingly sidestep.
I'm pretty sure we can still test and debug ML models, so deliberately not doing that and evading responsibility wouldn't be much different from screwing up human-written code.

We CAN totally test and debug models, but for the example of education, is it really worth it? I feel like they're trying to solve a 'problem' that isn't really a problem. Grading tests and rating performance isn't a large issue in education (at least in my own personal opinion; I guess researchers could disagree with me).

I was trying to say (but not being very clear about it) that in this example there was no machine-learning; the algorithm was devised entirely by humans, which meant that when people started to question the results, the algorithm was looked at and found to contain inbuilt bias (for info, the algorithm is below). But when, instead of this, we are using machine-devised algorithms and machine-learning, and there is no possibility of unpicking the algorithm (because we won't understand how the computer has reached its conclusions), then how do we determine unfairness or accountability? Machine-learning is black-box reasoning, impenetrable to humans. Whose fault is it when it outputs unfair results? And how do we prove that the results are unfair?

Ah, I see: this wasn't machine learning to begin with, just an algorithm which was heavily flawed from the start. But yes, I would agree with you that things could get so complex with machine learning that people are literally unable to piece together why a particular decision was made and how they can 'mess with' the data to fix it. It does make sense to use this for certain tasks, but predicting education grades is not something I would think needs fixing.
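For illustration, here's a toy sketch of the bias mechanism that was found in the exam algorithm (emphatically NOT the actual Ofqual formula; the cohort threshold and grades here are invented): small cohorts, typical of private schools, kept their optimistic teacher-assessed grades, while large cohorts, typical of state schools, were mapped onto the school's historical results regardless of individual merit.

```python
# Toy illustration of the cohort-size bias: this is an invented
# simplification, not the real 2020 grading formula.

def awarded_grades(teacher_grades, historical_grades, small_cohort=5):
    """Return grades per pupil under a simplified cohort rule."""
    if len(teacher_grades) <= small_cohort:
        # Small class: teacher judgement stands.
        return list(teacher_grades)
    # Large class: ignore individuals, replay the school's past results.
    ranked = sorted(teacher_grades, reverse=True)
    history = sorted(historical_grades, reverse=True)
    # Map each pupil's rank onto the historical distribution.
    return [history[min(i * len(history) // len(ranked), len(history) - 1)]
            for i, _ in enumerate(ranked)]

# The same top pupil (teacher grade 90) in two different schools:
private = awarded_grades([90, 88, 85], historical_grades=[70, 60, 50])
state = awarded_grades([90, 88, 85, 84, 80, 75],
                       historical_grades=[70, 65, 60, 55, 50, 45])
print(private[0], state[0])  # 90 vs 70: same pupil, different outcome
```

Because every step here was written by humans, the bias is legible and contestable; the worry upthread is precisely that a machine-learned equivalent would not be.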
|
|
|
|
Cnut237 (OP)
Legendary
Offline
Activity: 1904
Merit: 1277
|
|
September 15, 2020, 04:47:57 PM |
|
Then we would have ML models testing and debugging each other perhaps?

I don't know. I want to be convinced, but there's an argument that we don't and can't know whether these ML debuggers are fair. We get into a 'who watches the watchmen'* kind of argument. If we are saying that the way to understand ML 'thought' patterns is to use more ML, I don't think that's the answer. I appreciate that this sort of thing will be used, but I can't see that it will open the black box and make everything understandable to us.

* I didn't, but then I'm not a fan of superhero movies. The X-Men ones were okay, but steadily deteriorated as the sequels progressed. And I have no idea how Kamala became Sansa Stark. A question for a different thread.
|
|
|
|
|