Bitcoin Forum
May 04, 2024, 07:10:32 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Author Topic: ‘Black Box’ problem: people don’t trust AI because they don't know how it decides  (Read 191 times)
Sherwood_Archer (OP)
Jr. Member
*
Offline Offline

Activity: 126
Merit: 3


View Profile
October 03, 2018, 03:39:06 PM
 #1

“IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster,”
“The problem with Watson for Oncology was that doctors simply didn’t trust it.”
When Watson’s results agreed with physicians, it provided confirmation, but didn’t help reach a diagnosis. When Watson didn’t agree, then physicians simply thought it was wrong.
“AI’s decision-making process is usually too difficult for most people to understand,” Polonski continues. “And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.”

I think this is also happening with cryptocurrency. People do not understand how it works, so they do not trust it. Since they do not trust it, they do not "buy" it and just laugh at people who are into it.
bluefirecorp_
Full Member
***
Offline Offline

Activity: 574
Merit: 152


View Profile
October 03, 2018, 03:47:32 PM
 #2

The AI is normally right.

Take a look at AlphaGo. That deep neural network is the best Go player in all of history. Even the scientists who wrote AlphaGo have no idea how it comes up with its moves at this point.

Theoretically, we could examine the neural network and replay its computations in real time. However, the amount of data and processing involved is absolutely mind-numbing in its complexity.

At this point, I trust Watson more than I do an average doctor.
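To make the "black box" point concrete, here is a minimal sketch of a two-layer neural network's forward pass. The weights below are arbitrary stand-ins invented for illustration (a real model like AlphaGo has millions of them), so treat this as a toy, not any real system: every output mixes every input through layers of multiply-adds, which is why no single weight "explains" the final decision.

```python
import math

# Hypothetical "trained" weights -- arbitrary stand-ins for illustration.
# A real network has millions of these, learned from data.
W1 = [[0.9, -1.2, 0.4, 0.7],
      [-0.3, 0.8, -1.1, 0.2],
      [0.5, 0.1, 0.6, -0.9]]
b1 = [0.1, -0.2, 0.05]
W2 = [1.4, -0.7, 0.9]
b2 = -0.3

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(features):
    # Hidden layer: every hidden unit mixes every input feature,
    # so no single weight "explains" the decision on its own.
    hidden = [relu(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    # Output layer: a probability produced by yet another opaque mix.
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

p = predict([1.0, 0.5, -0.3, 0.8])
print(f"P(positive) = {p:.3f}")  # you get the number, but not the "why"
```

Even in this toy, the only way to say why the network returned one probability rather than another is to re-trace every multiply-add by hand; scale that up a million-fold and you get the interpretability problem the thread is describing.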

green547
Full Member
***
Offline Offline

Activity: 385
Merit: 101



View Profile
October 03, 2018, 05:19:52 PM
 #3

Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?
bluefirecorp_
Full Member
***
Offline Offline

Activity: 574
Merit: 152


View Profile
October 03, 2018, 05:27:21 PM
 #4

Quote from: green547 on October 03, 2018, 05:19:52 PM
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?

Because AI could give us singularity, cure the human condition, and become our care takers for the rest of eternity.

Utopia is close to dystopia.

CoinCube
Legendary
*
Offline Offline

Activity: 1946
Merit: 1055



View Profile
October 03, 2018, 05:27:28 PM
 #5

Quote from: green547 on October 03, 2018, 05:19:52 PM
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?

Because we as a species are very stupid.

Development of something like AI should be done very slowly, if at all, and it should be globally coordinated to minimize the chance of very poor outcomes.

We lack the wisdom for such coordination, so we will rush full speed ahead, each human faction seeking to one-up the others with better AI.

UconBit
Jr. Member
*
Offline Offline

Activity: 112
Merit: 2


View Profile
October 03, 2018, 05:56:28 PM
 #6

Quote from: green547 on October 03, 2018, 05:19:52 PM
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?

Quote from: CoinCube on October 03, 2018, 05:27:28 PM
Because we as a species are very stupid. Development of something like AI should be done very slowly, if at all, and it should be globally coordinated to minimize the chance of very poor outcomes. We lack the wisdom for such coordination, so we will rush full speed ahead, each faction seeking to one-up the others with better AI.

I don't think it's a case of stupidity, though. If AI ever overtakes humans, I think it will stem from human greed or hubris. For now, AI only does what it is expected to do. The problem starts once someone codes it to do more than that, in exchange for money or fame.
Currently, AI is in your phone apps, your medical equipment, your Fitbit. It's doing more good than harm, AFAIK.
Sealis
Jr. Member
*
Offline Offline

Activity: 140
Merit: 2


View Profile
October 03, 2018, 06:31:22 PM
 #7

Quote from: green547 on October 03, 2018, 05:19:52 PM
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?

Quote from: CoinCube on October 03, 2018, 05:27:28 PM
Because we as a species are very stupid. Development of something like AI should be done very slowly, if at all, and it should be globally coordinated to minimize the chance of very poor outcomes. We lack the wisdom for such coordination, so we will rush full speed ahead, each faction seeking to one-up the others with better AI.

Quote from: UconBit on October 03, 2018, 05:56:28 PM
I don't think it's a case of stupidity, though. If AI ever overtakes humans, I think it will stem from human greed or hubris. For now, AI only does what it is expected to do. The problem starts once someone codes it to do more than that, in exchange for money or fame. Currently, AI is in your phone apps, your medical equipment, your Fitbit. It's doing more good than harm, AFAIK.

It's more about the idea that AIs can perform calculations we can't. Some problems require equations and solutions too complex for the human brain to handle, and that's where AI comes in, just like this post said:

Quote from: bluefirecorp_ on October 03, 2018, 03:47:32 PM
The AI is normally right. Take a look at AlphaGo. That deep neural network is the best Go player in all of history. Even the scientists who wrote AlphaGo have no idea how it comes up with its moves at this point. At this point, I trust Watson more than I do an average doctor.

AlphaGo makes moves that are inconceivable to normal Go players because it has run millions of computations, and that's just for Go. If we applied it to various problems in science, don't you think it would make solving them a lot easier?
bluefirecorp_
Full Member
***
Offline Offline

Activity: 574
Merit: 152


View Profile
October 03, 2018, 06:39:23 PM
 #8


Quote from: Sealis on October 03, 2018, 06:31:22 PM
AlphaGo makes moves that are inconceivable to normal Go players because it has run millions of computations, and that's just for Go. If we applied it to various problems in science, don't you think it would make solving them a lot easier?

Nope. AlphaGo is application-specific. Turning an application-specific intelligence into a general intelligence isn't the right path.

It'd be akin to trying to mine Ethereum with a Bitcoin ASIC; it's just not going to go very well.

Spendulus
Legendary
*
Offline Offline

Activity: 2898
Merit: 1386



View Profile
October 03, 2018, 08:07:27 PM
 #9

Quote from: Sherwood_Archer on October 03, 2018, 03:39:06 PM
...
“AI’s decision-making process is usually too difficult for most people to understand,” Polonski continues. “And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.”
I think this is also happening with cryptocurrency. People do not understand how it works, so they do not trust it.

Let me know when we understand the human brain.
Impulseboy
Jr. Member
*
Offline Offline

Activity: 196
Merit: 4


View Profile
October 04, 2018, 02:13:36 AM
 #10

Quote from: green547 on October 03, 2018, 05:19:52 PM
Artificial Intelligence scares the hell out of me. Everyone keeps saying it's inevitable that AI robots will overtake humans. Umm, WHY THE HELL are we still developing AI if it's basically guaranteed this will happen?

I think this is a valid argument. However, the more I get to understand AI and cryptocurrency, the less scared I get about the coming of artificial intelligence. Of course, we are not sure if it will really have the ability to think for itself and decide humans are a waste of space, but perhaps we will not come to that point. What do you think?
goldSkylark
Jr. Member
*
Offline Offline

Activity: 126
Merit: 1


View Profile
October 05, 2018, 03:10:03 AM
 #11

This is a case of “does the end justify the means?” Yeah, AI promises so many advantages, but how does it arrive at its decisions? I think that mystery worries society. AI has this stigma of being a replacement for people, which isn’t the case; it’s only meant to upgrade our lives. People just need to be better informed about how AI operates, in my opinion.
IndigoRed
Jr. Member
*
Offline Offline

Activity: 196
Merit: 1


View Profile
October 09, 2018, 12:42:21 AM
 #12

I agree. We need to learn to trust AI. That’s the only way we can co-exist, right?
This article says it best:
"To trust an AI system, we must have confidence in its decisions. We need to know that a decision is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. Reliability, fairness, interpretability, robustness, and safety are the underpinnings of trusted AI."

Makes sense, right?

https://thenextweb.com/contributors/2018/10/06/we-need-to-build-ai-systems-we-can-trust/
James_Cline
Jr. Member
*
Offline Offline

Activity: 85
Merit: 1


View Profile
October 12, 2018, 02:06:22 AM
 #13


People simply fear what they don’t understand. It’s a default reaction, I believe. But once people learn its potential benefits, they’ll eventually embrace it, right?
Blanca_Gregory
Jr. Member
*
Offline Offline

Activity: 72
Merit: 2


View Profile
October 12, 2018, 02:53:09 AM
 #14

There's a post I read over at Reddit saying it's impossible for an AI to ever think for itself, and it was written by someone studying AI in college. So what do we know? Maybe it's not for us to understand how an AI thinks, because, really, what would you see if you cracked open an AI's black box anyway?