Author Topic: Failure of Tesla sensors and possible solution  (Read 51 times)
Vod (OP)
Legendary
September 24, 2020, 05:27:46 AM
 #1

This video is now famous as a complete failure of AI during driving.

But the AI did not make a mistake - the forward cameras could not see the obstacle due to glare. For all the advanced programming, they were blinded by the light, while a human could have moved their head around to see better.

We cannot have the potential for these kinds of mistakes.  Tesla needs to implement their 5G mesh network so other cars driving by would detect the obstacle and report it.   That is what mesh networks are good for.
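To make the idea concrete, here is a minimal sketch (Python, purely illustrative - the message fields and the plain UDP broadcast are placeholders I'm assuming, not anything Tesla actually ships) of what such an obstacle report could look like: a passing car that spots the obstacle broadcasts a small message that nearby cars can merge into their own picture of the road.

Code:
# Hypothetical vehicle-to-vehicle obstacle report; fields and transport are
# assumptions for illustration, not a real Tesla protocol.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class ObstacleReport:
    reporter_id: str      # anonymised vehicle identifier (assumed field)
    lat: float            # obstacle position (placeholder coordinates below)
    lon: float
    lane: int             # affected lane, 0 = unknown
    confidence: float     # 0.0 - 1.0, how sure the reporting car is
    timestamp: float      # unix time of the detection

def broadcast_report(report: ObstacleReport, port: int = 47000) -> None:
    """Send the report to every node reachable on the local mesh segment."""
    payload = json.dumps(asdict(report)).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        sock.sendto(payload, ("255.255.255.255", port))
    finally:
        sock.close()

# A car that spots the overturned truck would call something like:
broadcast_report(ObstacleReport("car-7f3a", 0.0, 0.0, lane=1,
                                confidence=0.95, timestamp=time.time()))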




Ucy
Sr. Member
September 24, 2020, 10:07:58 AM
 #2

I'm just seeing this issue with Tesla for the first time. I will probably watch the video later to understand what the problem was.
I wonder if the Tesla is driven completely by AI, or whether both the AI and a human are driving. By the way, isn't the AI supposed to know the solution to the glare problem? I assume it is programmed to get a clear view or behave differently when glare happens. Or didn't the developers anticipate this?
Well, I think qualified humans should always be in control of this thing. The AI could do what it does best while people watch over it.
Natsuu
Full Member
September 24, 2020, 10:57:13 AM
 #3


Quote from: Ucy
I wonder if the Tesla is driven completely by AI, or whether both the AI and a human are driving. By the way, isn't the AI supposed to know the solution to the glare problem? I assume it is programmed to get a clear view or behave differently when glare happens. Or didn't the developers anticipate this?

Two things come to mind given your question. (1) The AI took a long time to find a solution to the glare problem; by the time it braked, the momentum was too high and the available deceleration was not enough to stop the car (you can see in the video that the car brakes). (2) The developers may have anticipated this situation; it is just that the glare coincided with the blockage of the road, plus the negligence of the driver in fully trusting an AI on the highway.

Quote from: Vod
This video is now famous as a complete failure of AI during driving.

We cannot have the potential for these kinds of mistakes. Tesla needs to implement their 5G mesh network so other cars driving by would detect the obstacle and report it. That is what mesh networks are good for.


They really need to implement the highest-grade network for the safety of consumers, but implementing such a network takes a lot of time and effort. They would need specific data for it to work, one piece being the location of every car in the area along with its IP. With the IPs they could do it, but that data would infringe on the privacy of Tesla owners and is prone to hacking.


Artemis3
Legendary
September 24, 2020, 07:56:54 PM
 #4

This can be solved with another type of sensor positioned elsewhere. I think even Tesla has newer models with more sensors precisely because of this.

I have seen an open-source smartphone app for self-driving (it supports a few cars). It works with just the single camera on your phone. But just because it works doesn't mean it couldn't be improved with more sensors.

I think a combination of optical, lidar and radar sensors exists, used by Google's Waymo, to prevent this kind of situation. I also think later Tesla models improved their sensors to account for these situations. Tesla officially still regards its system as "hands on": the human is always in control, and the system often warns the driver of potential problems. Even if no problem is detected by the system, you, the human, can still act if you want to.

The programming of the software could be improved to assume danger in case of blindness (i.e. slow down or stop). This also does not account for the other side of the coin: what if the car is driving in total darkness (say your headlights are damaged)? The AI can handle it while you, the human, cannot. How about fog? Same thing. So even if you say "a human could have just moved their head", you could also find several situations that are unsolvable for humans, where their limited senses and slow reactions are no help.
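To make that concrete, here is a minimal sketch (Python, purely illustrative - the sensor names, confidence values and thresholds are my assumptions, not Tesla's or Waymo's actual logic) of an "assume danger when blind" speed policy:

Code:
# Hypothetical degraded-mode speed policy: drive normally only when at least
# one trustworthy sensor says the lane ahead is clear; otherwise shed speed.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str          # "camera", "radar", "lidar", ...
    confidence: float  # 0.0 (blinded / no data) .. 1.0 (clear view)
    clear_ahead: bool  # does this sensor think the lane ahead is clear?

def target_speed(current_kph: float, readings: list[SensorReading],
                 min_confidence: float = 0.5) -> float:
    """Keep speed only if a trusted sensor reports a clear lane; never drive blind."""
    trusted = [r for r in readings if r.confidence >= min_confidence]
    if not trusted:
        return 0.0                    # totally blind: stop
    if all(r.clear_ahead for r in trusted):
        return current_kph            # confident and clear: carry on
    return min(current_kph, 30.0)     # something doubtful ahead: crawl

# Glare scenario: camera blinded and nothing to fall back on -> stop.
print(target_speed(110.0, [SensorReading("camera", 0.1, clear_ahead=True)]))   # 0.0
# Add a still-confident radar that sees something doubtful ahead -> crawl.
print(target_speed(110.0, [SensorReading("camera", 0.1, clear_ahead=True),
                           SensorReading("radar", 0.9, clear_ahead=False)]))   # 30.0

In the glare case the camera's confidence would collapse; if no other sensor can vouch for the road, the car stops rather than driving blind, which is exactly the behaviour the single-camera setup in the video lacked.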

AI is already far better at driving than humans. If you removed human drivers from the streets, you could even remove the traffic lights. The problems will only keep getting solved and improved upon, until nobody drives anymore. And that is coming, like it or not. Only a few people such as racers would still do it, but the masses won't.

The rest of the vehicles are heading the same way, ships and planes as well. I guess you could still have your sailboat or your Cessna, but I see no reason for commercial freight and passenger service to need more than a (remote) operator. Also, goodbye to suicide pilots and hijackings.

When you do a suborbital flight from New York to Tokyo in 30 minutes, are you manually flying the rocket? No, you aren't. SpaceX plans to offer this service. Most rail systems already work this way; some don't even bother with a human operator on board anymore.
