Uber halts self-driving tests after pedestrian killed in Arizona

Started by tradephoric, March 19, 2018, 01:57:16 PM


tradephoric

Uber's use of fewer safety sensors prompts questions after fatal crash
https://www.cnbc.com/2018/03/27/ubers-use-of-fewer-safety-sensors-prompts-questions-after-fatal-crash.html

The Volvo sport utility vehicles had one lidar sensor, compared to seven lidar sensors on the previous Uber fleet of Ford Fusions.  According to Raj Rajkumar, the reduction in lidar sensors introduced blind zones around the perimeter of the SUV in which pedestrians could not be fully detected.  Of course, the pedestrian walking her bike across the street was well back from the "perimeter" of the vehicle in the video (before being struck)... shouldn't the main lidar sensor still have spotted her in this accident?

Quote
In scaling back to a single lidar on the Volvo, Uber introduced a blind zone around the perimeter of the SUV that cannot fully detect pedestrians, according to interviews with former employees and Raj Rajkumar, the head of Carnegie Mellon University's transportation center who has been working on self-driving technology for over a decade.
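
For a sense of scale on that blind zone: a single roof-mounted lidar's lowest beam only reaches the ground some distance out from the car, so anything closer is invisible. A rough sketch of the geometry (the mounting height and field-of-view numbers are my own illustrative guesses, not Uber's actual specs):

Code
# Rough geometry of a roof-mounted lidar's close-in blind ring.
# Mount height and vertical field of view are illustrative guesses,
# not the actual specs of Uber's Volvo XC90 setup.
import math

mount_height_ft = 6.0  # assumed sensor height above the road
lower_fov_deg = 15.0   # assumed beam angle below horizontal

# The lowest beam first hits the ground this far out; anything
# closer (minus the car body itself) goes unseen by the lidar.
blind_radius_ft = mount_height_ft / math.tan(math.radians(lower_fov_deg))
print(f"Blind ring extends ~{blind_radius_ft:.0f} ft from the sensor")  # ~22 ft

Even with generous assumptions, that ring only extends a couple of car lengths from the sensor - a pedestrian crossing ahead of the car should have been well outside it.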


adventurernumber1

Self-driving cars used to be one of my worst fears, and I was completely against them, but now they don't seem like such a horrible idea (I had already developed this mindset before this news even happened, and have had it for a while). If the technology gets good enough, it could probably prove to be safer at driving than us imperfect, distracted, road-rage-prone humans. If we phase it in carefully, automobile accidents might decrease - and if we can also switch quickly to all-electric cars, our air will be in good shape as well - hopefully we are looking at a bright future in transportation. That said, Uber may have been rushing into self-driving testing in this case - honestly, I was baffled to find out that these things were already on actual roads with actual people, even though the technology isn't perfected yet. It may be more complicated and expensive, but they should find some way to test these cars where no real people are driving on real roads, so they can find the faults and fix them with no risk of hurting anyone in the trial-and-error process. That may have been the mistake here; then again, it's possible the pedestrian was at fault if she wasn't paying attention or was being careless - but we may never know the truth, as no details have been released. Whatever actually happened, this is truly a tragic incident. Hopefully we can prevent something like this from happening again.  :no:


Now alternating between different highway shields for my avatar - my previous highway shield avatar for the last few years was US 76.

Flickr: https://www.flickr.com/photos/127322363@N08/

YouTube: https://www.youtube.com/channel/UC-vJ3qa8R-cc44Cv6ohio1g

DaBigE

My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?
"We gotta find this road, it's like Bob's road!" - Rabbit, Twister

jeffandnicole

Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

It has yet to happen to planes, which of course would easily harm many more people at once.  Planes are largely automated and computerized, and have been for a while.  Since most of us don't fly planes, it's out of sight and out of mind, so you don't think of it that way.

In fact, there's not much I can think of that isn't computerized.

kalvado

Quote from: jeffandnicole on March 29, 2018, 06:14:46 AM
Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

It has yet to happen to planes, which of course would easily harm many more people at once.  Planes are largely automated and computerized, and have been for a while.  Since most of us don't fly planes, it's out of sight and out of mind, so you don't think of it that way.

In fact, there's not much I can think of that isn't computerized.
Hacking of a computerized "off the shelf" car has been demonstrated. DHS claimed they hacked into a pretty old and (by today's standards) barely computerized 757, publishing few details. And it didn't take any hacking for Toyota to put a computerized clusterf on the road.
I am more concerned about long-term support for massively smart cars. Today's estimate for a new car is 20+ years on the road, and there is reasonable mechanical support for those clunkers via junkyard parts and small shops.
That is the equivalent of running Win98 today in the IT world. How much love would those old systems get?

kalvado

Quote from: tradephoric on March 28, 2018, 06:57:01 PM
Uber's use of fewer safety sensors prompts questions after fatal crash
https://www.cnbc.com/2018/03/27/ubers-use-of-fewer-safety-sensors-prompts-questions-after-fatal-crash.html

The Volvo sport utility vehicles had one lidar sensor, compared to seven lidar sensors on the previous Uber fleet of Ford Fusions.  According to Raj Rajkumar, the reduction in lidar sensors introduced blind zones around the perimeter of the SUV in which pedestrians could not be fully detected.  Of course, the pedestrian walking her bike across the street was well back from the "perimeter" of the vehicle in the video (before being struck)... shouldn't the main lidar sensor still have spotted her in this accident?

Quote
In scaling back to a single lidar on the Volvo, Uber introduced a blind zone around the perimeter of the SUV that cannot fully detect pedestrians, according to interviews with former employees and Raj Rajkumar, the head of Carnegie Mellon University's transportation center who has been working on self-driving technology for over a decade.
Maybe they concluded that optical sensors on the back get contaminated too quickly, or that radars on the sides - similar to blind-spot detectors - are sufficient. But this was indeed the prime scenario - something moving into the path of the car... The only possibility I see is that some sensors were inoperative and the car fell into some backup mode. Why no big red flashing light and a pull-over for manual takeover in such a condition?...
Although, as with any high-profile event, there is a strong possibility of some top manager having said "ah, OK, I think it should work this way" about the most incredible problem in the system.

adventurernumber1

Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

That does sound scary as hell.  :paranoid:

Maybe when they finish making self-driving cars, they should include an emergency switch of some sort that us humans have control over, in case something crazy like that happened.
Now alternating between different highway shields for my avatar - my previous highway shield avatar for the last few years was US 76.

Flickr: https://www.flickr.com/photos/127322363@N08/

YouTube: https://www.youtube.com/channel/UC-vJ3qa8R-cc44Cv6ohio1g

DaBigE

Quote from: adventurernumber1 on March 29, 2018, 11:14:29 AM
Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

That does sound scary as hell.  :paranoid:

Maybe when they finish making self-driving cars, they should include an emergency switch of some sort that us humans have control over, in case something crazy like that happened.

A switch will do you no good if there is no steering wheel, as Chevy was proposing in one of their test vehicles. The news stations that carried that story were of course filled with people freaking out about the concept of a steering wheel-less car.
"We gotta find this road, it's like Bob's road!" - Rabbit, Twister

kalvado

Quote from: DaBigE on March 29, 2018, 11:19:09 AM
Quote from: adventurernumber1 on March 29, 2018, 11:14:29 AM
Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

That does sound scary as hell.  :paranoid:

Maybe when they finish making self-driving cars, they should include an emergency switch of some sort that us humans have control over, in case something crazy like that happened.

A switch will do you no good if there is no steering wheel, as Chevy was proposing in one of their test vehicles. The news stations that carried that story were of course filled with people freaking out about the concept of a steering wheel-less car.
A switch will do you no good, period.
In some of the high-profile Toyota crashes, when the car started to accelerate uncontrollably, people didn't kick the shifter into neutral and/or cut the ignition - even though they had time for that, and doing so would literally have saved their lives.

jeffandnicole

Quote from: kalvado on March 29, 2018, 11:43:51 AM
Quote from: DaBigE on March 29, 2018, 11:19:09 AM
Quote from: adventurernumber1 on March 29, 2018, 11:14:29 AM
Quote from: DaBigE on March 29, 2018, 01:46:34 AM
My biggest fear: Hacking and viruses. Eventually, everything will be interconnected whether we like it or not. People have hacked into construction and dynamic overhead signs before (granted, those arguably have some of the weakest security measures in-place). Once we lose control of our cars and they're all interconnected into the real information superhighway, how will we protect against a new type of kidnapping? Someone remotely disabling the safety devices? Someone pushing ransomware on our vehicle(s)?
Maybe this isn't as feasible as I fear it is?

That does sound scary as hell.  :paranoid:

Maybe when they finish making self-driving cars, they should include an emergency switch of some sort that us humans have control over, in case something crazy like that happened.

A switch will do you no good if there is no steering wheel, as Chevy was proposing in one of their test vehicles. The news stations that carried that story were of course filled with people freaking out about the concept of a steering wheel-less car.
A switch will do you no good, period.
In some of the high-profile Toyota crashes, when the car started to accelerate uncontrollably, people didn't kick the shifter into neutral and/or cut the ignition - even though they had time for that, and doing so would literally have saved their lives.

Yep.  People's first reactions are "WTF???".

It's kinda like the skidding on ice situation.  There's a correct way to steer out of the slide, but many people can't concentrate enough to do it.

Brandon

There's an update on the story from the NTSB.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash

Quote
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

This is my major issue with autonomous vehicles - software.

The link to the preliminary NTSB report: https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
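
The classification flapping matters more than it may sound: if the tracker throws away an object's motion history every time the label changes, it never holds a track long enough to project a path. A minimal sketch of that failure mode (hypothetical, simplified logic - not Uber's actual code):

Code
# Hypothetical illustration of why classification flapping defeats
# path prediction: resetting an object's history on every re-label
# means the predictor never sees two consistent observations.
# This mimics the failure mode the NTSB describes, not Uber's code.

class Track:
    def __init__(self):
        self.label = None
        self.history = []  # observed (x, y) positions

    def update(self, label, position):
        if label != self.label:  # re-classification...
            self.history = []    # ...discards the motion history
            self.label = label
        self.history.append(position)

    def predicted_velocity(self):
        if len(self.history) < 2:  # not enough data to extrapolate
            return None            # no "expectation of future travel path"
        (x0, y0), (x1, y1) = self.history[-2:]
        return (x1 - x0, y1 - y0)  # crude constant-velocity estimate

track = Track()
for label, pos in [("unknown", (0.0, 0.0)),
                   ("vehicle", (0.5, 0.1)),
                   ("bicycle", (1.0, 0.2))]:
    track.update(label, pos)
    print(label, track.predicted_velocity())  # None every time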
"If you think this has a happy ending, you haven't been paying attention." - Ramsay Bolton, "Game of Thrones"

"Symbolic of his struggle against reality." - Reg, "Monty Python's Life of Brian"

kalvado

Quote from: Brandon on May 24, 2018, 04:40:26 PM
There's an update on the story from the NTSB.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash

Quote
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

This is my major issue with autonomous vehicles - software.

The link to the preliminary NTSB report: https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf

I like another part better:
Quote
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

Brandon

Quote from: kalvado on May 24, 2018, 04:58:11 PM
Quote from: Brandon on May 24, 2018, 04:40:26 PM
There's an update on the story from the NTSB.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash

Quote
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

This is my major issue with autonomous vehicles - software.

The link to the preliminary NTSB report: https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf

I like another part better:
Quote
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.


That last part is the one that really bothers me the most.

Quote
The system is not designed to alert the operator.

That's a major programming mistake, IMHO.
"If you think this has a happy ending, you haven't been paying attention." - Ramsay Bolton, "Game of Thrones"

"Symbolic of his struggle against reality." - Reg, "Monty Python's Life of Brian"

TheHighwayMan3561

If it's on me to intervene when the system isn't prepared for the unexpected, there's no point to a self-driving car.

Rule #1 of driving is expect the unexpected.
self-certified as the dumbest person on this board for 5 years running

Bruce

In other words, Uber's negligence killed a person.

All self-driving vehicle testing needs to be shut down until some strict regulations can be put in place. There should be scrutiny from the public at every step of the process, and all testing should be done away from public roads until they're foolproof.

DaBigE

Quote from: Bruce on May 24, 2018, 11:45:21 PM
In other words, Uber's negligence killed a person.

All self-driving vehicle testing needs to be shut down until some strict regulations can be put in place. There should be scrutiny from the public at every step of the process, and all testing should be done away from public roads until they're foolproof.

Soooo, basically never?
"We gotta find this road, it's like Bob's road!" - Rabbit, Twister

DaBigE

Quote from: Brandon on May 24, 2018, 05:14:20 PM
Quote from: kalvado on May 24, 2018, 04:58:11 PM
Quote from: Brandon on May 24, 2018, 04:40:26 PM
There's an update on the story from the NTSB.

NTSB: Uber's sensors worked; its software utterly failed in fatal crash

Quote
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

This is my major issue with autonomous vehicles - software.

The link to the preliminary NTSB report: https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf

I like another part better:
Quote
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.


That last part is the one that really bothers me the most.

Quote
The system is not designed to alert the operator.

That's a major programming mistake, IMHO.

Even if it had alerted the operator, 1.3 seconds is likely too short a time to make much of a difference. The driver would be lucky to perceive the issue, let alone have time to react and for the car to stop. Factor in a typical perception-reaction time (2.5 seconds), plus the distance needed to come to a complete stop, and you might as well have the system shut down and go all-manual in heavy traffic or "confusing" environments.
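
A rough back-of-the-envelope check bears that out. A minimal sketch (the 43 mph speed comes from the NTSB preliminary report; the 2.5-second perception-reaction time and 0.7 g braking are the usual textbook design values, not report figures):

Code
# Back-of-the-envelope stopping-distance check.
# 43 mph is from the NTSB preliminary report; the 2.5 s
# perception-reaction time and 0.7 g deceleration are assumed
# textbook values, not figures from the report.
MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.47 ft/s
G = 32.2                  # gravitational acceleration, ft/s^2

speed = 43 * MPH_TO_FPS   # ~63 ft/s
warning = 1.3             # seconds between detection and impact
prt = 2.5                 # perception-reaction time, seconds
decel = 0.7 * G           # ~22.5 ft/s^2, hard braking

available = speed * warning           # ~82 ft to the point of impact
braking = speed**2 / (2 * decel)      # ~88 ft to stop on brakes alone
total = speed * prt + braking         # ~246 ft once a human reacts

print(f"available: {available:.0f} ft")
print(f"braking alone: {braking:.0f} ft")
print(f"with reaction time: {total:.0f} ft")

Even an instantaneous, full-force brake application at the 1.3-second mark would not quite have stopped the car in the distance available, and with a human in the loop the required distance roughly triples.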
"We gotta find this road, it's like Bob's road!" - Rabbit, Twister

SP Cook

Quote
In other words, Uber's negligence killed a person.


Correct.  And the penalty in Arizona is 4 to 8 years.  Which is what everybody involved should serve, thus bringing an end to the idiotic idea that cars can drive themselves - and that if a few hundred people get killed trying to prove it, it's OK.

jeffandnicole

Quote from: SP Cook on May 25, 2018, 09:24:23 AM
Quote
In other words, Uber's negligence killed a person.


Correct.  And the penalty in Arizona is 4 to 8 years.  Which is what everybody involved should serve, thus bringing an end to the idiotic idea that cars can drive themselves - and that if a few hundred people get killed trying to prove it, it's OK.

Companies kill people all the time.  Wrongful death lawsuits generally go after the company, not a person, even when the death can be attributed to a person not properly doing their job.

But let's ignore all of that, because it's not automobile related.

kalvado

Quote from: SP Cook on May 25, 2018, 09:24:23 AM
Quote
In other words, Uber's negligence killed a person.


Correct.  And the penalty in Arizona is 4 to 8 years.  Which is what everybody involved should serve, thus bringing an end to the idiotic idea that cars can drive themselves - and that if a few hundred people get killed trying to prove it, it's OK.
4 to 8 years? That doesn't match anything in AZ law.
Negligent homicide is class 4: 1.5-3 years, likely reducible to 1.
Manslaughter is class 2: 4 to 10 years, again likely reducible to 2.

But what keeps me amazed is how bloodthirsty and hateful people are...

slorydn1

Hmm, let's see here.


A pedestrian crosses the road, in the dark, wearing dark-colored clothing, 360 feet from the nearest legal place to do so, in front of a well-lit motor vehicle that she should have easily seen, which was traveling in the lane farthest from her at the start and 2 mph below the posted speed limit - and it's Uber and the test driver who were negligent????? I guess I am in the minority that still believes in personal responsibility.


Although I do agree with others that automated vehicles should not be allowed in traffic until the software bugs are worked out, I am not 100 percent convinced that in this case even an alert human driver, paying full attention in a normal car, would have seen the pedestrian in time to affect the outcome.
Please Note: All posts represent my personal opinions and do not represent those of any governmental agency, non-governmental agency, quasi-governmental agency or wanna be governmental agency

Counties: Counties Visited

kalvado

Quote from: slorydn1 on May 30, 2018, 05:11:36 PM
Hmm, let's see here.


A pedestrian crosses the road, in the dark, wearing dark-colored clothing, 360 feet from the nearest legal place to do so, in front of a well-lit motor vehicle that she should have easily seen, which was traveling in the lane farthest from her at the start and 2 mph below the posted speed limit - and it's Uber and the test driver who were negligent????? I guess I am in the minority that still believes in personal responsibility.


Although I do agree with others that automated vehicles should not be allowed in traffic until the software bugs are worked out, I am not 100 percent convinced that in this case even an alert human driver, paying full attention in a normal car, would have seen the pedestrian in time to affect the outcome.

I would say that it is not the driver but the software designer who is negligent. Although that negligence is very far from criminally prosecutable, there is definitely something wrong with completely disabling the emergency fallback function. The fact that the software saw something and tried to avoid the hit - but could not do so - adds a lot of insult to that fatal injury.

slorydn1

Quote from: kalvado on May 30, 2018, 05:32:17 PM
Quote from: slorydn1 on May 30, 2018, 05:11:36 PM
Hmm, let's see here.


A pedestrian crosses the road, in the dark, wearing dark-colored clothing, 360 feet from the nearest legal place to do so, in front of a well-lit motor vehicle that she should have easily seen, which was traveling in the lane farthest from her at the start and 2 mph below the posted speed limit - and it's Uber and the test driver who were negligent????? I guess I am in the minority that still believes in personal responsibility.


Although I do agree with others that automated vehicles should not be allowed in traffic until the software bugs are worked out, I am not 100 percent convinced that in this case even an alert human driver, paying full attention in a normal car, would have seen the pedestrian in time to affect the outcome.

I would say that it is not the driver but the software designer who is negligent. Although that negligence is very far from criminally prosecutable, there is definitely something wrong with completely disabling the emergency fallback function. The fact that the software saw something and tried to avoid the hit - but could not do so - adds a lot of insult to that fatal injury.

I absolutely agree with you there, 100 percent. Still, should the person who was struck even have been where she was when this occurred?

The other side of this, though, is that this could have been another car in the same lane as the Uber, where the driver somehow miraculously slammed on the brakes in time to stop before hitting the pedestrian, only to get rear-ended by the Uber - because the test driver wasn't paying attention and the software wouldn't have stopped the car, since it was "instructed" not to. This is why I agree the automated Uber shouldn't have been allowed in live traffic to begin with.
Please Note: All posts represent my personal opinions and do not represent those of any governmental agency, non-governmental agency, quasi-governmental agency or wanna be governmental agency

Counties: Counties Visited

Rothman

Doesn't matter.  You strike a pedestrian, it's bad news for you.
Please note: All comments here represent my own personal opinion and do not reflect the official position(s) of NYSDOT.

AlexandriaVA

Shocker... an AARoads discussion of a pedestrian accident devolves into victim blaming.


