It’s tough to say they’re causing accidents when the majority of the accidents are them getting rear-ended. Normally, when someone gets rear-ended, it’s the rear-ender, not the rear-endee, who’s at fault.
Now, I do understand that it’s because they’re not driving like humans, and the humans behind them, the rear-enders, are expecting the car to do something different, i.e., not slow down and stop for the yellow light, or turn when there’s a certain amount of space between cars, while the self-driving car has been programmed to be more cautious and law-abiding. Regardless, I would say that if you rear-end someone, it’s your fault for not being attentive enough, not the rear-endee’s fault.
And hopefully technology will get rid of the rear-end crash altogether. I can’t find the source, but I read somewhere that no Volvo with rear-end crash prevention technology has ever been involved in a rear-end crash (i.e., a front-facing camera that stops the car before it plows into the car ahead).
Edit: maybe I’m conflating their goal of 0 with actual 0
I’m on the side of, “if you’re introducing a new technology, and crashes result, you’re doing something wrong.” I don’t think it matters that the vehicles are technically following the law when they stop short. So maybe look behind and slow down more gradually if someone’s following too close. I do that on the highway. I’ll use my flashers if I have to slow down for traffic, and definitely look behind me to see if someone’s going to run up on me.
Driving is a social activity. It’s not just following laws. You have to take into account what other road users are doing, and act accordingly.
Interesting points you both bring up.
Traffic laws are pretty black and white on rear end crashes. It’s always the fault of the person hitting the car ahead of them. Doesn’t matter if it’s an autonomous car following the letter of the law, an autonomous car stopping short because a fire hydrant tricked it, or a human slamming on the brakes for one reason or another. That’s a bright line rule.
What isn’t known is the rate per million miles driven (or whatever the metric is) of rear-end crashes for autonomous vs. human-piloted lead cars. That’s what we really need to know. Not every human-human crash is reported, and not every human-computer crash is reported either. So we’d need someone way above my pay grade (i.e., took an intro stats course in college in 1993) to work on this data.
One other thought: if, say, rear-end collisions have gone up with autonomous vehicles, have other, presumably less safe / higher morbidity and mortality, crashes gone down, like t-bones when red lights are run, or pedestrians hit in crosswalks?
Who knows? Food for thought.
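To put a number on the metric I mean, here’s a quick back-of-envelope sketch in Python. Every figure in it is made up purely for illustration; nobody has published the real per-mile rates, which is the whole problem.

```python
# Back-of-envelope comparison of rear-end crash rates per million miles.
# All input figures below are PLACEHOLDERS, not real data.

def crashes_per_million_miles(crashes, miles):
    """Rate of reported crashes per million vehicle miles traveled."""
    return crashes / (miles / 1_000_000)

# Hypothetical inputs: reported rear-end crashes and total miles driven
# with that type of vehicle in the lead position.
human_rate = crashes_per_million_miles(crashes=2_000, miles=500_000_000)
av_rate = crashes_per_million_miles(crashes=12, miles=2_000_000)

print(f"human lead car: {human_rate:.1f} rear-end crashes per M miles")
print(f"AV lead car:    {av_rate:.1f} rear-end crashes per M miles")
```

The tricky part isn’t the division, of course; it’s that the crash counts in the numerators are under-reported to different (and unknown) degrees for the two groups.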
Perhaps of interest: a more-technical-than-usual discussion of sensor technologies used in autonomous vehicles:
It’s more complicated than I realized. They (can) use:
- long-range radar
- short- and medium-range radar
- optical cameras
“Days before an Uber in self-driving mode struck and killed a pedestrian in Tempe, Ariz., in March, a manager in the testing operations group told company executives and lawyers in an email that the vehicles were “routinely in accidents resulting in damage,” including in Pittsburgh. That employee, Robbie Miller, said one of Uber’s autonomous Volvo SUVs swerved from the road and onto the sidewalk, where it continued driving… Mr. Miller alleged in an email that the episode was “essentially ignored” for days.”
Fresh Air radio conversation about autonomous cars.
- AVs could be as big as a house, feel like a house inside.
- But if they stay narrow, 6 ft, car lanes could be narrowed from 12 ft to 7 ft. So narrower roads or more lanes.
- Fewer crashes.
- More cars on the road, especially in big cities & wealthy areas.
- Cars could be adopting many semi-autonomous features soon, e.g. automatic braking.
- So far ride services (Uber, Lyft) are pulling people out of public transit.
- But those services and AVs don’t obviate denser & public transit such as trains, subways, buses.
- Small autonomous buses could provide more frequent, on-demand service where bus service currently exists, and new service in outlying suburbs.
I can support the last bullet point, getting people from suburban sprawl housing to transit lines, but not to supply transit service in built-up areas. Nearly every other bullet point in that list is a bad thing, as concerns cycling. Narrowing lanes from 12 to 10 feet makes some sense, but not to add another driving lane. Instead, doing that usually involves a reduction in speed limit, which would be a good thing. It would also be a good thing for the AV, since it would only be accelerating and decelerating from 25 vs 35. But they don’t say that and don’t mean that and couldn’t sell the idea on that.
Relevant to all cars, not just autonomous: last year I wondered here “Will cars of the future enforce the speed limit?” https://www.bikepgh.org/message-board/topic/car-less-drivers/page/3/#post-342797 . The technology is called speed governors or speed limiters.
The technology is becoming feasible now, but at least in the UK, if not also in the US, the car lobby is blocking it.
Given this, is everybody still optimistic that autonomous cars will obey the speed limit?
Year in review: The hype around driverless cars came crashing down in 2018
self-driving tesla hits robot
This says the “killing” of the robot was a publicity stunt by Russian company Promobot, which might explain why most of the comments on their YouTube video are in Russian, and why the tipping motion of the “struck” robot was so gradual. We should ignore the story. https://www.wired.com/story/tesla-promobot-pave-self-driving-education/
BikePGH says “TAKE OUR 2019 SURVEY ABOUT SHARING THE ROAD WITH AUTONOMOUS VEHICLES”.
It's been two years: We want to hear about your thoughts and experiences sharing the road with Autonomous Vehicles. Take BikePGH's 2019 AV Survey: https://t.co/wrAH9Jbgmp #AutomousVehicles #Pittsburgh
— BikePGH (@BikePGH) January 16, 2019
Platooning autonomous buses and trucks coming to selected PA highways as soon as this spring.
Self driving car lidar may ruin camera sensors.
"Autonomous-vehicle developers are generally struggling with a multitude of basic scenarios, from making unprotected left-hand turns to judging whether an idling car is double-parked." And don't even get them started on how to handle bikes and peds https://t.co/uJj0SpsJZ8
— Eric Boerer (@ErokEric) January 17, 2019
Self-driving cars will drive around instead of parking to save owners $$.
The family of the woman who was killed by the Uber autonomous car in Tempe is suing the city for bad road design.
Eric Boerer of Bike Pittsburgh and Erin Potts of Healthy Ride discussed cycling near autonomous vehicles on The Confluence radio show. (Synopsis: cyclists would like AVs to drive at under 25mph, cycling in Pittsburgh is up, protected bike lanes are good, people are more suspicious of Uber since their AV killed a pedestrian). Listen from 22:30 to 33:20
I just read an article about the Bike PGH AV Opinion survey. I wrote a letter to BikePGH in 2017 after the first one to point out that this is not a survey, but an online poll. I’m writing both as a grumpy old timer and as someone who is genuinely concerned that BikePGH may be misleading the public on the opinions of bicyclists. A survey is a scientific tool which weighs the actual demographics of the survey pool and asks questions of randomized participants; it is reflective of the opinions or behaviors of the population surveyed. A poll, by contrast, is the collection of data from self-selected participants. As the data you collected shows, these folks are not representative of the bicycling public, so neither are their opinions! Surveys are hard and costly, but you get what you pay for.
Thanks for listening.
Totally agree. I thought the same thing when I was reading it. They really need to put in a limitations part including the fact that this isn’t a representative sample of anything other than those motivated to take the online poll. The data and results are still valuable but it needs to be presented the right way.
What they did may not be statistically meaningful or good science, but it fits one of Merriam-Webster’s definitions of the word “survey”: “to query (someone) in order to collect data for the analysis of some aspect of a group or area”. There’s no requirement that the analysis must be done in a certain way or that the result must be presented accurately. Even an online poll is still a kind of survey, as long as it involves collecting data for some kind of analysis. Here they analysed the professed opinions of people who felt like taking a certain online survey.
I agree that a disclaimer would have been appropriate. But most readers really should know enough not to need one.
This argues that Google is making the same mistake with Waymo that Xerox made with PARC’s GUI.
Europe is moving ahead with speed limiters. It’s sad how the US is falling behind on regulation of technology.
All New Cars To Have Speed Limiters Fitted, Rules European Parliament
Here’s a related article about Volvo deciding that their newer cars should be speed limited to 112 mph worldwide, and perhaps use geofencing to limit them even further in sensitive areas such as hospitals, schools, etc. The car company is talking about this in terms of morality. They also feel that a car can be going fast enough that the safety equipment in the car can’t compensate. Very thought provoking.
I saw a documentary recently on PBS about self-driving cars (NOVA: Look who’s driving). It was clear that Tesla, by introducing a so-called level 3 car (the car can drive itself most of the time but needs a human to take over in emergencies) was basically deciding it was ok to kill some people for the benefit of being the leader in this technology. As they have done.
Another point on this: Uber deployed technology that had no provision for jaywalkers. And killed a woman as a result.
We should not expect companies to make driving safer as they introduce this technology, unless we make them do so.
I think people should avoid using the label “jaywalker” when referring to Uber’s Tempe car crash & death. From my reading of the stories and viewing of the video, I believe Elaine Herzberg was crossing the southbound lanes of Mill Ave about where I’ve placed the “EH” in this satellite photo. She was walking in the direction indicated, as if she had just followed the path across the road’s median. It was a case of horrible road design to build a path across the median that invites pedestrian use but to encourage cars to speed, almost inviting them to hit pedestrians that cross the road there. Technically, I think Herzberg was crossing the road illegally, but the road design encouraged it. (This picture created using current Google satellite picture. Streetview pictures suggest that the median has since been re-landscaped, eliminating the path across the median.)
I don’t think statistics support such criticism of self driving companies or technology. Per mile traveled, Uber’s experimental vehicles are safer than human drivers. They even had a human driver as a backup but she was busy watching video on her phone. Putting humans inside all of their autonomous vehicles has been ludicrously expensive but they have done it as a safety measure.
Also, coverage of the NTSB report has been framed and sensationalized in a way that I think paints an inaccurate picture of the logic and sensors involved. Just one example, it portrays not enabling emergency brake usage as an oversight. In reality, it is a conscious tradeoff. If enabled, there would be inappropriate triggering. That can be dangerous and also result in fatalities. At this point in development, not enabling automated e-braking is likely the appropriate decision. Or at least that is the analysis we should be talking about instead of painting it as an oversight.
This isn’t to say use of the technology should go without heavy scrutiny and regulation. Rather, just that the sky isn’t falling.
It has been known for decades that humans will not properly execute a task where > 90% of the time they are doing nothing, and then, in an emergency situation, they are expected to take control and respond properly. This was known when I was working on autonomous vehicles at CMU 30 years ago. It is as well understood as knowing that people cannot lift 500 pounds.
Tesla and Uber have deliberately ignored human factors research and deployed vehicles that are engineered to kill, and are killing, drivers and pedestrians.
This is not to say that autonomous vehicles research and testing has not led to an increase in safety; I think it’s been shown that systems that alert the driver when they are getting sleepy and drifting out of lane can work, for example. But assuming that the marketplace will lead car companies to build and deploy safe cars is not working.
Of course it is an imperfect safety fallback strategy. But human drivers are also imperfect. Comparing the current AVs with human drivers seems worthwhile when evaluating risk.
I’m left wondering what has caused you to be so critical of vehicles which are statistically safer than human drivers when operated the way they are currently being tested. There must be something else going on that contributes to your desire to portray these companies in a negative light. Seriously, calling them “engineered to kill” seems crazy when in fact they are safer than human operated vehicles already. Care to shed some light on that so that your perspective is better understood?
Human drivers kill over 36,000 people per year in this country alone. With that in mind, perhaps we’ve actually been too cautious in rolling out / experimenting with autonomous vehicles. Statistically, it would be worth letting these companies take even more risks in testing if it means consumer-ready, safer-than-human performance can be achieved at an earlier date.
Granted, Tesla’s assisted mode is a different story, but still, actual numbers haven’t borne out a scenario meriting such cynicism.
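For what it’s worth, here’s the quick arithmetic behind that kind of comparison. The 36,000 deaths figure is from above; the annual vehicle-miles-traveled number is my own assumption, roughly in line with published federal estimates, so treat the result as ballpark only.

```python
# Rough US traffic fatality rate per 100 million vehicle miles traveled.
# deaths_per_year is the figure cited above; vmt_per_year is an ASSUMED
# value (roughly in line with published FHWA estimates), not sourced here.

deaths_per_year = 36_000
vmt_per_year = 3.2e12  # assumed total annual US vehicle miles traveled

rate = deaths_per_year / (vmt_per_year / 1e8)
print(f"~{rate:.2f} deaths per 100 million miles")  # about 1.1
```

Any claim that AVs are “safer than human drivers” ultimately has to beat a baseline on the order of one death per hundred million miles, which is why the small number of AV test miles to date makes confident comparisons hard.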
It’s just that I know what I’m talking about. The so-called Level 3 design is inherently unsafe (and is known to be so, which is why, e.g., Google is avoiding it). Tesla introducing it to the mass market and not doing anything to prevent it from being used in unsafe situations (for example, engineering it to be unable to deal with cross traffic, but not preventing drivers from using it where there is cross traffic) is designing a system that will kill people.
Indeed, if you look at the much larger research area of autonomous vehicles and driver assistance, there is a benefit. It’s just that when you count the number of people killed by Tesla’s unsafe design and Uber’s irresponsible testing of a system that can’t deal with pedestrians outside of crosswalks, you see the problem.
Maybe self-driving safety drivers need to be given something to do: pointing, describing the road conditions, describing what they expect the car to do the entire time they are driving, pointing out items that might be missed by the tech, and providing feedback for developers, basically forcing themselves to pay attention to what’s going on, like the rail workers in this article: https://www.atlasobscura.com/articles/pointing-and-calling-japan-trains
The Nova article I posted has some good info on this. There is apparently good work on defining edge cases, including a company here in Pittsburgh working explicitly on this. I’m aware that good people are working to solve this problem, and I think it will eventually be solved. I just think Uber and Tesla are jumping the gun, and killing people as a result.
I compare it to the MCAS system on the 737 Max. Sure, plane travel is extremely safe, and the 737 Max is safe, too, on a per-miles-traveled basis. But the unsafe technology of MCAS killed people, and did it predictably. There’s lots of evidence Boeing did or should have known about that. But they ignored the evidence, because market pressure overrode their obligation to the public. That’s what I think is happening with autonomous driving. It’s a new technology, being promoted as safe, and people are willing to overlook a few deaths here and there. So there’s no real motivation to make sure it actually is safe.