Car-less Drivers




Ahlir
Participant
#

This seems like a reasonable thread for this post…

Google has been working on making its autonomous car software aware of bicycles, which apparently have movement characteristics different from those of cars:
Google talks up its self-driving cars’ cyclist-detection algorithms

On a related note: the Tesla driver who died this past week in a truck collision in Florida was apparently watching a Harry Potter movie while driving. Not sure what this really means: it’s either meant to absolve Tesla of blame, or it’s a reminder that humans will inevitably do things not observed in your AI’s training corpus…

Collision avoidance still requires some human participation; who would have guessed?


jonawebb
Participant
#

Well, that’s the problem (and well understood by human factors folks). You can’t make a system partly autonomous and expect the human to be ready to take over when something goes wrong.
The Tesla crash will be an interesting first test of who is responsible for the failure. My guess: same as always, nobody.


paulheckbert
Moderator
#

Joshua Brown, the autonomous Tesla “driver” who died, appears quite cavalier and reckless in the first video here:


Marko82
Participant
#

Google: Our Self-Driving Cars Are Nice to Cyclists
http://www.pcmag.com/news/345837/google-our-self-driving-cars-are-nice-to-cyclists

The cars’ software can recognize cyclists’ hand signals and is programmed to give bicycles a wide berth when the car is passing them, among other measures to minimize the risk of a collision.


jonawebb
Participant
#

@marko82 note comment on this story: “Google cars are already a hazard because they drive like granny – slowing down and blocking traffic and stopping unexpectedly. What do you bet these biker updates make that situation even worse?

I suspect we’ll be getting stuck behind a Google car going 10 miles an hour because it won’t pass a biker.”


Benzo
Participant
#

Saw an Uber vehicle yesterday on River Ave with a ton of sensors and such on top, including some weird spinning device on the roof. Thought it might be doing something like Google Street View-style imagery, or maybe just a continuous scan.


paulheckbert
Moderator
#

That’s probably a LIDAR scanner, which is a form of laser rangefinder. It gives a grid of range data (distance to nearest object) in all directions. See https://www.youtube.com/watch?v=nXlqv_k4P8Q

Many (most?) autonomous cars use LIDAR, since it’s more reliable in some respects than binocular vision or multiple cameras at detecting obstacles.
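
For anyone curious what that grid of range data looks like to the software, here’s a rough toy sketch in Python (my own illustration, not any vendor’s actual code) that turns one horizontal ring of range readings into x,y obstacle points:

import math

# One horizontal "ring" of LIDAR returns. Each reading is the distance
# (in meters) to the nearest object at a given bearing around the sensor;
# None means the beam never came back.
ranges = [5.2, 5.1, None, 2.4, 2.3, 2.5, 30.0, 29.8]
angle_step = 2 * math.pi / len(ranges)  # evenly spaced bearings

def ranges_to_points(ranges, angle_step):
    """Convert (bearing, range) readings into x,y points around the sensor."""
    points = []
    for i, r in enumerate(ranges):
        if r is None:  # dropout: nothing to place
            continue
        theta = i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Anything closer than, say, 3 meters gets flagged as a nearby obstacle.
obstacles = [p for p in ranges_to_points(ranges, angle_step)
             if math.hypot(*p) < 3.0]
print(obstacles)

A real unit sweeps many such rings per second in 3D, but the basic idea is the same.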


edmonds59
Participant
#

I wonder about the efficacy of LIDAR once any number of autobots are equipped with it. One would think that once more than a handful of vehicles are using it in any given area, spewing signal about, the signal noise would become completely disabling. I don’t know a thing about the technology though.
I suppose it would work if all the vehicles in a given area were constantly sharing data and creating some kind of real time “hive” map of the environment.


jonawebb
Participant
#

I don’t think there’s much chance for interference since you’re scanning using a directional laser. Two different laser beams pointing at the same spot at the same time are pretty unlikely.
I was at CMU when this technology was being pioneered for autonomous vehicles. One problem I remember is that highly reflective surfaces don’t reflect the laser back at the scanner. It bounces off them like a mirror instead. So a polished object can simply disappear in the scan.
This led to an amusing incident during a test by the Daimler-Benz group. One of the executives parked his pristine car and it happened to be somewhat in the path of the autonomous vehicle. You can guess what happened.
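
To make that concrete in code: the safe thing is to treat “no return” as unknown rather than as empty space. A toy Python sketch of the idea (purely illustrative, not what CMU or Daimler actually ran):

# A specular (mirror-like) surface bounces the beam away, so the scanner
# reports "no return" even though something is there. Treating missing
# returns as free space is what lets a polished car "disappear"; treating
# them as unknown keeps the planner cautious.

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def classify_return(r, max_range=100.0):
    if r is None:
        return UNKNOWN   # beam never came back: mirror, glass, open sky...
    if r >= max_range:
        return FREE      # beam reached max range without hitting anything
    return OCCUPIED      # genuine hit at distance r

readings = [5.2, None, None, 100.0, 2.3]
print([classify_return(r) for r in readings])  # [1, 2, 2, 0, 1]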


jonawebb
Participant
#

Some thoughts on how driverless cars might communicate with the public: http://www.nytimes.com/2016/08/31/technology/how-driverless-cars-may-interact-with-people.html

I see something like this as a potential solution to the problem of driverless cars being pushovers. If they can communicate with human drivers, they may be able to issue threats, and get respect. OTOH I can also see them behind one of us, trying to get us to move out of the way.


edmonds59
Participant
#

The first autonomous vehicle that tries to communicate with me, I’m going to see if the old “Liar’s Paradox” works the way it’s supposed to.


helen s
Participant
#

Now, the driverless bike!


jonawebb
Participant
#

Uber, Pittsburgh, Peduto, & driverless cars: http://mobile.nytimes.com/2016/09/11/technology/no-driver-bring-it-on-how-pittsburgh-became-ubers-testing-ground.html


Eric
Member
#

I just read this article. I was passed by 5 Ubers today while riding on River Ave and the Strip (Penn) and it was by far the best set of passes I’ve experienced. 4+ feet. Not crazy speeds. Didn’t try to cut back in front of me or ride my tail. Didn’t veer into oncoming traffic and put everyone at risk.

Bring em on.


Marko82
Participant
#

Our Reporter Goes for a Spin in a Self-Driving Uber Car

Uber will start collecting data to answer some of those questions with its driverless car tests in Pittsburgh, known for its unique topography and urban planning. The city, in essence a peninsula surrounded by mountains, is laid out in a giant triangle, replete with sharp turns, steep grades, sudden speed limit changes and dozens of tunnels. There are 446 bridges, more than in Venice, Italy. Residents are known for the “Pittsburgh left,” a risky intersection turn.
“It’s the ideal environment for testing,” said Raffi Krikorian, engineering director of Uber’s Advanced Technologies Center. “In a lot of ways, Pittsburgh is the double-black diamond of driving,” he said, using a ski analogy to underscore the challenge.


Marko82
Participant
#

There were quite a few Pgh/Uber articles today, so I guess Uber’s PR person earned their pay.

Here’s another one with some good pictures of Pittsburgh, and a few showing where the car got confused.

http://www.businessinsider.com/uber-driverless-car-in-pittsburgh-review-photos-2016-9


Gordon
Participant
#

I hope the outcome of the Uber autonomous car trial will help end the “car culture” by reducing the need for private motor vehicles and the infrastructure to support them. In addition, by replacing human drivers with robots, streets will be safer because robots don’t take risks, get impatient, fatigued, drunk or drugged. However, many motorists don’t care about other road users’ safety and won’t relinquish control of their motor vehicles. Governments can usher in the change by introducing these steps gradually:

1. Reduce the parking spaces in the cities and replace them with bicycle lanes.

2. Toll human-operated vehicles in the cities. Collected tolls will be used to fund city-owned autonomous motor vehicles and “complete street” infrastructure.

3. Toll human-operated vehicles on highways. Change highway designs and make them safe for all road users instead of speedy movement of motor vehicles.

4. Increase the difficulty of driver license tests and require drivers to re-take the tests every 1-3 years. Increase registration fees for human-operated motor vehicles.

5. Ban manufacture and sales of new human-operated motor vehicles except those designed for race courses or off-road use.


jonawebb
Participant
#

…by replacing human drivers with robots, streets will be safer because robots don’t take risks, get impatient, fatigued, drunk or drugged.

Robots have to take risks in order to drive. Driving on a road with other traffic inherently involves risk: a car could veer out of an oncoming lane, or a parked car could suddenly pull into traffic. Without taking risks, a robotic vehicle would be forced to stay in its parking space.
Whether robotic vehicles improve safety depends on how they are programmed. Will a vehicle be programmed to get from point A to point B as fast as possible, or to follow every traffic law to the letter, even though that will slow it down?
The vehicles from Uber seem to have been programmed to follow traffic laws closely, and that is great. But they are only the beginning. When robotic vehicles become more mainstream, will people be comfortable with their cars making them late to work, or will they prefer vehicles that go faster while taking more risks? What will prevent car manufacturers from competing with each other on who can develop a faster car? When has safety (vs. “performance”) ever been a successful selling point for car manufacturers in this country?
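
To make the “how it’s programmed” point concrete, here’s a toy Python cost function (entirely hypothetical, not any manufacturer’s actual code) showing how a single tuning knob trades trip time against collision risk:

def plan_cost(trip_minutes, collision_prob, risk_weight):
    """Toy planner objective: lower is better.

    risk_weight is the knob the manufacturer (or a regulator) sets.
    A small weight rewards speed; a large one rewards caution.
    """
    return trip_minutes + risk_weight * collision_prob * 1000

cautious = plan_cost(trip_minutes=25, collision_prob=0.0001, risk_weight=100)
aggressive = plan_cost(trip_minutes=18, collision_prob=0.002, risk_weight=100)
print(cautious, aggressive)  # roughly 35 vs 218: the cautious plan wins
# Drop risk_weight to 1 and the same planner flips: roughly 25.1 vs 20,
# so the faster, riskier plan wins. Same car, same sensors, different values.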


chrishent
Member
#

When has safety (vs. “performance”) ever been a successful selling point for car manufacturers in this country?

That’s one of the biggest selling points for SUVs. Very few people actually need an SUV. Can’t tell you how many times I’ve heard someone say they bought an SUV because it’s “safer.”


Gordon
Participant
#

 

@jonawebb: you were right about robots having problems dealing with risk. The Uber autonomous car could get stuck at a four-way stop because it would wait for all traffic to come to a complete stop.[1] It also did not know how to deal with illegally double-parked trucks.[2]

I think the ultimate solution might be banning all human-operated motor vehicles, and let the governments decide on the programming for autonomous motor vehicles.

Ref:

  1. https://www.engadget.com/2016/09/14/uber-pittsburgh-self-driving-cars-experience/
  2. https://www.wired.com/2016/09/self-driving-autonomous-uber-pittsburgh/

jonawebb
Participant
#

@chrishent, of course in the case of the SUV they’re talking about safety of the people inside the SUV. People outside the SUV are put in more danger because the SUV is bigger and heavier. Here, we’re talking about safety for the people outside the robot car. That kind of safety has never been a marketing issue in this country.


alleghenian
Member
#

There are some highway entries where you have a stop sign before merging, so you usually have to gun it in front of someone when you see a small gap in traffic to get onto the highway: Squirrel Hill outbound on the Parkway East, Millvale outbound on 28, Greentree inbound on the Parkway West. I’m sure they tested scenarios like this, but hopefully the Uber cars won’t just sit there forever.


jonawebb
Participant
#

Just a year or two ago we had no driverless car laws. Now we have a hodgepodge: http://www.post-gazette.com/news/transportation/2016/09/16/Hodgepodge-of-self-driving-vehicle-laws-raises-safety-concerns/stories/201609150203

And for those who wish/hope that car companies will be extra careful about safety because they can be held responsible if a driverless car kills somebody:

As far as liability in a crash, the responsibility remains the same whether a vehicle has a human or electronic driver, said James Lynch, chief actuary at the Insurance Information Institute in New York City.

“You don’t become more liable because your machine went haywire and caused a crash than if your driver went haywire,” he said.


Gordon
Participant
#

Interesting read on how shared-use autonomous cars can make streets safer for everyone: http://www.citylab.com/tech/2015/08/how-driverless-cars-could-turn-parking-lots-into-city-parks/400568/ Park(ing) Day Pittsburgh commented on the article: “Autonomous cars could result in many parking spaces turning into parks.”

BTW, I had my first sighting of an Uber autonomous car today at Morewood and Center!


Steven
Participant
#

“As far as liability in a crash, the responsibility remains the same whether a vehicle has a human or electronic driver, said James Lynch, chief actuary ….”

But if liability is decided by juries, they may not feel the same way about that nice old granny who didn’t mean to knock a hole in that building, versus rich, heartless Uber/Google/Tesla, which must be punished for endangering the public with their flawed product. A jury that can imagine themselves doing the dumb thing a human did may not see themselves doing whatever dumb thing the machine did.


Mick
Participant
#

Robocar companies may well pay higher civil penalties.

I’m guessing that robocar companies will self-insure, so they really only have to pay for damage, not damage + insurance profit. Also, since their cars will presumably be far safer, they will probably be able to pay the higher penalties at far lower overall cost.

I wonder if insurance companies and their pet senators despise robocars the same way they hate reasonable health care plans. I mean, the industry of misleading consumers about insurance plans is what the US does instead of making steel, isn’t it?

 


jonawebb
Participant
#

Here is the policy itself: http://www.nhtsa.gov/nhtsa/av/

I really think relying on the courts to regulate the car companies, through lawsuits brought when there is a crash, will work just as well as it has in regulating other industries. We need federal oversight to ensure that the promises being made about driverless cars being safer aren’t gradually discarded in favor of making them faster and higher-“performance.”


Gordon
Participant
#

The Seattle tech group Madrona Venture Group recommended reserving one lane of I-5 for autonomous vehicles: http://www.madrona.com/i-5/

I believe tolling human-operated motor vehicles, replacing parking spaces with bike lanes, and encouraging autonomous motor vehicle use in the cities would have a greater impact on safety, but what Madrona Venture Group recommended might be more practical with current autonomous technology.


jonawebb
Participant
#

Germany is developing laws for driverless vehicles that sound like what people here want:

Dobrindt wants three things: that a car always opts for property damage over personal injury; that it never distinguishes between humans based on categories such as age or race; and that if a human removes his or her hands from the steering wheel – to check email, say – the car’s manufacturer is liable if there is a collision.

https://www.newscientist.com/article/mg23130923-200-germany-to-create-worlds-first-highway-code-for-driverless-cars/


The Iguana
Participant
#

Sorry, But Driverless Cars Aren’t Right Around the Corner

From https://www.linkedin.com/pulse/sorry-driverless-cars-arent-right-around-corner-john-battelle?trk=eml-email_feed_ecosystem_digest_01-hero-0-null&midToken=AQGDy4rbq2sSyQ&fromEmail=fromEmail&ut=0muVZBYD2dLDs1

At the root of our potential disagreement is the Trolley Problem. You’ve most likely heard of this moral thought experiment, but in case you’ve not, it posits a life and death situation where taking no action insures the death of certain people, and taking another action insures the death of several others. The Trolley Problem was largely a philosophical puzzle until recently, when its core conundrum emerged as a very real algorithmic hairball for manufacturers of autonomous vehicles.

Our current model of driving places agency — or social responsibility — squarely on the shoulders of the driver. If you’re operating a vehicle, you’re responsible for what that vehicle does. Hit a squadron of school kids because you were reading a text? That’s on you. Drive under the influence of alcohol and plow into oncoming traffic? You’re going to jail (if you survive, of course).

But autonomous vehicles relieve drivers of that agency, replacing it with algorithms that respond according to pre-determined rules. Exactly how those rules are determined, of course, is where the messy bits show up. In a modified version of the Trolley Problem, imagine you’re cruising along in your autonomous vehicle, when a team of Pokemon Go playing kids runs out in front of your car. Your vehicle has three choices: Swerve left into oncoming traffic, which will almost certainly kill you. Swerve right across a sidewalk and you dive over an embankment, where the fall will most likely kill you. Or continue straight ahead, which would save your life, but most likely kill a few kids along the way.

What to do? Well if you had been driving, I’d wager your social and human instincts may well kick in, and you’d swerve to avoid the kids. I mean, they’re kids, right?!

But Mercedes Benz, which along with just about every other auto manufacturer runs an advanced autonomous driving program, has made a different decision: It will plow right into the kids. Why? Because Mercedes is a brand that for more than a century has meant safety, security, and privilege for its customers. So its automated software will choose to protect its passengers above all others. And let’s be honest — who wants to buy an autonomous car that might choose to kill you in any given situation?

It’s pretty easy to imagine that every single automaker will adopt Mercedes’ philosophy. Where does that leave us? A fleet of autonomous robot killers, all making decisions that favor their individual customers over societal good? It sounds far fetched, but spend some time considering this scenario, and it becomes abundantly clear that we have a lot more planning to do before we can unleash this new form of robot agency on the world. It’s messy, difficult work, and it most likely requires we rethink core assumptions about how roads are built, whether we need (literal) guardrails to protect us, and whether (or what kind of) cars should even be allowed near pedestrians and inside congested city centers. In short, we most likely need an entirely new plan for transit, one that deeply rethinks the role automobiles play in our lives.

That’s going to take at least a generation. And as President Obama noted at a technology event last week, it’s going to take government.

…government will never run the way Silicon Valley runs because, by definition, democracy is messy. This is a big, diverse country with a lot of interests and a lot of disparate points of view. And part of government’s job, by the way, is dealing with problems that nobody else wants to deal with. — President Obama
….


Marko82
Participant
#

^ I think the Trolley Problem is a red herring (hey, two metaphors in under ten words), because the life-and-death decision implied in the trolley example just doesn’t happen in the real world all that often. In reality, both for humans and I suppose for algorithms, we make decisions based on (implied) probability. So maybe the probability of hitting the head-on car is 80%, going over the cliff is 90%, and hitting the kids is 100% – we would all probably choose the best option. And the sensors and algorithm would make this calculation faster and with more data than a human could (e.g., at the current speed with the brakes fully applied, how far will the car take to come to a complete stop?). Besides, a moral problem like this can be solved easily with legislation, assuming the folks who make laws can agree on an outcome.

There are still a lot of technical issues with these computer-driven cars that we need to keep an eye on, so let’s not get too distracted by philosophy majors’ thought experiments that, although messy, can be solved easily.
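
To put that probability-weighted view in code, here’s a purely hypothetical Python sketch of “pick the option with the lowest expected harm” (made-up numbers, nobody’s actual trolley-problem logic):

# The car ranks options by expected harm: probability of a collision
# times how bad it would be, using whatever the sensors can estimate.
# All numbers below are invented for illustration.
options = {
    "swerve into oncoming lane":  {"p_hit": 0.80, "severity": 1.0},
    "swerve over the embankment": {"p_hit": 0.90, "severity": 1.0},
    "brake hard, stay in lane":   {"p_hit": 1.00, "severity": 0.3},  # braking scrubs off most of the speed
}

def expected_harm(opt):
    return opt["p_hit"] * opt["severity"]

best = min(options, key=lambda name: expected_harm(options[name]))
print(best)  # "brake hard, stay in lane" with these made-up numbers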

 

 


Ahlir
Participant
#

Check out http://moralmachine.mit.edu/, it has a “test” that takes you through a series of situations in which you have to make Trolley choices (as a bonus, the domain is autonomous vehicles). You get feedback on what the system thinks are your criteria and their relative importance. I discovered, for example, that I believe humans (or “hoomans” in their nomenclature) are more worthy than animals.

This stuff is fascinating. I vaguely recall that, a number of years ago, researchers started running these experiments to empirically probe how ethics actually works. Given that philosophy has historically been a refuge for imponderables that eventually succumbed to science (physics, psychology, etc.), it was gratifying to see that something as complex as ethics could begin to be studied systematically.

But one of the annoying things about these stories is that the writer gets to arbitrarily specify the setup. They never seem to be something you’d find in real life. Five people tied up on one track and only one on the other? Please. In the original setup, why wouldn’t the AV have been programmed to continuously analyse its situation and, given sufficient uncertainty, slow down if the likelihood of people running out in front was high enough? Don’t you slow down when things look iffy? How about slamming on the brakes and injuring the occupants and/or peds but not killing anyone? The world is way more complicated than presented in these abstractions.
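
That “slow down when things look iffy” rule is easy enough to write down, at least as a caricature. A hypothetical Python sketch (not anything a real AV vendor has published, just the idea):

def target_speed(base_speed_mph, p_someone_steps_out, min_speed_mph=5.0):
    """Scale speed down as the estimated chance of someone stepping out rises.

    p_someone_steps_out is whatever the perception system estimates for the
    next few seconds (kids near the curb, parked trucks blocking sightlines...).
    """
    caution = 1.0 - min(max(p_someone_steps_out, 0.0), 1.0)
    return max(base_speed_mph * caution, min_speed_mph)

print(target_speed(35, 0.05))  # clear street: about 33 mph
print(target_speed(35, 0.60))  # playground letting out: about 14 mph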


jonawebb
Participant
#

On the vision of self-driving cars taking over the roads: “Then again, if this is where we’re headed, American cities can do something, too: Give bicyclists protected bike lanes that cars can’t swerve into even if they want to.”
http://www.slate.com/blogs/future_tense/2016/12/20/uber_investigating_problem_with_self_driving_cars_they_can_harm_bicyclists.html


paulheckbert
Moderator
#

This is not about car-less drivers, but perhaps of interest:

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/the-selfdriving-cars-bicycle-problem

The Self-Driving Car’s Bicycle Problem

“Bicycles are probably the most difficult detection problem that autonomous vehicle systems face,” … “A car is basically a big block of stuff. A bicycle has much less mass and also there can be more variation in appearance — there are more shapes and colors and people hang stuff on them.”


Marko82
Participant
#

Self-Driving Cars Have a Bicycle Problem:  Bikes are hard to spot and hard to predict

http://spectrum.ieee.org/transportation/self-driving/selfdriving-cars-have-a-bicycle-problem

However, when it comes to spotting and orienting bikes and bicyclists, performance drops significantly. Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes.
