Advocates React to Testing Truly Driverless Cars on Our Streets

In this round, there won't be a human sitting behind the steering wheel

Photo: Wikimedia Commons

Note: GJEL Accident Attorneys regularly sponsors coverage on Streetsblog San Francisco and Streetsblog California. Unless noted in the story, GJEL Accident Attorneys is not consulted for the content or editorial direction of the sponsored content.

As reported in the Chronicle, the Examiner, and, well, pretty much all the major news outlets, yesterday the California Office of Administrative Law approved regulations that will allow autonomous vehicle (AV) testing in California–without a human backup driver sitting behind the steering wheel.

Testing under the new rules could start as early as April 2.

“People walking and biking on San Francisco’s high-injury corridors already face enough hazards–they do not need to be guinea pigs for the autonomous vehicle industry,” wrote Cathy DeLuca, Policy & Program Director for Walk San Francisco, in an email to Streetsblog.

The San Francisco Bicycle Coalition made a similar statement. “Any new technologies deployed on our streets that do not prioritize the safety of the most vulnerable road users are unacceptable.”

Of course, several companies, including Waymo, General Motors, Uber, and others, have been testing some 300 self-driving cars in the state. But these cars are required to have humans sitting in the driver’s seat, hopefully with their hands hovering over the steering wheel, ready to take over if anything goes awry.

And things do go awry, as an infamous video showed.

“After Uber deployed autonomous vehicles that proved unready for San Francisco streets, we made our position clear to the DMV,” continued the SFBC in its statement. “We think that the promise of autonomous vehicles eventually improving safety is real, and we look forward to working with the folks behind that technology to realize that promise.”

According to the new regs, any company testing one of these cars will have to closely monitor it and can take over by remote control. But the ability to intervene in an emergency will be diminished.

John M. Simpson, Privacy and Technology Project Director of Consumer Watchdog, likens this safety backup to “…playing a video game, except lives will be at stake.”

With all existing robot cars, the human backup driver still has to take over frequently. Waymo has the best record, but in 2017 its human drivers still had to take over once every 5,600 miles. “…the new rules will threaten highway safety,” wrote Simpson.

“There is a lot of work to be done before it’s safe for driverless autonomous vehicles to be allowed on public streets,” added DeLuca.

So if robot cars aren’t ready, what’s the rush? Is it because of the promise that driverless cars will solve all our transportation woes?

Not if we continue to design cities for cars–AV or conventional–rather than people. That’s the argument made by Allison Arieff, SPUR’s Editorial Director, in a beautifully executed interactive editorial for The New York Times entitled “Automated Vehicles Can’t Save Cities.”

  • Ethan

    We know how dangerous human drivers are to pedestrians and cyclists. Humans speed, drive distracted, lack situational awareness, drive tired, drunk, or otherwise impaired, etc… Waymo already built a physical pretend city on a former military base to simulate environments and test their cars. They’ve also been testing in the suburban environment of Chandler, Arizona. They recently started testing in San Francisco with a human behind the wheel just in case. At a certain point the cars drive as safe or safer than humans. The sooner they reach that point, the sooner dangerous human drivers can stop harming or killing pedestrians and cyclists. If Waymo chooses to test without a human behind the wheel, that’s a promising indicator their internal data tells them they’ve reached a key level of safety.

  • Mario Tanev

    I don’t understand the hysteria about lack of drivers. Drivers are pretty awful. To allow operation, the requirements should be that the vehicle has pedestrian/bicyclist/vulnerable user detection, a safety record better than the average driver and no rule violations (no red-light running, double parking, bike lane parking, sidewalk parking, crosswalk parking) allowed. Uber would have been disqualified due to their red-light running.

    Transit/bike/walking advocates: You’re right that autonomous vehicles are not here to solve traffic woes (especially if people cling to the “private” aspect). I see them improving safety for vulnerable road users and instituting rule-abiding driving culture.

  • Cliff Bargar

    The “hysteria” is that there are currently no real standards specifying how rigorously these devices need to be tested. I don’t doubt that in the long run this will improve safety but in the short term it’s unacceptable to just take these companies at their word that they’ve done their due diligence.

  • Cliff Bargar

    Their internal data may tell them that but I don’t see any reason why we should be taking their word for it or just blindly trusting their data. We need real safety and testing standards before deploying these on our public streets.

  • p_chazz

Advocates will always be against anything that they perceive as not advancing their core constituencies–pedestrians and bicyclists–and that involves vehicles in any way. Haters gonna hate. Stop the death machines!!! They’re eeeevil!!!

  • Reminds me of this Jack Kornfield story about the boat drifting down the river…

  • crazyvag

    I think the standard is to be better than a human driver. Here in SF, most self-driving cars are so safe they are more likely to annoy drivers than pedestrians or bikes.

  • crazyvag

    Are you saying that the existing safety reports are not adequate? Anyone can see how many miles a car goes between incidents and every incident – no matter who is at fault – gets reported publicly to DMV.

    I think it’s up to DMV and CPUC to crank the math comparing it to humans, but you’re welcome to do it yourself – like many reporters have.

  • Cliff Bargar

That’s not an objective, discretely testable measurement; that’s data you can only generate over thousands of miles of driving (and it’s really only valid if you don’t change the software!).

  • Cliff Bargar

    Yes, I’m saying that they’re entirely inadequate when it comes to determining if a product is safe. I work at a company that makes surgical robots; we do a ton of testing in a variety of settings as part of our product development process. But whenever we release a new product (or just a software update) we need to compile a rigorous report for the FDA based on the expected types of hardware and software failures and the tests performed to make sure that the system holds up in those cases.

    Obviously our devices aren’t a 1:1 comparison to driverless cars. But both are immensely complex devices with a number of possible points of failure; I’m saying that our regulatory agencies can’t just take the manufacturers’ word for it that they’ve done their due diligence.

  • crazyvag

You’re wrong: you’d need to drive 100 million miles, not thousands, to prove that, since the US rate is 1.25 deaths per 100 million miles driven. And even then, what kind of miles? Is a mile in SF the same as a mile in Manhattan? What about San Jose? Each city is different, and given that already low death rate per mile, you need to measure safety in a different way. There’s no right or wrong answer, and perhaps it’s best to omit specific numbers because these can be gamed.

Consider that if you were to specify SF miles, then every city would jump in with its own laws, leading the state or feds to overrule a patchwork of laws.

    That’s why companies are asked about disengagements / mile because that’s data that can be dug deeper for analysis.

    Also, given the number of lawyers salivating (sadly) at the first death caused by self-driving car, I think companies are wise to deploy at a level they are comfortable assuming the risk for.
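The “100 million miles” point above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch, not anything from the comment itself: it assumes fatalities follow a Poisson process and uses the 1.25-deaths-per-100-million-miles baseline cited above to ask how many fatality-free miles a fleet would need to log before one could claim, at 95% confidence, a rate at or below the human baseline.

```python
import math

# U.S. human-driver baseline cited in the thread: ~1.25 deaths
# per 100 million vehicle-miles.
baseline_rate = 1.25 / 100_000_000  # deaths per mile
confidence = 0.95

# Under a Poisson model, the chance of seeing zero fatalities over
# N miles at the baseline rate is exp(-baseline_rate * N).
# To reject the baseline at 95% confidence with zero fatalities,
# we need exp(-baseline_rate * N) < 1 - confidence.
miles_needed = math.log(1 / (1 - confidence)) / baseline_rate

print(f"Fatality-free miles needed: {miles_needed / 1e6:.0f} million")
# Roughly 240 million miles -- far beyond "thousands of miles,"
# which is the commenter's point.
```

The same logic is why disengagement rates, rather than fatality counts, are the statistic regulators actually collect: disengagements happen often enough per mile to be measurable.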

  • crazyvag

Your surgical robot is a great example. Do you need to show that your robot is safer than a doctor by performing more surgeries than doctors and reporting a failure rate that is lower than the current one?

  • Cliff Bargar

    The analogy breaks down a little bit here because our devices aren’t autonomous in the same sense and are under the control of a licensed/trained surgeon during a case, so getting regulatory clearance doesn’t depend on the device itself outperforming existing devices or manual tools, but mainly demonstrating that the device is safe and that performance is nominally equivalent to existing predicates (I don’t work in the regulatory department, though, so this is a simplification). There’s still a need for studies comparing outcomes of robotics vs. laparoscopic vs. open surgery as clinical evidence (and there have been thousands of such studies), but that’s a different story.

    Something we do have to show though, which I think is similar to the aerospace industry and should be required for autonomous vehicles, is that all of the possible failure modes of each subsystem are cataloged, accounted for, and have the necessary mitigations to prevent low level failures (i.e. one sensor malfunctioning) from causing substantial harm to the patient (or passengers, or innocent bystanders). This element of both the robustness of the design and testing currently seems to be fully up to the discretion of each company.

  • Cliff Bargar

    I think we’re sort of agreeing here? My point is that just using data from testing over thousands (or millions) of miles on streets is an important but insufficient step in determining if these devices should be considered safe (see my other post).

I firmly disagree with your last point, though – the threat of a lawsuit is in no way a sufficient deterrent to prevent these companies from releasing devices that aren’t fully ready. We’ve even seen that in the actual results of Uber’s and Tesla’s systems being used on the streets.


