
Dumb new RAND report claims it’s impossible to road-test a robot

The RAND Corporation policy think tank has released a new report on the state of self-driving car research, arguing that the industry’s goal of a safe, government-approved robo-car is a pipe dream with no currently viable path to market. The biggest problem? It’s impossible to road-test a car well enough to be sure that it will be safe in all situations. By their calculations, it could take billions of miles and over 100 years of testing to adequately prove that a robo-car will save lives relative to the human average.

The report is, of course, nonsense — and some of the best arguments as to why have been made by the study’s own authors, in the past.

The core premise of this report can be stated like this: we trust humans to drive (obviously), and we have decades of data about how they drive, where they struggle, and which activities present the highest statistical levels of risk when they’re behind the wheel. If we’re going to argue that robot cars will save lives by replacing human drivers, then we’ll need to collect similar data for comparison, which makes sense so far. The authors argue that legislators will basically have no choice in the long run but to either nix the idea of fully autonomous cars or simply accept that the vehicles will have to hit the road before we have real proof that they’re safer than human drivers.

This is where the report really loses the thread. It makes the argument that we’d need to drive these cars for billions of miles to adequately compare their driving safety to that of human drivers. Collectively, humans drive trillions of miles every year — beat that, Google!

However, this is the wrong standard to use for comparison. Yes, it’s true that the few hundred autonomous cars that currently exist cannot drive enough miles to actually produce the lowered accident frequencies we’re looking for with low enough margins for error — but so what? Your average human driver (that is to say, not the imaginary and irrelevant entity made up of statistical averages, but an actual human being who represents those averages fairly well) gets an absolute maximum of a few hundred thousand test miles over the course of an entire lifetime. Yet, we allow this person to drive.
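For a sense of where figures like "billions of miles" come from, here is a minimal sketch of the statistics involved. It uses the standard zero-failure Poisson bound rather than the report's exact method, and the human fatality rate below is an illustrative round number (roughly the oft-cited US figure), not one taken from the report:

```python
import math

def miles_to_demonstrate(rate_per_mile, confidence=0.95):
    """Miles of failure-free driving needed to show, at the given
    confidence, that the true failure rate is below rate_per_mile.
    With zero failures observed, P(no failures | rate) = exp(-rate * n),
    so we solve exp(-rate * n) <= 1 - confidence for n."""
    return -math.log(1.0 - confidence) / rate_per_mile

# Illustrative assumption: ~1.1 fatalities per 100 million vehicle miles.
human_fatality_rate = 1.1 / 100_000_000

miles = miles_to_demonstrate(human_fatality_rate)
print(f"{miles / 1e6:.0f} million failure-free miles")  # → 272 million failure-free miles
```

Even this simplistic bound lands in the hundreds of millions of miles for fatalities alone, and matching rarer events at tighter confidence pushes it into the billions — which is exactly the population-scale standard the report applies, and exactly the standard that doesn't fit an individual driver.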

Why? Because assessing the skill of an individual driver takes a wildly different volume of data than assessing the skill of a population of individual drivers. When concerning ourselves with all of humanity, we need those enormous sample sizes to let us see the overall truths that lie behind the incredible diversity of human behavior. Some people are good drivers, others bad, and we all vary day to day. Robots, for better or worse, are much more uniform, and must be treated more like an individual driver, however many vehicles that driver will go on to control. It is simply not necessary to ask for such high volumes of testing for what is, at the end of the day, a single driver.

That’s why we don’t mind giving a license to a 17-year-old after maybe an hour of observation by a tester. When we want to tell, with high enough reliability, whether a single human being performs up to the standards required of the population, nobody suggests that we must test that person for billions of miles to make an informed decision. We don’t have to measure how safe any one human driver is by watching them through a huge number of repetitions of every possible situation. We can instead measure how safe they will likely be by taking a small dataset and using it to make predictions about what we would observe if we did run a billion-mile road test. We have a long and successful history of comparing these individual predictions to the big-data statistical measurements we get from the population into which that individual will be released.

Now, of course, I’m not arguing that we should release a commercial self-driving car the very instant it can barely pass a human driving test even once — there would be a certain justice to that sort of vehicular Turing Test, but it’s not reasonable with lives on the line. We can make certain assumptions about a human’s instincts and intuitions that we simply can’t make about a robot, so we have to throw a wider array of stimuli at its weird, alien mind to see how it will react.

Robots also allow us the luxury of directly editing the behavior of the driver in all situations, while humans will almost always panic or forget their training in the most crucial moments. It is simply more effective to invest in robot testing than human testing, in terms of the safety return, and so it’s our responsibility to do so. It’s totally reasonable to be far more strict with Google’s robo-car than we are with Gavin The Seventeen Year Old, who loves driving to speed metal and probably doesn’t remember much about emergency driving anyway. However, it does stop being reasonable at a certain point — a point we’ll reach long before pre-sale autonomous cars drive their billionth mile.

Study co-author Nidhi Kalra wrote a related blog post for RAND earlier this very year, arguing that adequate testing could take anywhere from 12 years to six weeks, given different numbers of testing vehicles. The six-week fleet would need 10,000 cars; dividing up the US government’s recent $4 billion investment in autonomous car research, that’s about $400,000 apiece. That’s not a realistic use of money, but the point is that, in principle, this is not remotely as difficult a problem to solve as RAND argues in this, the latest of its many and conflicting takes on robot car safety.
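Checking that arithmetic is straightforward. In this sketch, the fleet size and budget come from the paragraph above, while the mileage target, average speed, and round-the-clock utilization are my own illustrative assumptions, not figures from the blog post:

```python
# Rough fleet arithmetic behind the "six weeks with 10,000 cars" scenario.
target_miles = 275_000_000   # assumed statistical target of failure-free miles
fleet_size = 10_000          # cars (from the post)
speed_mph = 25               # assumed average speed, driving around the clock

miles_per_car_per_day = speed_mph * 24  # 600 miles per car per day
days = target_miles / (fleet_size * miles_per_car_per_day)
budget_per_car = 4_000_000_000 / fleet_size

print(f"{days / 7:.1f} weeks of testing")  # → 6.5 weeks of testing
print(f"${budget_per_car:,.0f} per car")   # → $400,000 per car
```

Under those assumptions, a 10,000-car fleet covers a few hundred million miles in roughly six and a half weeks — consistent with the six-week figure, and a reminder that the bottleneck here is money and fleet size, not any fundamental impossibility.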

Kalra pointed out that there could very likely be a moral imperative to release autonomous vehicles as quickly as possible, getting them on the road so they can deliver us those billions and trillions of miles worth of data. If self-driving tech could someday cut fatalities to nearly zero, then getting to that point as quickly as possible might be worth releasing imperfect tech to the masses, collecting their crash data, and saving far more lives in the long run.

That’s admittedly something of an extremist position, more of a thought experiment than an empirical argument, but it illustrates the absurdity of demanding a billion-mile road test before approving any robot driver. The report’s authors pay lip service to the need for some other form of testing to complement road testing (perhaps simulated road tests run at the speed of a supercomputer?), but without such a strategy on offer, they’re unavoidably arguing for an indefinite stall on self-driving development.

When two human beings crash and kill each other, it is only a tragedy, serving no purpose and improving nothing in the future. When two robots collide and kill their passengers, it is a tragedy and an opportunity to learn and apply better practices in the future. If we ignore the emergent, life-saving implications of that distinction, we’ll end up serially underestimating the number of lives we could save, and how soon we could save them.
