“Regulation is typically a one-time, yes-no decision, but the world changes, so regulation needs to learn and adapt,” says Jonathan Wiener, the Perkins Professor of Law, Professor of Public Policy, and Professor of Environmental Policy. Last year, he co-led an interdisciplinary Duke Bass Connections project on developing a model regulatory design for automated vehicles. Students in the project, titled Governance and Adaptive Regulation of Transformational Technologies in Transportation, explored different policy instruments and the evolving landscape of relevant risks, from ordinary car accidents to network failures and hacking, before proposing a regulatory framework. Their final reports included a proposal to the N.C. Department of Transportation for testing self-driving trucks.
“We’re going to learn more in the future about how automated vehicles interact with human drivers, how well their on-board sensors detect obstacles, how vehicle-to-vehicle and vehicle-to-infrastructure communication can help avoid accidents, and how people and pedestrians react,” says Wiener, co-director of both the Rethinking Regulation program at Duke University’s Kenan Institute for Ethics and the new Duke Center on Risk at Duke Science & Society. He worked with Lori Bennear, the Juli Plant Grainger Associate Professor of Energy Economics and Policy at the Nicholas School of the Environment, on the Bass Connections project, and they are writing a paper on a framework for adaptive regulation. “So automated vehicles are a good example of new technology for which an adaptive approach that can take into account learning over time may be needed.”
“Adaptive licensing” is one policy instrument that Wiener and his colleagues introduced to their Bass Connections students and are examining in their scholarship, noting its advantages and possible limitations.
“Instead of just saying ‘yes’ or ‘no’ to automated vehicles for everyone, regulators might say, ‘Automated vehicles should be licensed first for the drivers who would benefit most from them,’” says Wiener. “Those might be the least experienced drivers, or those unable to drive, or those who have the worst accident records. And as the technology improves, and as we learn from those first sub-populations how to improve the vehicles and networks, their use could be expanded.”
The concept is neither simple nor straightforward, he says, given the many gradations of automated technology, which range from lane assist and automatic braking to completely self-driving cars. Moreover, “emergencies may come up quickly, and the human, having relied on the algorithm, may not be attentive or skilled enough to take over quickly,” he says. “Those kinds of human-machine interactions present a challenge to the partial or gradual introduction of the new technology, and we are going to learn a lot about them. And a full fleet of fully automated vehicles networked with each other may be safer than a mixed fleet in most conditions, and use less energy with less pollution, but it may also pose a rare risk of system-wide failure due to a glitch or hacking. We need to study these kinds of potential risk-risk tradeoffs.”