Regulation is often viewed as a one-time decision. But the world changes. Is it possible or desirable to design regulatory policies so that they too can be updated — through periodic decisions, or even automatically — as new information becomes available on changing science, technology, society, and social values? Can the rules addressing emerging technologies that present both promise and risk — for example, autonomous vehicles — be adapted as our experience with those technologies develops? Can and should regulations be “built to learn”?
Professor Jonathan Wiener has been addressing these questions of “adaptive regulation” as part of a broad research agenda. Working with several colleagues across the Duke campus, he has been studying how risk governance policies respond to unplanned shocks, how governments evaluate the performance of their past policies, and how policies can be made more adaptive. Risk regulation should be reconceived as “an ongoing process of learning,” says Wiener, the Perkins Professor of Law and Professor of Public Policy and Environmental Policy. He is also co-director of the Rethinking Regulation program at Duke University’s Kenan Institute for Ethics, and co-director of the new Duke Center on Risk. With Duke colleagues Edward Balleisen, Lori Bennear, and Kimberly Krawiec, Wiener recently published a book on unplanned reactions to crises: Policy Shock: Recalibrating Risk and Regulation after Oil Spills, Nuclear Accidents, and Financial Crises (Cambridge University Press, 2017).
“For all sorts of problems, the standard regulatory approach is a one-time decision, ‘yes’ or ‘no’ to a new technology, or setting a limit on emissions, that may be in place indefinitely,” he says. “But we know that technology changes, science teaches us new things, the economy and demographics change, and social values may change. So especially where we think the world will change significantly and we’re going to learn more about the benefits and risks of a new technology or about the costs and benefits of the policy, we could gain by having better learning built into the regulatory system from the beginning.”
But adaptive regulation can come in many forms. In an Oct. 31 address to the Society for Risk Analysis-Latin America Congress in Mexico City, Wiener outlined multiple approaches to building learning into regulation:

- surveying variations across jurisdictions to find which policies yield which outcomes;
- learning from experimental trials of specific policy treatments;
- mandating retrospective review of policies, both to revise those policies for better outcomes and to improve the accuracy of prospective impact assessments;
- periodically reviewing and revising ongoing policies;
- adopting sunset and reauthorization clauses;
- making automatic policy adjustments based on pre-set criteria;
- allowing discretionary policy adjustment by an oversight committee;
- adaptively licensing new products to targeted subpopulations;
- and more.
In a forthcoming work, Wiener and Rethinking Regulation co-director Lori Bennear, the Juli Plant Grainger Associate Professor of Energy Economics and Policy at the Nicholas School of the Environment, examine those approaches to develop a framework for instrument choice in adaptive regulation. Fully planned adaptive regulation, they write, could replace static regulatory decision-making with “a series of multiple sequential decisions with monitoring, review, and adaptive adjustment over time.” Although much has been written on adaptive ecosystem management and retrospective policy review, there has not yet been a full comparison of the many different potential ways to undertake adaptive policy revisions, or of how frequently reviews should occur. “How does the frequency of the periodic review correspond to how much the world is really changing and how much we’re learning?” Wiener asks.
In their article, Wiener and Bennear offer a detailed analysis of the differences among these instruments, including the choice between “discretionary adaptive regulation,” through which an agency or committee might look at new information and make a judgment about how to change the policy, and a system of pre-set automatic adaptive thresholds.
“The policy could set a rule or a standard at the outset, but state that if a specific important indicator goes above or below a certain level, the policy will change,” Wiener says. But the success of that approach, he notes, depends on prospectively selecting the right measure to trigger a future policy adjustment: “It doesn’t really take account of learning new things that we hadn’t realized were as important at the beginning of the regulatory process.” To fill that gap, he and Bennear are developing a framework and typology of choice among instruments for adaptive regulation.
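The pre-set threshold mechanism Wiener describes can be pictured in a few lines of code. The sketch below is purely illustrative and hypothetical — the indicator, trigger levels, and adjustment step are invented, not drawn from Wiener and Bennear's article or any actual rule — but it shows the basic logic of a standard that adjusts automatically when a monitored indicator crosses levels chosen at the outset:

```python
# Hypothetical sketch of "automatic adaptive" regulation: an emission
# limit tightens or relaxes whenever a monitored indicator crosses
# pre-set trigger levels. All names and numbers are illustrative.

def adjust_limit(current_limit, indicator, upper_trigger, lower_trigger, step):
    """Return a new limit based on where the indicator falls
    relative to the pre-set triggers."""
    if indicator > upper_trigger:      # indicator too high: tighten the limit
        return current_limit - step
    if indicator < lower_trigger:      # indicator well below target: relax
        return current_limit + step
    return current_limit               # within the band: no change

# A regulator applying the rule to four rounds of monitoring data:
limit = 100.0
for reading in [95.0, 130.0, 140.0, 60.0]:
    limit = adjust_limit(limit, reading, upper_trigger=120.0,
                         lower_trigger=70.0, step=10.0)
print(limit)  # prints 90.0 after four automatic adjustments
```

As Wiener notes, everything here turns on choosing the right indicator and trigger levels in advance; the rule cannot respond to a variable nobody thought to monitor.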
Case study: Applying adaptive regulation to autonomous vehicles
In the last academic year, Wiener, Bennear, and their colleague Michael Clamann led an interdisciplinary Duke Bass Connections project focused on developing a model regulatory design for autonomous vehicles. Students in the project, titled Governance and Adaptive Regulation of Transformational Technologies in Transportation, explored different policy instruments and the evolving landscape of relevant risks — ordinary car accidents as well as network failures or hacking — before proposing a regulatory framework. Their final reports included a proposal to the N.C. Department of Transportation for testing self-driving trucks.
“We’re going to learn more in the future about how autonomous vehicles interact with human drivers, how well their on-board sensors detect obstacles, how vehicle-to-vehicle communication networks operate, how vehicle-to-infrastructure communication can help avoid accidents, and how people and pedestrians react to self-driving cars,” Wiener says. “So autonomous vehicles are a good example of new technology for which an adaptive approach that can take into account learning over time may be needed.”
“Adaptive licensing” is one policy instrument Wiener and Bennear introduced to their Bass Connections students and are examining in their scholarship, taking note of its advantages and possible limitations.
“Instead of just saying ‘yes’ or ‘no’ to autonomous vehicles for everyone, regulators might say, ‘Autonomous vehicles should be licensed first for the drivers who would benefit most from them,’” Wiener explains. “Those might be the worst human drivers — those who are the least experienced, or who have been identified as having the worst accident records. And as technology improves and as we learn from those first sub-populations to improve the vehicles and networks, then their use could be expanded.”
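The staged expansion Wiener describes can be sketched as a simple state machine. Again, this is a hypothetical illustration — the driver groups, crash-rate metric, and safety bar are invented for the example, not taken from any proposed licensing scheme — but it captures the idea of licensing the highest-benefit subpopulation first and widening eligibility as observed safety data clears a pre-set bar:

```python
# Hypothetical sketch of "adaptive licensing": autonomous vehicles are
# licensed first to the drivers expected to benefit most, and eligibility
# expands in stages as safety data accumulates. Group names, the crash
# metric, and the threshold are illustrative only.

STAGES = [
    ["inexperienced_drivers", "high_accident_record"],  # stage 0: license first
    ["elderly_drivers"],                                # stage 1
    ["general_public"],                                 # stage 2
]

def licensed_groups(stage):
    """All groups licensed up to and including the current stage."""
    return [g for tier in STAGES[:stage + 1] for g in tier]

def next_stage(stage, crashes_per_million_miles, safety_bar=2.0):
    """Advance one stage only if the observed crash rate beats the bar."""
    if crashes_per_million_miles < safety_bar and stage < len(STAGES) - 1:
        return stage + 1
    return stage

# Three years of (invented) data from the licensed cohort:
stage = 0
for observed_rate in [3.1, 1.8, 1.2]:
    stage = next_stage(stage, observed_rate)
print(licensed_groups(stage))  # eligibility after three review cycles
```

The design choice mirrors Wiener's point: expansion is driven by evidence from the already-licensed subpopulations, so a disappointing crash rate simply holds the license at its current scope rather than revoking it outright.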
The concept is neither simple nor straightforward, he says, given the many gradations of autonomous automotive technology, which range from lane assist and automatic braking to completely self-driving cars. Requiring human intervention in emergency conditions poses its own challenges. “Emergencies may come up quickly, and the human, having relied on the autonomous algorithm, may not be attentive or even skilled enough to take over,” he says. “Those kinds of human-machine interactions present a challenge to the partial or gradual introduction of the new technology that we are going to learn a lot about. And a large network of fully autonomous vehicles may be safer than human drivers in most conditions, and use less energy with less pollution, but may pose a rare risk of wide system failure due to glitch or hacking.”
Read more scholarship from Professor Jonathan Wiener, including “Precautionary Principle,” in Principles of Environmental Law 174-185 (Ludwig Krämer & Emanuela Orlando, eds., 2018), which discusses a version of adaptive regulation known as “provisionality.”