Safety-Critical Autonomous Systems: What is Possible? What is Required?
The last 20 years have seen enormous progress in autonomous vehicles, from planetary rovers, to unmanned aerial vehicles, to the self-driving cars that we are starting to see on the roads around us. Open questions include whether we can make self-driving cars that are safer than human-driven cars, how much safer they need to be, and what advances will be required to bring them to fruition. In this talk, I will discuss some of the approaches used in the aerospace industry, where flight-critical subsystems must achieve failure rates of less than 1 failure per 10^9 flight hours (i.e., less than 1 failure per 100,000 years of operation). Systems that achieve this level of reliability are hard to design, hard to verify, and hard to validate, especially if software is involved. I will describe some of the challenges that the aerospace community faces in designing systems with this level of reliability, how such systems are designed and implemented today, and what is being done for the next generation of (much more complex, software-driven) aerospace systems. I will also speculate about whether similar approaches are needed in self-driving cars, and whether these levels of safety are achievable.
Richard M. Murray received the B.S. degree in Electrical Engineering from the California Institute of Technology in 1985 and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1988 and 1991, respectively. He is currently the Thomas E. and Doris Everhart Professor of Control & Dynamical Systems and Bioengineering at Caltech. Murray's research is in the application of feedback and control to networked systems, with applications in biology and autonomy. Current projects include the analysis and design of biomolecular feedback circuits, the synthesis of discrete decision-making protocols for reactive systems, and the design of highly resilient architectures for autonomous systems.