Autonomy is surely a core theme of technology in the 21st century. Within 20 years, we expect to see fully autonomous vehicles, aircraft, robots, devices, swarms, and software, all of which will (and must) be able to make their own decisions without direct human intervention. The economic implications are enormous: for example, the global civil unmanned air vehicle (UAV) market has been estimated at £6B over the next 10 years, while the world-wide market for robotic systems is expected to exceed $50B by 2025.
This potential is both exciting and frightening. Exciting, in that this technology can allow us to develop systems and tackle tasks well beyond current possibilities. Frightening, in that control of these systems is taken away from us. How do we know that they will work? How do we know that they are safe? And how can we trust them? With current technology, none of these questions can be answered. We cannot say that such systems are safe, that they will not deliberately try to injure humans, or that they will always try their best to keep humans safe. Without such guarantees, these new technologies will neither be allowed by regulators nor accepted by the public.
Imagine that we had a generic architecture for autonomous systems such that the choices the system makes could be guaranteed, and that these guarantees were backed by strong mathematical proof. Given such an architecture, upon which our autonomous systems (be they robots, vehicles, or software) can be based, we can indeed guarantee that our systems never intentionally act dangerously, will endeavour to be safe, and will, as far as possible, act in an ethical and trustworthy way. It is important to note that this is separate from the problem of how accurately the system understands its environment. Because any model of the real world is inaccurate, we cannot say that a system will be absolutely safe or will definitely achieve something; instead we can say that it tries to be safe and decides to carry out a task to the best of its ability. This distinction is crucial: we can prove that the system never decides to do the wrong thing, but we cannot guarantee that accidents will never happen. Consequently, we also need the autonomous system to judge the quality of its own understanding and to take this into account when it acts. Using the same methods, we should also verify that the system's choices do not exacerbate any potential safety problems.
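To make this distinction concrete, the kind of guarantee we have in mind can be phrased as a temporal-logic property over the agent's beliefs and choices. The formulation below is purely illustrative, and the predicates believes_unsafe and selects are hypothetical names introduced here only for exposition:

    \Box \, \big( \mathit{believes\_unsafe}(a) \rightarrow \neg\, \mathit{selects}(a) \big)

That is, at every moment, if the agent believes an action a to be unsafe then it does not select a. The property is stated over the agent's beliefs, not over the true state of the world, in line with the distinction above.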
Our hypothesis is that by identifying and separating out the high-level decision-making component within autonomous systems, and by providing comprehensive formal verification techniques for this component, we can directly tackle questions of safety, ethics, legality and reliability. In this project, we build on internationally leading work on agent verification (Fisher), control and learning (Veres), safety and ethics (Winfield), and practical autonomous systems (Veres, Winfield) to advance the underlying verification techniques and so develop a framework that allows us to address questions such as those above. In developing autonomous systems for complex and unknown environments, being able to answer such questions is crucial.
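As a purely illustrative sketch of the separation we have in mind (written in Python; the class names, beliefs and safety policy below are hypothetical and not drawn from any existing system), the high-level decision-making component can be isolated as a small, discrete program sitting above the continuous control layer, and it is this program alone that becomes the target of formal verification:

    # Minimal sketch: a discrete decision layer above a continuous control layer.
    # Only the decision layer (the Agent class) would be the verification target;
    # all names and rules here are illustrative.

    SAFE_ACTIONS = {"hold_position", "return_to_base"}

    class Agent:
        """High-level decision maker: maps discrete beliefs to discrete actions."""

        def decide(self, beliefs):
            # Verification target: show that, for every reachable belief state,
            # the chosen action satisfies the safety policy below.
            if beliefs.get("battery_low"):
                return "return_to_base"
            if beliefs.get("obstacle_ahead"):
                return "hold_position"
            return beliefs.get("mission_action", "hold_position")

        def is_believed_safe(self, action, beliefs):
            # Safety policy stated over beliefs, not over the true world state.
            if beliefs.get("obstacle_ahead"):
                return action in SAFE_ACTIONS
            return True

    class ControlLayer:
        """Continuous sensing and control, abstracted here as stubs."""

        def perceive(self):
            return {"battery_low": False, "obstacle_ahead": True,
                    "mission_action": "continue_route"}

        def execute(self, action):
            print(f"executing low-level control for: {action}")

    if __name__ == "__main__":
        agent, control = Agent(), ControlLayer()
        beliefs = control.perceive()
        action = agent.decide(beliefs)
        # The property to verify: the agent never selects an action it believes
        # to be unsafe (checked here at run time purely for illustration).
        assert agent.is_believed_safe(action, beliefs)
        control.execute(action)

Because such a decision layer is discrete and comparatively small, its choices can in principle be verified exhaustively, in a way that the full hybrid system, with its continuous dynamics and uncertain sensing, cannot.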