Autonomous systems promise to improve our lives;
driverless trains and robotic cleaners are examples of autonomous systems that are already among us and
work well within confined environments.
It is now time to ensure that developers can design trustworthy autonomous systems for dynamic environments and
provide evidence of their trustworthiness.
Due to the complexity of autonomous systems, which typically involve AI
components, low-level hardware control, and sophisticated interactions with
humans and an uncertain environment, evidence of any kind requires effort from a
variety of disciplines. To tackle this challenge, we have gathered a consortium of
experts in AI, robotics, human-computer interaction, systems and software
engineering, and testing. Together, we will establish the foundations and
techniques for verification of properties of autonomous systems to inform
designs, provide evidence of key properties, and guide monitoring
after deployment.
Currently, verifiability is hampered by several issues: difficulties in
understanding how evidence provided by techniques that focus on individual
aspects of a system (control engineering, AI, or human
interaction, for example) composes to provide evidence for the system as a
whole; difficulties of communication between stakeholders who use different
languages and practices in their disciplines; difficulties in dealing with
advanced concepts in AI, control and hardware design, and software for critical
systems; and others. As a consequence, autonomous systems are often developed
using advanced engineering techniques, but outdated approaches to
verification.
We propose a creative programme of work that will enable fundamental changes
to the current state of the art and of practice. We will define a
mathematical framework that enables a common understanding of the diverse
practices and concepts involved in verification of autonomy. Our framework
will provide the mathematical underpinning, required by any engineering
effort, to accommodate the notations used by the various disciplines. With
this common understanding, we will justify translations between languages,
compositions of artefacts (engineering models, tests, simulations, and so on)
defined in different languages, and system-level inferences from
verifications of components.
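As a minimal sketch of the kind of system-level inference we have in mind, consider a standard (non-circular) assume-guarantee rule; the notation below (components $C_1$ and $C_2$, environment assumption $A$, guarantees $G_1$ and $G_2$, and the parallel-composition operator $\parallel$) is purely illustrative and is not the notation of our framework:
\[
\frac{\langle A \rangle\; C_1\; \langle G_1 \rangle \qquad \langle G_1 \rangle\; C_2\; \langle G_2 \rangle}{\langle A \rangle\; C_1 \parallel C_2\; \langle G_1 \wedge G_2 \rangle}
\]
Read bottom-up, the rule allows a property of the composed system $C_1 \parallel C_2$ to be concluded from separate verifications of $C_1$ (under the environment assumption $A$) and of $C_2$ (under the assumption that $C_1$ delivers $G_1$). Our framework aims to justify inferences of this kind even when the components are described in different notations, for example a control law and a machine-learnt perception component.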
With such a rich foundation and wealth of results, we will
transform the state of practice. Currently, developers build systems from
scratch, or reuse components without any evidence of their
operational conditions. The resulting systems are deployed under constrained
conditions (reduced speed or a contained environment, for example) or offered
for deployment at the user's own risk. Instead, we envisage the future
availability of a store of verified autonomous systems and components.
In such a future, users will find in the store not just system
implementations, but also evidence of their operational conditions and
expected behaviour (engineering models, mathematical results, tests, and so
on). When a developer checks in a product, the store will require all these
artefacts, described in well-understood languages, and will automatically
verify the evidence of trustworthiness. Developers will also be able to check
in components for use by other developers; these, equally, will be accompanied
by the evidence required to permit confidence in their use.
In this changed world, users will buy applications with clear guarantees of
their operational requirements and profile. Users will also be able to ask
for verification of adequacy for customised platforms and environments, for
example. Verification will no longer be an issue.
Working with the EPSRC TAS Hub and other nodes, and with our extensive range of
academic and industrial partners, we will ensure that the
notations, verification techniques, and properties that we consider
contribute to our common agenda to bring autonomy to our everyday lives.