by Farhad Arbab (CWI)
Components that comprise the agents, such as drones, in a distributed cyber-physical system may individually take seemingly reasonable actions, based on their available information. Such actions may nonetheless collectively lead to unacceptable system behaviour. How can we identify culpable components and agents? At CWI we develop formal models for the compositional construction and analysis of such systems, along with tools and techniques to narrow down the set of components that may have contributed to the undesirable behaviour.
Events and activities in the real world of physical objects comprise too large a set of possibilities and contingencies to account for or control in any static plan or algorithm; cyber-physical systems simply need to be robust enough to deal with those contingencies as they arise. A software agent must therefore tolerate less-than-ideal conditions and cope with hindrances, stretching its resources and capabilities to still achieve its goals by sub-optimal measures. When something goes awry, as it invariably does, the ability to diagnose what went wrong and why is crucial for a system to take corrective or compensatory actions, and/or to avoid similar undesirable behaviour in the future.
Figure 1: surveillance drone (source: Shutterstock).
Consider a team of crop surveillance drones, deployed to survey a field and relay the locations of possible signs of disease, drought, flooding, etc. The team may include flying drones as well as ground rovers. Maintaining an acceptable altitude, monitoring battery levels, avoiding birds of prey, and circumnavigating unforeseen obstacles, such as soil shifts and slides or weed growth on paths, are among the concerns that such autonomous agents must handle gracefully. Engineering best practice calls for isolating such separate concerns into different modules by design, thus allowing for their reuse as well as their independent verification. Of course, the behaviour of each agent composed of such modules must satisfy certain crucial properties, such as ‘when energy level is low, the drone still manages to reach its charging station’. Moreover, we need to establish that a team of such agents exhibits collective behaviour that satisfies some other set of crucial properties, such as ‘every patch of the field gets surveyed within a bounded time window’.
Compositional techniques are the hallmark of all disciplines of engineering: they enable the construction of complex systems by composition of simpler components or sub-systems, such that designers can obtain and analyse the properties of interest of a resulting system through composition and analysis of the corresponding properties of its constituent parts. To model cyber-physical systems compositionally, we employ an automata-based paradigm called Soft Component Automata (SCA) [3]. An SCA is a state-transition system whose transitions are labelled with actions and preferences. The intuition behind the preferences is that higher-preference transitions typically contribute more towards the goal: if a component is in a state where it wants the system to move north, a transition with action “north” has a higher preference than a transition with action “south”. Preferences thus provide a natural fall-back mechanism for the agent: in ideal circumstances, the agent performs only actions with the highest preferences, but in some circumstances it may be permitted to choose a transition of lower preference. The behaviour of an SCA is given by all transitions whose preference exceeds a threshold. By adjusting the threshold, the designer can include transitions of lower preference, thus obtaining a more flexible system by virtue of the newly available actions, even though those actions contribute less directly towards the goal.
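To make the idea concrete, the following is a minimal sketch in Python of an SCA and its threshold semantics. The component, its state and action names, and the numeric preference scale are illustrative assumptions on our part; the framework in [3] draws preferences from an algebraic structure (a constraint semiring) rather than from plain integers.

from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str        # state the transition leaves
    action: str        # action label
    preference: int    # higher = contributes more towards the goal (illustrative scale)
    target: str        # state the transition enters

class SCA:
    """A Soft Component Automaton: states, preference-labelled transitions, an initial state."""
    def __init__(self, states, transitions, initial):
        self.states = states
        self.transitions = transitions
        self.initial = initial

    def enabled(self, state, threshold):
        """Transitions available in `state` whose preference is at or above the threshold."""
        return [t for t in self.transitions
                if t.source == state and t.preference >= threshold]

# A component that prefers moving north; "south" is its fall-back.
drone = SCA(
    states={"q0"},
    transitions=[Transition("q0", "north", 3, "q0"),
                 Transition("q0", "south", 1, "q0")],
    initial="q0",
)

print([t.action for t in drone.enabled("q0", threshold=3)])  # ['north']
print([t.action for t in drone.enabled("q0", threshold=1)])  # ['north', 'south']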
Soft component automata constitute a generalisation of constraint automata [2], one of the many semantics of Reo [1], a language for verifiable coordination of interactions among components. Consequently, our framework possesses the interesting feature that, like Reo, it blurs the line between computation and coordination — both are formalised by the same type of automata, which allows us to reason about these concepts in a uniform fashion.
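The sketch below shows how two such automata might compose, reusing the SCA class above. In the spirit of the constraint-automata product [2], components synchronise on shared actions and interleave on private ones; combining the preferences of a joint transition with min is our simple stand-in for the semiring composition of [3].

def compose(a, b):
    """Product of two SCAs: synchronise on shared actions, interleave private ones."""
    shared = ({t.action for t in a.transitions}
              & {t.action for t in b.transitions})
    ts = []
    for ta in a.transitions:
        if ta.action in shared:
            # Shared actions fire only when both components take them together.
            for tb in b.transitions:
                if tb.action == ta.action:
                    ts.append(Transition((ta.source, tb.source), ta.action,
                                         min(ta.preference, tb.preference),
                                         (ta.target, tb.target)))
        else:
            # Private actions of `a` interleave while `b` stays put.
            for s in b.states:
                ts.append(Transition((ta.source, s), ta.action,
                                     ta.preference, (ta.target, s)))
    for tb in b.transitions:
        if tb.action not in shared:
            # Private actions of `b` interleave while `a` stays put.
            for s in a.states:
                ts.append(Transition((s, tb.source), tb.action,
                                     tb.preference, (s, tb.target)))
    states = {(sa, sb) for sa in a.states for sb in b.states}
    return SCA(states, ts, (a.initial, b.initial))

Taking the minimum is a pessimistic choice: a joint step is only as preferred as its least willing participant.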
When the collective behaviour of such a system violates one of its requirements, it is useful to know not only the particular scenario that leads to the violation (i.e., a counterexample), but also the likely culprits. For instance, if a flying surveillance drone fails to maintain its target altitude, a counterexample can reveal that the drone attempted to reach the far side of the field, ran out of energy and was forced to perform an emergency landing. Many existing LTL-based verification tools are able to provide such counterexamples. Taking this idea of diagnostics one step further in the context of a compositional design, we would like to identify the component (or components) responsible for the first action in the counterexample that violated the requirement. Such an indication may assist the designer of a system in finding the component responsible for a failure (which, in our example, may turn out to be the route planning component), or, at the very least, rule out components that are not directly responsible (such as the wildlife evasion component).
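A deliberately simplified sketch of this first step of blame assignment, again reusing the classes above: given a counterexample trace and the position of the first requirement-violating action, it lists the components whose alphabet contains that action and rules the others out. The analysis we develop narrows the candidate set further than this; the component names in the usage comment are hypothetical.

def suspects(components, trace, violation_index):
    """Components that could have taken part in the first violating action."""
    bad_action = trace[violation_index]
    return [name for name, sca in components.items()
            if any(t.action == bad_action for t in sca.transitions)]

# Hypothetical usage: a trace ending in an unplanned landing implicates the
# route planner, but rules out the wildlife-evasion component, whose
# alphabet does not contain "emergency_land".
# suspects({"route_planner": planner, "wildlife_evasion": evader},
#          trace=["north", "north", "emergency_land"], violation_index=2)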
Link:
http://projects.cwi.nl/dedea
References:
[1] F. Arbab: “Puff, The Magic Protocol”, in Formal Modeling: Actors, Open Systems, Biological Systems, Springer LNCS vol. 7000, 2011, pp. 169–206.
[2] C. Baier, M. Sirjani, F. Arbab, J. Rutten: “Modeling component connectors in Reo by constraint automata”, Science of Computer Programming, 61:75–113, 2006, http://dx.doi.org/10.1016/j.scico.2005.10.008.
[3] T. Kappé, F. Arbab, C. Talcott: “A Compositional Framework for Preference-Aware Agents”, CWI Technical Report FM-1603, 2016, https://repository.cwi.nl/noauth/search/fullrecord.php?publnr=24625.
Please contact:
Farhad Arbab, CWI, The Netherlands