
Specifying and Reasoning about CPS through the Lens of the NIST CPS Framework

Published online by Cambridge University Press:  04 March 2022

THANH HAI NGUYEN
Affiliation:
Department of Computer Science, New Mexico State University, Las Cruces, USA (e-mails: thanhnh@nmsu.edu, bundasma@nmsu.edu, stran@nmsu.edu)
MATTHEW BUNDAS
Affiliation:
Department of Computer Science, New Mexico State University, Las Cruces, USA (e-mails: thanhnh@nmsu.edu, bundasma@nmsu.edu, stran@nmsu.edu)
TRAN CAO SON
Affiliation:
Department of Computer Science, New Mexico State University, Las Cruces, USA (e-mails: thanhnh@nmsu.edu, bundasma@nmsu.edu, stran@nmsu.edu)
MARCELLO BALDUCCINI
Affiliation:
Saint Joseph’s University, Philadelphia, USA (e-mails: mbalducc@sju.edu, kcampbel@sju.edu)
KATHLEEN CAMPBELL GARWOOD
Affiliation:
Saint Joseph’s University, Philadelphia, USA (e-mails: mbalducc@sju.edu, kcampbel@sju.edu)
EDWARD R. GRIFFOR
Affiliation:
National Institute of Standards and Technologies, Gaithersburg, USA (e-mail: edward.griffor@nist.gov)

Abstract

This paper introduces a formal definition of a Cyber-Physical System (CPS) in the spirit of the CPS Framework proposed by the National Institute of Standards and Technology (NIST). It shows that, using this definition, various problems related to concerns in a CPS can be precisely formalized and implemented using Answer Set Programming (ASP). These include problems related to the dependencies or conflicts between concerns, how to mitigate an issue, and what the most suitable mitigation strategy for a given issue would be. It then shows how ASP can be used to develop an implementation that addresses the aforementioned problems. The paper concludes with a discussion of the potential of the proposed methodologies.

Type: Original Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press

1 Introduction

The utility (potable water, wastewater) distribution systems, the electric power grid, the transportation network, automated driving systems (ADS), hospital robots, and smart-home systems are a few examples of cyber-physical systems (CPS) Footnote 1 that are (or will soon be) a part of our daily life. Before any CPS is deployed into the real world, several concerns need to be investigated and addressed, for example: why should someone trust that the CPS will perform its functions safely, securely, and reliably? How will such a system respond to certain critical conditions, and will that response be acceptable? In other words, evidence must be gathered and argued to be sufficient to conclude that critical properties of a CPS have been assured before its deployment. For financial and practical reasons, the validation and verification of a CPS should be done as early as possible, starting with its design. CPS are complex systems that evolve with use, requiring a principled methodology and tools for developing an assurance case before release to the market. Such a methodology and the tools for applying it are two key contributions of this paper. We present here a formalization of a CPS with a clearly defined semantics that enables the assessment of critical system properties. The need for such a foundation for assurance can be seen in the next example.

Example 1 Suppose that we would like to develop an Automated Driving System (ADS). We have two constraints that we would like to enforce: (a) packets sent from the wind-sensor, a part of the situational awareness module (SAM), to the main processor must be fast and reliable; (b) all communication channels must be encrypted. We will refer to (a) and (b) as an Integrity concern and an Encryption concern, respectively.

Consider a situation in which the ADS has only one possible communication channel, which is fast and reliable when encryption is disabled but not when encryption is enabled. In this situation, the two constraints are in conflict with each other: it is impossible to satisfy both of them.

Assume that we also have a preference, called Verification, which is related to the verification of received data: encrypted data is preferred to non-encrypted data. If the wind-sensor uses non-encrypted socket communication, it can satisfy (positively affect) the Integrity concern, but it does not satisfy (negatively affects) the Verification preference.

In this paper, we view a CPS as a dynamic system that consists of several components with various constraints and preferences, which will be referred to as concerns hereafter. Given a concrete state of the system, a concern might or might not be satisfied. We aim at laying the mathematical foundation for the study of CPS concerns. This foundation must allow CPS developers and practitioners to represent and reason about the concerns and answer questions such as (i) will a certain concern or a set of concerns be satisfied? (ii) is there any potential conflict between the concerns? and (iii) how can we generate the best plan that addresses an issue raised by the lack of satisfaction of a concern? Readers familiar with research in representing and reasoning about dynamic systems might wonder whether well-known formalisms such as automata, action languages, Markov decision processes, etc. could be used for this purpose. Indeed, our proposed framework extends these formalisms by adding a layer for modeling the components and concerns in CPS.

To achieve our goal, we propose a formalism for representing and reasoning about the concerns of CPS. We will focus on the properties described in the CPS Framework (CPSF) proposed by the CPS Public Working Group (CPS PWG) organized by the National Institute of Standards and Technology (NIST) (Griffor et al. 2017a; Griffor et al. 2017b; Wollman et al. 2017). This framework defines several important concepts related to CPS such as facets (modes of the system engineering process: conceptualization, realization, and assurance), concerns (areas of concern), and aspects (clusters of concerns: functional, business, human, trustworthiness, timing, data, communication, composition, boundaries, and lifecycle). These concepts are organized in an ontology which is easily extensible and allows us to better manage development and implementation within, and across, multiple application domains. We formally propose the notion of a CPS system that (i) considers constraints among concerns; (ii) enables the automatic identification of conflicts between concerns; and (iii) enables the application of planning techniques in computing mitigation strategies. Building upon and establishing a connection to CPSF is an important characteristic of our research that distinguishes it from much of the work done on CPS so far. While most prior research focuses on a specific class of CPS or of aspects, for example, CPS for smart grids or concerns related to cybersecurity (Uluagac et al. 2019), the methodology we provide is intentionally domain-independent and applicable to any class of CPS.

The paper is organized as follows. Section 2 presents a brief overview of the CPS framework, answer set programming, action languages, and reasoning with ontologies using answer set programming. Section 3 contains the main contribution of the paper, a formalization of a CPS theory, which includes a specification of a CPS domain and the semantics defining when a concern is satisfied. It also formally defines several reasoning tasks related to the satisfaction of concerns, such as (i) when is a concern satisfied; (ii) what are the most/least trustworthy components of a CPS system; (iii) is the CPS system compliant; (iv) how to compute a mitigation strategy for a system when some concerns become unsatisfied; and (v) which mitigation strategy has the best chance to succeed. Section 4 provides an answer set programming implementation of these tasks. The paper concludes with a discussion of related work. The paper is arranged so that it can be of interest to different groups of readers. Specifically, it separates the formal definitions of a CPS, and the reasoning tasks associated with it, from a concrete implementation of the reasoning tasks. As such, a reader only interested in the formal theories would likely be interested in Section 3. On the other hand, the code in Section 4 would be of interest to readers who would like to experiment with their own CPS.

2 Background

This section reviews the background notions that will be used in the paper, including the CPS ontology, answer set programming, and the use of logic programming in ontology reasoning.

2.1 NIST CPS framework and the CPS ontology

One of the major challenges in designing, maintaining, and operating CPS is the diversity of the areas of expertise involved in these tasks and in the structure of the CPS itself. For example, developing a “smart ship” (Moschopoulos 2001) involves close interaction among, and cooperation of, experts in disciplines ranging from cybersecurity to air conditioning systems, and from propulsion to navigation. As demonstrated by, for example, NASA's Mars Climate Orbiter, Footnote 2 ensuring a shared understanding of a CPS and the interoperability of its components is an essential step towards its success, a goal that is made even more elusive by the fact that the areas of knowledge relevant to a CPS vary greatly depending on the type of CPS considered.

For this purpose, NIST recently hosted a Public Working Group on CPS with the aim of capturing input from those involved in CPS in order to define a CPS reference framework supporting common definitions and facilitating interoperability between such systems, regardless of the type of CPS considered. A key outcome of that work was the CPS Framework (Release 1.0, published as three separate NIST Special Publications: Griffor et al. 2017a; Griffor et al. 2017b; Wollman et al. 2017), which proposes a means of describing three facets of the life of a CPS, conceptualization, realization, and assurance, and of facilitating these descriptions through analytical lenses, called aspects, which group the common concerns addressed by the builders and operators of the CPS. The CPS Framework articulates the artifacts of a CPS in a precise way, including the concerns that motivate important requirements to be considered in conceptualizing, realizing (including operating), and assuring a CPS. Albeit helpful, being a reference framework, the CPS Framework only helps with the specification of a CPS and the discussion among experts. It does not, by itself, reduce the amount of work necessary to analyze the CPS and its evolution over the CPS lifecycle.

This realization gave impulse to the investigation that ultimately resulted in the CPS Ontology (Balduccini et al. 2018; Nguyen et al. 2020a), which provides a CPS analysis methodology based on the CPS Framework, featuring a vocabulary that describes and supports the understanding and development of new and existing CPS, including those designed to interact with other CPS and function in multiple interconnected infrastructure environments.

At the core of the CPS Framework and of the CPS Ontology are the notions of domains, facets (conceptualization, realization, and assurance), aspects and concerns, and a cyber-physical functional decomposition. The product of the conceptualization facet is a model of the CPS (requirements added to address prioritized concerns), the product of the realization facet is a CPS satisfying the model, and the product of the assurance facet is an assurance case for the prioritized set of concerns. Domains represent the different application areas of CPS, such as automated driving systems, the electrical grid, etc. Concerns are characteristics of a system that one or more of its stakeholders are concerned about. They are addressed throughout the lifecycle of a CPS, including development, maintenance, operation, and disposal. Requirements are assertions about the state variables of a CPS aimed at addressing the concerns. The reader should note that, in line with the current CPSF specification, we consider the term property to be a synonym of requirement, and we use the two terms interchangeably in the rest of this paper. Artifacts are the elements of the products of the facets of a CPS and include requirements, design elements, tests, and judgments. Aspects are the ten high-level concerns of the CPS Framework: functional, business, human, trustworthiness, timing, data, communication, boundaries, composition, and lifecycle.

  • Functional aspect is a set of concerns related to the sensing, computational, control, communications and actuation functions of the CPS.

  • Business aspect includes the concerns about enterprise, time to market, environment, regulation, cost, etc.

  • Human aspect is a set of concerns related to how a CPS is used by humans or interacts with them.

  • Trustworthiness aspect is a set of concerns related to the trustworthiness of CPS including security, privacy, safety, reliability, and resilience. In this paper we adopt the definition of trustworthiness from the NIST CPS Framework, where the term is taken to denote the demonstrable likelihood that the system performs according to designed behavior under any set of conditions as evidenced by its characteristics. Footnote 3

  • Timing aspect is a set of concerns about time and frequency in CPS, including the generation and transport of time and frequency signals, time-stamping, managing latency, timing composability, etc.

  • Data aspect includes the concerns about data interoperability, including data semantics, identity, operations on data, relationships between data, and velocity of data.

  • Communications aspect includes the concerns about the exchange of information between components of a CPS.

  • Boundaries aspect is a set of concerns about the interdependence among behavioral domains, including concerns related to the ability to successfully operate a CPS in multiple application areas.

  • Composition aspect includes the concerns about the ability to compute selected properties of a component assembly from the properties of its components. Compositionality requires components that are composable: they do not change their properties in an assembly. Timing composability is particularly difficult.

  • Lifecycle aspect is a set of concerns about the lifecycle of a CPS, including its components.

The CPS Ontology defines concepts and individuals (with a focus on Trustworthiness) and the relationships between them (e.g. has-subconcern). Figure 2, excluding the nodes labeled CAM, SAM and BAT and the links labeled “relates” and “active,” shows a fragment of the CPS ontology where circle nodes represent specific concerns and gray rectangle nodes represent properties. To facilitate information sharing, the CPS Ontology leverages standards such as the Resource Description Framework (RDF Footnote 4) and the Web Ontology Language (OWL Footnote 5) for describing the data, representing the entities and their relationships, and encoding the data and related metadata for sharing and fusing. An entity or relationship is defined in the ontology by an RDF triple (subject, predicate, object). Below are the main classes and relationships in the CPS ontology.

Aspects and Concerns. The ontology defines the highest-level concept of Concern with its refinement Aspect. In the concern tree in Figure 1, the circle nodes represent specific concerns, which are individuals of class Concern. The root nodes of the concern tree are a particular kind of concern: instances of class Aspect (a subclass of Concern). Specific concerns are represented as individuals: Trustworthiness as an individual of class Aspect, Security and Cybersecurity of class Concern. Edges linking aspects and concerns are represented by the relation has-subconcern, which associates a concern with its sub-concerns. Thus, the Trustworthiness aspect has-subconcern Security, which in turn has-subconcern Cybersecurity.

Fig. 1. NIST CPS ontology.

Fig. 2. CPS Ontology and LKAS domain.

Properties. Properties of a CPS are represented by individuals of class Property. In the CPS Framework, a concern can be addressed by a combination of properties. An edge that links a property p with an aspect or concern c is represented by the relation addressed-by, which says that concern c is addressed by property p. For example, in Figure 2 (LKAS domain), the concern Integrity is addressed by the properties Secure-Boot, Advanced-Mode, Powerful-Mode, Normal-Mode, and Saving-Mode.

For ease of reading, we provide a summary of the main classes and relationships in the CPS ontology in Table 1.

Table 1. Main components of the CPS ontology.

2.2 Answer set programming

Answer Set Programming (ASP) (Marek and Truszczyński 1999; Niemelä 1999) is a declarative programming paradigm based on logic programming under the answer set semantics. A logic program $\Pi$ is a set of rules of the form:

\begin{equation*}c \leftarrow a_1,\ldots,a_m, \:{not}\: b_{1},\ldots,\:{not}\: b_n\end{equation*}

where c, $a_i$ ’s, and $b_i$ ’s are literals of a propositional language Footnote 6 and $\mathit{not}$ represents (default) negation. c can be absent. Intuitively, a rule states that if $a_i$ ’s are believed to be true and none of the $b_i$ ’s is believed to be true then c must be true. For a rule r, $r^\text{+}$ and $r^-$ , referred to as the positive and negative body, respectively, denote the sets $\{a_1,\ldots,a_m\}$ and $\{b_{1},\ldots,b_n\}$ , respectively.

Let $\Pi$ be a program. An interpretation I of $\Pi$ is a set of ground atoms occurring in $\Pi$. The body of a rule r is satisfied by I if $r^\text{+} \subseteq I$ and $r^- \cap I = \emptyset$. A rule r is satisfied by I if, whenever the body of r is satisfied by I, $I \models c$. When c is absent, r is a constraint and is satisfied by I if its body is not satisfied by I. I is a model of $\Pi$ if it satisfies all rules in $\Pi$.

For an interpretation I and a program $\Pi$, the reduct of $\Pi$ w.r.t. I (denoted by $\Pi^I$) is the program obtained from $\Pi$ by deleting (i) each rule r such that $r^- \cap I \neq \emptyset$, and (ii) all atoms of the form $\:{not}\: a$ in the bodies of the remaining rules. Given an interpretation I, observe that the program $\Pi^I$ is a program with no occurrence of $\:{not}\: a$. An interpretation I is an answer set (Gelfond and Lifschitz 1990) of $\Pi$ if I is the least model (wrt. $\subseteq$) of $\Pi^I$.

A program $\Pi$ can have several answer sets, one answer set, or no answer set. $\Pi$ is said to be consistent if it has at least one answer set; it is inconsistent otherwise. Several extensions (e.g. choice atoms, aggregates, etc.) have been introduced to simplify the use of ASP. We will use and explain them when needed. Given a program $\Pi$ and an atom a, we write $\Pi \models a$ to say that a belongs to every answer set of $\Pi$.
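For instance, the program consisting of the two rules $p \leftarrow \:{not}\: q$ and $q \leftarrow \:{not}\: p$ has two answer sets, $\{p\}$ and $\{q\}$, whereas the program consisting of the single rule $p \leftarrow \:{not}\: p$ has no answer set.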

We illustrate the concepts of answer set programming by showing how the 3-coloring problem of a bi-directed graph G can be solved using logic programming under the answer set semantics. Let the three colors be red (r), blue (b), and green (g) and the vertex set of G be $\{0,1,\ldots,n\}$ . Let $\Pi\left(G\right)$ be the program consisting of

  • the set of atoms $edge\left(u,v\right)$ for every edge $\left(u,v\right)$ of G,

  • for each vertex u of G, the rule stating that u must be assigned one of the colors red, blue, or green:

    \begin{equation*} 1 \{color\left(u,g\right); color\left(u,r\right); color\left(u,b\right) \} 1 \leftarrow \end{equation*}

This rule uses a choice atom, introduced in Niemelä et al. (1999), to simplify the use of ASP. This atom says that exactly one of the atoms $color\left(u,g\right)$, $color\left(u,r\right)$, and $color\left(u,b\right)$ must be true.

  • for each edge $\left(u,v\right)$ of G, three rules representing the constraint that u and v must have different colors:

    \begin{eqnarray*}& \leftarrow & color\left(u, r\right), color\left(v, r\right), edge\left(u,v\right) \\& \leftarrow & color\left(u, b\right), color\left(v, b\right), edge\left(u,v\right) \\& \leftarrow & color\left(u, g\right), color\left(v, g\right), edge\left(u,v\right)\end{eqnarray*}

It can be shown that for each graph G, (i) $\Pi\left(G\right)$ has no answer set, that is, is inconsistent iff the 3-coloring problem of G does not have a solution; and (ii) if $\Pi\left(G\right)$ is consistent then each answer set of $\Pi\left(G\right)$ corresponds to a solution of the 3-coloring problem of G and vice versa.
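For concreteness, $\Pi\left(G\right)$ can be written directly in the input language of the clingo solver. The following is a minimal sketch for a triangle graph; the concrete graph, and the single schematic constraint standing for the three ground constraints per edge shown above, are our own illustration:

% Pi(G) for the bi-directed triangle 0-1-2 (illustrative sketch)
edge(0,1). edge(1,2). edge(2,0).
edge(V,U) :- edge(U,V).                      % make the graph bi-directed
vertex(U) :- edge(U,_).
1 { color(U,r); color(U,g); color(U,b) } 1 :- vertex(U).
% adjacent vertices must have different colors; C ranges over r, g, b
:- color(U,C), color(V,C), edge(U,V).

Invoking clingo with the option 0 on this program enumerates all answer sets, each of which corresponds to one of the six proper 3-colorings of the triangle.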

2.3 Action language $\mathcal{B}$

We review the basics of the action description language $\mathcal{B}$ (Gelfond and Lifschitz 1998). An action theory in $\mathcal{B}$ is defined over two disjoint sets, a set of actions A and a set of fluents F. A fluent literal is either a fluent $f \in \mathbf{F}$ or its negation $\neg f$. A fluent formula is a propositional formula constructed from fluent literals. An action domain is a set of laws of the following form:

(1) \begin{align} \mathit{Executability \: condition{:}} & \:\quad \mathbf{executable} \:\: a \:\: \mathbf{if} \:\: p_1,\ldots,p_n \end{align}
(2) \begin{align} \mathit{Dynamic \: law{:}} & \quad a \:\: \mathbf{causes} \:\: f \:\: \mathbf{if} \:\: p_1,\ldots,p_n \end{align}
(3) \begin{align} \mathit{Static \: Causal \: Law{:}} & \quad f \:\: \mathbf{if} \:\: p_1,\ldots,p_n \end{align}

where f and $p_i$ 's are fluent literals and a is an action. (1) encodes an executability condition of an action a. Intuitively, an executability condition of the form (1) states that a can only be executed if $p_i$ 's hold. (2), referred to as a dynamic causal law, represents the (conditional) effect of a. It states that f is caused to be true after the execution of a in any state of the world where $p_1,\dots,p_n$ are true. When $n=0$ in (2), the if part is often omitted. (3) represents a static causal law, that is, a relationship between fluents. It conveys that whenever the fluent literals $p_1,\dots,p_n$ hold, so does f. For convenience, we sometimes denote the sets of laws of the form (3), (2), and (1) by K, $D_D$, and $D_E$, respectively, for each action domain D.

A domain given in $\mathcal{B}$ defines a transition function from pairs of actions and states Footnote 7 to sets of states whose precise definition is given below. Intuitively, given an action a and a state s, the transition function $\Phi$ defines the set of states $\Phi(a,s)$ that may be reached after executing the action a in state s. If $\Phi(a,s)$ is an empty set it means that the execution of a in s results in an error. We now formally define $\Phi$ .

Let D be a domain in $\mathcal{B}$ . A set of fluent literals is said to be consistent if it does not contain f and $\neg f$ for some fluent f. An interpretation I of the fluents in D is a maximal consistent set of fluent literals of D. A fluent f is said to be true (resp. false) in I iff $f \in I$ (resp. $\neg f \in I$ ). The truth value of a fluent formula in I is defined recursively over the propositional connectives in the usual way. For example, $f \wedge g$ is true in I iff f is true in I and g is true in I. We say that a formula $\varphi$ holds in I (or I satisfies $\varphi$ ), denoted by $I \models \varphi$ , if $\varphi$ is true in I.

Let u be a consistent set of fluent literals and K a set of static causal laws. We say that u is closed under K if for every static causal law

\begin{equation*} f \:\: \mathbf{if} \:\: p_1,\ldots,p_n\end{equation*}

in K, if $u \models p_1 \wedge \ldots \wedge p_n$ then $u \models f$ . By $Cl_K\left(u\right)$ we denote the least consistent set of literals from D that contains u and is also closed under K. It is worth noting that $Cl_K\left(u\right)$ might be undefined. For instance, if u contains both f and $\neg f$ for some fluent f, then $Cl_K\left(u\right)$ cannot contain u and be consistent; another example is that if $u = \{f,g\}$ and K contains

\begin{equation*} f \:\: \mathbf{if } \; h \quad \quad \text{ and } \quad \quad \neg h \:\: \mathbf{if } \; \:\: f, g \end{equation*}

then $Cl_K\left(u\right)$ does not exist because it has to contain both h and $\neg h$ , which means that it is inconsistent.

Formally, a state of D is an interpretation of the fluents in F that is closed under the set of static causal laws K of D.

An action a is executable in a state s if there exists an executability proposition

\begin{equation*}\mathbf{executable} \quad a \quad \mathbf{if} \quad f_1,\ldots,f_n \end{equation*}

in D such that $s \models f_1 \wedge \ldots \wedge f_n$ . Clearly, if $n=0$ , then a is executable in every state of D. The direct effect of an action a in a state s is the set

\begin{equation*}e\left(a,s\right) = \{f \mid \left(a \:\: \mathbf{causes} \:\: f \:\: \mathbf{if} \:\: f_1,\ldots,f_n\right) \in D, \; s \models f_1 \wedge\ldots \wedge f_n\}.\end{equation*}

For a domain D, $\Phi(a,s)$ , the set of states that may be reached by executing a in s, is defined as follows.

  1. If a is executable in s, then

    \begin{equation*}\Phi\left(a,s\right) = \{s' \; | \; \; s' \mbox{ is a state and }s' =Cl_{K}\left (e\left(a,s \right ) \cup \left (s \cap s'\right)\right)\}; \end{equation*}
  2. If a is not executable in s, then $\Phi\left(a,s\right) = \emptyset$.

Every domain D in $\mathcal{B}$ has a unique transition function $\Phi$ , which we call the transition function of D. The transition function allows one to compute the set of states reached by the execution of a sequence of actions $\alpha = \left[a_1, \ldots ,a_n\right]$ from a state $s_0$ , denoted by $\hat{\Phi}\left(\alpha,s_0\right)$ , as follows:

  1. If $n=0$ then $\hat{\Phi}\left(\alpha,s_0\right) = \{s_0\}$;

  2. If $n > 0$ then $\hat{\Phi}\left( \alpha, s_0\right) = \cup_{u \in \Phi\left(a_1,s_0\right)} \hat{\Phi}\left( \alpha', u\right)$, where $\alpha'=[a_2,\ldots,a_n]$; if $\hat{\Phi}\left( \alpha', u\right) = \emptyset$ for some u, then $\hat{\Phi}\left( \alpha, s_0\right) = \emptyset$.
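As a small worked illustration (our own, with an assumed domain): let D have fluents f, g and a single action a, with the dynamic law $a \:\: \mathbf{causes} \:\: f$, the static causal law $g \:\: \mathbf{if} \:\: f$, and a executable in every state. In the state $s_0 = \{\neg f, \neg g\}$, we have $e\left(a,s_0\right)=\{f\}$. The only state s' satisfying $s' = Cl_{K}\left(e\left(a,s_0\right) \cup \left(s_0 \cap s'\right)\right)$ is $s'=\{f,g\}$: for this candidate, $s_0 \cap s' = \emptyset$ and $Cl_{K}\left(\{f\}\right) = \{f,g\}$, while the candidate $\{f,\neg g\}$ fails because its closure would have to contain both g and $\neg g$. Hence $\Phi\left(a,s_0\right)=\{\{f,g\}\}$ and, consequently, $\hat{\Phi}\left(\left[a\right],s_0\right)=\{\{f,g\}\}$.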

2.4 Representation and reasoning with CPS ontology in ASP

Various researchers have explored the relationship between ASP and the Semantic Web (e.g. Eiter 2007; Nguyen et al. 2018a; 2018b; 2020b), in particular with the goal of leveraging existing ontologies. In these works, an ASP program is used for reasoning about classes, properties, inheritance, relations, etc. Given ASP's non-monotonic nature, it also provides sufficient flexibility for dealing in a principled way with default values and exceptions, and for reasoning about the effects of actions and change.

We use a similar approach in this paper to leverage the existing CPS Ontology for reasoning tasks related to CPS and concerns. Our approach includes the ability to query the CPS Ontology for relevant knowledge and provide it to an ASP-based reasoning component. Because the present paper is focused on the latter, for simplicity of presentation we assume that all relevant classes, instances, relations, and properties of the CPS ontology are already encoded by an ASP program. We denote this program by $\Pi\left(\Omega\right)$, where $\Omega$ denotes the ontology, in this case the CPS ontology. We list the predicates that will be frequently discussed in this paper.

  • class(X): $\mathtt{X}$ is a class;

  • subClass(X,Y): $\mathtt{X}$ is a subclass of $\mathtt{Y}$ ;

  • aspect(I) (resp. concern(I), prop(I), decomp_func(I)): I is an individual of class aspect (resp. concern, property, decomposition function);

  • subCo(I,J): J is a sub-concern of I;

  • addBy(C,P): concern C is addressed by property P (a link from a property P to a concern C in the ontology);

  • positiveImpact(P,C): the satisfaction of property P positively impacts the satisfaction of concern C; and

  • func(F,C): F is a functional decomposition of concern C.

Listing 1 represents the ASP program $\Pi\left(\Omega\right)$ of the CPS Ontology $\Omega$. The predicate RDFtriple(S,P,O) denotes an RDF triple that has been queried and extracted from $\Omega$ using SPARQL. Footnote 8 Lines 1–2 define class(X) and subClass(X,Y) based on the ontology extraction. Line 3 extends the subclass relationship by transitivity. Lines 4–6 encode the definitions of instance(I) and isInstanceOf(I,X) in a similar way. The concern, aspect, property, and decomposition-function instances are defined in Lines 7–10. Finally, the rules in Lines 11–14 encode the subCo(I,J), addBy(C,P), func(F,C), and positiveImpact(P,C) relationships, respectively.
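Since Listing 1 is not reproduced here, the following minimal sketch conveys the kind of rules it contains; the predicate rdfTriple/3 stands for the RDFtriple(S,P,O) relation above, and all string constants are illustrative assumptions rather than the ontology's actual identifiers:

% classes and the (transitive) subclass relation
class(X) :- rdfTriple(X, "rdf:type", "owl:Class").
subClass(X,Y) :- rdfTriple(X, "rdfs:subClassOf", Y).
subClass(X,Y) :- subClass(X,Z), subClass(Z,Y).
% individuals and class membership, closed under subclassing
instance(I) :- rdfTriple(I, "rdf:type", "owl:NamedIndividual").
isInstanceOf(I,X) :- rdfTriple(I, "rdf:type", X), class(X).
isInstanceOf(I,Y) :- isInstanceOf(I,X), subClass(X,Y).
% instances of the main classes
concern(I) :- isInstanceOf(I, "Concern").
aspect(I) :- isInstanceOf(I, "Aspect").
prop(I) :- isInstanceOf(I, "Property").
decomp_func(I) :- isInstanceOf(I, "FunctionalDecomposition").
% relationships between individuals
subCo(I,J) :- rdfTriple(I, "has-subconcern", J).
addBy(C,P) :- rdfTriple(C, "addressed-by", P).
func(F,C) :- rdfTriple(C, "has-functional-decomposition", F).
positiveImpact(P,C) :- rdfTriple(P, "positively-impacts", C).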

Given a collection of individuals in the CPS ontology $\Omega$, $\Pi\left(\Omega\right)$ allows us to check, for example, whether a concern c is addressed by a property p ($addBy\left(c,p\right)$), whether concern j is a sub-concern of concern i ($subCo\left(i,j\right)$), whether f is a functional decomposition of concern c ($func\left(f,c\right)$), and whether the satisfaction of p positively impacts concern c ($positiveImpact\left(p,c\right)$). These checks are written as $\Pi\left(\Omega\right) \models addBy\left(c,p\right)$, $\Pi\left(\Omega\right) \models subCo\left(i,j\right)$, $\Pi\left(\Omega\right) \models func\left(f,c\right)$, and $\Pi\left(\Omega\right) \models positiveImpact\left(p,c\right)$, respectively.
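For instance, with the fragment of the ontology shown in Figure 1, we would have $\Pi\left(\Omega\right) \models subCo\left(trustworthiness, security\right)$ and $\Pi\left(\Omega\right) \models subCo\left(security, cybersecurity\right)$ (the lower-case individual names here are our own rendering of the ontology's identifiers).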

Similar rules for reasoning about the inheritance between concerns, between subconcerns and concerns, etc. are introduced whenever they are used subsequently. We note that the CPS Framework does come with an informal semantics of when a concern is supposed to be satisfied. The work in Balduccini et al. (2018) provides a preliminary discussion on how the satisfaction of a concern can be determined; however, it presents neither a formal description of a CPS system, as done in this paper, nor a treatment of functional decomposition.

3 CPS theory specification

3.1 Formal definition

In this section, we develop a formal definition of a CPS theory and its semantics. The proposed notion of a CPS theory will allow one to specify and reason about the concerns of a CPS. Our discussion will focus on the Trustworthiness aspect of the CPS ontology, but the proposed methodology is generic and applicable to the full CPS ontology. To motivate the definition, we use the following example:

Example 2 (Extended from Balduccini et al. 2018) Consider a lane keeping/assist system (LKAS) of an advanced car that uses a camera (CAM) and a situational awareness module (SAM). The SAM processes the video stream from the camera and controls the automated navigation system through a physical output. In addition, the system also has a battery (BAT).

CAM and SAM may use encrypted memory (data_encrypted) and a secure boot (secure_boot). Safety mechanisms in the navigation system cause it to shut down if issues are detected in the input received from SAM. The CAM and SAM can each be in one of two operational modes, the basic mode (basic_mode or b_mode) and the advanced mode (advanced_mode or a_mode). These two properties address the concern Integrity via the operation functionality. In advanced mode, a component consumes much more energy than in basic mode.

BAT supplies the system's energy and is associated with one of three properties: saving_mode (s_mode), normal_mode (n_mode), or powerful_mode (p_mode). These three properties address the concern Integrity via the energy functionality.

The relationships among SAM, CAM, and BAT are: (1) if both SAM and CAM are in advanced_mode, the battery has to work in saving_mode; (2) if both CAM and SAM are in basic_mode, the battery can be in powerful_mode or normal_mode; and (3) if one of SAM and CAM is in advanced_mode and the other is in basic_mode, then the battery must work in normal_mode.

The relationship between the LKAS domain and the CPS ontology is shown in Figure 2. Informally, the CPSF defines that the concern Integrity is satisfied if secure_boot is satisfied and its two functionalities, operation and energy, are satisfied; the operation functionality is satisfied if at least one of the properties {advanced_mode, basic_mode} is satisfied; and the energy functionality is satisfied if at least one of the properties {saving_mode, normal_mode, powerful_mode} is satisfied. Intuitively, this can be represented by the following formula:

(4) \begin{equation} \begin{array}{l} \left({\tt \small secure\_boot}\right) \land \left({\tt \small advanced\_mode} \lor {\tt \small basic\_mode}\right) \\ \land \left({\tt \small saving\_mode} \lor {\tt \small normal\_mode} \lor {\tt \small powerful\_mode}\right) \end{array}\end{equation}

The example shows that a CPS system is a dynamic domain that contains different components, each associated with some properties which affect the satisfaction of concerns defined in the CPS ontology. In addition, the satisfaction of concerns depends on the truth values of formulae constructed using properties, and a concern might be related to a group of properties. We will write $\omega\left(c\right)$ to denote the set of properties that address a concern c. We therefore define a CPS system as follows.

Definition 1 (CPS System) A CPS system $\mathcal{S}$ is a tuple ( $CO, A, F, R, \Gamma$ ) where:

  • CO is a set of components;

  • A is a set of actions that can be executed over $\mathcal{S}$ ;

  • F is a finite set of fluents (or state variables) of the system;

  • R is a set of relations that maps each physical component $co \in CO$ to a set of properties $R\left(co\right)$ defined in the CPS ontology; and

  • $\Gamma$ is a set of triples of the form $\left(c, \mathit{fu}, \psi\right)$ where c is a concern, $\mathit{fu}$ is a functional decomposition of concern c, and $\psi$ is a formula constructed over $\omega\left(c\right)$ .

In Definition 1, $\left(A,F\right)$ represents the dynamic domain of $\mathcal{S}$ , $\Gamma$ represents constraints on the satisfaction of concerns in the CPSF ontology in $\mathcal{S}$ , and R encodes the properties of components in $\mathcal{S}$ which are related to the concerns specified in the CPSF. As the truth values of these properties can be changed by actions, we assume that

\begin{equation*}\cup_{co \in CO} R\left(co\right) \cup \{active\left(co,p\right) | co\in CO, p\in R\left(co\right)\} \subseteq F.\end{equation*}

where $active\left(co,p\right)$ being true means that the component co is currently active with property p. $\left(A,F\right)$ is an action theory as described in Subsection 2.3. Note that $\left(A,F\right)$ can be non-deterministic due to the presence of statements of the form (3). Although non-determinism is possible, it rarely arises in practical applications. We will, therefore, assume that $\left(A,F\right)$ is deterministic throughout this paper. We illustrate Definition 1 in the following example.

Example 3 The CPS system in Example 2 can be described by $\mathcal{S}_{lkas} = $ ( $CO_{lkas},A_{lkas}, F_{lkas}, $ $R_{lkas}, \Gamma_{lkas}$ ) where:

  • $CO_{lkas} = \{{\tt SAM}, {\tt CAM}, {\tt BAT}\}$ .

  • $F_{lkas}$ contains the following fluents:

  • $\mathtt{active\left(X,P\right)}$ denotes that component $X \in CO_{lkas}$ is actively working with property P; for example, active(cam,basic_mode), active(cam,data_encrypted), active(sam,finger_printing), and active(bat,normal_mode) state that the camera is working in basic mode with encrypted data, the SAM is authenticated by the fingerprinting method, and the battery is working in normal mode.

  • $\mathtt{on\left(X\right)}$ ($\mathtt{off\left(X\right)}$) denotes that component X is (is not) ready for use.

  • the set of properties that are related to the components ($\mathtt{P}$ denotes the truth value of property P), for example, basic_mode, oauth, etc. These properties are drawn in Figure 2 (rectangle boxes, except the three components SAM, CAM, BAT).

The relationships among the fluents are encoded below:

  • $active\left(BAT, saving\_mode\right) \:\: \mathbf{if }\:\: active\left(SAM, advanced\_mode\right), active\left(CAM,\right.$ $\left.advanced\_mode\right)$ which encodes the statement if both SAM and CAM are in advanced_mode, the battery has to work in saving_mode.

  • $active\left(BAT, normal\_mode\right) \:\: \mathbf{if }\:\: active\left(SAM, advanced\_mode\right), active\left(CAM,\right.$ $\left.basic\_mode\right)$ and $active\left(BAT, normal\_mode\right) \:\: \mathbf{if }\:\: active\left(SAM, basic\_mode\right), active\left(CAM,\right.$ $\left.advanced\_mode\right)$ encode the statement if one of SAM and CAM is in advanced_mode and the other one is in basic_mode, then the battery must work in normal_mode.

  • $active\left(BAT, powerful\_mode\right) \lor active\left(BAT, normal\_mode\right) \:\: \mathbf{if }\:\: active\left(SAM,\right.$ $\left.basic\_mode\right), active\left(CAM, basic\_mode\right)$ which encodes the statement if both SAM and CAM are in basic_mode, the battery can be in powerful_mode or normal_mode.

  • $A_{lkas}$ contains the following actions:

  • $\mathtt{switM\left(X,M\right)}$ : switching the component X to a mode M. The set of the form (1) and (2) for the action that switches the CAM from basic_mode to advanced_mode $\mathtt{ switM\left(cam, advanced\_mode\right)}$ contains the following statements:

  • $\mathbf{executable}\:\: \mathtt{switM\left(cam, advanced\_mode\right)} \:\: \mathbf{if} \:\:\: \mathtt{on\left(cam\right), active\left(cam, basic\_mode\right)}$ which says that the action $\mathtt{switM\left(cam, advanced\_mode\right)}$ can only be executed if the component CAM is on and in basic_mode.

  • $\mathtt{switM\left(cam, advanced\_mode\right)} \:\: \mathbf{causes} \:\: \mathtt{active\left(cam, advanced\_mode\right), \neg active\left(cam, basic\_mode\right)}$.

This states that if we switch the component CAM to advanced_mode, then it is in advanced_mode and not in basic_mode.

The statements for $\mathtt{switM\left(cam, basic\_mode\right)}$, which switches the CAM from advanced_mode to basic_mode, are similar, as are those for $\mathtt{switM\left(sam, basic\_mode\right)}$ and $\mathtt{switM\left(sam, advanced\_mode\right)}$, which switch the component SAM to basic_mode and advanced_mode, respectively.

  • There are also actions that switch other components to different modes or methods. These are:

  • $\mathtt{switA\left(X,A\right)}$ : switching between authorization methods where $X = SAM$ .

  • $\mathtt{ switV\left(X,V\right)}$ : switching between verification methods where X can be SAM or CAM.

  • $\mathtt{ switEM\left(X,EM\right)}$ : switching between encryption method where X can be SAM or CAM.

  • $\mathtt{ switEA\left(X,EA\right)}$ : switching between encryption algorithms where X can be SAM or CAM.

The sets of statements of the form (1) and (2) associated with these actions are similar to those associated with $\mathtt{switM\left(X,M\right)}$ and are omitted here for brevity.

  • $\mathtt{tOn\left(P\right)}$ and $\mathtt{tOff\left(P\right)}$ denote the actions of enabling and disabling the truth value of property P, respectively. The sets of statements of the form (1) and (2) associated with each of these actions are similar. We list those associated with $\mathtt{tOn\left(P\right)}$ as an example:

  • $\mathbf{executable}\:\: \mathtt{tOn\left(basic\_mode\right)} \:\: \mathbf{if} \:\:\: \mathtt{\neg basic\_mode}$: this action can only be executed if the property basic_mode is false.

  • $\mathtt{tOn\left(basic\_mode\right)} \:\: \mathbf{causes} \:\: \mathtt{basic\_mode}$: makes the property basic_mode true.

  • $\mathtt{patch\left(P\right)}$ denotes the action of patching property P with available patch software. The set of statements for action $\mathtt{patch\left(P\right)}$ could be:

$\mathbf{executable}\:\: \mathtt{patch\left(conn\_encrypted\right)} \:\: \mathbf{if} \:\:\: \mathtt{\neg conn\_encrypted}, \mathtt{availablePatch\left(conn\_encrypted\right)}$

$\mathtt{patch\left(conn\_encrypted\right)} \:\: \mathbf{causes} \:\: \mathtt{conn\_encrypted}$

  • $R_{lkas} =\{$ CAM $\mapsto \{{\tt \small ip\_filtering}$ , ${\tt \small algo\_DES}$ , ${\tt \small algo\_AES}$ , ${\tt \small algo\_RSA}$ , ${\tt \small data\_encrypted}$ , ${\tt \small conn\_encrypted}$ , ${\tt \small mac\_check}$ , ${\tt \small protocol\_encrypted}$ , ${\tt \small secure\_boot}$ , ${\tt \small basic\_mode}$ , ${\tt \small advanced\_mode}$ , ${\tt \small trusted\_auth\_device}$ , ${\tt \small trusted\_environment}$ , ${\tt \small iris\_scan}\}$ , SAM $\mapsto \{{\tt \small data\_encrypted}$ , ${\tt \small algo\_RSA}$ , ${\tt \small algo\_DES}$ , ${\tt \small algo\_AES}$ , ${\tt \small protocol\_encrypted}$ , ${\tt \small conn\_encrypted}$ , ${\tt \small firewall\_setup}$ , ${\tt \small mac\_check}$ , ${\tt \small ip\_filtering}$ , ${\tt \small advanced\_mode}$ , ${\tt \small basic\_mode}$ , ${\tt \small finger\_printing}$ , ${\tt \small two\_factors}$ , ${\tt \small iris\_scan}$ , ${\tt \small oauth}$ , ${\tt \small opt\_code}$ , ${\tt \small email\_verify}$ , ${\tt \small ip\_check}$ , ${\tt \small trusted\_environment}$ , ${\tt \small secure\_boot}\}$ , BAT $\mapsto \{{\tt powerful\_mode}$ , ${\tt \small trusted\_environment}$ , ${\tt \small normal\_mode}$ , ${\tt \small saving\_mode}\}\}$ .

    The components and relations to the properties are illustrated by the arrow lines with “relates” labels in the bottom part of Figure 2.

  • $\Gamma_{lkas}$ contains the following triples (see also Figure 3):

  • (Integrity, operation, advanced_mode $\lor$ basic_mode) says that the satisfaction of the formula advanced_mode $\lor$ basic_mode addresses the concern Integrity with respect to the functional decomposition operation.

  • (Integrity, energy, saving_mode $\lor$ normal_mode $\lor$ powerful_mode) says that the formula saving_mode $\lor$ normal_mode $\lor$ powerful_mode addresses the concern Integrity with respect to the functional decomposition energy.

  • (authorization, sign_in, oauth $\land$ opt_code) denotes the satisfaction of formula oauth $\land$ opt_code addresses the relevant functional decomposition sign_in of the concern authorization.

  • (authorization, sign_in, two_factors $\lor$ finger_printing $\lor$ iris_scan) denotes the formula two_factors $\lor$ finger_printing $\lor$ iris_scan addresses the concern authorization in the relevant functional decomposition sign_in.

  • (authorization, sign_in, oauth $\land$ ip_check $\land$ email_verify) denotes that the concern authorization with the relevant functional decomposition sign_in is addressed by formula oauth $\land$ ip_check $\land$ email_verify.

Fig. 3. Integrity and authorization concerns with their functionalities and properties.

In addition, the functional decomposition of the Integrity concern indicates that the formula $\left({\tt \small secure\_boot}\right) \land \left({\tt \small advanced\_mode} \lor {\tt \small basic\_mode}\right) \land \left({\tt \small saving\_mode} \lor {\tt \small normal\_mode} \lor {\tt \small powerful\_mode}\right)$ addresses the Integrity concern.

Likewise, the formula

\begin{equation*}\begin{array}{l}{\tt trusted\_auth\_device} \land {\tt \small trusted\_environment} \land \\\left({\tt two\_factors} \lor {\tt finger\_printing} \lor {\tt iris\_scan} \right. \lor \\\left({\tt oauth} \land {\tt opt\_code}\right) \lor\left. \left( {\tt oauth} \land {\tt ip\_check} \land {\tt email\_verify}\right)\right)\end{array}\end{equation*}

addresses the authorization concern.

Given a CPS system $\mathcal{S}$ with a set of fluents F, a state s of $\mathcal{S}$ is an interpretation of F that satisfies the set of static causal laws of the form (3) (Subsection 2.3).

Definition 2 (CPS Theory) A CPS theory is a pair $\left(\mathcal{S},I\right)$ where $\mathcal{S}$ is a CPS system and I is a state representing the initial configuration of $\mathcal{S}$ .

3.2 The semantics of CPS theories

Given $\left(\mathcal{S},I\right)$ where $\mathcal{S}=\left(CO,A,F,R,\Gamma\right)$, the action domain $\left(A,F\right)$ specifies a transition function $\Phi_\mathcal{S}$ between states (Subsection 2.3). In each state, the satisfaction of a particular concern in the CPSF is evaluated using the relation R and the components CO. We will define this relation next. First, we note that a concern in a CPS can be related to some components of $\mathcal{S}$, directly through the R relation and the formulae in $\Gamma$, or indirectly through the inheritance in the CPS ontology. Observe that the development of the CPS relies on the following intuition:

  • A concern might have several sub-concerns;

  • A concern might be addressed by a set of functional decompositions which are represented by Boolean formulae.

This leads to the following informal meaning of the notion of satisfaction of a concern in a state of the CPS:

  • For each concern c, if $\Gamma$ does not contain any tuple of the form $\left(c,fu,\psi\right)$, then c is satisfied in a state s when every one of its direct subconcerns is satisfied and every one of its properties is satisfied; for example, the Trustworthiness concern is satisfied in a state s of the LKAS system if its children, Safety, Reliability, Security, Resilience, and Privacy, are satisfied and all of its properties are satisfied.

  • For each concern c, if $\Gamma$ contains some tuple of the form $\left(c,fu,\psi\right)$, then c is satisfied when $\psi_c = \wedge_{\left(c,fu,\psi\right) \in \Gamma} \psi$ is satisfied in s and every property p related to c, as specified by the CPS ontology, is satisfied in s; for example, the Integrity concern is satisfied in the state s of the LKAS system if the formula (4) is satisfied in s, where secure_boot is a property related to Integrity and the other conjuncts are the two disjunctions representing the two functional decompositions of Integrity.

Next, we formalize precisely the notion of satisfaction of a concern. Let $\Lambda(c)$ be the conjunction of $\wedge_{\left(c,fu,\psi\right) \in \Gamma} \psi$ and all properties that are related to c and do not appear in any formula of the form $\left(c,fu,\psi\right) \in \Gamma$. For example, in formula (4), the last two conjuncts are the two functional decompositions of Integrity from $\Gamma_{lkas}$ and the first conjunct is a property that does not appear in any functional decomposition of Integrity. In the following, we denote by $\langle c \rangle$ the set of descendants of c such that each $d \in \langle c \rangle$ has no sub-concern.

Definition 3 Let s be a state in $\mathcal{S}=\left(CO,A,F,R,\Gamma\right)$ and c be a concern. We say that c is satisfied in s, denoted by $s \models c$ , if

  • $s \models \Lambda(c)$ ; and

  • every sub-concern c’ of c is satisfied by s.
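As an illustration, the satisfaction of concerns per Definition 3 can be computed in ASP. The following is a minimal sketch over the predicates of $\Pi\left(\Omega\right)$; it assumes a predicate holds(P) for “property P is true in the current state” and, for simplicity, treats $\Lambda(c)$ as the plain conjunction of the properties addressing c, ignoring the formulas contributed by $\Gamma$:

% a concern is unsatisfied if some property addressing it does not hold
notSat(C) :- concern(C), addBy(C,P), not holds(P).
% lack of satisfaction propagates upward through the sub-concern relation
notSat(C) :- concern(C), subCo(C,C1), notSat(C1).
% a concern is satisfied if it cannot be shown to be unsatisfied
sat(C) :- concern(C), not notSat(C).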

Having defined when a concern is satisfied in a state, we can define the notion of satisfaction of a concern after the execution of a sequence of actions. Recall that the transition function $\Phi_\mathcal{S}$ dictates how the system changes from one state to another, and the set of states resulting from the execution of a sequence of actions $\alpha$ from a state can be computed by $\hat{\Phi}_\mathcal{S}$. Therefore, we can define the satisfaction of a concern c after the execution of $\alpha$ as follows.

Definition 4 Let $\left(\mathcal{S},I\right)$ be a CPS theory, $\alpha$ a sequence of actions, and c a concern in the CPS Ontology. c is satisfied after the execution of a sequence of actions $\alpha$ from the initial state I, denoted by $\left(\mathcal{S},I\right) \models c \;\mathbf{after} \; \alpha$ , iff

(5) \begin{equation} \hat{\Phi}_\mathcal{S} \left( \alpha, I \right) \ne \emptyset \wedge \forall u \in \hat{\Phi}_\mathcal{S} \left( \alpha, I \right). \left[u \models c \right]\end{equation}

In the above definition, the condition $\hat{\Phi}_\mathcal{S} \left( \alpha, I \right) \ne \emptyset$ guarantees that $\alpha$ is a valid sequence of actions, that is, its execution in I does not fail. The second condition is the standard definition of logical entailment.

Definitions 3 and 4 provide the basis for answering questions related to the satisfaction of a concern in a state or after a sequence of actions is executed, that is, the concern satisfaction problem. In the following, we will discuss other problems that are of importance for the design and development of CPS systems.

3.3 Reasoning tasks in CPS

Knowing when a concern is (or is not) satisfied is very important. We now discuss issues related to the satisfaction of concerns in a CPS. We focus on the following problems:

  1. What is the most/least trustworthy Footnote 9 component in a CPS?

  2. Is there non-compliance in a given CPS? How can non-compliance be detected?

  3. What should be done if an (external or internal) event occurs and leads to an undesirable situation? How can the system recover from such a situation?

  4. What is the best or most preferred mitigation strategy for a given situation?

In what follows, we provide precise formulations of the aforementioned tasks and propose solutions for them. For simplicity of presentation, we focus on discussing these questions with respect to a given state. The answers to these questions after the execution of a sequence of actions from the initial state can be defined similarly to the definition of the satisfaction of a concern via the function $\Phi$ , as in Definition 4. Our implementation covers both situations.

3.3.1 Most/Least trustworthy components

Consider $\mathcal{S}=\left(CO,A,F,R,\Gamma\right)$ and a state s in $\mathcal{S}$. A component $x \in CO$ might be related to many concerns through the properties in $R\left(x\right)$, whose truth values depend on the state s. Recall that for each property p and component x, $active\left(x,p\right)$ being true in s indicates that component x is active with property p in the state s; furthermore, the CPS ontology specifies whether p positively or negatively impacts a concern c. These are defined by the predicates $addBy\left(c,p\right)$ and $positiveImpact\left(p,c\right)$ in $\Omega$ (Subsection 2.4). As such, when a component is active with a property, it can positively impact a concern. For example, in Figures 2 and 3, the property secure_boot addresses the Integrity concern and is described by $\Omega$ as positively impacting the satisfaction of the Integrity concern. In the current state, the component SAM is working with property secure_boot. Assuming that the concern Integrity is satisfied in this state, we say that component SAM directly and positively affects the Integrity concern through property secure_boot. We say that a component x directly impacts a concern c in state s through a property p if the following conditions hold:

  1. 1. x works with property p in state s; and

  2. 2. p addresses concern c and p is true in s.

If x directly impacts c in state s through p, the CPS ontology specifies that the satisfaction of property p positively impacts the satisfaction of c, and c is satisfied in state s, then we say that x directly and positively affects c.

As the notion of concern satisfaction is propagated through the sub-concern relationship, it is natural to define that component x impacts (resp. positively affects) concern c through property p in a state s, denoted by $impact\left(x,c,s\right)$ (resp. $pos\left(x,p,c,s\right)$), if (i) x directly impacts (resp. directly and positively affects) c through a property p; or (ii) there exists some sub-concern c' of c that is impacted (resp. positively affected) by x.

In the above example (see also Figure 2), since the component SAM directly and positively affects the Integrity concern through property secure_boot, SAM also positively affects the concerns Cyber-Security, Security, and Trustworthiness in the concern tree through property secure_boot.

Given a component x, the ratio between the number of concerns that are positively affected by x and the number of concerns that are addressed by x characterizes how effectively x influences the system. For this reason, we use this ratio to characterize the trustworthiness of components in the system. So, we define

(6) \begin{equation} tw\left(x,s\right) = \frac{\Sigma_{p \in R\left(x\right)} | \{c \mid s \models c \wedge positiveImpact\left(p, c\right) \wedge p \in s \wedge active\left(x,p\right)\}|}{\Sigma_{p \in R\left(x\right)} | \{c \mid \left(s \not \models c \vee \neg positiveImpact\left(p, c\right)\right) \wedge addBy\left(c,p\right) \wedge p \in s \wedge active\left(x,p\right)\}| + 1}\end{equation}

Assuming that all concerns and properties are equally important, we can compare the trustworthiness of a component $x \in CO$ with that of a component $x' \in CO$ by comparing their ratios tw.
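As a quick illustration with assumed numbers: suppose that in state s a component x is active with exactly two properties $p_1, p_2 \in R\left(x\right)$, both true in s, where $p_1$ positively impacts three concerns that are satisfied in s, while $p_2$ addresses two concerns without positively impacting either of them. Then

\begin{equation*} tw\left(x,s\right) = \frac{3}{2+1} = 1. \end{equation*}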

Definition 5 For a CPS system $\mathcal{S} = \left(CO,A,F,R,\Gamma\right)$ , $x_1, x_2 \in CO$ , and state s of $\mathcal{S}$ ,

  • $x_1$ is more trustworthy than $x_2$ in s, denoted by $x_1 \succ_s x_2$ (or $x_2$ is less trustworthy than $x_1$ , denoted by $x_2 \prec_s x_1$ ), if

  • $tw\left(x_1,s\right) > tw\left(x_2,s\right)$ ; or

  • $tw\left(x_1,s\right) = tw\left(x_2,s\right) = 0$ and $impact\left(x_1,s\right) < impact\left(x_2,s\right)$ where $impact\left(x,s\right)= \Sigma_{p \in R\left(x\right)} | \{c | \left(s \not \models c \vee \neg positiveImpact\left (p, c \right)\right) \wedge addBy\left (c,p \right) \wedge p \in s \wedge active\left(x,p \right) \}|$ .

  • $x_1$ is as trustworthy as $x_2$ in s, denoted by $x_1 \sim_s x_2$ , if

  • $tw\left(x_1,s\right) = tw\left(x_2,s\right) > 0$ ; or

  • $tw\left(x_1,s\right) = tw\left(x_2,s\right) = 0$ and $impact\left(x_1,s\right) = impact\left(x_2,s\right)$ .

$x_1 \succeq_s x_2$ denotes that $x_1 \succ_s x_2$ or $x_1 \sim_s x_2$ . x is a most (least) trustworthy component of $\mathcal{S}$ in s if $x \succeq_s x'$ ( $x' \succeq_s x$ ) for every $x' \in CO$ .

Proposition 1 Let $\mathcal{S} = \left(CO,A,F,R,\Gamma\right)$ be a CPS system and s be a state in $\mathcal{S}$. The relation $\succeq_s$ over the components of $\mathcal{S}$ is reflexive, transitive, and total.

Proof. It is easy to see that for any pair of components $x_1$ and $x_2$, either $x_1 \succ_s x_2$, $x_2 \succ_s x_1$, or $x_1 \sim_s x_2$. Furthermore, $x \sim_s x$ for every component x. It follows that $\succeq_s$ is reflexive, transitive, and total.

3.3.2 Non-compliance detection in CPS

The design of a CPS is often subject to competing constraints from various people or organizations with different focuses and types of expertise. This may result in sets of constraints that are unsatisfiable, that is, a set of concerns that can never be satisfied, giving rise to non-compliance. Example 1 shows that there exist situations in which competing concerns cannot be satisfied at the same time. In general, the problem is formulated as follows.

Definition 6 (Lack of Compliance) Given the CPS system $\mathcal{S} = \left(CO,A,F,R,\Gamma\right)$ , an integer n, a set of actions $SA \subseteq A$ , and a set of concerns SC, we say that $\mathcal{S}$ is

  1. weakly n-noncompliant wrt. $\left(SA,SC\right)$ if there exists a sequence $\alpha$ of at most n actions in SA and an initial state I, such that $\left(\mathcal{S},I\right) \not \models c \;\mathbf{after} \; \alpha$ for some concern $c \in SC$.

  2. strongly n-noncompliant wrt. $\left(SA,SC\right)$ if for every sequence $\alpha$ of at most n actions in SA and every initial state I, $\left(\mathcal{S},I\right) \not \models c \;\mathbf{after} \; \alpha$ for some concern $c \in SC$.

Given an integer k, weak k-noncompliance implies that there is a potential that some concern in the set SC of concerns might not be satisfied. Strong k-noncompliance indicates that there is always some concern that cannot be satisfied. Systems that are strongly k-noncompliant might need to be re-designed.

It is easy to see that, by Definition 4, checking whether a system is weakly k-noncompliant is equivalent to identifying a plan of length at most k that “makes some concern unsatisfied.” On the other hand, checking whether a system is strongly k-noncompliant is equivalent to verifying that no plan of length at most k “satisfies all concerns.” Since we assume that the specification language for CPS is propositional and planning for bounded plans is NP-complete, we can easily derive the following results:

Proposition 2 Given $\mathcal{S}$ , $\left(SA,SC\right)$ , and k, checking whether $\mathcal{S}$ is weakly k-noncompliant is NP-complete and checking whether $\mathcal{S}$ is strongly k-noncompliant is co-NP-complete.

Proof. This relies on the fact that checking whether a planning problem has a solution of length k is NP-complete (e.g. the Plan-Length problem in Ghallab et al. Reference Ghallab, Nau and Traverso2004).

3.3.3 Mitigation strategies

Let $\mathcal{S} = \left(CO,A,F,R,\Gamma\right)$ be a CPS system and s be a state of $\mathcal{S}$ . When some concerns are unsatisfied in s, we need a way to mitigate the issue. Since the execution of actions can change the satisfaction of concerns, the mitigation of an issue can be achieved by identifying a plan that suitably changes the state of properties related to the concerns. The mitigation problem in a CPS can be defined as follows:

Definition 7 (Mitigation Strategy) Let $\mathcal{S} = \left(CO,A,F,R,\Gamma\right)$ be a CPS domain and s a state in $\mathcal{S}$ . Let $\Sigma$ be a set of concerns in $\Omega$ . A mitigation strategy addressing $\Sigma$ is a plan $\alpha$ whose execution at the initial state s results in a state s’ such that for every $c \in \Sigma$ , c is satisfied in s’.

Definition 7 assumes that all plans are equal. This is often not the case in a CPS system. We illustrate this issue with the following example.

Example 4 Consider the LKAS system in Example 2. The initial state $I_{lkas}$ is given by: CAM and SAM are in basic_mode and secure_boot, BAT is in powerful_mode, and all properties in $I_{lkas}$ are observed to be True. The energy consumption constraints of BAT are encoded in Listing 2. Figure 4 shows a fragment of the CPS theory that is related to the problem described in this example.

Fig. 4. Current configuration of $\Delta_{lkas}$ related to Integrity concern after cyber-attack.

A cyber-attack occurs and the controller module is compromised, causing basic_mode to become False while advanced_mode remains True. Given this information, we need a mitigation strategy for the set $\Sigma=\{Integrity\}$ . The mitigation strategies (of length at most 2) can be generated as follows:

  • $\alpha_1= [\mathtt{\small tOn(basic\_mode)}]$

  • $\alpha_2= [\mathtt{\small switM(cam,advanced\_mode)} , \mathtt{\small switM(sam,advanced\_mode)}]$

  • $\alpha_3= [\mathtt{\small switM(sam,advanced\_mode)} , \mathtt{\small switM(cam,advanced\_mode)}]$

  • $\alpha_4= [\mathtt{\small switM(sam,advanced\_mode)} , \mathtt{\small tOn(basic\_mode)}]$

  • $\alpha_5= [\mathtt{\small switM(cam,advanced\_mode)} , \mathtt{\small tOn(basic\_mode)}]$

As shown in the example, it is desirable to identify the best mitigation strategy. In this paper, we propose two alternatives. The first alternative relies on a notion called likelihood of satisfaction of concerns and the second alternative considers the uncertainty of actions.

Likelihood of Satisfaction (LoS) of Concerns We introduce a notion called likelihood of satisfaction (LoS) of a concern and use it to distinguish among mitigation strategies. Our notion relies on the positive impacts of properties on concerns within the system (Subsection 2.4). For example, property secure_boot positively impacts Integrity in Example 2 (denoted by positiveImpact(secure_boot,integrity)). For a concern c, we denote with $rel^{+}\left(c\right)$ the set of all properties that positively impact c. Furthermore, $rel^{+}_{sat}\left(c,s\right)$ is the set of properties in $rel^{+}\left(c\right)$ that hold in state s. The ratio between the sizes of these two sets characterizes the degree of positive impact on concern c in state s as follows:

(7) \begin{equation} deg^{+}\left(c,s\right) = \left \{ \begin{array}{rl} \dfrac{| rel^{+}_{sat}\left(c,s\right) |}{| rel^{+}\left(c\right) |} & \text{if } rel^{+}\left(c\right) \neq \emptyset \\ 1 & \text{otherwise} \end{array} \right.\end{equation}

We note that $rel^{+}_{sat}$ and tw might appear similar, but they differ in the following way: $rel^{+}_{sat}$ is concerned with the relationship between properties and concerns, while tw focuses on the relationship between components and concerns.

We define the likelihood of satisfaction of a concern as follows.

Definition 8 (Likelihood of Concern Satisfaction)

Given a CPS system $\mathcal{S}$ , a state s in $\mathcal{S}$ , and a concern c, the likelihood of the satisfaction (LoS) of c in s, denoted by $\varphi_{LoS}\left(c,s\right)$ , is defined by:

(8) \begin{equation} \varphi_{LoS}\left(c,s\right) = \left \{ \begin{array}{ll} deg^{+}\left(c,s\right)*\Pi_{x \in sub\left(c\right)} \varphi_{LoS}\left(x,s\right) & \text{ if } sub\left(c\right) \neq \emptyset \\ deg^{+}\left(c,s\right) & \text{ if } sub\left(c\right) = \emptyset \end{array} \right. \end{equation}

where $sub\left(c\right)$ is the set of subconcerns of c.
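For instance (a hypothetical illustration of equation (8)), if a concern c has $deg^{+}\left(c,s\right) = 0.8$ and two subconcerns with LoS values 0.5 and 1 in s, then $\varphi_{LoS}\left(c,s\right) = 0.8 * 0.5 * 1 = 0.4$ .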

Having defined the LoS of different concerns, we can now use this notion to compare mitigation strategies. It is worth mentioning that CPSF defines nine aspects, that is, top-level concerns (e.g. Trustworthiness, Functionality, Timing). Let $TC_\Omega$ be the set of top-level concerns in the CPS ontology. We discuss two possibilities:

  • Weighted LoS: Each top-level concern $c \in TC_\Omega$ is associated with a weight $W_c$ (e.g. $W_{functionality}$ for Functionality, $W_{trustworthy}$ for Trustworthiness, etc.). The weights represent the importance of the top-level concerns in the CPS. They can be used to compute the weighted LoS of a system $\mathcal{S}$ in state s:

    (9) \begin{equation} w\left(\mathcal{S},s\right) = \Sigma_{c \in TC_\Omega} \varphi_{LoS}\left(c,s\right)*W_{c} \end{equation}
    This weighted LoS can be used to define a preference relation between mitigation strategies: $\beta \preceq \alpha$ ( $\alpha$ is at least as good as $\beta$ ) iff $\max_{s' \in \Phi_\mathcal{S}(\alpha,s)} w\left(\mathcal{S},s'\right) \ge \max_{s' \in \Phi_\mathcal{S}(\beta,s)} w\left(\mathcal{S},s'\right)$ .
  • Specified Preferences LoS: An alternative to the weighted LoS of a system is to allow the users to specify a partial ordering over the set $TC_\Omega$ , which is then used to define a preference relation among mitigation strategies via well-known preference aggregation strategies (e.g. lexicographic ordering). For example, if ${\tt \small Functionality > Business}$ then a mitigation strategy $\alpha$ is at least as good as a mitigation strategy $\beta$ , written as $\beta \preceq \alpha$ , iff $\max_{s' \in \Phi_\mathcal{S}(\alpha,s)} \varphi_{LoS}\left({\tt \small Functionality},s'\right) \ge \max_{s' \in \Phi_\mathcal{S}(\beta,s)} \varphi_{LoS}\left({\tt \small Functionality},s'\right)$ , with ties broken by comparing the LoS of ${\tt \small Business}$ .

It is easy to see that the above preference relation $\preceq$ is transitive, reflexive, and total; hence, if some strategies exist then most preferred strategies can be computed.

Example 5 (Continuing Example 4) All five mitigation strategies ( $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ and $\alpha_5$ ) generated in Example 4 can be used to address the issue raised by the cyber-attack. Specifically, the fragment of the final state ( $G_{\alpha_i}$ ) relevant to the Integrity concern of each plan ( $\alpha_i$ ) is given below:

  • $G_{\alpha_1}$ is either $G^1_{\alpha_1}$ = { ${\tt \small CAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small CAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small SAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small SAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small BAT} \mapsto {\tt \small powerful\_mode}$ } or $G^2_{\alpha_1}$ = { ${\tt \small CAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small CAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small SAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small SAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small BAT} \mapsto {\tt \small normal\_mode}$ }.

  • $G_{\alpha_2}$ and $G_{\alpha_3}$ : { ${\tt \small CAM} \mapsto {\tt \small advanced\_mode}$ , ${\tt \small CAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small SAM} \mapsto {\tt \small advanced\_mode}$ , ${\tt \small SAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small BAT} \mapsto {\tt \small saving\_mode}$ }

  • $G_{\alpha_4}$ is { ${\tt \small CAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small CAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small SAM} \mapsto {\tt \small advanced\_mode}$ , ${\tt \small SAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small BAT} \mapsto {\tt \small normal\_mode}$ }

  • $G_{\alpha_5}$ is { ${\tt \small CAM} \mapsto {\tt \small advanced\_mode}$ , ${\tt \small CAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small SAM} \mapsto {\tt \small basic\_mode}$ , ${\tt \small SAM} \mapsto {\tt \small secure\_boot}$ , ${\tt \small BAT} \mapsto {\tt \small normal\_mode}$ }

In each considered state, the statement $X \mapsto P$ denotes that component X is working with property P. For example, ${\tt \small BAT} \mapsto {\tt \small saving\_mode}$ says that the battery is working in saving mode.

Considering the five final configurations of different mitigation strategies in the example above, we have:

  • $deg^{+}\left(Integrity,G^1_{\alpha_1}\right) = 0.6, \quad \varphi_{LoS}\left(Integrity,G^1_{\alpha_1}\right) = 0.6$ ;

  • $deg^{+}\left(Integrity,G^2_{\alpha_1}\right) = 0.4, \quad \varphi_{LoS}\left(Integrity,G^2_{\alpha_1}\right) = 0.4$ ;

  • $deg^{+}\left(Integrity,G_{\alpha_2}\right) = 0.8, \quad \varphi_{LoS}\left(Integrity,G_{\alpha_2}\right) = 0.8$ ;

  • $deg^{+}\left(Integrity,G_{\alpha_3}\right) = 0.8, \quad \varphi_{LoS}\left(Integrity,G_{\alpha_3}\right) = 0.8$ ;

  • $deg^{+}\left(Integrity,G_{\alpha_4}\right) = 0.6, \quad \varphi_{LoS}\left(Integrity,G_{\alpha_4}\right) = 0.6$ ; and

  • $deg^{+}\left(Integrity,G_{\alpha_5}\right) = 0.6, \quad \varphi_{LoS}\left(Integrity,G_{\alpha_5}\right) = 0.6$ .

We also have $deg^{+}\left(availability,\_\right)$ = 1, $deg^{+}\left(security,\_\right)$ = 1, $deg^{+}\left(trustworthiness,\_\right)$ = 1, etc. In addition, the LoS values of the Trustworthiness aspect in the five final configurations are as follows:

  • $\varphi_{LoS}\left(Trustworthiness,G^1_{\alpha_1}\right) = 0.0497$ ,

  • $\varphi_{LoS}\left(Trustworthiness,G^2_{\alpha_1}\right) = 0.0331$ ,

  • $\varphi_{LoS}\left(Trustworthiness,G_{\alpha_2}\right) = 0.0662$ ,

  • $\varphi_{LoS}\left(Trustworthiness,G_{\alpha_3}\right) = 0.0662$ ,

  • $\varphi_{LoS}\left(Trustworthiness,G_{\alpha_4}\right) = 0.0497$ , and

  • $\varphi_{LoS}\left(Trustworthiness,G_{\alpha_5}\right) = 0.0497$ .

Figure 5 shows the Trustworthiness tree for the final configuration of mitigation strategies $\alpha_2$ and $\alpha_3$ ( $G_{\alpha_2}$ and $G_{\alpha_3}$ ), where the LoS value of each concern is displayed as a number at its top-left. Among the five possible strategies, $\alpha_2$ and $\alpha_3$ are the best with respect to the Trustworthiness aspect: the LoS of the Trustworthiness aspect in their final state ( $G_{\alpha_2}$ and $G_{\alpha_3}$ ) is maximal. In this figure, the LoS of Trustworthiness (the root concern) is 0.0662 (llh_sat(trustworthiness)=0.0662). By applying the same methodology to the remaining aspects (i.e. business, functional, timing, etc.), we can calculate LoS values for all nine aspects in the CPS Ontology.

Fig. 5. Trustworthiness concern tree with LoS of concerns computation.

Mitigation Strategy with The Best Chance to Succeed Preferred mitigation strategies computed using LoS of concern satisfaction assume that actions always succeed. In practice, actions might fail. In this case, it is preferable to identify strategies with the best chance of success. Assume that each action a is associated with a set of statements of the form:

(10) \begin{equation} \quad {a \quad {\mathbf{success\_with}}{} \quad v \quad \mathbf{if} \quad X}\end{equation}

where $v \in \left[0,1\right]$ and X is a consistent set of literals in $\mathcal{S}$ . This statement says that if each $l \in X$ is true in a state s and a is executable in s, then v is the probability that a’s execution in s succeeds. We assume that if a occurs in two statements “ $a \:\: {\mathbf{success\_with}}{} \:\: v_1 \:\: \mathbf{if} \:\: X_1$ ” and “ $a \:\: {\mathbf{success\_with}}{} \:\: v_2 \:\: \mathbf{if} \:\: X_2$ ” with $X_1 \ne X_2$ , then $v_1 = v_2$ or there exists $p \in F$ such that $\{p, \neg p\} \subseteq X_1 \cup X_2$ . Furthermore, for a state s in which no statement associated with some action a is applicable, we assume that a succeeds with probability 1 in s if it is executable in s. It is easy to see that this set of statements defines a mapping $pr: A \times States \rightarrow \left[0,1\right]$ , where States denotes the set of all states of $\mathcal{S}$ and $pr\left(a,s\right)$ represents the probability that the execution of a in s succeeds. Thus, the execution of a sequence of actions (or a strategy) $\alpha = \left[a_0,\ldots,a_{n-1}\right]$ in a state s succeeds with probability $\Pi_{i=0}^{n-1} pr\left(a_i,s_i\right)$ , where $s_0=s$ and, for $i>0$ , $s_i$ is the result of the execution of $a_{i-1}$ in $s_{i-1}$ . This can be used to define a preference relation between strategies similar to the use of LoS of concern satisfaction, that is, we prefer strategies whose probability of success is maximal. We omit the formal definition here for brevity.

It is worth mentioning that the specification by statements of the form (10) is at the action level: it is assumed that if action a succeeds with probability v, then all of its potential effects are achieved with probability v. In some applications, it might be more appropriate to consider a finer-grained probabilistic specification of effects, such as: if action a succeeds then, with probability $p_i$ , $e_i$ will be true, for $i=1,\ldots,k$ . To handle such applications, a probabilistic action language such as the one proposed in Baral et al. (Reference Baral, Tran and Tuan2002) or a specification based on Markov decision processes could be used. We leave the discussion of such applications for future work.

4 An ASP-based implementation for reasoning tasks in CPS theories

This section develops an ASP encoding given a CPS theory, building on the work on planning in ASP and on formalizing CPS (e.g. Balduccini et al. Reference Balduccini, Griffor, Huth, Vishik, Burns and Wollman2018; Gelfond and Lifschitz Reference Gelfond and Lifschitz1993). The code is available at https://github.com/thanhnh-infinity/Research_CPS. We start with the encoding of the theory (Subsection 4.1). Afterwards, we develop, for each reasoning task, an ASP module (Subsections 4.2–4.7) which, when added to the encoding of the domain, will compute the answers for the task.

Throughout this section, we assume a CPS theory $\left(\mathcal{S},I\right)$ where $\mathcal{S}=\left(CO,A,F,R,\Gamma\right)$ . The encoding of $\left(\mathcal{S},I\right)$ in ASP will be denoted by $\Pi\left(\mathcal{S}\right)^n$ , where n is a non-negative integer representing the horizon of the system that we are interested in. We note that the encoding of the CPS ontology (Subsections 2.1 and 2.4), $\Pi(\Omega)$ , will be automatically added to any program developed in this section. For this reason, whenever we write $\Pi\left(\mathcal{S}\right)^n$ we mean $\Pi\left(\mathcal{S}\right)^n \cup \Pi(\Omega)$ .

4.1 ASP encoding of a CPS theory

The encoding of a CPS theory contains two parts, one encodes the domain and another the initial state. We first discuss the encoding of the domain.

4.1.1 Encoding of the domain $\mathcal{S}$

$\Pi\left(\mathcal{S}\right)^n$ contains the following rules. Footnote 10

  • The set of rules declaring the time steps: for each $0 \le t \le n$ , an atom $step\left(t\right)$ , that is, the rule $step\left(t\right) \leftarrow $ .

  • The set of rules encoding the components: for each $co \in CO$ , an atom $comp\left(co\right)$ .

  • The set of rules encoding actions: for each $a \in A$ , an atom $action\left(a\right)$ .

  • The set of rules encoding fluents: for each $f \in F$ , an atom $fluent\left(f\right)$ .

  • The set of rules encoding relations: for each $co \in CO$ and $p \in R\left(co\right)$ , an atom $relation\left(co,p\right)$ .

  • The set of rules encoding functional dependencies: for each $\left(c, fu, \varphi\right) \in \Gamma$ , an atom $formula\left(id_\varphi\right)$ , an atom $addFun\left(c,fu, id_\varphi\right)$ , and a set of atoms encoding $\varphi$ , where $id_\varphi$ is a unique identifier associated to $\varphi$ and c is a concern.

  • The rules for reasoning about actions and changes (see, e.g. Son et al. Reference Son, Baral, Tran and McIlraith2006):

  • For each executability condition of the form (1) the rule: $\mathtt{ exec\left(a,T\right)\:{:}{-}\:step\left(T\right), \:h^*\left(p_1,T\right),\ldots,h^*\left(p_n,T\right).}$

  • For each dynamic causal law of the form (2): $\mathtt{ h^*\left(f,T \text{\text{+}} 1\right)\:{:}{-}\:step\left(T\right),\:occurs\left(a,T\right),\:}$ $\mathtt{ h^*\left(p_1,T\right),\ldots,h^*\left(p_n,T\right).}$

  • For each state constraint of the form (3): $\mathtt{ h^*\left(f,T\right)\:{:}{-}\:step\left(T\right),\:h^*\left(p_1,T\right),\ldots,h^*\left(p_n,T\right).}$

  • The rules encoding the inertia axiom: $\mathtt{ h\left(f,T\text{+}1\right)\:{:}{-}\:step\left(T\right), h\left(f,T\right), \:{not}\:\: \neg h\left(f,T\text{+}1\right).}$ $\mathtt{ \neg h\left(f,T\text{+}1\right)\:{:}{-}\:step\left(T\right), \neg h\left(f,T\right), \:{not}\:\: h\left(f,T\text{+}1\right).}$

where $\mathtt{ h^*\left(x,T\right)}$ stands for $\mathtt{ h\left(x,T\right)}$ if $x \in F$ is a fluent and $\mathtt{ \neg h\left(y,T\right)}$ if $x = \neg y$ and $y \in F$ .
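To make the translation concrete, the following is a minimal sketch of the encoding of one hypothetical LKAS law, “tOn(basic_mode) causes basic_mode if $\neg$ basic_mode”; the executability condition and the effect are assumed here purely for illustration:

#const n = 2.
step(0..n).
action(tOn(basic_mode)).    fluent(basic_mode).
% executability condition of the form (1)
exec(tOn(basic_mode),T) :- step(T), -h(basic_mode,T).
% dynamic causal law of the form (2)
h(basic_mode,T+1) :- step(T), occurs(tOn(basic_mode),T).
% inertia axioms
h(F,T+1)  :- fluent(F), step(T), h(F,T),  not -h(F,T+1).
-h(F,T+1) :- fluent(F), step(T), -h(F,T), not h(F,T+1).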

We illustrate the ASP encoding of a CPS by presenting the encoding of the LKAS theory in Example 2. Listing 3 shows the encoding of components, actions, and relations of $\mathcal{S}_{lkas}$ without the encoding of the initial state. Listing 4 shows the ASP encoding for $\Gamma_{lkas}$ (see Figure 3).

In Listing 3, Line 1 encodes the components; Lines 2–20 encode the relations; Lines 22–29 encode the actions tOn and tOff. The remaining lines of code encode other actions in similar fashion.

Each formula $\varphi$ related to a concern c is associated with a unique identifier $\varphi^I$ and is converted into a CNF $\varphi_1 \wedge \ldots \wedge \varphi_k$ ; each $\varphi_i$ is associated with a unique identifier $\varphi_i^I$ . The identifiers are declared using the predicate formula/1, and each is declared as a disjunction or a conjunction. Furthermore, set notation is used to encode disjunctions and conjunctions, that is, the predicate member(X,G) states that the formula X is a member of the disjunction or conjunction G. The predicate func(F,C) states that F is a functional decomposition of concern C.

In Listing 4, the first line uses a special shorthand syntax declaring four atoms formula(0), $\ldots$ , formula(3). The declaration and encoding of the Integrity concern and its related formulas, properties and decomposition functions are presented in Lines 3–13. Line 3 declares the concern Integrity. Lines 4–6 encode the conjunctive formula (conjunction(0)) that addresses the Integrity concern and its membership (e.g. the property secure_boot and the two decomposition functions of the Integrity concern). Line 7 specifies the two functional dependencies of the Integrity concern, operation_func and energy_func. Lines 8–13 specify how the formulae address the functional decompositions. Lines 8–10 declare the disjunctive formula operation_func and define the membership between properties and this formula (e.g. member(advanced_mode,operation_func) and member(basic_mode,operation_func) say that advanced_mode and basic_mode are elements of the disjunction operation_func). A similar encoding is applied to the disjunctive formula energy_func in Lines 11–13. Lines 15–30 encode information related to the authorization concern.
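For reference, the following is a reconstruction of the Integrity fragment of Listing 4 based on the description above; the atom tying conjunction(0) to the Integrity concern (here addBy) and the exact members of energy_func are assumptions:

formula(0..3).
concern(integrity).
conjunction(0).  addBy(integrity,0).          % link assumed
member(secure_boot,0).
member(operation_func,0).  member(energy_func,0).
func(operation_func,integrity).  func(energy_func,integrity).
disjunction(operation_func).
member(advanced_mode,operation_func).  member(basic_mode,operation_func).
disjunction(energy_func).
member(powerful_mode,energy_func).  member(normal_mode,energy_func).   % members assumed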

4.1.2 Encoding of the initial state

The encoding of the initial state I of a CPS theory $\left(\mathcal{S},I\right)$ , denoted by $\Pi\left(I\right)$ , contains, for each fluent f, $h\left(f, 0\right)$ if f is true in I or $\neg h\left(f, 0\right)$ if f is false in I. Listing 5 shows a snippet of the initial state of $\mathcal{S}_{lkas}$ , with Lines 1–7 specifying the true/false properties and Lines 9–17 the specific information about which components operate with which properties in LKAS in the initial state.

The following property (see Son et al. Reference Son, Baral, Tran and McIlraith2006) will be important for our discussion. It shows that $\Pi\left(\mathcal{S}\right)^n$ correctly computes the function $\Phi_\mathcal{S}$ .

Proposition 3 Let s be a state in $\mathcal{S}$ . Let $\Pi = \Pi\left(\mathcal{S}\right)^1 \cup \{h^*(f, 0) | f \in s\}$ . Assume that a is an action that is executable in s. Then, $s' \in \Phi_\mathcal{S}(a, s)$ iff there exists an answer set S of $\Pi \cup \{occurs\left(a,0\right)\}$ such that $\{h^*(f, 1) | f \in s'\} \subseteq S$ .

It is worth mentioning that $\Pi\left(\mathcal{S}\right)^n$ allows us to reason about effects of actions in the following sense: assume that $\left[a_0,\ldots,a_{n-1}\right]$ is a sequence of actions, then $\Pi\left(\mathcal{S}\right)^n \cup \{occurs\left(a_i, i\right) | i=0, \ldots, n-1\}$ has an answer set S if and only if (i) $a_0$ is executable in the state I; (ii) for each $i > 0$ , $a_{i}$ is executable after the execution of the sequence $\left[a_0,\ldots,a_{i-1}\right]$ ; (iii) for each i, the set $\{f | f\in F, h\left(f,i\right) \in S\} \cup \{\neg f | f\in F, \neg h\left(f,i\right) \in S\}$ is a state of $\mathcal{S}$ .
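For instance, to reason about the two-step sequence from Example 4, one can add the following occurrence facts to $\Pi\left(\mathcal{S}_{lkas}\right)^2$ ; an answer set exists iff the sequence is executable from the encoded initial state:

occurs(switM(sam,advanced_mode),0).
occurs(switM(cam,advanced_mode),1).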

4.2 Computing satisfaction of concerns

We will next present a set of ASP rules for reasoning about the satisfaction of concerns as specified in Definitions 3 and 4. Since a concern is satisfied if all of its functional decompositions and properties are satisfied, we define rules for computing the predicate $\mathtt{ h\left(sat\left(C\right),T\right)}$ , which states that concern C is satisfied at step $\mathtt{T}$ . The rules are given in Listing 6.
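The following is a reconstruction of Listing 6, consistent with the line-by-line description below; the names neg/1 and subCo/2 (for sub-concerns), and the use of addBy(C,F) to link a concern to the formula addressing it, are assumptions:

formula(neg(F)) :- formula(F).                                   % Line 1
formula(neg(P)) :- prop(P).                                      % Line 2
h(neg(F),T)  :- 1{formula(F); prop(F)}, -h(F,T), step(T).        % Line 3
h(F,T)       :- disjunction(F), member(X,F), h(X,T), step(T).    % Line 4
-h(F,T)      :- disjunction(F), not h(F,T), step(T).             % Line 5
-h(F,T)      :- conjunction(F), member(X,F), -h(X,T), step(T).   % Line 6
h(F,T)       :- conjunction(F), not -h(F,T), step(T).            % Line 7
-h(sat(C),T) :- concern(C), addBy(C,F), not h(F,T), step(T).     % Line 8
-h(sat(C),T) :- concern(C), subCo(C,C1), -h(sat(C1),T), step(T). % Lines 9-10 (the listing splits the propagation over two rules)
h(sat(C),T)  :- concern(C), not -h(sat(C),T), step(T).           % Line 11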

The first two lines declare that the negation of a formula or a property is also a formula and thus can be a member of a disjunction or conjunction. The rule on Line 3 says that $\mathtt{h\left(\neg F,T\right)}$ is true if the negation of F is true; it uses the special syntax 1{formula(F);prop(F)}, which requires that F be a formula or a property. The rule on Line 4 states that $\mathtt{h\left(F,T\right)}$ is true if F is a disjunction and one of its disjuncts is true. The next rule (Line 5) states that $\mathtt{\neg h\left(F,T\right)}$ is true for a disjunction F if it cannot be proven that F is true. This rule applies the well-known negation-as-failure operator in establishing the truth value of $\mathtt{\neg h\left(F,T\right)}$ . Similarly, the next two rules establish the truth value of a conjunction F, that is, h(F,T) is true if none of its conjuncts is false. The remaining rules establish the truth value of $\mathtt{h\left(sat\left(C\right),T\right)}$ , the satisfaction of concern C at step T. Line 8 states that if the formula addressing concern C cannot be proven to be true then the concern is not satisfied. The rules on Lines 9–10 propagate the unsatisfaction of a concern from its subconcerns. Finally, a concern is satisfied if it cannot be proven to be unsatisfied (Line 11). We can prove the following proposition relating the implementation to Definition 3.

Proposition 4 (Concern Satisfaction) For a CPS theory $\Delta=\left(\mathcal{S},I\right)$ and a concern c, c is satisfied (or unsatisfied) in I if $h\left(sat\left(c\right),0\right)$ (or $\neg h\left(sat\left(c\right),0\right)$ ) belongs to every answer set of $\Pi\left(\Delta\right)$ , where $\Pi\left(\Delta\right)=\Pi\left(\mathcal{S}\right)^0 \cup \Pi\left(I\right) \cup \Pi_{sat}$ .

Proof. It is easy to see that, for any formula $\varphi$ over the fluents in $\mathcal{S}$ , given the encoding of formulas and the rules in Lines 1–7, $I \models \Lambda(c)$ iff $\mathtt{h}(\Lambda(c)^I,0)$ belongs to every answer set of $\Pi(\Delta)$ , where $\Lambda(c)^I$ is the identifier associated with the formula $\Lambda(c)$ . Lines 9–10 ensure that if c has a sub-concern that is not satisfied then c is not satisfied, and hence the rule on Line 11 cannot be applied. As such, h(sat(c),0) is in an answer set of $\Pi(\Delta)$ iff the formula $\Lambda(c)$ is true and all sub-concerns of c are satisfied in that answer set, iff c is satisfied in I.

Since we will be working with the satisfaction of concerns in the following sections, we will therefore need to include $\Pi_{sat}$ in $\Pi\left(\mathcal{S}\right)^n$ . From now on, whenever we refer to $\Pi\left(\mathcal{S}\right)^n$ , we mean $\Pi\left(\mathcal{S}\right)^n \cup \Pi\left(I\right) \cup \Pi_{sat}$ .

4.3 Computing most/least trustworthy components

Proposition 1 shows that $\succeq_s$ has minimal/maximal elements, that is, least/most trustworthy components of a system always exist. The program $\Pi_{mlt}\left(\mathcal{S}\right)$ for computing these components is listed below.
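The following is a reconstruction of $\Pi_{mlt}\left(\mathcal{S}\right)$ consistent with the description below; the encoding of “component X is working with property P at time T” (here, the atom h(active(X,P),T)) and the integer comparison realizing the ratio of equation (6) are assumptions:

r(X,P,C,T)    :- relation(X,P), h(active(X,P),T), addBy(C,P), step(T).  % Line 1
pos(X,P,C,T)  :- r(X,P,C,T), positiveImpact(P,C), h(sat(C),T).          % Line 2
nPos(X,P,C,T) :- r(X,P,C,T), not positiveImpact(P,C).                   % Line 3
nPos(X,P,C,T) :- r(X,P,C,T), -h(sat(C),T).                              % Line 4
pos(X,P,C,T)  :- subCo(C,C1), pos(X,P,C1,T).                            % Line 5
nPos(X,P,C,T) :- subCo(C,C1), nPos(X,P,C1,T).                           % Line 6
twcp(X,N,T) :- comp(X), step(T), N = #count{C,P : pos(X,P,C,T), prop(P), concern(C)}.
twcn(X,N,T) :- comp(X), step(T), N = #count{C,P : nPos(X,P,C,T), prop(P), concern(C)}.
% tw(x1,s) > tw(x2,s) iff N1/(M1+1) > N2/(M2+1) iff N1*(M2+1) > N2*(M1+1);
% the tie-break via impact(x,s) for tw = 0 in Definition 5 is omitted in this sketch
higher(X1,X2,T) :- twcp(X1,N1,T), twcn(X1,M1,T), twcp(X2,N2,T), twcn(X2,M2,T),
                   X1 != X2, N1*(M2+1) > N2*(M1+1).
hasHigher(X,T)  :- higher(X1,X,T).
hasLower(X,T)   :- higher(X,X2,T).
most(X,T)  :- comp(X), step(T), not hasHigher(X,T).
least(X,T) :- comp(X), step(T), not hasLower(X,T).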

In Listing 7, addBy(C,P) and positiveImpact(P,C) are defined in the program $\Pi(\Omega)$ (Subsection 2.4). addBy(C,P) states that property P addresses concern C. positiveImpact(P,C) states that the satisfaction of property P positively impacts the satisfaction of concern C. The predicate r(X,P,C,T) (Line 1) encodes the relationship between X, P and C at time T: component X is working with property P at time T, and P addresses concern C. The second rule (Line 2) defines the predicate pos(X,P,C,T), which encodes the positive-impact relationship between component X and concern C at step T through property P; it is true if concern C is satisfied and both positiveImpact(P,C) and r(X,P,C,T) hold. Lines 3–4 define nPos(X,P,C,T), which holds at time T if r(X,P,C,T) holds but either positiveImpact(P,C) is not defined in $\Omega$ or concern C is not satisfied. This element is used for the computation of the denominator of equation (6). The rest of the listing defines the relation higher between components, encoding $\succeq_T$ where T represents the state of the system at time T, and identifies the most and least trustworthy components. Lines 5–6 propagate the positive-impact relations (pos/4, nPos/4) of a concern from its subconcerns. $twcp\left(x,n,t\right)$ (resp. $twcn\left(x,n,t\right)$ ) encodes the number of concern–property pairs counted in the numerator (resp. denominator) of equation (6) for component x at step t. The atom $\#\mathtt{count\{C,P:pos\left(X,P,C,T\right),prop\left(P\right),concern\left(C\right)\}}$ is an aggregate atom in ASP and encodes the cardinality of the set of all pairs (C,P) such that P positively impacts C through X.

We can show that the following proposition holds.

Proposition 5 For a CPS theory $\Delta = \left(\mathcal{S},I\right)$ and an answer set S of program $\Pi\left(\mathcal{S}\right)^n \cup \Pi\left(I\right) \cup \Pi_{mlt}$ , if $most\left(x,t\right) \in S$ (resp. $least\left(x,t\right) \in S$ ) then x is a most (resp. least) trustworthy component in the state $s_t$ .

The proof follows immediately from the definitions of the predicates addBy and positiveImpact and the definition of aggregate functions in ASP. As such, to identify the most trustworthy component of $\mathcal{S}$ , we only need to compute an answer set S of $\Pi\left(\mathcal{S}\right)^n \cup \Pi\left(I\right) \cup \Pi_{mlt}$ and use Proposition 5.

Example 6 Consider the $\mathcal{S}_{lkas}$ domain.

  • Let us consider the initial configuration $I^{1}_{lkas}$ of the LKAS system where all properties are observed to be true. For $\Delta_{lkas} = \left(\mathcal{S}_{lkas},I^{1}_{lkas}\right)$ , we can easily see (from Figure 2) that the atoms $pos\left(cam,advanced\_mode,integrity,0\right)$ , $pos\left(cam,secure\_boot,cyber\_security,0\right)$ , etc. belong to every answer set of $\Pi\left(\Delta_{lkas}\right) =\Pi\left(\mathcal{S}_{lkas}\right)^n \cup \Pi\left(I^{1}_{lkas}\right) \cup \Pi_{mlt}^{lkas} $ . Similar atoms are present to record the number of concerns affected by different properties. Furthermore, $twcp\left(cam,28,0\right)$ , $twcn\left(cam,6, 0\right)$ , $twcp\left(sam,40,0\right)$ , $twcn\left(sam,0, 0\right)$ , $twcp\left(bat,6,0\right)$ and $twcn\left(bat,5,0\right)$ belong to every answer set of this program: SAM is the most trustworthy component and BAT is the least trustworthy component at step 0.

  • Now, let us consider the configuration $I^{2}_{lkas}$ of the LKAS system (Figure 2) where two properties are observed to be False: Firewall-Setup and Trusted-Auth-Device. For $\Delta_{lkas} = \left(\mathcal{S}_{lkas},I^{2}_{lkas}\right)$ , the computation of the program $\Pi\left(\mathcal{S}_{lkas}\right)^n \cup \Pi\left(I^{2}_{lkas}\right) \cup \Pi_{mlt}^{lkas}$ shows that $twcp\left(cam,22,0\right)$ , $twcn\left(cam,6, 0\right)$ , $twcp\left(sam,22,0\right)$ , $twcn\left(sam,12, 0\right)$ , $twcp\left(bat,0,0\right)$ and $twcn\left(bat,11,0\right)$ belong to every answer set. In this situation, CAM is the most trustworthy component and BAT is the least trustworthy component at step 0.

We conclude this part with a brief discussion on possible definitions of $\succeq$ . The proposed definition assumes everything being equal (e.g. all concerns and properties are equally important, the roles of all components in a CPS are equal, etc.). In practice, the ordering $\succeq$ might be qualitative and user-dependent; for example, a user might prefer confidentiality over integrity. $\succeq$ can be defined over a qualitative ordering and implemented in ASP in a similar fashion to how preferences have been implemented (e.g. Gelfond and Son Reference Gelfond and Son1998).

4.4 Computing mitigation strategies

The program $\Pi\left(\mathcal{S}\right)^{n} \cup \Pi_{sat}$ can be used for computing a mitigation strategy by adding the rules shown in Listing 8:
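The following sketch is consistent with the description of Listing 8 below (n is the horizon constant and c the concern to be mitigated):

1{occurs(A,T) : action(A)}1 :- step(T), T < n.
:- occurs(A,T), not exec(A,T).
:- not h(sat(c),n).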

The first rule containing the atom $\mathtt{1 \{occurs\left(A,T\right):action\left(A\right)\}1}$ – a choice atom – is intuitively used to generate the action occurrences and says that at any step T, exactly one action must occur. The second rule states that an action can only occur if it is executable. The last rule enforces that $h\left(sat\left(c\right),n\right)$ must be true in the last state, at step n. For a set of concerns $\Sigma$ , let $\Pi^{n}_{plan}\left[\Sigma\right]$ be the program obtained from $\Pi^{n}_{plan}$ by replacing its last rule with the set $\mathtt{\{{:}{-}\: not \: h\left(sat\left(c\right),n\right). \mid c \in \Sigma\}}$ . Based on the results in answer set planning, we can show:

Proposition 6 Let $\Delta = \left(\mathcal{S},I\right)$ be a CPS theory and $\Sigma$ be a set of concerns in $\Omega$ . Then, $\left[a_0,\ldots,a_{n-1}\right]$ is a mitigation strategy for $\Sigma$ iff $\Pi\left(\Delta\right) \cup \Pi^{n}_{plan}\left[\Sigma\right]$ has an answer set S such that $occurs\left(a_i,i\right) \in S$ for every $i=0,\ldots,n-1$ .

The proof of this proposition relies on the properties of $\Pi\left(\Delta\right)$ discussed in the previous section and on the set of constraints in $\Pi^{n}_{plan}\left[\Sigma\right]$ .

4.5 Non-compliance detection in CPS systems

The program $\Pi\left(\mathcal{S}\right)^{n} \cup \Pi_{sat}$ can be used in non-compliance detection by adding the rules shown in Listing 9:
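The following sketch is consistent with the description of Listing 9 below; the generation of an arbitrary initial state is written here as a choice rule, and the two variants producing conflict are an assumption:

1{occurs(A,T) : sa_action(A)}1 :- step(T), T < n.
:- occurs(A,T), not exec(A,T).
1{h(F,0); -h(F,0)}1 :- fluent(F).
conflict(T) :- sc_concern(C), -h(sat(C),T), step(T).
conflict(T) :- sc_concern(C), not h(sat(C),T), step(T).
:- not conflict(n).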

The first two rules are similar to the rules of the planning program, with the exception that the action selection is restricted to the actions in SA. The third rule generates an arbitrary initial state. Rules 4–5 state that if some concern in SC is not satisfied at step T then a conflict arises, and the constraint in the last rule requires that a conflict be created at step n.

We assume that actions in SA are specified by atoms of the form $sa\_action\left(a\right)$ and concerns in SC are specified by atoms of the form $sc\_concern\left(c\right)$ . It is easy to see that an answer set S of $\Pi\left(\mathcal{S}\right)^{n} \cup \Pi_{sat} \cup \Pi^{n}\left(SA,SC\right)$ represents a situation in which the system will eventually not satisfy some concern in SC. Specifically, let $\left[a_0,\ldots,a_t\right]$ be the sequence such that $occurs\left(a_i, i\right) \in S$ and, for $s>t$ , there exists no $occurs\left(a_s,s\right)\in S$ . If this sequence is executed in the initial state (the set $\{f \mid h\left(f,0\right) \in S, f \in F\} \cup \{\neg f \mid \neg h\left(f,0\right) \in S, f \in F\}$ ), then some concern in SC will not be satisfied after n steps. In other words, to check whether $\mathcal{S}$ is weakly n-noncompliant, we only need to check whether $\pi_n = \Pi\left(\mathcal{S}\right)^{n} \cup \Pi_{sat} \cup \Pi^{n}\left(SA,SC\right)$ has an answer set or not. The proof of this property relies on the definition of an answer set for a program with constraints: the constraint :- not conflict(n). must not be violated in any answer set, which in turn implies that conflict(n) must be true.

If $\mathcal{S}$ is weakly n-noncompliant, we can do one more check to see whether it is strongly n-noncompliant, as follows. Let $\pi_n'$ be the program obtained from $\pi_n$ by replacing “:- not conflict(n).” with “:- conflict(n).” We can show that if $\pi_n'$ has no answer set then, for every initial state of $\mathcal{S}$ , every action sequence is either not executable or generates conflict(n). Combined with the fact that $\mathcal{S}$ is weakly n-noncompliant, this implies that the domain is strongly n-noncompliant. Again, the proof of this property relies on the definition of answer sets of programs with constraints: the constraint :- conflict(n). requires that conflict(n) be false in any answer set, so the program having no answer set implies that every executable sequence of actions generates conflict(n).

4.6 Likelihood of concerns satisfaction and preferred mitigation strategies

In this subsection, we present an ASP program for computing the LoS of concerns and preferred mitigation strategies using LoS. Listing 10 shows the ASP encoding for computing the LoS of concerns. It defines the predicate llh_sat(C,N,T), which states that the likelihood of satisfaction of concern C at step T is N. It starts with the definition of the predicates nAllPosCon/3 and nActPosCon/3, representing $|rel^{+}\left(C\right)|$ and $|rel_{sat}^{+}\left(C,s\right)|$ at step T, that is, the number of all properties that positively impact concern C and the number of those that hold at step T, respectively. Recall that positiveImpact(P,C) is defined as in Subsection 4.3. Line 5 creates an ordering between the subconcerns of concern C for the computation of llh_sat(C,N,T). The LoS of a concern without subconcerns is computed on Line 8. The rules on Lines 9–12 compute the LoS of concerns in accordance with the ordering created by the rule on Line 5. llh_sat(C,N,T) is then computed using equation (8).
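For reference, a fragment of Listing 10 consistent with the description above might look as follows; note that, since the ASP solver works with integers, the listing must realize the division of equation (7) and the product of equation (8) with scaled integers or an external function, so only the two counts are sketched here:

nAllPosCon(C,N,T) :- concern(C), step(T),
    N = #count{P : positiveImpact(P,C), prop(P)}.
nActPosCon(C,N,T) :- concern(C), step(T),
    N = #count{P : positiveImpact(P,C), prop(P), h(P,T)}.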

It is easy to check that the program in Listing 10 correctly computes the values of $deg^{+}\left(c,s\right)$ and $\varphi_{LoS}\left(c,s\right)$ . Indeed, the program $\Pi\left(\Delta_{lkas}\right) = \Pi\left(\mathcal{S}_{lkas}\right)^n \cup \Pi\left(I_{lkas}\right) \cup \Pi^{c}_{lkas} \cup \Pi_{sat} \cup \Pi^{n}_{plan} \cup \Pi_{LoS}$ correctly computes the LoS values of the various concerns shown in Subsection 3.3.3 (Figure 5).

Having computed the LoS of concerns via $\varphi_{LoS}$ , identifying the best strategies according to the two approaches in Subsection 3.3.3 is simple. We only need to add rules that aggregate the LoS of the top-level concerns specified in the CPS with their corresponding weights or preferences. This is done as follows:

  • Weighted LoS: Listing 11 computes the weighted LoS of the final state. The rule is self-explanatory; a sketch is given after this list.

  • Specified Preferences LoS: ASP solvers provide a convenient way of computing preferences based on a lexicographic order among elements of a set. Assume that Trustworthiness is preferred to Business; then the two statements

    #maximize{ $V_1$ @k : llh_sat(trustworthiness, $V_1$ , n)}

    #maximize{ $V_2$ @k’: llh_sat(business, $V_2$ , n)}

where $k > k'$ and n is the length of the plan, will return answer sets in the lexicographic order, preferring the concern Trustworthiness over Business. With these statements, any specified preference LoS over the set of top-level concerns can be implemented easily.
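For the weighted alternative, a sketch of the rule in Listing 11 could look as follows, with hypothetical weight facts and LoS values assumed to be scaled integers:

weight(trustworthiness,5).  weight(functionality,3).   % example weights, assumed
w(N,T) :- step(T), N = #sum{V*W,C : llh_sat(C,V,T), weight(C,W)}.
#maximize{N : w(N,n)}.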

4.7 Computing mitigation strategy with the best chance to succeed

To compute strategies with the maximal probability of success, we only need to extend the program $\Pi^n_{plan}$ with the following rules (a consolidated sketch follows the list):

  • for each statement “ $a \quad {\mathbf{success\_with}}{} \quad v \quad \mathbf{if} \quad p_1,\ldots,p_n$ ,” the two rules: $\mathtt{ pr\left(a,v,T\right)\:\: {:}{-} \:\: h^*\left(p_1,T\right),\ldots,h^*\left(p_n,T\right).}$ $\mathtt{ dpr\left(a,T\right)\:\: {:}{-} \:\: h^*\left(p_1,T\right),\ldots,h^*\left(p_n,T\right).}$ These check for the satisfaction of the condition of a statement defining the probability of success at step T, and record that this probability is defined.

  • the rule: $\mathtt{ pr\left(A,1,T\right)\:\: {:}{-} \:\: exec\left(A,T\right), \:{not}\: \ dpr\left(A,T\right).}$ which says that by default, the probability of success of a at step T is 1.

  • computing the probability of the state at step T: $\mathtt{ prob\left(1,0\right).}$ $\mathtt{ prob\left(U*V,T\text{+}1\right)\:\: {:}{-} \:\: prob\left(U,T\right), occurs\left(A,T\right), pr\left(A,V,T\right).}$ where the first rule says that the probability of the state at the time 0 is 1; $prob\left(v,t\right)$ states that the probability of reaching the state at the step t is v and is computed using the second rule.
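Putting these rules together for a hypothetical instance where tOn(basic_mode) succeeds with probability 0.2 unconditionally, and with probabilities scaled by 100 since the ASP solver has no real arithmetic:

pr(tOn(basic_mode),20,T) :- exec(tOn(basic_mode),T).
dpr(tOn(basic_mode),T)   :- exec(tOn(basic_mode),T).
pr(A,100,T) :- action(A), exec(A,T), not dpr(A,T).   % default probability 1
prob(100,0).
% integer division truncates; a finer scale reduces rounding error
prob(U*V/100,T+1) :- prob(U,T), occurs(A,T), pr(A,V,T), step(T).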

Let $ \Pi^{n}_{bestPrS}$ be $\Pi^n_{plan}$ together with the above rules. We have that if $\left[a_0,\ldots,a_{n-1}\right]$ is a strategy and S is an answer set of $\Pi\left(\Delta\right) \cup \Pi^{n}_{bestPrS} \cup \{occurs\left(a_i,i\right) | i=0,\ldots,n-1\}$ , then $prob\left(\Pi_{i=0}^{n-1} pr\left(a_i,s_i\right),n\right) \in S$ . To compute the best strategy, we add the rule

$\mathtt{\small \#maximize \{V : prob\left(V,n\right)\}.}$

to the program $\Pi^{n}_{bestPrS}$ .

Example 7 Continuing Example 2, suppose a cyber-attack occurs and causes the property basic_mode to become False. As in Section 4.4, five mitigation strategies ( $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ and $\alpha_5$ ) are generated to restore the LKAS system. Assume that the probabilities of success of $\mathtt{\small tOn\left(basic\_mode\right)}$ , $\mathtt{\small switM\left(cam,advanced\_mode\right)}$ , and $\mathtt{\small switM\left(sam,advanced\_mode\right)}$ are 0.2, 0.6, and 0.7 in every state, respectively. In this case, the strategies $\alpha_2$ and $\alpha_3$ have the maximal probability of success.
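Indeed, under these assumptions the success probabilities of the strategies are: $\alpha_1$ : 0.2; $\alpha_2$ : $0.6 \times 0.7 = 0.42$ ; $\alpha_3$ : $0.7 \times 0.6 = 0.42$ ; $\alpha_4$ : $0.7 \times 0.2 = 0.14$ ; and $\alpha_5$ : $0.6 \times 0.2 = 0.12$ ; hence $\alpha_2$ and $\alpha_3$ maximize the chance of success.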

5 Towards a decision-support system for CPSF

As a demonstration of the potential use of our approach, in this section we give a brief overview of a decision-support system (version 0.1) that is being built for use by CPS designers, managers and operators. We also include preliminary considerations on performance aspects.

The decision-support system relies on an ASP-based implementation of the reasoning tasks in CPS theories (described in Section 4) with the different modules for answering the queries described in Section 3.3, and comprises a reasoning component and a visualization component. Figure 6 shows the reasoning component at work on computing the satisfaction of concerns in the LKAS domain example (described in Section 4.2). Figure 7 illustrates the reasoning component at work on the other modules (Sections 4.3–4.7) in different situations related to the LKAS domain. Notice how the user can ask the system to reason about the satisfaction of concerns, to produce mitigation plans, and to select the most preferred mitigation strategy.

Fig. 6. Computing satisfaction of concerns in reasoning component.

Fig. 7. Other reasoning modules in reasoning component.

The output of the reasoning component can then be fed to the visualization component, where advanced visualization techniques allow practitioners to get a birds-eye view of the CPS or dive into specific details. For instance, the sunburst visual from Figure 8 provides a view of the CPS from Figure 2 where the aspects are presented in the innermost ring. Moving outwards, the visualization shows concerns from increasingly deeper parts of the concern tree, and then properties. The left-hand side of the figure depicts the visualization in the case in which all concerns are satisfied (blue), while the right-hand side shows how the sunburst changes when certain concerns (highlighted in red) are not satisfied. Focusing on the right-hand side, the text box open over the visual reports that the Trustworthiness aspect is currently not satisfied, and that the levels at which this concern is not being met are the Privacy concern and the manageability property. The visual allows one to pinpoint where within the CPS framework issues have arisen which, when addressed, can restore a working state. We omit further details of the visualization component as it is not the focus of this paper.

Fig. 8. Visualization component.

To ensure flexibility and to allow for investigation on the scalability on larger CPS, the decision-support system is designed to support a variety of hybrid ontology-ASP reasoning engines. Currently, we consider four reasoning engines: the naïve engine is implemented by connecting, in a loosely-coupled manner, Footnote 11 the SPARQL reasoner Footnote 12 and the Clingo ASP solver. This engine issues a single SPARQL query to the ontology reasoner at the beginning of the computation, fetching all necessary data. The Clingo-Python engine is another loosely-coupled engine, leveraging Clingo’s ability to run Python code at the beginning of the computation. This engine issues multiple queries in correspondence to the occurrences of special “external predicates” in the ASP program, which in principle allows for a more focused selection of the content of the ontology. The DLVHex2 engine also uses a similar fragmentation of queries, but the underlying solver allows for the queries to be executed at run-time, which potentially results in more focused queries, executed only when strictly needed. Finally, the Hexlite engine leverages a similar approach, but was specifically designed as a smaller, more performant alternative to DLVHex2.

In this preliminary phase of our investigation on scalability, all reasoning engines have exhibited similar performance, as exemplified by Table 2. The table summarizes the results of question-answering experiments on the Lane Keeping/Assist System (LKAS) domain and on the Smart Elevator domain (Nguyen et al. Reference Nguyen, Son, Bundas, Balduccini, Garwood and Griffor2020a). The reasoning tasks considered are for answering queries discussed earlier, including:

  • ( $\mathbf{Q}_1$ ) Computing satisfaction of concerns.

  • ( $\mathbf{Q}_2$ ) Computing most/least trustworthy components.

  • ( $\mathbf{Q}_3$ ) Generating mitigation strategies.

  • ( $\mathbf{Q}_4$ ) Non-compliance detection in a CPS.

  • ( $\mathbf{Q}_5$ ) Selecting the best mitigation strategy by preferred mitigation strategies.

  • ( $\mathbf{Q}_6$ ) Computing the likelihood of concerns satisfaction.

Table 2. CPS domains querying, extracting and reasoning summary

In Table 2, the performance of the execution of each query ( $\mathbf{Q}_1$ – $\mathbf{Q}_6$ ) Footnote 13 is measured by the average processing time of the reasoning computations in our experimental CPS theories (LKAS and Smart Elevator) under different initial configurations. While the results show that the naïve engine is marginally better than the others, the differences are quite negligible, all within 10%.

It is conceivable that larger-scale experiments will eventually exhibit patterns similar to those found in other research on the scalability of hybrid systems (e.g. Balduccini and Lierler Reference Balduccini and Lierler2017). A thorough analysis will be the subject of a separate paper. In preliminary experiments, our CPS reasoning system was able to work with ontologies with more than 150K triples, 85 classes, 61K individuals, 30 object properties, 40 data properties, and 45 subclass relations within a minute.

6 Related work

Due to the difference in level of abstraction, most of the approaches from the literature can be viewed as orthogonal and complementary to ours. Thus, we focus our review of related work on what we consider to be the most relevant approaches.

The literature from the area of cybersecurity is often focused on the notion of graph-based attack models. Of particular relevance is the work on Attack-Countermeasure Trees (ACT) (Roy et al. Reference Roy, Kim and Trivedi2012). An ACT specifies how an attacker can achieve a specific goal on an IT system, even when mitigation or detection measures are in place. While ACTs are focused on the Cybersecurity concern, our approach applies generally to the broader Trustworthiness aspect of CPS and can in principle be extended to arbitrary aspects of CPS and their dependencies. The underlying formalization methodology also allows for capturing sophisticated temporal models and ramified effects of actions. In principle, our approach can be extended to allow for quantitative reasoning, for example, by leveraging recent work on Constraint ASP and probabilistic ASP (Balduccini and Lierler Reference Balduccini and Lierler2017; Baral et al. Reference Baral, Gelfond and Rushton2009; Ostrowski and Schaub Reference Ostrowski and Schaub2012). As we showed above, one may then generate answers to queries that are optimal with respect to some metrics. It is worth pointing out that the combination of physical (non-linear) and logical (discrete or Boolean) interactions in a CPS can be modeled as a mixed-integer, non-linear optimization problem (MINLP) extended with logical inference. MINLP approaches can support a limited form of logic, for example, through disjunctive programming (Balas Reference Balas1975), but these methods seem to struggle with supporting richer logics and inferences such as “what-if” explorations. For relevant work in this direction, we refer the reader to Mistr et al. (Reference Mistr, D’Iddio, Huth and Misener2017) and D’Iddio and Huth (Reference D’Iddio and Huth2017).

One major focus in the area of cybersecurity is the identification and mitigation of compromised devices. Behavior analysis and behavioral detection are some of the approaches used in this area. Uluagac et al. (Reference Uluagac, Aksu and Babun2019) proposes a system-level framework for the identification of compromised smart grid devices. The approach employs a combination of system call and function call tracing, which are paired with signal processing and statistical analysis. In a similar vein, Shoukry et al. (Reference Shoukry, Chong, Wakaiki, Nuzzo, Seshia, Hespanha and Tabuada2018) covers model-based techniques for addressing the problem of sensors that can be manipulated by an attacker. It is worth noting that, in our methodology, the presence or lack of compromised devices or components – and even the type of compromise – can be captured by means of properties, which in turn affect specific concerns. Techniques such as those described in the cited papers can then be used to determine whether such properties are satisfied or not.

Another related, complementary approach is presented in Aerts et al. (Reference Aerts, Reniers and Mousavi2017), where the authors tackle the problem of validation and verification of requirements. The paper proposes model-based testing as a solution to two key problems in the validation and verification of requirements: translating requirements into concrete test inputs and determining what the outcome of such tests says about the satisfaction of the requirements. From this point of view, the approach from Aerts et al. (Reference Aerts, Reniers and Mousavi2017) can be used to provide the information about the satisfaction of requirements that is necessary for the reasoning tasks covered in our investigation. Lee (Reference Lee2016) analyzes the role of models in the engineering of CPS and argues for classes of models that trade accuracy and detail in favor of simplicity and clarity of semantics. This idea is in line with the considerations that prompted the development of CPSF, and which are infused in our work through its legacy. In a related fashion, Roehm et al. (Reference Roehm, Oehlerking, Woehrle and Althoff2019) present a survey of conformance relations, where the term describes the link between the functional behavior of a model and the behavior of the implemented system (or of a more concretized model). Conformance relations are typically applied to the task of analyzing requirements and their link to the CPS being modeled, and in that sense Roehm et al. (Reference Roehm, Oehlerking, Woehrle and Althoff2019) is orthogonal to our work. On the other hand, the paper elicits the interesting issue of whether the characterization of CPS from CPSF might be viewed, itself, as a conformance relation. This is an open question, which we plan to address in the future.

Tepjit et al. (Reference Tepjit, Horvath and Rusak2019) present a rich survey of frameworks for implementing reasoning mechanisms in smart CPS. It should be noted that the focus of that paper is on the reasoning mechanisms that occur within a CPS in order to achieve “smartness” (Tepjit et al. Reference Tepjit, Horvath and Rusak2019), while our focus is on reasoning mechanisms that allow designers, maintainers and operators to reason about a CPS, where the CPS itself may or may not be “smart.” There is certainly some overlap between that paper and our work, but also important differences. In particular, the reasoning mechanisms we discussed here are not always applicable at the system level, which is the focus of Tepjit et al. (Reference Tepjit, Horvath and Rusak2019). For instance, our techniques could be used in real time by a CPS to determine whether its functional aspect is satisfied, but it may be unrealistic for a CPS to reason about its own trustworthiness. From another point of view, reasoning mechanisms discussed in Tepjit et al. (Reference Tepjit, Horvath and Rusak2019), such as planning and decision-making, can be viewed as tools for the satisfaction of properties. In this sense, a designer might want to use the results of that paper to ensure that the decision-making mechanisms implemented within a CPS satisfy certain properties that are responsible for ensuring the functional aspect of the CPS or even its trustworthiness.

The methodologies proposed in this paper build on a vast number of research results in ASP and related areas, such as answer set planning and reasoning about actions, and could be easily extended to deal with other aspects discussed in CPSF. They are well-positioned for real-world applications given the efficiency and scalability of ASP solvers (e.g. clingo; Gebser et al. Reference Gebser, Kaufmann, Neumann and Schaub2007), which can deal with millions of atoms, incomplete information, default reasoning, and features that allow ASP to interact with constraint solvers and external systems.

7 Conclusions and future work

The paper presents a precise definition of a CPS which, in conjunction with the CPS Ontology Framework by NIST, allows for representing and reasoning about various problems that are of interest in the study of CPS. Specifically, the paper defines several problems related to the satisfaction of concerns of a CPS theory, such as identifying non-compliant CPS systems, identifying the most/least trustworthy or vulnerable components, and computing mitigation strategies, most preferred mitigation strategies, or strategies with the best chance to succeed. For each problem, the paper presents a precise formulation and provides an ASP program that can automatically verify the relevant properties. To the best of our knowledge, all of these contributions are new to the research in Cyber-Physical Systems.

The current ASP implementation Footnote 14 (version 0.1) provides a first step towards developing a tool for CPS practitioners and designers. It automatically translates a system specification given as an ontological description (e.g. as seen in Figure 5) into ASP code and allows users to ask questions related to the aforementioned issues. It has been validated against small systems. One of our goals in the immediate future is to develop a user-friendly interface that allows users to design or model their real-world CPS and to identify potential issues within their systems, along with possible ways to address these issues before they become harmful.

Disclaimer

Official contribution of the National Institute of Standards and Technology; not subject to copyright in the United States. Certain commercial products are identified in order to adequately specify the procedure; this does not imply endorsement or recommendation by NIST, nor does it imply that such products are necessarily the best available for the purpose. Portions of this publication and research effort are made possible through the help and support of NIST via cooperative agreements 70NANB18H257 and 70NANB21H167.

Footnotes

*

Matthew Bundas has been supported by the GAANN grant #P200A180005. Tran Cao Son has been partially supported by the following NSF grants 1914635, 1757207, and 1812628.

1 For brevity, we use CPS to stand for both the plural and the singular cyber-physical system.

3 This is a pragmatic choice dictated by our intent to provide a formal account of the NIST CPS Framework. The debate on a universally accepted definition of trustworthiness is on-going and is beyond the scope of this paper.

6 For convenience, we often use first order logic literals under the assumption that they represent all suitable ground instantiations.

7 States are defined later.

9 Recall that our discussion focuses on trustworthiness but it can easily be adapted to other aspects defined in the CPS ontology.

10 We follow the convention in logic programming and use strings starting with lower/uppercase letter to denote constants/variables. In addition, this program can be generated automatically given that $\mathcal{S}$ is specified in the syntax given in Section 3.

11 By loosely-coupled, we mean that the components see each other as black-boxes and only exchange information, via simple interfaces, at the end of their respective computations. Compare this with a tightly-coupled architecture, where the components have richer interfaces for exchanging state information and controlling each other’s execution flow while their computations are still running.

13 We use a Macbook Pro 16 running macOS Big Sur Version 11.5.2, 32GB RAM DDR4, 2.6Ghz 6-Core Intel Core i9, and ASP solver Clingo.

References

Aerts, A., Reniers, M. A. and Mousavi, M. R. 2017. Model-based testing of cyber-physical systems. In Cyber-Physical Systems - Foundations, Principles and Applications, chapter 19. Intelligent Data-Centric Systems, 287–304.
Balas, E. 1975. Disjunctive programming: Cutting planes from logical conditions. In Nonlinear Programming 2. Elsevier, 279–312.
Balduccini, M., Griffor, E., Huth, M., Vishik, C., Burns, M. and Wollman, D. A. 2018. Ontology-based reasoning about the trustworthiness of cyber-physical systems. ArXiv, abs/1803.07438.
Balduccini, M. and Lierler, Y. 2017. Constraint answer set solver EZCSP and why integration schemas matter. Theory and Practice of Logic Programming (TPLP), 17, 4, 462–515.
Baral, C., Gelfond, M. and Rushton, N. 2009. Probabilistic reasoning with answer sets. Theory and Practice of Logic Programming, 9, 1, 57–144.
Baral, C., Tran, N. and Tuan, L.-C. 2002. Reasoning about actions in a probabilistic setting. In AAAI/IAAI, 507–512.
D'Iddio, A. C. and Huth, M. 2017. ManyOpt: An extensible tool for mixed, non-linear optimization through SMT solving. CoRR, abs/1702.01332.
Eiter, T. 2007. Answer set programming for the semantic web. In Logic Programming, 23rd International Conference, ICLP 2007, Porto, Portugal, September 8–13, 2007, Proceedings, V. Dahl and I. Niemelä, Eds., Lecture Notes in Computer Science, vol. 4670. Springer, 23–26.
Gebser, M., Kaufmann, B., Neumann, A. and Schaub, T. 2007. clasp: A conflict-driven answer set solver. In Proceedings of the Ninth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR'07), C. Baral, G. Brewka and J. Schlipf, Eds., Lecture Notes in Artificial Intelligence, vol. 4483. Springer-Verlag, 260–265.
Gelfond, M. and Lifschitz, V. 1990. Logic programs with classical negation. In Logic Programming: Proceedings of the Seventh International Conference, D. Warren and P. Szeredi, Eds., 579–597.
Gelfond, M. and Lifschitz, V. 1993. Representing actions and change by logic programs. Journal of Logic Programming, 17, 2-4, 301–323.
Gelfond, M. and Lifschitz, V. 1998. Action languages. Electronic Transactions on Artificial Intelligence, 3, 6.
Gelfond, M. and Son, T. C. 1998. Prioritized default theory. In Selected Papers from the Workshop on Logic Programming and Knowledge Representation 1997, LNAI, vol. 1471. Springer Verlag, 164–223.
Ghallab, M., Nau, D. and Traverso, P. 2004. Automated Planning: Theory and Practice. Morgan Kaufmann Publishers.
Griffor, E., Greer, C., Wollman, D. A. and Burns, M. J. 2017a. Framework for cyber-physical systems: Volume 1, overview.
Griffor, E., Greer, C., Wollman, D. A. and Burns, M. J. 2017b. Framework for cyber-physical systems: Volume 2, working group reports.
Lee, E. A. 2016. Fundamental limits of cyber-physical systems modeling. ACM Transactions on Cyber-Physical Systems, 1, 1, 1–26.
Marek, V. and Truszczyński, M. 1999. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: A 25-Year Perspective, 375–398.
Mistry, M., D'Iddio, A. C., Huth, M. and Misener, R. 2017. Satisfiability modulo theories for process systems engineering. Eprints for the optimization community.
Moschopoulos, J. 2001. Ship control technology; a US Navy perspective. IFAC Proceedings Volumes, 34, 7, 381–388.
Nguyen, T., Son, T. C., Bundas, M., Balduccini, M., Garwood, K. C. and Griffor, E. 2020a. Reasoning about trustworthiness in cyber-physical systems using ontology-based representation and ASP. In PRIMA.
Nguyen, T. H., Pontelli, E. and Son, T. C. 2020b. On repairing web services workflows. In Practical Aspects of Declarative Languages, E. Komendantskaya and Y. A. Liu, Eds. Springer International Publishing, Cham, 37–53.
Nguyen, T. H., Pontelli, E. and Son, T. C. 2018a. Phylotastic: An experiment in creating, manipulating, and evolving phylogenetic biology workflows using logic programming. Theory and Practice of Logic Programming, 18, 3-4, 656–672.
Nguyen, T. H., Son, T. C. and Pontelli, E. 2018b. Automatic web services composition for phylotastic. In PADL, Los Angeles, CA, USA, January 8–9, 2018, Proceedings, 186–202.
Niemelä, I. 1999. Logic programming with stable model semantics as a constraint programming paradigm. Annals of Mathematics and Artificial Intelligence, 25, 3-4, 241–273.
Niemelä, I., Simons, P. and Soininen, T. 1999. Stable model semantics for weight constraint rules. In Proceedings of the 5th International Conference on Logic Programming and Nonmonotonic Reasoning, 315–332.
Ostrowski, M. and Schaub, T. 2012. ASP modulo CSP: The clingcon system. Theory and Practice of Logic Programming (TPLP), 12, 4-5, 485–503.
Roehm, H., Oehlerking, J., Woehrle, M. and Althoff, M. 2019. Model conformance for cyber-physical systems: A survey. ACM Transactions on Cyber-Physical Systems, 3, 3, 1–26.
Roy, A., Kim, D. S. and Trivedi, K. S. 2012. Attack countermeasure trees (ACT): Towards unifying the constructs of attack and defense trees. Security and Communication Networks, 5, 8, 929–943.
Shoukry, Y., Chong, M., Wakaiki, M., Nuzzo, P., Seshia, S. A., Hespanha, J. P. and Tabuada, P. 2018. SMT-based observer design for cyber-physical systems under sensor attacks. ACM Transactions on Cyber-Physical Systems, 2, 1, 1–27.
Son, T., Baral, C., Tran, N. and McIlraith, S. 2006. Domain-dependent knowledge in answer set planning. ACM Transactions on Computational Logic, 7, 4, 613–657.
Tepjit, S., Horvath, I. and Rusak, Z. 2019. The state of framework development for implementing reasoning mechanisms in smart cyber-physical systems: A literature review. Journal of Computational Design and Engineering, 6, 527–541.
Uluagac, C. S., Aksu, H. and Babun, L. 2019. A system-level behavioral detection framework for compromised CPS devices: Smart-grid case. ACM Transactions on Cyber-Physical Systems, 4, 2.
Wollman, D. A., Weiss, M. A., Li-Baboud, Y.-S., Griffor, E. and Burns, M. J. 2017. Framework for cyber-physical systems: Volume 3, timing annex.
Figures and Tables

Fig. 1. NIST CPS ontology.
Fig. 2. CPS Ontology and LKAS domain.
Table 1. Main components of the CPS ontology.
Fig. 3. Integrity and authorization concerns with their functionalities and properties.
Fig. 4. Current configuration of $\Delta_{lkas}$ related to Integrity concern after cyber-attack.
Fig. 5. Trustworthiness concern tree with LoS of concerns computation.
Fig. 6. Computing satisfaction of concerns in reasoning component.
Fig. 7. Other reasoning modules in reasoning component.
Fig. 8. Visualization component.
Table 2. CPS domains querying, extracting and reasoning summary.