Security Risk Assessment of Critical Infrastructure Systems: A Comparative Study

Abstract

Recent cyberattacks on critical infrastructure systems, coupled with the technology-induced complexity of these systems of systems, have necessitated a review of existing methods for assessing the security risk exposure of critical systems. The question is: do existing security risk assessment methods adequately address the threats faced by modern critical infrastructure systems? Having examined six existing assessment frameworks, we argue that the complexities associated with modern critical infrastructure systems make existing methods insufficient for assessing systems' security risk exposure. From a systems dynamics perspective, this paper proposes a dynamic modelling approach as an alternative.

1. INTRODUCTION

Economies all over the world depend on the efficient functioning of critical infrastructure systems for survival. Critical infrastructure resources include systems such as energy supply, power grids, water and sewerage, gas pipelines and much more. The criticality of these systems supports the argument for their special protection against any form of attack. In exploring the cybersecurity space of these critical systems, multiple questions come to mind: how effective are existing security risk assessment methods in analysing the security risk exposure of modern critical infrastructure systems, and how do we factor the interdependence and complexity of critical infrastructure systems into the risk assessment process? We argue that the recent cyberattacks against critical infrastructure systems have necessitated a review by both researchers and practitioners of the existing security assessment processes for these systems. In addition to the constant threats from cyber adversaries, modern critical infrastructure systems are also becoming increasingly complex due to the integration of advanced technologies and cumulative interconnectivity. These factors have necessitated a review of how we examine the security risk dynamics of these critical systems.

According to Paquette et al. [1], the word 'risk' is derived from the Italian word risicare, which translates into English as 'to dare' [1]. NIST SP 800-30 defines risk as the possible impact or result of an event on assets (and/or resources), and the corresponding consequences that occur [2]. Paquette et al. [1], for their part, argue that risk should not be defined or classified by the size of the risk but by the balance of expected and unexpected consequences, known as 'value at risk' [1]. This is considered the statistical measure of risk, defined as the consequence of a loss multiplied by the chance of occurrence or confidence level. The challenge for many institutions in the context of risk assessment arises when the risk they face cannot be assessed due to a lack of clarity in estimation. In this case, risks could be measured in terms of business impact (based on loss and the cost to recover), which is a factor of the value of assets and the cost of protecting them. The challenge is how to accurately value systems like ICS-SCADA, the operational technologies which support the functions of critical infrastructure systems. Faced with this dilemma, institutions often become susceptible to engaging in high-risk activities yielding short-term benefits, due to misunderstanding or ignorance of the exposed risks relative to the expected benefits.
This paper considers risk as the combination of what can happen, the probability of it happening and the consequence (impact) if it does happen. This proposition is supported by Kaplan and Garrick's model [3]. According to their model, quantitative risk assessment (QRA) is based on 'the set of triplets': What can happen? How likely is that to happen? What is the consequence should that happen? This leads to Kaplan's first risk function (the probability of an unwanted event and the severity of the consequences of such an event) [3]:

$R = \{\langle T_i, L_i, I_i \rangle\}$   (1)

where $T_i$ denotes the ith risk scenario, $L_i$ denotes the likelihood of that scenario and $I_i$ denotes the resulting impact. This scenario is pivotal to the definition of risk proposed in this paper. Extending the discussion, Kaplan defines risk as the possibility of a system moving from one state to another as a function of time 't' and completeness 'c' [3], giving the new state of risk R as

$R = \{\langle T_i, L_i, I_i \rangle\}_c$   (2)

Kaplan's model is modified to incorporate the likelihood of an event happening and its potential impact. Missing here, however, is the underlying entity upon which impact can be assessed. This is termed the asset 'A'. Moreover, a threat by itself is inconsequential in the absence of underlying vulnerabilities. Threat 'T' is therefore introduced as a function of the vulnerabilities. This modifies Equation (2) above, leading to Equation (3):

$R = \{\langle V(T_i), L_i, A(I_i) \rangle\}_c$   (3)

The introduction of asset 'A' and vulnerability 'V' in (3) makes it even more useful to the overall discussion of system risk. From Equation (3), a new risk model is defined as a function of the probability (P) of threat (T) events exploiting the vulnerability (V) and its severity, measured as its outcome, i.e. the impact (I) on an asset (A). The risk assessment function is therefore defined in Equation (4), with the following vectors: asset (A), threat (T), vulnerability (V), likelihood (L) and impact (I):

$R = \{\langle V(T_i), L_i, A(I_i) \rangle_t\}_c$   (4)

From Equation (4), we deduce the key elements of a new risk assessment model (see Fig. 7) and propose dynamic modelling for the assessment of security risks of critical infrastructure systems.

The rest of the paper is structured as follows: Section 2 looks at the state of the art of the security risk of critical infrastructure systems. An overview of six existing risk assessment models is provided in Section 3. In Section 4, the proposed dynamic assessment modelling is introduced with a review of its key constructs. Simulations run on the model are presented and discussed in Section 5. We conclude the paper in Section 6.

2. RELATED STUDIES

2.1. Critical infrastructure systems

Critical infrastructure is defined broadly as large-scale sociotechnical systems which provide services to society and are essential for its proper functioning [4]. Undoubtedly, critical infrastructure plays a very significant role in the context of public services and societal living. There are many accounts of the classification of critical infrastructure. The USA President's Commission on Critical Infrastructure Protection (PCCIP) [5], as part of the national security strategy, has been very influential in this area. The PCCIP report proposes eight categories of critical infrastructure systems.
They include Information and Communications, Electrical Power Systems, Gas and Oil Transportation and Storage, Banking and Finance, Transportation, Water Supply Systems, Emergency Services and Government Services. The interest of this paper is in downstream energy sector infrastructures and supporting technologies. The downstream energy processes include fabrication, refining, processing and purifying of raw gas, as well as the distribution of gases to the final consumer. Among the critical processes which take place to ensure successful energy distribution are pump control, blow-out prevention, well monitoring, manifold management, net oil measurement, separation and burner management. Other processes include metering, gas processing and transportation, storage monitoring and safety control.

The greater demand for visibility from production lines, the quest for process efficiency and the zeal of asset owners and systems administrators to cut operating costs have increased controlled automation in the energy sector, supported by advances in electronic industrial control systems (ICS) and logic-based digital systems. 'Today, ICS and automation are found in nearly every aspect of our daily lives' [6]. HVAC,1 Supervisory Control and Data Acquisition (SCADA) systems and sensor networks in substation automation and power grid transmission and distribution, and robotic control in auto manufacturing are just a few examples of how control systems have permeated every aspect of human life [6]. Among the common control systems, devices and components are SCADA, distributed control systems (DCS), programmable logic controllers (PLC), human machine interfaces (HMI), safety instrumentation systems (SIS) and variable frequency drives (VFDs). In addition to the general Information Technology (IT) systems that support enterprise information and data management, these control system devices or tools serve as the operational technologies upon which critical infrastructure systems operate. With this convergence, business owners are faced with managing two networks: IT networks for business information and operational technologies (OT) for operations. 'Today, this convergence is not only common, but prevalent, and business reasons often requires that certain OT data be communicated to the IT networks' [6].

2.2. Risk assessment in critical infrastructure systems

The reality is that modern connected industrial processes present dangers to the industrial control space. Logic-based electronic systems are susceptible to many possible failures that are not commonly present in hard-wired analogue systems, given that many systems protocols, such as ControlNet, DeviceNet, Profibus and Serial Modbus, were all based on proprietary vendor-specific technologies [6]. There has also been a push to use open technologies such as Windows OS, Linux and Ethernet protocols (IPs) for control, monitoring and viewing of controlled operations. This convergence has brought new forms of security risk that were not known before the 1990s. The traditional implicit philosophy of protecting critical resources has been based on the notion of containment: creating physical boundaries around assets, compartments or perimeters that need protection [7]. This perception of the 'inside' versus the 'outside' is problematic in the interconnected environment, as it is getting more difficult to distinguish between insiders and outsiders, especially in the interconnected critical infrastructure space.
Modern critical infrastructure resources, by their nature and design, have become technology dependent and highly interconnected, which has amplified systems complexity. In such a space, it is not always clear where an organization's boundary lies, making it difficult to implement traditional perimeter defence systems. Security risks relating to critical infrastructure systems refer to a wide range of physical, logical and technological defects in the systems infrastructure setup and their environment. In that space, multiple threat sources exist. These include human error, technical hardware failure, technical obsolescence, quality-of-service deviation from standard services, application/protocol attacks, deliberate acts of sabotage or vandalism, deliberate acts of information extortion, DDoS, botnets, web interface attacks and advanced persistent or state-sponsored threats [8–10]. Other specific threats include ransomware, watering hole attacks, droppers, rootkits, spyware, worms, Trojan horses, phishing and spear phishing.

3. INSTITUTIONAL RISK ASSESSMENT STANDARDS

Over the years, various risk assessment models have been proposed which show how the security risk assessment concept has been conceptualized. Below are six such models.

3.1. NIST risk assessment framework (SP 800-30/30rev1)

NIST2 defines risk assessment as a fundamental component of an organization-wide risk management process, described in NIST SP 800-39. It is argued that the primary objective of information security assessment is to identify, estimate and prioritize risk to organizational operations (i.e. mission, functions, image and reputation). Reference is made to the organization's assets, its people and interacting organizations, together with the use of its information resources. '… For the operational purpose, organizational threats, vulnerabilities, and impacts must be evaluated to identify important trends and decide where effort should be applied to eliminate or reduce threat capabilities; eliminate or reduce vulnerabilities; and assess, coordinate, and deconflict all cyberspace operations…'.3 The purpose is to inform decision-makers and support risk responses by identifying: (i) relevant threats to the organization and against other organizations; (ii) vulnerabilities both internal and external to the organization; (iii) the impact to the organization that may occur given the potential for threats exploiting vulnerabilities; and (iv) the likelihood of the threats occurring. The result of the assessment process (Fig. 1) is, therefore, the determination of risk (considered as a function of the degree of impact and the likelihood of harm occurring).

FIGURE 1. NIST risk assessment framework.

3.2. ISO/IEC 27005:2008

ISO/IEC 27005:2008 considers the security risk assessment process in the context of organization-wide risk management activity and identifies the key assessment processes as risk identification, analysis and evaluation (Fig. 2).4

Risk identification: The process of discovering, identifying and recording risk factors. It entails identifying the causes and sources of the risk (e.g. hazards in the context of physical harm), events, situations or circumstances which could have a material impact on the system's objectives, and the nature of the possible impact. The aim is to identify what might happen, or what situations might exist, that could affect the achievement of the system's objectives.

Risk analysis: This involves developing an understanding of the risks to systems.
It provides inputs to risk assessment decisions: whether risks need to be treated, and the appropriate schemes and methods to be adopted.

Risk evaluation: Risk evaluation uses the understanding of risk obtained during the analysis stage to make decisions about future actions. The purpose is to determine the significance of the level of the system's risk and to compare the estimated security risks with the criteria defined in the established context.

Risk decision: This provides an input into risk decisions about future actions. Ethical, legal, financial and other considerations, including perceptions of risk, are also inputs to the decision. The decision-making criteria include (i) whether the risk needs treatment; (ii) priorities for treatment; (iii) whether an activity should be undertaken and (iv) which of a number of paths should be followed.

FIGURE 2. ISO/IEC risk assessment framework.

3.3. BS-7799-2006

BS-7799-2006 promotes the adoption of a process approach for assessing, treating, monitoring and reviewing systems' risk exposure. The objective is to encourage systems' custodians to emphasize the importance of:

Understanding business information security requirements and the need to establish policies and clear information security objectives.
Selecting, implementing and operating controls in the context of managing the organization's overall business risk exposure.
Monitoring and reviewing the performance and effectiveness of the Information Security Management System.
Continual evaluation of existing security risk measurement practices.

The standard (BS-7799) defines security risk assessment as the process whose activities entail the following actions (Fig. 3):

FIGURE 3. BS-7799-2006 risk assessment framework.

3.4. Enterprise models

We review below three enterprise-wide assessment models to assimilate how such processes differ from the institutional standards discussed above.

3.4.1. OCTAVE

The OCTAVE model is an enterprise-wide risk assessment model applicable to assessing IT risk exposure in the context of an enterprise's operational and strategic drivers. Unlike the previously discussed models, OCTAVE is a qualitative criterion whose core target is to evaluate an organization's operational risk tolerance level (Fig. 4).5

FIGURE 4. OCTAVE risk assessment framework.

3.4.2. FAIR

This approach6 focuses on identifying and estimating the levels of a system's risk exposure as the likelihood of loss to a business. It is a guide for systems administrators to make informed decisions on how to manage risk loss (either by accepting the existing risk or by mitigating it). The core processes of the model are shown in Fig. 5.

FIGURE 5. FAIR security risk assessment framework.

3.4.3. Microsoft

Microsoft information security risk management has four processes (Fig. 6):

Assessing risk
Conducting decision support
Implementing controls
Measuring program effectiveness

FIGURE 6. Microsoft risk assessment framework [11].
The objective of this model is to identify and prioritize the risks facing the organization from the perspective of data acquisition and storage (as outlined in the data collection and analysis process). Another construct in the model is the risk prioritization activity, which outlines various steps for qualifying and quantifying the system's overall risk exposure.

3.5. Summary of common risk assessment frameworks

3.5.1. Gaps in the existing frameworks

To date, scholars, practitioners and security analysts in general have been hamstrung by several challenges, not the least of which is inconsistent nomenclature. In certain references, software flaws/faults are referred to as 'threats', in other references the same faults are referred to as a 'risk', and yet others refer to them as 'vulnerabilities'. Beyond the ensuing confusion, these inconsistencies add to the difficulty in defining and normalizing security risk assessment. It is not surprising that institutions, enterprises and scholars have so far failed to agree on a common method for the assessment of information security risk (see Table 1). Furthermore, the interface between critical infrastructures and their interdependencies has not been well explored by extant studies from the perspective of cybersecurity risk assessment. As seen from Table 1, none of the reviewed frameworks focuses on critical infrastructure systems (i.e. controlled systems). And in the literature where such studies have been done, the discussions have been limited to the identification of risk metrics (see Table 1), with little (or no) emphasis on the entire assessment process as approached in this study. Indeed, the problem of identification schemes as observed in the literature conflicts with a risk assessment process which incorporates systems dynamics and security policy evaluation as proposed in this study.

TABLE 1. Summary of common risk assessment frameworks.
Institution | Publication (number) | Description | Focus
NIST | SP 800-30 | Risk management | Information systems (general IT systems)
NIST | SP 800-37 | Risk assessment | Information systems (federal information systems)
NIST | SP 800-161 | Risk management | Supply chain management
ISO/IEC | 27005 | Risk management | Information systems
ISO/IEC | 31010 | Risk management | IT governance
ISO/IEC | 31000 | Risk management | Organization wide (general)
British Standard | 100-3 | Risk analysis based on IT infrastructure | Information technology
CERT | OCTAVE | Operationally critical threat, asset and vulnerability evaluation | Enterprise (IT) projects
FAIR | FAIR | Risk identification | Business information systems
Microsoft | Microsoft | Risks from the perspective of data acquisition and storage | Software and data
Dynamic modelling (this paper) | — | Risk assessment in complex (interdependent) systems | Critical (complex) infrastructures
Surprisingly, asset characterization and valuation, which this paper considers a key component of security risk assessment in critical infrastructure systems, are not well discussed in the existing frameworks. Modern organizations depend on critical infrastructure resources, which are also part of their core corporate resources. These resources are critical to business survival (in cost and value). The risk assessment must, therefore, identify and value critical infrastructure systems; only then can the risk impact be evaluated. Additionally, quantitative assessment without asset valuation is problematic for evaluators, and perhaps more problematic for interdependent critical systems. The challenge of valuing interdependent critical infrastructure systems makes the application of dynamic modelling in the risk assessment process even more useful. Moreover, previous works on the concept (as discussed above) reveal disparities and a lack of consensus among institutions and enterprises (e.g. NIST SP 800-30, BS-7799-2006, ISO/IEC 27005:2008, OCTAVE, FAIR and Microsoft). The reasons for these disparities (in the assessment processes) make a case for further research. Finally, none of the literature reviewed is found to have looked at critical infrastructure security from the perspective of systems dynamic modelling. Most of the studies on the subject have prescribed techniques around either threat or vulnerability identification. This 'metrics identification scheme', whilst useful for theoretical discussions, is problematic as it does very little to predict and quantify the behavioural characteristics of systems and their subsystems (see Equation (4)), which are dynamic and unpredictable. This is where the approach in this paper comes in.

4. DYNAMIC MODELLING OF RISK ASSESSMENT PROCESS

We introduce here the proposed risk assessment model based on systems dynamics. The stages of the assessment process reflect the risk metrics identified in Equation (4), which forms the basis of the proposed model. From Equation (4), four additional constructs are introduced: threat/vulnerability pair (TVP), control assessment (CA), risk modelling and risk policy evaluation (Fig. 7). The relationship (and difference) between the proposed model and the existing models is presented in Table 1.

FIGURE 7. Security risk assessment framework.

4.1. System characterization

The question here is: what is the environment to be assessed? In a controlled environment, critical resources include both information and operations technologies (hardware, software, networks, databases, processes, people) which support business information and operational processes. In other words, the entire IT environment is characterized in terms of assets and equipment, information flow and responsibilities.

4.2. Assets identification

What is the IT infrastructure (and its criticality) supporting business operations? The objective of risk assessment, in this case, is to identify and define the critical assets to protect, their value, container and custodian. In the energy sector, the focus is on assets supporting downstream operations, with much emphasis on controlled technologies.

4.3. Threat analysis

What is the system's threat exposure? A threat is defined as any event (action or inaction) capable of exploiting a system's vulnerabilities; we look at actors whose events can violate the safety of the system with some negative consequences.
G. J. Touhill and J. C. Touhill identified the following threat sources to the survival of any critical system [12]:

Advanced persistent threats (APT)
Deliberate acts of espionage or trespass
Hacktivists
Service deviation from service providers
Deliberate acts of sabotage or vandalism
Technology obsolescence
Insider threats
Substandard products and services

Other sources include natural disasters (e.g. floods, tornadoes, hurricanes, etc.), physical/environmental factors (e.g. fire, power failure, HVAC system), human factors (e.g. improper data entry, unauthorized access) and software-induced threats such as SQL injection, malware, phishing, bots, etc.

4.4. Vulnerability analysis

This examines the weaknesses of the environment which could be exploited by any threat actor. Vulnerability is therefore described as the system's overall susceptibility to attack due to some negative events [13]. In a controlled environment, systems vulnerabilities are viewed from two perspectives: the system's overall vulnerability to its threat exposure (global perspective) and assessment of the critical components where the system is vulnerable [14, 15].

4.5. Threat–vulnerability pair

TVP is introduced as the matching between threats and vulnerabilities. This involves matching a threat actor to a vulnerability. We propose the following threat–vulnerability events (TVE) rating (see Table 2) as a quantitative measure for the proposed assessment, ranging from 0.1 (very unlikely) to 1.0 (very likely).

4.6. Control analysis

What countermeasures are in place for the system's protection? The objective here is to determine the available countermeasures capable of protecting assets against all possible threat actors (see Fig. 8). In our simulation exercise, it is established (Fig. 15) that the existence of effective countermeasures significantly decreases the success rate of threat actors leveraging the system's vulnerabilities. We propose the following control effectiveness index (see Table 3) as a quantitative measure of the assessment process.

TABLE 2. TVE score.

Risk scale | Event counts | Risk score
Very likely | >100/year | 1.0
Likely | ≥50 and <100/year | 0.8
Somewhat likely | ≥10 and <50/year | 0.6
Not likely | ≥1 and <10/year | 0.4
No chance | <1/year | 0.1
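To make the TVE rating reproducible, the mapping from observed annual event counts to a risk score (Table 2) can be expressed as a simple lookup. The sketch below is illustrative only: the thresholds follow Table 2, while the function and variable names are our own assumptions.

```python
def tve_score(events_per_year: float) -> float:
    """Map an observed annual threat-vulnerability event count to the
    TVE risk score proposed in Table 2 (0.1 = no chance, 1.0 = very likely)."""
    if events_per_year > 100:
        return 1.0   # very likely
    if events_per_year >= 50:
        return 0.8   # likely
    if events_per_year >= 10:
        return 0.6   # somewhat likely
    if events_per_year >= 1:
        return 0.4   # not likely
    return 0.1       # no chance


# Example: 37 matched threat-vulnerability events observed in a year
print(tve_score(37))  # -> 0.6
```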
TABLE 3. Control effectiveness index.

Index | Control scale | Control score | Default security controls
1 | Very weak controls | 0.2 | No security screening in recruitment; no technical controls; no security training; no awareness programmes; no cyber insurance or compliance certificate
2 | Weak controls | 0.4 | Some technical controls; no security screening in recruitment; no security training; no security awareness programmes; no cyber insurance or compliance certificate
3 | Average controls | 0.6 | Some technical control measures; security screening in recruitment; no cyber awareness programmes; no cyber insurance
4 | Strong controls | 0.8 | Some technical control measures; security screening in recruitment; cybersecurity training and awareness programmes; no cyber insurance or compliance certificate
5 | Very strong controls | 1.0 | Some technical control measures; security screening in recruitment; cybersecurity training and awareness programmes; cyber insurance and compliance certifications in place

FIGURE 8. Security risk control cycle (Source: Course Technology/Cengage).

4.7. Likelihood determination

This is a quantitative measure of the probability of a threat vector exploiting the system's vulnerabilities (TVP), measured as a function of the available countermeasures, defined as7

$L_t = \frac{V(A) \cdot T}{C}$   (5)

where $L_t$ is the likelihood of an attack at any time t, V is the system vulnerability as a function of the asset (A), T is the threat level and C is the available security controls (SCs). The divisor C weakens the likelihood of attack: the higher its value, the lower the likelihood of threat exploitation, and vice versa (Table 4).
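As a minimal numerical sketch of Equation (5), the likelihood of attack can be computed from an asset's vulnerability level, the threat level and the strength of the available controls. The function below is an illustration under our own naming assumptions; it simply reproduces $L_t = V(A) \cdot T / C$ for the parameter ranges used later in Table 4.

```python
def likelihood_of_attack(vulnerability: float, threat: float, controls: float) -> float:
    """Equation (5): L_t = (V(A) * T) / C.
    vulnerability: V(A), the asset's vulnerability level (e.g. 0.1-0.9)
    threat:        T, the threat level (e.g. 0.1-0.9)
    controls:      C, aggregate strength of the active security controls (> 0)
    """
    if controls <= 0:
        raise ValueError("Control strength must be positive")
    return (vulnerability * threat) / controls


# Test 2 vs Test 3 from Table 4: same threat/vulnerability, weak vs strong controls
print(likelihood_of_attack(0.8, 0.9, 0.4))  # weak SCs   -> 1.8
print(likelihood_of_attack(0.8, 0.9, 3.4))  # strong SCs -> ~0.21
```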
TABLE 4. Simulation (likelihood of attack) user-defined parameters.

Test 1 | High threat level (0.9) | Low vulnerability level (0.1) | Weak SCs (0.4)
Test 2 | High threat level (0.9) | High vulnerabilities (0.8) | Weak SCs (0.4)
Test 3 | High threat level (0.9) | High vulnerabilities (0.8) | Strong SCs (3.4)
Test 4 | High threat level (0.9) | Low vulnerability level (0.1) | Strong SCs (3.4)
Test 5 | Test 2 + low asset values | Asset1–Asset8: 100, 100, 100, 100, 100, 100, 100, 640
Test 6 | Test 2 + high asset values | Asset1–Asset8: 7065, 780, 12 430, 1806, 2018, 11 050, 3957, 4730

4.8. Impact analysis

We examine here the impact of successful threat execution on critical systems and business operations. The aim is to measure the extent and severity of the loss caused to an asset upon the realization of a threat vector; that is, the impact in terms of loss and cost. The severity of loss is proportional to the value of the asset compromised. This proposition is supported by Silver and Miller [16], who evaluate the impact of threat attacks by estimating the annual loss expectancy (ALE) using the probability of a loss [16].
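The annual loss expectancy mentioned above is commonly computed as the product of the single loss expectancy and the annualized rate of occurrence. The sketch below illustrates that standard formulation; the asset value, exposure factor and occurrence rate are illustrative assumptions, not values drawn from the paper's dataset.

```python
def annual_loss_expectancy(asset_value: float,
                           exposure_factor: float,
                           annual_rate_of_occurrence: float) -> float:
    """Standard ALE formulation: ALE = SLE * ARO, where
    SLE (single loss expectancy) = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence


# Illustrative only: an asset valued at 12,430 units, 60% of which is lost per
# incident, with incidents expected roughly twice a year
print(annual_loss_expectancy(12_430, 0.6, 2.0))  # -> 14916.0
```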
5. SYSTEMS DYNAMIC MODELLING

From the above metrics, we propose a dynamic modelling approach as an alternative to the existing risk assessment processes. The processes (Fig. 9) underlying the model development follow the proposal presented by Sterman [17] in 'System Dynamics: Systems Thinking and Modelling for a Complex World' and by Forrester [18] in his work on system dynamics modelling [18]. Two broad considerations were made: structural analysis and dataset analysis. The structural analysis involves investigating the model structure of the system under consideration, while the dataset analysis entails investigating the datasets of the system's variables to examine their behaviour.

FIGURE 9. Dynamic modelling process [17].

Systems dynamics requires investigating the complexities induced by the systems' interdependencies and the related security risks propagated by the systems' inherent vulnerabilities. The approach provides an understanding of the causal structure underlying the complexity of interdependent systems. According to Rinaldi, 'the sheer complexity, magnitude, and scope of critical infrastructure systems make modelling and simulation important elements of any analytic effort' [19]. To Rinaldi, modelling and simulation are important components for the safety, reliability and survivability of the operations of critical infrastructure [19]. Systems dynamics is therefore defined as a method to gain insight into situations of dynamic complexity and policy resistance [17]. Sterman's argument supports the proposal that dynamic modelling provides a systematic approach to dealing with complex systems.

5.1. Dynamic hypothesis

To simulate the security risk assessment of critical infrastructure systems, we test the following hypotheses:

Infrastructure interdependencies increase systems complexity, which in turn increases the likelihood of systems security risk exposure.
A system's vulnerability level correlates with the likelihood of threat attack.
Inadequate SCs increase the likelihood of threat attack, which escalates risk impact.

5.2. Modelling infrastructure interdependencies

We present below system dynamics models for infrastructure interdependencies as an integrated system to assess the behavioural patterns of systems and their subsystems. This is to identify the interactions between system components and to simulate their behaviour patterns (Fig. 10). The sequence of events helps probe the basis of the systems' security risk exposure.

FIGURE 10. Technology integration causal loop diagram.

5.3. Interdependencies causal loop diagram

A causal loop diagram explains a system's key variables, focusing on the factors which affect the system's performance and cause behavioural change. The causal loop diagram for technology integration (Fig. 10) depicts the forces and possibilities that shape the system's behaviour. Since the model at this stage has not been implemented, most of the definitions are based on assumptions deduced from our interactions with infrastructure operators, administrators and managers, who are considered the subject matter experts (SMEs). We discuss below some of the possibilities:

Technology integration: This refers to the integration of ICS-SCADA infrastructure systems and the supporting technologies. The integration creates infrastructure interdependencies.

Technology integration risk: This is the security risk introduced into the system due to the interdependencies.

Technology deployment risk: This is the system's security risk exposure due to technology deployment. It is a function of the sum of the system's vulnerabilities, threat events and integration-induced complexities divided by the available (active) SCs and security practices (e.g. training and awareness). It is assumed that the higher the rate of SCs, the lower the overall deployment risk. Furthermore, the risk exposure rate (high or low) is assumed to affect technology integration. This is indicated by the balancing loop feedback effect (B3 likelihood).

System evaluation: This is to pre-assess the systems to establish the infrastructure requirements. The output of the evaluation provides the input for the technology integration decision process.
Technology integration performance: This is a post-integration performance assessment measure. It is assumed that technology integration leads to changes in the system's performance. The change could either be positive (reinforcing) or negative (balancing). The indicator is based on three key measures: (i) the system's expected performance, (ii) the observed performance and (iii) the actual performance.

R1—system evaluation findings: Changes in the system's activities are evaluated due to the technology integration. The purpose of the evaluation is to compare the system's observed behaviour with its actual results. The gap is likely to trigger further assessment (rate), which in turn affects the integration. The feedback loop presents a positive reinforcement, represented by R1.

B3—technology integration impact: Infrastructure interdependencies affect systems' performance in one way or the other. We have demonstrated, using Bedell's model, how technology integration affects systems performance. The argument is extended here to include the impact due to technology deployment risk.

B3—integration performance index: Technology integration can lead to improvement in the system's performance. An optimum performance level could also lead to a reduction of infrastructure funding or investment, which in turn loops back, affecting the system performance. This is represented by the B3 balancing loop.

R3—integration benefit index: It is assumed that technology integration affects the system's performance, which also affects infrastructure investment. An increase in the level of investment is assumed to lead to improved system performance.

Since the models presented above have mathematical foundations, the behaviour of stocks and flows can be observed by simulating various time slots using different datasets of stocks and flows. In this case, the models are complete with algebraic expressions (and assumptions) representing the models' interactive alterations.

5.4. Simulations

To predict the model's behaviour, we present below the stock and flow diagrams for the following key metrics: (i) likelihood of attack (Fig. 11) and (ii) impact of attack (effects of a system's breakdown, Fig. 12).

FIGURE 11. Likelihood of attack (without controls).

FIGURE 12. Business impact of attack.

5.4.1. Likelihood of attack (with/without countermeasures)

Figure 11 shows a screenshot of a likelihood-of-attack simulation. The sliders allow the user to manipulate the behaviour of a variable at various data inputs (with assigned values). Sliders are considered decision-making tools which control the input to the simulator. The model elements with sliders (e.g. change management, multiple session requirements, etc.) are constants, in this case representing the key performance indicators (causes) of the simulated variables. Constants are datasets which are set up prior to starting a simulation. The attack likelihood rate is a flow, representing a steady flow of a quantifiable amount over time. Variables in the rectangular boxes are called stocks; they represent storage (a quantifiable amount that is added (or removed) over time). The arrows between variables depict relationships among variables. A sliding movement changes the value of the underlying variable.
For example, sliding the system vulnerability slider to the left reduces the level of the system's vulnerabilities.

Key variables: TVP is used to estimate the likelihood of attack. An increase in the TVP rate (due to increases in either threat activities or vulnerabilities) increases the likelihood of attack, and the opposite holds. The TVP fraction is captured by multiplying the system's vulnerability rate by its threat exposure rate. The values assigned are for the purpose of computation.

Complexity factors: These are variables considered to contribute to system complexity due to infrastructure interdependencies. It is assumed that a system's complexity correlates positively with the likelihood of the system's security risk exposure; thus, the complexity of a system affects its risk exposure rate. The following factors are assumed to influence a system's complexity: (i) the number of dependent systems, (ii) multiple session requirements, (iii) integrated functionalities, (iv) virtualization and (v) the technologies involved.

5.4.2. Security controls

The SC rate is measured as the sum of the rates of all active controls. The rate of active controls divides the attack likelihood rate, which determines the likelihood of the system's threat exposure. In this case, the likelihood of a threat attack can be reduced (neutralized) by increasing the active SCs. The simulation is run by adjusting the corresponding slider either to the left (to decrease) or to the right (to increase) the rate of the underlying parameter. SC variables are measured on a scale of 0.1 (very weak) to 1.0 (very strong). This measurement is useful for computational analysis and for practical purposes, as agreed by the SMEs (Fig. 13).

FIGURE 13. Screenshot of likelihood of attack (with controls).

5.4.3. Business impact of security activities

The impact of an attack is simulated using the total cost of the system (cost of assets + total revenue to be generated). The cost of the asset is assumed to include building cost, maintenance cost, data recapture cost, rebuilding cost, restoring cost, and legal and compliance costs (including the cost of legal cases). Revenue, as captured here, includes all accruals from the system's output (including losses due to system downtime). These two factors are assumed to determine the impact on the business (due to a successful threat attack). Thus, system integration generates an output (performance) derived from the system's actual outcome less missed targets. In the event of an attack with a resulting system breakdown, the impact is twofold: loss of performance (output) and the total cost to restore or rebuild the system (considering the nature of the attack) (Fig. 14).

FIGURE 14. Screenshot of impact of attack.

Note: Cost variables are based on the authors' own judgement (for the purpose of computation) and do not reflect any economic reality. The impact simulator allows users to adjust the value of any of the key indicators using the corresponding slider to monitor the behaviour of the system under different cost scenarios.

5.4.4. Security risk exposure simulation

From Equation (4), risk is defined as the product of the likelihood and the impact of an attack (i.e. $R = L \times I$). Figure 15 shows a screenshot of the security risk exposure simulation.

FIGURE 15. Screenshot of system risk exposure.
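Putting the simulated pieces together, the risk exposure of this subsection is the product of the likelihood and impact flows. The sketch below is a simplified, assumption-heavy illustration of that relationship: complexity is summed from the five factors listed above, likelihood follows Equation (5) scaled by complexity, and impact is proxied by the value of the asset at stake. The weighting choices and names are ours, not the model's exact stock-and-flow equations.

```python
def complexity_score(factors: dict) -> float:
    """Sum the five interdependency factors (each rated 0.1-1.0)."""
    return sum(factors.values())


def risk_exposure(vulnerability: float, threat: float, controls: float,
                  complexity: float, asset_value: float) -> float:
    """Illustrative R = L * I, with the likelihood amplified by complexity
    and the impact proxied by the value of the asset at stake."""
    likelihood = (vulnerability * threat * complexity) / controls
    impact = asset_value
    return likelihood * impact


factors = {"dependent_systems": 0.7, "multiple_sessions": 0.5,
           "integrated_functionality": 0.6, "virtualization": 0.4,
           "technologies_involved": 0.8}
print(risk_exposure(0.8, 0.9, 3.4, complexity_score(factors), 11_050))
```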
System risk exposure is simulated by adjusting the values of the key indicators using the sliders attached to the respective variables. For example, moving the slider attached to system vulnerability to the right (i.e. increasing the system's vulnerability level) increases the system's risk exposure, which in effect increases the likelihood of threat attack.

5.5. Results and hypotheses

This involves testing the models to assimilate whether they replicate the dynamics of the system they represent, and to test their hypotheses. Another important task of validation is to verify whether the systems and their subsystems have a meaning in the real world they represent.

5.5.1. Test results

The simulation as run here uses four sets of parameter combinations to test for the likelihood of threat attack. These are:

High threat level, low vulnerability and weak SCs
High threat level, high vulnerabilities and weak SCs
High threat level, high vulnerabilities and strong SCs
High threat level, low vulnerabilities and strong SCs.

The simulation approach enables the modeller to get a better view or understanding of the behaviour of the system in different situations and to review or propose policy changes to the existing control strategies. The simulation (Fig. 16) was run using the following parameters, with the user-defined inputs given in Table 4:

INITIAL TIME (initial time for the simulation) = 1 month
FINAL TIME (final time for the simulation) = 12 months
TIME STEP (time step for the simulation) = 0.0078125 months.

The time step of 0.0078125 months was used to give smooth time profiles for the different parameters in the model.

FIGURE 16. Likelihood of attack stock and flow behaviour.

The result (Fig. 16) shows that the likelihood of threat attack is high when the vulnerability level is high with low-level (weak) SCs. This scenario is shown by Test 2 on the simulation graph. The situation is, however, different when both threat and vulnerability levels are high with strong SCs (Test 3); in Test 3, strong SCs neutralize the impact of both the threat and vulnerability levels. The lowest-likelihood situation is when vulnerability is low with high-level (strong) SCs; in this scenario, the likelihood of threat attack is very low (Test 4).

Using the same parameters, we simulate the business impact of a threat attack by adjusting the value of the system. The test is run for two additional parameter combinations: Test 5, a relatively low system value,8 and Test 6, a relatively high system value. The result (Fig. 17) shows that in the two extreme situations there is an equal likelihood of threat attack; however, the potential impact of the system's breakdown is very high when the value of the system is high. The models presented above were not built for a specific technology; the parameters used in the simulation were user inputs from the SMEs. Specific technology solutions would require testing different sets of parameters. The simulation is run for a twelve-month period, which is the time horizon for the technology integration process. The initial month is assigned a value of 1 and the time step a value of 0.0078125.

FIGURE 17. Stock and flow of business impact.
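For readers who want to reproduce the qualitative shape of Fig. 16 without the original simulation tool, the stock-and-flow behaviour can be approximated with a simple Euler integration of the attack-likelihood stock over the 12-month horizon and 0.0078125-month time step listed above. The inflow expression (Equation (5) applied at each step) and the absence of any outflow are our simplifying assumptions, not the exact equations of the published model.

```python
def simulate_likelihood_stock(threat: float, vulnerability: float, controls: float,
                              t_final: float = 12.0, dt: float = 0.0078125) -> float:
    """Euler-integrate the 'likelihood of attack' stock: the inflow at each step
    is the attack likelihood rate from Equation (5)."""
    stock, t = 0.0, 0.0
    while t < t_final:
        inflow = (vulnerability * threat) / controls  # attack likelihood rate
        stock += inflow * dt
        t += dt
    return stock


# Tests 1-4 from Table 4 (threat, vulnerability, controls)
tests = {"Test 1": (0.9, 0.1, 0.4), "Test 2": (0.9, 0.8, 0.4),
         "Test 3": (0.9, 0.8, 3.4), "Test 4": (0.9, 0.1, 3.4)}
for name, params in tests.items():
    print(name, round(simulate_likelihood_stock(*params), 2))
# Test 2 accumulates the largest stock (high threat, high vulnerability, weak SCs);
# Test 4 the smallest, mirroring the ordering reported for Fig. 16.
```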
5.5.2. Hypotheses

Hypothesis 1: Systems interdependencies increase systems complexity, which in turn adds to the likelihood of threat attack. The following factors were considered to influence interdependency complexity:

Number of dependent systems
Type of technology involved
Integrated functionalities
Multiple session requirements
Virtualization.

For each variable, a numeric scale of 0.1–1.0 is assigned to determine its presence: 0.1 indicates a very weak presence, while 1.0 represents a very strong presence. The total (sum) of all variables determines the level of complexity. The simulation is tested at three extreme scenarios (weak (0.1), average (0.5) and strong (0.9)) to observe the likelihood-of-threat behaviour (see Fig. 18).

FIGURE 18. Hypothesis type I.

The graph (Fig. 18) supports the proposition that systems complexity adds to the likelihood of threat attack.

Hypothesis 2: A system's vulnerability level correlates with the likelihood-of-attack rate. We test three scenario factors with a constant threat level (0.8): low vulnerability level (0.2), medium vulnerability level (0.5) and high vulnerability level (0.8). It is observed that as both threat and vulnerability levels increase, the likelihood of attack goes up irrespective of the level of SCs. This means that when the threat level is high, the likelihood of threat attack increases, and it increases further as the vulnerability level increases (see H2-type III).

Hypothesis 3: Inadequate SCs increase the likelihood of threat attack, which in turn adds to the security risk impact (Fig. 19: Tests I, II and III). In this scenario, it is observed that when threat and vulnerability levels increase, the likelihood of an attack goes up when the control level is low; the situation reverses when the SC level increases.

Hypothesis 4: The value of a system relates to the rate of its security risk impact (see Fig. 20). Tests 5 and 6 show that the value of an information system (and/or asset) adds to its impact rate. Thus, the higher the asset value, the greater the risk impact (in the event of a threat attack).

FIGURE 19. Hypothesis type II.

FIGURE 20. Impact of attack stock and flow behaviour.

Using the same parameters described above, we simulate risk impact by adjusting the value of the system. The test is run for two additional parameter combinations: Test 5, a low system value, and Test 6, a relatively high system value. Note: As indicated above, it is assumed that the value of the system in this case represents the total cost of building or restoring the system and the benefits to be derived from it. The result (Fig. 20) shows that in the two extreme situations there is a high chance of a threat attacking the system. The result also shows that the impact of a system breakdown is high when the value of the system is high; this reflects the overall impact, which is high when the value of the system is high.

5.6. Policy suggestions

The results from the tests of the hypotheses present certain findings which necessitate policy formulation on security risk assessment for critical infrastructure systems. It is observed that SCs (countermeasures) correlate with the system's overall security risk exposure as well as with the chances of a threat succeeding in exploiting the system's vulnerabilities.
In the cyber ecosystem, threats are inevitable, and infrastructure interdependency also increases system complexity. It therefore makes policy sense to devote more resources and measures to protecting critical systems, as well as to practices aimed at reducing systems' threat exposure. As identified earlier, ransomware, droppers, rootkits, viruses, spyware, worms, Trojan horses and insider threats are a few examples of major threat agents against critical infrastructure systems. Training and awareness programmes, as well as strengthening remote login systems, are considered some of the major countermeasures in terms of infrastructure protection.

Additionally, it is observed that infrastructure interdependencies increase systems' complexities, which in turn affect not only systems' performance but also their security risk exposure. What is significant in terms of policy is the ability of asset owners, systems administrators and system operators to identify factors that are likely to increase systems' complexity, so as to analyse their structural characteristics. Structural analysis was observed to be a key factor in identifying causal relationships in complex systems. Besides its usefulness in the simulation process, it plays a very significant role in identifying threat sources as well as their methods of propagation.

Furthermore, policy shifts could be influenced by the value of information assets. Hypothesis 4 shows a correlation between asset value and risk impact. The challenge is that there is currently no standardized or accepted method among scholars to quantitatively measure or value information assets (per our interactions with the asset custodians and systems administrators). For instance, the following questions posed to the SMEs did not record any response: (i) What is the true value of ICS-SCADA and its dependent systems? (ii) What is the total cost of setting up ICS-SCADA? (iii) What is the financial impact of a single breakdown of an ICS-SCADA monitoring unit? In our simulations, we have provided some means of quantifying resources. These values are very subjective, were applied for the purposes of analysis and do not reflect any market or economic reality. Effort should be made, in terms of policy formulation, to quantify information system resources so that in the event of an attack the impact can be properly valued.

6. CONCLUSION

Assessing security risk in a controlled environment is more of an art than a science. Current methods need to factor in quantities and qualities that are inherently uncertain and/or difficult to quantify. Critical infrastructure systems and their supporting technologies are becoming too complex and dynamic to predict, due to convergence with advanced technology. Similarly, systems' boundaries have become difficult to define due to systems interdependencies. The business impact of a successful threat attack on critical infrastructure systems could be very damaging, not only to the systems themselves but to their interdependencies, and in some cases to the health and social well-being of the citizenry. As reviewed above, various security risk assessment methods exist. Nonetheless, the complexity and increasing interdependencies of modern critical systems render the existing assessment methods less useful.
There is, therefore, the need to develop different security risk assessment methods to address the gaps in the existing ones, since there is no universal, all-encompassing 'silver bullet' solution to security-related problems. In conclusion, the modelling and simulation presented in this paper were developed specifically to assess the security risks associated with controlled technologies supporting critical infrastructure systems (i.e. power distribution). In developing the modelling approach, seven key assessment metrics were identified, and systems characterization was based on controlled technologies (i.e. ICS-SCADA). While the approach has focused on critical infrastructure systems, it gives vendors, asset owners, customers and regulatory agencies the ability to comparatively assess the relative robustness of different critical systems offerings.

SUPPLEMENTARY MATERIAL

Supplementary data are available at The Computer Journal online.

Footnotes

1 Heating, ventilation and air-conditioning.
2 Computer Security Division, Information Technology Laboratory (ITL), National Institute of Standards and Technology.
3 The National Strategy for Cyberspace Operations, Office of the Chairman, Joint Chiefs of Staff, U.S. Department of Defense.
4 ISO/IEC (the International Organization for Standardization and the International Electrotechnical Commission): part of the specialized systems for worldwide standardization.
5 Operationally Critical Threat, Asset, and Vulnerability Evaluation: one of the primary goals of the CERT® Survivable Enterprise Management team is to help organizations ensure that their information security activities are aligned with their organizational goals and objectives.
6 The Open Group Risk Taxonomy.
7 Likelihood values are scaled from 1 (very low) to 10 (most likely).
8 As indicated above, it is assumed that the value of the system, in this case, represents the total cost of building or restoring the system and the benefits to be derived from it.

REFERENCES

1 Paquette, S., Jaeger, P.T. and Wilson, S.C. (2010) Identifying the security risks associated with governmental use of cloud computing. Gov. Inf. Q., 27, 245–253.
2 Stoneburner, G., Goguen, A.Y. and Feringa, A. (2002) SP 800-30: Risk Management Guide for Information Technology Systems. NIST.
3 Kaplan, S. and Garrick, B.J. (1981) On the quantitative definition of risk. Risk Anal., 1, 11–27.
4 De Bruijne, M. and Van Eeten, M. (2007) Systems that should have failed: critical infrastructure protection in an institutionally fragmented environment. J. Contingencies Crisis Manag., 15, 18–29.
5 National Strategy for Physical Protection of Critical Infrastructure and Key Assets | Homeland Security [WWW Document], n.d. URL https://www.dhs.gov/national-strategy-physical-protection-critical-infrastructure-and-key-assets (accessed January 17, 2017).
6 Bodungen, C.E., Singer, B.L., Shbeeb, A., Hilt, S. and Wilhoit, K. (2016) Hacking Exposed Industrial Control Systems: ICS and SCADA Security Secrets & Solutions. McGraw-Hill.
7 Pieters, W. (2011) The (social) construction of information security. Inf. Soc., 27, 326–335.
8 Dahbur, K., Mohammad, B. and Tarakji, A.B. (2011) A survey of risks, threats and vulnerabilities in cloud computing.
Proceedings of the 2011 International Conference on Intelligent Semantic Web-Services and Applications, ACM, p. 12.
9 Wei, Y.-M., Fan, Y., Lu, C. and Tsai, H.-T. (2004) The assessment of vulnerability to natural disasters in China by using the DEA method. Environ. Impact Assess. Rev., 24, 427–439.
10 Johansson, J., Hassel, H. and Cedergren, A. (2011) Vulnerability analysis of interdependent critical infrastructures: case study of the Swedish railway system. Int. J. Crit. Infrastruct., 7, 289–316.
11 Microsoft (n.d.) The Security Risk Management Guide.
12 Touhill, G.J. and Touhill, J.C. (2014) Cybersecurity for Executives: A Practical Approach. John Wiley & Sons, New Jersey, USA.
13 Johansson, J. (2010) Risk and Vulnerability Analysis of Interdependent Technical Infrastructures. Lund University, Department of Measurement Technology and Industrial Electrical Engineering.
14 Apostolakis, G.E. and Lemon, D.M. (2005) A screening methodology for the identification and ranking of infrastructure vulnerabilities due to terrorism. Risk Anal., 25, 361–376.
15 Latora, V. and Marchiori, M. (2005) Vulnerability and protection of infrastructure networks. Phys. Rev. E, 71, 015103.
16 Silver, E. and Miller, L.L. (2002) A cautionary note on the use of actuarial risk assessment tools for social control. NCCD News, 48, 138–161.
17 Sterman, J.D. (2002) System dynamics: systems thinking and modeling for a complex world. Proceedings of the ESD Internal Symposium.
18 Forrester, J.W. (2003) Dynamic models of economic systems and industrial organizations. Syst. Dyn. Rev., 19, 329–345.
19 Rinaldi, S.M., Peerenboom, J.P. and Kelly, T.K. (2001) Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst., 21, 11–25.

Author notes
Handling editor: Steven Furnell
© The British Computer Society 2018. All rights reserved. For permissions, please email: journals.permissions@oup.com

5.5.2. Hypotheses

Hypothesis 1: System interdependencies increase system complexity, which in turn adds to the likelihood of threat attack. The following factors were considered to influence interdependency complexity: the number of dependent systems, the type of technology involved, integrated functionalities, multiple session requirements and virtualization. Each variable is assigned a numeric scale of 0.1–1.0 to capture its presence, where 0.1 indicates a very weak presence and 1.0 a very strong presence; the total (sum) of all variables determines the level of complexity. The simulation is tested at three extreme scenarios (weak (0.1), average (0.5) and strong (0.9)) to observe the likelihood-of-threat behaviour (see Fig. 18).

FIGURE 18. Hypothesis type I.

The graph (Fig. 18) supports the proposition that system complexity adds to the likelihood of threat attack.

Hypothesis 2: A system's vulnerability level correlates with the rate of the likelihood of attack. We test three scenarios with a constant threat level (0.8): low vulnerability (0.2), medium vulnerability (0.5) and high vulnerability (0.8). It is also observed that as both threat and vulnerability levels increase, the likelihood of attack goes up irrespective of the level of SCs. That is, when the threat level is high, the likelihood of threat attack increases further as the vulnerability level increases (see H2-type III).

Hypothesis 3: Inadequate SCs increase the likelihood of threat attack, which in turn adds to the security risk impact (Fig. 19: Tests II and III). In this scenario it is observed that when threat and vulnerability levels increase, the likelihood of an attack goes up while the control level is low; the situation reverses when the SC level increases.

Hypothesis 4: The value of a system relates to the rate of its security risk impact (see Fig. 20). Tests IV and VI show that the value of the information system (and/or asset) adds to its impact rate: the higher the asset value, the greater the risk impact in the event of a threat attack.

FIGURE 19. Hypothesis type II.

FIGURE 20. Impact of attack stock and flow behaviour.

Using the same parameters described above, we simulate the risk impact by adjusting the value of the system. The test is run for two additional parameter combinations: Test V, a low system value, and Test VI, a relatively high system value. Note: as indicated above, the value of the system is assumed to represent the total cost of building or restoring the system plus the benefits to be derived from it. The result (Fig. 20) shows that in the two extreme situations there is a high chance of a threat attacking the system, and that the impact of a system breakdown is high when the value of the system is high; the overall impact therefore scales with the value of the system.
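As a quick cross-check of Hypothesis 2, the short sweep below varies the vulnerability level (0.2, 0.5, 0.8) at the constant threat level of 0.8, under both weak and strong controls. It reuses the same illustrative likelihood relationship assumed in the earlier sketches, so the printed numbers are relative rates rather than calibrated probabilities; the parameter names and values are ours.

# Illustrative Hypothesis 2 sweep (assumed relationship; outputs are relative rates, not probabilities).
THREAT = 0.8
COMPLEXITY = 0.5  # held constant for the sweep

def likelihood(threat, vuln, sc, complexity=COMPLEXITY):
    # Likelihood grows with the threat-vulnerability pairing and complexity, divided by control strength.
    return (threat * vuln * complexity) / sc

for sc_label, sc in (("weak SC (0.2)", 0.2), ("strong SC (0.9)", 0.9)):
    for vuln in (0.2, 0.5, 0.8):
        print(f"threat={THREAT}, vuln={vuln}, {sc_label}: likelihood rate = {likelihood(THREAT, vuln, sc):.2f}")

Whatever the control level, the rate rises monotonically with the vulnerability level, which is the behaviour Hypothesis 2 asserts; stronger controls only scale the whole curve down.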
5.6. Policy suggestions

The results of the hypothesis tests point to findings that call for policy formulation on security risk assessment for critical infrastructure systems. It is observed that SCs (countermeasures) strongly influence the system's overall security risk exposure as well as the chances of a threat succeeding in exploiting the system's vulnerabilities.

In the cyber ecosystem threats are inevitable, and infrastructure interdependency also increases system complexity. It therefore makes policy sense to devote more resources and measures to protecting critical systems, and to practices aimed at reducing systems' threat exposure. As identified earlier, ransomware, droppers, rootkits, viruses, spyware, worms, Trojan horses and insider threats are a few examples of major threat agents against critical infrastructure systems. Training and awareness programmes, together with strengthening remote login systems, are considered among the major countermeasures for infrastructure protection.

Additionally, it is observed that infrastructure interdependencies increase systems' complexity, which in turn affects not only system performance but also security risk exposure. What is significant in terms of policy is the ability of asset owners, system administrators and system operators to identify the factors likely to increase a system's complexity, so as to analyse its structural characteristics. Structural analysis was observed to be a key factor in identifying causal relationships in complex systems. Besides its usefulness in the simulation process, it plays a very significant role in identifying threat sources and their methods of propagation.

Furthermore, policy shifts could be influenced by the value of information assets. Hypothesis 4 shows a correlation between asset value and risk impact. The challenge is that there is currently no standardized or generally accepted method for quantitatively measuring or valuing information assets (per our interactions with asset custodians and system administrators). For instance, the following questions posed to the SMEs did not record any response: (i) What is the true value of ICS-SCADA and its dependent systems? (ii) What is the total cost of setting up ICS-SCADA? and (iii) What is the financial impact of a single breakdown of the ICS-SCADA monitoring unit? In our simulations we have provided some means of quantifying resources; these values are very subjective, were applied for the purposes of analysis and do not reflect any market or economic reality. Effort should be made, in terms of policy formulation, to quantify information system resources so that, in the event of an attack, the impact can be properly valued.

6. CONCLUSION

Assessing security risk in a controlled environment is more of an art than a science. Current methods need to factor in quantities and qualities that are inherently uncertain or difficult to quantify. Critical infrastructure systems and their supporting technologies are becoming increasingly complex and dynamic due to convergence with advanced technology, and systems' boundaries have become difficult to define due to system interdependencies. The business impact of a successful threat attack on critical infrastructure systems can be very damaging, not only to the systems themselves but also to their interdependent systems and, in some cases, to the health and social well-being of the citizenry. As reviewed above, various security risk assessment methods exist; nonetheless, the complexity and increasing interdependencies of modern critical systems render existing assessment methods less useful.
There is, therefore, a need to develop further security risk assessment methods to address the gaps in existing ones, since there is no universal, all-encompassing 'silver bullet' solution to security-related problems. In conclusion, the modelling and simulation presented in this paper were developed specifically to assess the security risks associated with controlled technologies supporting critical infrastructure systems (i.e. power distribution). In developing the modelling approach, seven key assessment metrics were identified, and systems characterization was based on controlled technologies (i.e. ICS-SCADA). While the approach has focused on critical infrastructure systems, it gives vendors, asset owners, customers and regulatory agencies the ability to comparatively assess the relative robustness of different critical systems offerings.

SUPPLEMENTARY MATERIAL

Supplementary data are available at The Computer Journal online.

Footnotes

1 Heating, ventilation and air-conditioning.
2 Computer Security Division, Information Technology Laboratory (ITL), National Institute of Standards and Technology.
3 The National Strategy for Cyberspace Operations, Office of the Chairman, Joint Chiefs of Staff, U.S. Department of Defense.
4 ISO/IEC (the International Organization for Standardization and the International Electrotechnical Commission): part of the specialized systems for worldwide standardization.
5 Operationally Critical Threat, Asset, and Vulnerability Evaluation: one of the primary goals of the CERT Survivable Enterprise Management team is to help organizations ensure that their information security activities are aligned with their organizational goals and objectives.
6 The Open Group Risk Taxonomy.
7 Likelihood values are scaled from 1 (very low) to 10 (most likely).
8 As indicated above, the value of the system is assumed to represent the total cost of building or restoring the system plus the benefits to be derived from it.

REFERENCES

1 Paquette, S., Jaeger, P.T. and Wilson, S.C. (2010) Identifying the security risks associated with governmental use of cloud computing. Gov. Inf. Q., 27, 245–253.
2 Stoneburner, G., Goguen, A.Y. and Feringa, A. (2002) SP 800-30. Risk Management Guide for Information Technology Systems.
3 Kaplan, S. and Garrick, B.J. (1981) On the quantitative definition of risk. Risk Anal., 1, 11–27.
4 De Bruijne, M. and Van Eeten, M. (2007) Systems that should have failed: critical infrastructure protection in an institutionally fragmented environment. J. Contingencies Crisis Manag., 15, 18–29.
5 National Strategy for Physical Protection of Critical Infrastructure and Key Assets | Homeland Security [WWW Document], n.d. URL https://www.dhs.gov/national-strategy-physical-protection-critical-infrastructure-and-key-assets (accessed January 17, 2017).
6 Bodungen, C.E., Singer, B.L., Shbeeb, A., Hilt, S. and Wilhoit, K. (2016) Hacking Exposed Industrial Control Systems: ICS and SCADA Security Secrets & Solutions. McGraw-Hill.
7 Pieters, W. (2011) The (social) construction of information security. Inf. Soc., 27, 326–335.
8 Dahbur, K., Mohammad, B. and Tarakji, A.B. (2011) A survey of risks, threats and vulnerabilities in cloud computing. Proceedings of the 2011 International Conference on Intelligent Semantic Web-Services and Applications. ACM, p. 12.
9 Wei, Y.-M., Fan, Y., Lu, C. and Tsai, H.-T. (2004) The assessment of vulnerability to natural disasters in China by using the DEA method. Environ. Impact Assess. Rev., 24, 427–439.
10 Johansson, J., Hassel, H. and Cedergren, A. (2011) Vulnerability analysis of interdependent critical infrastructures: case study of the Swedish railway system. Int. J. Crit. Infrastruct., 7, 289–316.
11 Microsoft (n.d.) The Security Risk Management Guide.
12 Touhill, G.J. and Touhill, J.C. (2014) Cybersecurity for Executives: A Practical Approach. John Wiley & Sons, New Jersey, USA.
13 Johansson, J. (2010) Risk and Vulnerability Analysis of Interdependent Technical Infrastructures. Lund University, Department of Measurement Technology and Industrial Electrical Engineering.
14 Apostolakis, G.E. and Lemon, D.M. (2005) A screening methodology for the identification and ranking of infrastructure vulnerabilities due to terrorism. Risk Anal., 25, 361–376.
15 Latora, V. and Marchiori, M. (2005) Vulnerability and protection of infrastructure networks. Phys. Rev. E, 71, 015103.
16 Silver, E. and Miller, L.L. (2002) A cautionary note on the use of actuarial risk assessment tools for social control. NCCD News, 48, 138–161.
17 Sterman, J.D. (2002) System dynamics: systems thinking and modeling for a complex world. Proceedings of the ESD Internal Symposium.
18 Forrester, J.W. (2003) Dynamic models of economic systems and industrial organizations. Syst. Dyn. Rev., 19, 329–345.
19 Rinaldi, S.M., Peerenboom, J.P. and Kelly, T.K. (2001) Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst., 21, 11–25.

Author notes: Handling editor: Steven Furnell.

© The British Computer Society 2018. All rights reserved. For permissions, please email: journals.permissions@oup.com
