A Process for Measuring Information Security Risk (June 2002)


Andrew T. ROBINSON


Abstract—This paper presents the RSK process for measuring information security risk.  RSK supplements existing security testing practices.  Security testing has become part of the standard of care for any organization that conducts business on the Internet.  Security testing identifies vulnerabilities in an information system that may allow a threat agent to gain unauthorized access to the system or to compromise the confidentiality or availability of its information.  Security tests are an excellent tool for improving the security of the target organization.  One weakness of current security testing practices is that there is no easy way to compare the results of two security tests, even when both are performed by the same testing agency.  This makes it difficult to assess the information security risk of the organization as a whole, or to compare that risk to the risk of other organizations (within or across industries).  RSK addresses this weakness by providing a standardized, algorithmic method for measuring the information security risk of an information system as a positive integer.  Larger RSK measurements indicate greater risk to the testing domain.  RSK measurements have objective meaning, are repeatable, and can be compared even between organizations in different industries.

 

Index Terms—Information Security, Security Testing, Penetration Testing, Vulnerability Analysis, Measuring Information Security Risk, Risk Assessment

I.     INTRODUCTION

Security testing has become part of the standard of care for any organization that conducts business on the Internet.  Many jurisdictions have passed legislation requiring security testing for certain industries or classes of organizations.  In the United States, regulated industries such as banking and healthcare are required by law to test the security of their information systems periodically.  A security test is any process or method, performed by an authorized testing agency, that identifies information security vulnerabilities in an information system.  Such vulnerabilities increase the risk that a threat agent will compromise the information system.  Vulnerabilities arise from design errors, software errors (“bugs”), configuration errors, or violations of best current practices.

Currently, there is no standardized method for performing security tests, primarily because the field is still evolving so quickly that any restrictive standard would run the risk of rapid technological obsolescence.

Current security testing practices are adequate for identifying vulnerabilities in information systems.  An organization that follows the remedial actions recommended by the testing agency may dramatically decrease its information security risk.  However, current security testing practices do not allow direct comparison of the results of two or more security tests, especially if those tests are carried out against two different testing domains or by two different testing agencies.  One may read both security test reports and write another report to summarize the differences.  This is a labor-intensive task that requires significant information security expertise, and does not simplify the problem of identifying reductions in risk due to remedial measures.

The ability to quickly and easily compare the results of security tests is desirable for many reasons, including prioritizing remedial recommendations, identifying reductions in risk after applying remedial recommendations, and comparing your organization’s risk to that of other organizations, whether in your industry or in general.  RSK provides this ability by defining a standardized, algorithmic method for calculating positive integer risk measurements for various elements of the testing domain.  These risk measurements are calculated from information obtained during security test intelligence gathering.  RSK measurements convey their meaning clearly to expert and non-expert audiences, are repeatable, and may be directly compared.

II.       Background

    RSK grew out of a desire to quantify the results of security tests.  By 1998, net/main infoSecurity Solutions (NMI)—like other security firms—had hit on the idea of assigning a numeric value between one (1) and ten (10) to each vulnerability identified during security testing.  This vulnerability score was assigned based on the tester’s subjective impression of the severity of the vulnerability.  The vulnerability score made it easier to prioritize remedial actions, but still did not address the issues of determining cost-effectiveness of remedial actions, or of comparing the risk levels of one organization to that of another.

    In late 1999, Andrew T. Robinson of NMI began experimenting with methods for combining the individual vulnerability scores into a single value that would indicate the overall risk level for the testing domain.  Working with Ramon Krikken of NMI, he developed the Robinson/Krikken Infosecurity Scale (RKIS).  RKIS was a simple average of vulnerability scores with one significant digit retained.  RKIS was applied to several sets of earlier security test results to prove the concept.  One obvious problem was that a preponderance of low-risk vulnerabilities would result in a testing domain score that seemed to indicate low risk, even though the testing domain contained one or more extreme-risk vulnerabilities.  For example, nine vulnerabilities scored 1 and a single vulnerability scored 10 average to 1.9, which reads as low risk despite the extreme-risk finding.

    In 2000, Robinson, Krikken, and Eugene V. Slobodzian of NMI collaborated to develop a more sophisticated model for RKIS.  The initials of the three collaborators provided the new name for the process, RSK (Robinson/Slobodzian/Krikken), which is also evocative of what the process was trying to quantify—information security risk.  Whereas RKIS was a vulnerability scoring system, RSK was characterized as a measurement process because the results of RSK are based on observed characteristics of the testing domain, rather than the subjective knowledge of the testing agency.  The mathematical model of RSK has gone through several evolutions.  The principal investigators for the RSK project are currently Andrew T. Robinson, Ramon Krikken, Daniel S. Rivers, and Eugene V. Slobodzian, all consultants at NMI.

III.     Requirements for Risk Measurements

Risk is not a physical quantity that can be directly measured.  For the purposes of this paper, we define risk measurement as an algorithmic approximation of risk based on the objectively observable characteristics of a testing domain.  In order to be useful, risk measurements must have the following three (3) characteristics:

A.     Objective meaning

Risk measurements must have meaning that is immediately apparent to readers, regardless of their level of expertise in information security.  The interpretation of risk measurements should not require substantial supporting verbiage.  The measurements themselves should be generated algorithmically and must be independent of any specialized or “insider” knowledge possessed by the testing agency.

B.     Repeatability

Any two (2) RSK measurements taken of the same testing domain under identical testing conditions must be identical.  This must be true even if two different testing agencies take the measurements.

C.     Comparability

Any two (2) RSK measurements must be directly comparable, and the comparison must have meaning.  This must be true whether the measurements are of the same organization, two organizations within an industry, or two organizations in different industries.

IV.     Terminology

The following terms are used throughout this document:

A.     Compromise (of an information system)

An information system is compromised when a) a threat agent gains unauthorized access to components of the information system, b) a threat agent is able to intercept and/or modify confidential and proprietary information handled or transmitted by the information system, or c) a threat agent is able to deny access to legitimate users of the information system.

B.     Dependent vulnerability

A dependent vulnerability is a vulnerability that cannot be exploited unless a related, primary vulnerability is first exploited.  A dependent vulnerability need not exist on the same host, or even on the same horizon, as the primary vulnerability.  A dependent vulnerability by definition cannot be detected with non-invasive means unless it always correlates with a specific primary vulnerability.

C.     Exploit

An exploit is a specific procedure for using a vulnerability to compromise an information system.

D.     Information system

A set of information processing resources, including but not limited to computers, software, network facilities and infrastructure, and the data that resides on or is transmitted or received by those resources.

E.     Primary vulnerability

A primary vulnerability is a vulnerability that can be directly exploited by a threat without compromising any other host or service.  RSK measures only primary vulnerabilities.

F.     Risk (or information security risk)

Risk is the degree to which an information system is vulnerable to compromise by a threat agent.

G.     Security testing

Security testing includes any process or method by which the information security of an information system is evaluated. 

H.     Testing agency

A testing agency is an entity, authorized by the owner or responsible authority for the information system, to evaluate the information security of a testing domain.  The testing agency does not have to be an external organization unless required by law or regulation, but the testing agency should not be involved in the implementation or management of the information system.

I.     Testing domain

    The testing domain is the subset of an information system that is to be subjected to security testing by the testing agency.

J.     Threat

A threat is any entity, other than a testing agency, that attempts to compromise an information system, whether or not the intent of the compromise or attempted compromise is malicious. 

K.     Vulnerability

A vulnerability is any problem or condition that increases the risk to an information system, including software errors, configuration errors, best current practice violations, and poor operational or management practices.

V.     Description of the RSK Process

RSK is a process for measuring the information security risk of a testing domain.  RSK takes a testing domain specification as input, and produces as output a positive integer representing the information security risk of that testing domain. RSK defines the synthetic unit RU (risk unit) for all RSK measurements.  A larger RSK measurement indicates greater risk.

A.     Intelligence gathering

The first phase of the RSK process is intelligence gathering.  Visible hosts are enumerated, and the visible services on each host are enumerated.  Implementation names and versions are collected for each visible service.   In RSK, the host operating system is treated as a single service.
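
As an illustration of the kind of non-invasive collection this phase relies on (a minimal sketch, not the testing agency's actual tooling), a TCP banner grab can often recover an implementation name and version:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and return its greeting banner, if any.
    Many services (SMTP, FTP, SSH, some HTTP servers) announce an
    implementation name and version in this banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode("ascii", errors="replace").strip()
        except socket.timeout:
            return ""  # connection accepted, but no banner volunteered
```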

B.     Vulnerability correlation

Information from the intelligence-gathering phase is correlated with known vulnerabilities contained in the RSK vulnerability database or VDB.  Each vulnerability defined in the VDB has an associated vulnerability measurement. The vulnerability measurement may vary from vmin RU (the service is visible with no known vulnerabilities) to vmax RU (the service is vulnerable to a remote compromise with escalated privileges).
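
Correlation then reduces to a lookup from observed (service, version) pairs to measurements.  A minimal sketch, with entirely hypothetical VDB entries and the vmin convention above:

```python
# Hypothetical VDB entries: (implementation name, version) -> measurement in RU.
# The real VDB contents and schema are maintained by the testing agency.
VDB = {
    ("wu-ftpd", "2.6.0"): 100,  # e.g., remotely exploitable with root privileges
    ("apache", "1.3.12"): 40,   # e.g., read-only file system access
}

V_MIN = 1  # service is visible but has no known vulnerabilities

def correlate(service: str, version: str) -> int:
    """Return the vulnerability measurement for an observed service."""
    return VDB.get((service, version), V_MIN)
```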

C.     Measurement calculation

Vulnerability measurements are combined algorithmically to form composite measurements for hosts and the network horizon on which the hosts are visible.  This is described in more detail in Testing Domain Organization, below.

VI.     Testing Domain Organization

RSK defines a hierarchical organization for the testing domain.  The RSK measurement for each level in the hierarchy is a composite measurement that is calculated from the RSK measurements of the components below it in the hierarchy. 

Lower-level components are referred to as the subcomponents of higher-level components.  For example, hosts are subcomponents of horizons, and vulnerabilities are subcomponents of hosts.

A.     Horizon

A horizon is a set of hosts accessible from a specific testing origin.  There may be any number of horizons in the testing domain.  For example, the Internet horizon is the set of hosts in the testing domain that are accessible from the Internet. A horizon specification consists of one or more host address specifications.  Host address specifications may be lists of individual host addresses or ranges of host addresses. 
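
A horizon specification might be expanded as in the following sketch; the paper does not prescribe an address-range syntax, so CIDR notation is assumed here:

```python
import ipaddress

def expand_horizon(specs: list[str]) -> list[str]:
    """Expand a horizon specification into the host addresses to test."""
    hosts: list[str] = []
    for spec in specs:
        if "/" in spec:  # a range, written here in (assumed) CIDR notation
            hosts.extend(str(ip) for ip in ipaddress.ip_network(spec).hosts())
        else:            # an individual host address
            hosts.append(spec)
    return hosts

# expand_horizon(["10.0.0.1", "10.0.0.8/30"])
#   -> ["10.0.0.1", "10.0.0.9", "10.0.0.10"]
```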

A horizon measurement is a composite measurement calculated from all of the host measurements in the horizon.

B.     Host

A host is the set of services associated with a unique host address (e.g., 10.0.0.1).  RSK does not attempt to detect multiple host addresses associated with the same physical hardware.  There may be any number of hosts in a horizon.

A host measurement is a composite measurement calculated from the vulnerability measurements for services running on that host.

C.     Service

A service is a network-accessible application identified by a unique host address (e.g. 10.0.0.1), a protocol name (e.g., “tcp”), and an application name or number (e.g., “telnet”).  For the purposes of RSK, the host operating system is treated as a single service.  Services are organizational aids during intelligence gathering and vulnerability correlation and are not measured by RSK.

D.     Vulnerability

RSK detects a subset of vulnerabilities that a) are associated with a host, and b) can be identified based on the visible properties of that host and the services running on that host.  There may be any number of vulnerabilities associated with a host.

A vulnerability measurement is a fixed positive integer value between vmin and vmax that is obtained from the RSK vulnerability database (VDB).  RSK only detects primary vulnerabilities.  If a dependent vulnerability always correlates to a primary vulnerability, the severity of the dependent vulnerability must be considered when assigning the vulnerability measurement for the primary vulnerability.

VII.     Assumptions

Risk is not a physical quantity and cannot be measured directly.  In order to develop a process for arriving at a meaningful approximation of risk that conforms to the requirements for risk measurement described above, RSK is subject to the following assumptions:

A.     Visible properties assumption (VPA)

Risk measurements should be based on visible properties of the testing domain, and not on specialized or “insider” knowledge of the testing domain.  Visible properties are properties that can be enumerated using automated methods without human intervention.

B.     Low impact assumption (LIA)

The process of taking risk measurements should not disrupt the testing domain or compromise the information security of the testing domain.  RSK measurements are based on the non-invasive techniques of intelligence gathering and vulnerability correlation.

C.     Weakest link risk assumption (WLRA)

Overall risk is greater than or equal to the risk associated with the most severe vulnerability in the testing domain.  In other words, the risk level of a system is always greater than or equal to the risk level of the “weakest link.”

D.     Multiple vulnerabilities risk assumption (MVRA)

Risk increases as the number of vulnerabilities increases, but the rate of increase is conditioned by the diminishing effect assumption (DEA).  While the minimum risk level is based on the most severe vulnerability (WLRA), the existence of multiple vulnerabilities results in a net increase in risk, providing multiple threat vectors and a greater opportunity to exploit dependent vulnerabilities (see below).

E.     Horizon vulnerability distribution assumption (HVDA)

Given a set Vmax of n total vulnerabilities in a horizon, the risk of a horizon in which those n vulnerabilities are distributed across n vulnerable hosts is greater than the risk of a horizon in which one (1) host has all n vulnerabilities.

F.     Diminishing effect assumption (DEA)

Although multiple vulnerabilities have an impact on the risk measurement, the most severe vulnerability predominates (WLRA).  Each additional vulnerability contributes progressively less to the measurement than the vulnerability before it.

G.     Complexity risk assumption (CRA)

Risk increases with the complexity of the testing domain, but much more slowly than the increase due to multiple vulnerabilities.  More complex testing domains increase the probability that one or more components will develop an active vulnerability, or that interactions between components will produce dependent vulnerabilities.

H.     Effective intelligence gathering assumption (EIGA)

The intelligence gathering techniques used are assumed to be as effective as technologically possible at enumerating hosts and services, and at identifying the implementation names and versions of active services.

I.     Authoritative vulnerability database assumption (AVDBA)

The vulnerability database (VDB) must be authoritative, and is assumed to contain information on all documented, active vulnerabilities that may be present in a testing domain.

VIII.     Model For Assigning Vulnerability Measurements

Vulnerability measurements are positive integers ranging from vmin to vmax, with vmax indicating extreme risk.  Currently, these values are assigned as follows:


vmin = 1

vmax = 100

 

This range was chosen because it has intuitive meaning for most audiences, but the behavior of the RSK mathematical model is not dependent on this range.

Vulnerability measurements are assigned by the testing agency based on the attributes of the vulnerability, and stored in the VDB.  To reduce the subjective component of this assignment, we originally devised the class-based model for vulnerability measurements.  We defined a series of vulnerability classes based on the attributes of vulnerabilities that would fit into each class.  Each vulnerability class had an associated base vulnerability measurement between vmin and vmax, based on the nature and severity of the vulnerability.  New vulnerabilities were assigned to the most closely corresponding vulnerability class, and assumed that vulnerability class’ base vulnerability measurement.  The disadvantages of the class-based model are a limited number of base vulnerability measurements, and the difficulty of mapping new vulnerabilities to a single vulnerability class.

The model for vulnerability measurement currently under development is the attribute-based model.  This model consists of a set of vulnerability attributes and numeric values associated with each possible state of each vulnerability attribute.  Taken together, these attributes describe the nature and severity of the vulnerability, and permit a wide range of vulnerabilities to be concisely described.  The vulnerability measurement of a new vulnerability is the product of the values of each attribute as it applies to the new vulnerability, scaled to the range of vmin to vmax.  The challenge of the attribute-based model is to define the relationships between the vulnerability attributes in such a way that the vulnerability measurement for a new vulnerability consistently corresponds to the nature and severity of the vulnerability.
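
A minimal sketch of the attribute-based idea, with entirely hypothetical attributes and weights (the actual attribute set, states, and values under development at NMI are not specified in this paper):

```python
# Hypothetical attribute tables; the real ones are not published here.
ACCESS  = {"remote": 5.0, "local": 2.0}
IMPACT  = {"administrative": 4.0, "user": 2.5, "read_only": 1.5, "dos": 1.0}
EXPLOIT = {"public": 1.0, "theoretical": 0.5}

V_MIN, V_MAX = 1, 100
RAW_MAX = 5.0 * 4.0 * 1.0  # largest attainable attribute product

def attribute_measurement(access: str, impact: str, exploit: str) -> int:
    """Vulnerability measurement as the product of attribute values,
    scaled into the range [V_MIN, V_MAX]."""
    raw = ACCESS[access] * IMPACT[impact] * EXPLOIT[exploit]
    return max(V_MIN, round(V_MAX * raw / RAW_MAX))

# attribute_measurement("remote", "administrative", "public") -> 100
```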

IX.     Mathematical Model for Host Measurements 

Host measurements are composite measurements that are calculated from vulnerability measurements associated with a host.  Let Vi be the decreasing ordered set of all vulnerability measurements v associated with host i:

$$V_i = \{\, v_{i,0},\ v_{i,1},\ \ldots,\ v_{i,n-1} \,\}, \qquad v_{i,0} \ge v_{i,1} \ge \cdots \ge v_{i,n-1}$$

The host measurement hi is calculated as follows.  The weakest link risk assumption (WLRA) requires that hi be at least equal to the worst vulnerability measurement vi,0.  The multiple vulnerabilities risk assumption (MVRA) contributes any amount in excess of vi,0, but the contribution of each successive vulnerability decreases as specified by the diminishing effect assumption (DEA).  The rate of this decrease is controlled by the parameter a, which also imposes a maximum on the value of the host measurement.  This results in the composite measurement function f(), defined as follows:

$$h_i = f(V_i) = \sum_{j=0}^{n-1} \frac{v_{i,j}}{a^j}, \qquad a > 1$$
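
A minimal executable sketch of the composite function in the form shown above (the geometric decay is inferred from WLRA, MVRA, and DEA, not taken verbatim from the original paper):

```python
def composite(measurements, a=3):
    """Composite measurement f(): the worst value counts in full (WLRA),
    and each successive value is attenuated by a further factor of a
    (MVRA, DEA).  For a > 1 the sum is bounded by v0 * a / (a - 1),
    so the parameter also imposes a maximum on the host measurement.
    NOTE: the decay form is inferred, not the paper's verbatim f()."""
    ordered = sorted(measurements, reverse=True)  # the decreasing ordered set Vi
    return sum(v / a ** j for j, v in enumerate(ordered))

# composite([50, 40, 1]) == 50 + 40/3 + 1/9, about 63.44
```

Under this reading, a = 3 and vmax = 100 cap any single host measurement at 150 RU; combined with the bound g() ≤ 1 + b below, no horizon measurement could exceed 180 RU for b = 1/5, which is at least consistent with the values reported in Table II.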

X.     Mathematical Model for Horizon Measurements

Horizon measurements are composite measurements that are calculated from all host measurements contained in the horizon.  Let H be the decreasing ordered set of all host measurements in the horizon:

$$H = \{\, h_0,\ h_1,\ \ldots,\ h_{|H|-1} \,\}, \qquad h_0 \ge h_1 \ge \cdots \ge h_{|H|-1}$$

WLRA, MVRA, and DEA apply to horizon measurements, but there is an additional constraint imposed by the horizon vulnerability distribution assumption (HVDA).  To ensure that the horizon measurement r complies with the HVDA, we construct a set Vmax that is the decreasing ordered set of all vulnerability measurements associated with all hosts in the horizon.  This set represents the worst case, in which all the vulnerabilities in the horizon are present on one host.

$$V_{\max} = \mathrm{sort}_{\downarrow}\!\left( \bigcup_{i} V_i \right), \qquad v_0 \ge v_1 \ge \cdots$$

To ensure that the HVDA holds, we define a horizon distribution function g() that quantifies the distribution of vulnerabilities across all hosts in the horizon.  We define g() such that the result must be between 1 and 1+b.  The parameter b controls the maximum impact of the HVDA on the horizon measurement, and imposes a maximum value on the horizon measurement.  The standard distance of all host measurements from zero (0) was chosen as the basis for g() as follows:

$$SD(H) = \sqrt{\ \frac{1}{|H|} \sum_{i=0}^{|H|-1} h_i^2\ }$$

The worst-case value of the standard distance occurs when H consists of |H| occurrences of the worst host measurement h0, in which case SD(H) = h0.  Therefore:

$$g(H) = 1 + b \cdot \frac{SD(H)}{h_0}$$

Finally, the horizon measurement r is calculated as the product of the composite measurement function applied to Vmax and the horizon distribution function g() applied to H:

$$r = \left\lceil\, f(V_{\max}) \cdot g(H) \,\right\rceil$$
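
Continuing the sketch (with the same caveat that the forms of f() and g() are reconstructions, not the paper's verbatim definitions), the horizon calculation composes the two functions:

```python
import math

def distribution(host_measurements, b=0.2):
    """Horizon distribution function g(): one plus b times the standard
    distance of the host measurements from zero, normalized by the worst
    host measurement h0.  The standard distance never exceeds h0, so g()
    always lies in the interval (1, 1 + b]."""
    h = sorted(host_measurements, reverse=True)
    sd = math.sqrt(sum(x * x for x in h) / len(h))
    return 1 + b * sd / h[0]

def horizon_measurement(vuln_vectors, a=3, b=0.2):
    """Horizon measurement r = ceil(f(Vmax) * g(H))."""
    hosts = [composite(vec, a) for vec in vuln_vectors]   # host measurements H
    v_max = [v for vec in vuln_vectors for v in vec]      # pooled worst case Vmax
    return math.ceil(composite(v_max, a) * distribution(hosts, b))
```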

XI.     Problems with the RSK Approach

There are several actual and potential problems with the RSK approach to risk measurement.  There are exceptions to many of the assumptions described above. 

A.     Visible properties assumption limitations

Risk cannot be completely quantified using only visible properties. There may be properties affecting the risk level of the testing domain that cannot be detected using non-invasive intelligence gathering techniques.  Such properties include the physical security of the information system, the presence of dependent vulnerabilities, etc.

B.     Low impact assumption limitations

A vulnerability cannot be proven to exist without attempting the exploits correlated with that vulnerability.  However, attempting correlated exploits may result in performance degradation, denial of service, or information security compromise of the testing domain.

C.     Effective intelligence gathering assumption limitations

If the RSK intelligence-gathering phase is unable to enumerate all the hosts and services in the testing domain, or identify implementation names and versions for services, the accuracy of the risk measurement will be reduced.  For example, if a service does not provide version information on its “banner,” vulnerability correlation may not be performed effectively.

D.     Authoritative vulnerability database assumption limitations

If the vulnerability database does not contain information for a vulnerability, the authoritative vulnerability database assumption will fail.  The accuracy of the risk measurement will be reduced. 

As with virus signatures and intrusion detection signatures, there will always be a time lag between the publication of a vulnerability and its implementation in the vulnerability database.  Furthermore, there is (to date) no widely accepted format for vulnerability reporting or any agreement on what level of information should be reported and to whom.

XII.     Conformance to Risk Measurement Requirements

RSK conforms to the risk measurement requirements described above as follows:

A.     Objective meaning

RSK conforms to the objective meaning requirement.  RSK measurements are calculated from visible properties of the testing domain that can be enumerated using automated methods.  This eliminates the impact of specialized or “insider” knowledge on the RSK measurement.  The RSK measurement itself has a well-defined meaning, where a larger measurement indicates higher risk.

One area where subjective interpretation enters the RSK calculations is in the assignment of base vulnerability measurements in the vulnerability database.  These assignments, while not entirely arbitrary, are based on the discretion of the person who enters the data.  To address this issue, a rigorous process needs to be defined for establishing base vulnerability measurements in the vulnerability database.

B.     Repeatability

RSK conforms to the repeatability requirement, provided that EIGA and AVDBA apply.  The RSK process for calculating composite measurements is well defined and repeatable.  The RSK process for intelligence gathering and vulnerability correlation is not fully defined, and the vulnerability database (which must be authoritative) is evolving as experience with RSK grows. 

A more rigorous set of specifications for intelligence gathering and vulnerability correlation is required, but such rigorous specifications run the risk of technological obsolescence.  Specifications must be defined for results, rather than for the technical process used to arrive at those results.

C.     Comparability

RSK conforms to the comparability requirement. Earlier mathematical models for RSK suffered from a scaling anomaly that made the comparison of measurements of large networks and small networks problematic.  The current mathematical model eliminates this anomaly and permits the comparison of networks of any size. 

XIII.     Summary Case Studies

The following sections present eleven (11) case studies in which RSK was used to measure the information security risk of testing domains in several industries: banking, public utilities, financial services, Internet service provision (ISP), and application service provision (ASP).

The following table summarizes the meaning assigned to the RSK measurement values (all values in RUs):

 

TABLE I
Relative Risk Levels

Risk Level    Lower Bound    Upper Bound
Low                     0             24
Moderate               25             49
High                   50             74
Very high              75             99
Extreme               100              +
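
A measurement can be mapped to the labels in Table I mechanically, as in this small helper:

```python
def risk_level(ru: int) -> str:
    """Map an RSK measurement in RUs to its Table I risk level."""
    if ru <= 24:
        return "Low"
    if ru <= 49:
        return "Moderate"
    if ru <= 74:
        return "High"
    if ru <= 99:
        return "Very high"
    return "Extreme"
```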

 

Table II summarizes the results of these case studies.  The meaning of the column names is defined in the Method section, below (all RSK measurements expressed in RUs).

 

TABLE II
SUMMARY OF RSK CASE STUDIES

 #   Industry          IM   WVR   Hosts
 1   Banking           80    43      23
 2   Banking          123    53      23
 3   Banking          164    68      29
 4   Public Utility    65    33      10
 5   Banking          171     2       7
 6   Banking          158    72      28
 7   Banking           78    44       5
 8   Financial        165    84     228
 9   ASP              161    77      27
10   Banking          159    78     102
11   ISP              160    84     825

 

A.     Method

First, an RSK measurement of the testing domain was performed for each case study.  This is the initial measurement, or IM, column in the table above.

Second, a penetration test was performed on the testing domain by a security consultant.  Based on this penetration test, a set of remedial recommendations was developed and presented to the customer. 

Third, the result set of the initial RSK measurement (IM) was modified to remove all of the worst-case vulnerabilities, and the RSK measurement process was re-run on the modified data.  This produced the worst vulnerabilities remedied, or WVR, column.

The HOSTS column in the table above is the number of hosts in the testing domain at the time of the initial measurement (IM).

The presentation of each case study provides a brief description of the customer and testing domain, the results of the initial measurement, a summary of the vulnerabilities identified by penetration testing, the resulting theoretical minimum measurement, and the results of the follow-up measurement (if applicable).
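
The WVR data set can be reproduced mechanically with the sketches from Sections IX and X.  Here "worst-case vulnerabilities" is interpreted as every vulnerability tied for the highest measurement in the horizon, which is an assumption about the selection rule rather than the paper's stated procedure:

```python
def worst_vulns_remedied(vuln_vectors, a=3, b=0.2):
    """Re-measure the horizon after deleting every vulnerability tied
    for the worst measurement.  A host left with no findings keeps the
    visible-but-clean measurement of 1 RU (vmin)."""
    worst = max(v for vec in vuln_vectors for v in vec)
    remedied = [[v for v in vec if v != worst] or [1] for vec in vuln_vectors]
    return horizon_measurement(remedied, a, b)
```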

B.     Case Study 1: Bank

The testing domain was the bank’s Internet point of presence, consisting of twenty-three (23) hosts.  The initial RSK measurement was 80 RU (very high risk).

The penetration test identified three (3) hosts with remote access vulnerabilities, one of which was the firewall, and one (1) host with a denial of service vulnerability.  The remote access vulnerabilities allowed the testing agency to access the vulnerable systems interactively and, given enough time, would probably have resulted in privilege escalation.  These findings are consistent with the initial RSK assessment of very high risk.

By correcting the worst vulnerabilities (the three (3) remote access vulnerabilities), the RSK measurement could be reduced to 43 RU (moderate risk).

C.     Case Study 2: Bank

The testing domain was the bank’s Internet point of presence, consisting of twenty-three (23) hosts.  The initial RSK measurement was 123 RU (extreme risk). 

A penetration test identified a remote administrative access vulnerability on one host and a remote access vulnerability on another server.  No vulnerabilities were identified in the remaining hosts in the testing domain.  The administrative access vulnerability provided complete access to and control of confidential bank and customer information (account numbers, credit card numbers, etc.).  These findings are consistent with the initial RSK assessment of extreme risk.

By correcting the worst vulnerability (the remote administrative vulnerability), the RSK measurement could be reduced to 53 RU (high risk).

D.     Case Study 3: Bank

The testing domain was the bank’s Internet point of presence, consisting of twenty-nine (29) hosts.  The initial RSK measurement was 164 RU (extreme risk).

Penetration testing identified six (6) hosts with remote administrative vulnerabilities, nine (9) hosts with read-only file system access vulnerabilities, and two (2) hosts with termination of service vulnerabilities (some hosts had more than one vulnerability).  The remote administrative vulnerabilities provided access to and control of confidential bank and customer information (account numbers, credit card numbers, etc.).  These findings are consistent with the initial RSK assessment of extreme risk.

By correcting the worst vulnerabilities (the six (6) remote administrative vulnerabilities), the RSK measurement could be reduced to 68 RU (very high risk). 

E.     Case Study 4: Public Utility

The testing domain was the utility’s Internet point of presence, consisting of ten (10) hosts.  The initial RSK measurement was 65 RU (high risk).

Penetration testing identified one (1) host with a remote access vulnerability and one (1) host with a read-only file system access vulnerability.  Neither vulnerability permitted administrative access to the testing domain, but the remote access vulnerability would have provided a base for privilege escalation attempts, and the read-only file system access vulnerability provided access to non-public information.  Based on this analysis, the “high risk” interpretation of the RSK measurement is justified.

    By correcting the worst vulnerability (the remote access vulnerability), the RSK measurement could be reduced to 33 RU (moderate risk).

F.     Case Study 5: Bank

The testing domain was the bank’s Internet presence, consisting of seven (7) hosts.  The initial RSK measurement was 171 RU (extreme risk).

Penetration testing identified four (4) hosts with no password required for administrative access.  Any user who attempted to log in to the vulnerable servers as “root” with no password would gain unlimited administrative access to those systems, permitting further attacks against the internal network.  Based on this analysis, the “extreme risk” interpretation of the RSK measurement is justified.

    By correcting the worst vulnerabilities (the four (4) remote administrative vulnerabilities), the RSK measurement could be reduced to 7 RU (low risk). 

G.     Case Study 6: Bank

The testing domain was a subset of the bank’s entire network, consisting of twenty-eight (28) hosts.  The initial RSK measurement was 158 RU (extreme risk).

 Penetration testing identified three (3) hosts with a total of five (5) remote administrative vulnerabilities, one (1) host with a remote access vulnerability, and two (2) hosts with termination-of-service vulnerabilities.  These vulnerabilities provided complete access to and control of confidential bank and customer information.  This level of vulnerability is consistent with the initial RSK assessment of extreme risk.

By correcting the worst vulnerabilities (the five (5) administrative access vulnerabilities), the RSK measurement could be reduced to 72 RU (high risk).

H.     Case Study 7: Bank

The case study was performed on a bank.  The testing domain was the bank’s Internet presence, consisting of five (5) hosts.  The initial RSK measurement was 78 RU (very high risk).

Penetration testing identified one (1) remote access vulnerability in one (1) host, and three (3) local administrative vulnerabilities in three (3) hosts.  The remote access vulnerability and one of the local administrative vulnerabilities were present on the same host, which could allow a threat agent to compromise the server and escalate to administrative privileges.  Based on this analysis, the “very high risk” interpretation of the RSK measurement is justified.

    By correcting the worst vulnerability (the remote access vulnerability), the RSK measurement could be reduced to 44 RU (moderate risk). 

I.     Case Study 8: Financial Services Company

The testing domain was the company’s internal network, consisting of 228 hosts.  The initial RSK measurement was 165 RU (extreme risk).

Penetration testing identified one hundred sixty (160) remote administrative vulnerabilities in sixty-three (63) hosts, twenty-seven (27) remote access vulnerabilities in seventeen (17) hosts, two hundred thirty (230) local administrative vulnerabilities in one hundred fifty (150) hosts, and two hundred twenty (220) read-only file system access vulnerabilities in one hundred thirteen (113) hosts.  This level of compromise would have permitted the testing agency to achieve complete control of the target network.  Based on this analysis, the “extreme risk” interpretation of the RSK measurement is justified.

    By correcting the worst vulnerabilities (the one hundred sixty (160) remote administrative vulnerabilities), the RSK measurement could be reduced to 84 RU (very high risk).

J.     Case Study 9: Application Service Provider

The testing domain was a subset of the company’s ASP network consisting of twenty-seven (27) hosts.  The initial RSK measurement was 161 RU (extreme risk).

Penetration testing identified six (6) remote administrative vulnerabilities on four (4) hosts, three (3) local administrative vulnerabilities on three (3) hosts, and thirteen (13) read-only file system access vulnerabilities on eleven (11) hosts.  Two (2) of the remote administrative vulnerabilities detected by RSK could not be verified by penetration testing because the target version of Linux was running on an IBM ESA9000, an architecture for which no exploit had been developed.  The remaining vulnerabilities were verified, and justify the “extreme risk” interpretation of the RSK measurement.

By correcting the worst vulnerabilities (the six (6) remote administrative vulnerabilities), the RSK measurement could be reduced to 77 RU (very high risk).

K.     Case Study 10: Bank

The case study was performed on the bank’s internal network, consisting of one hundred two (102) hosts.  The initial RSK measurement was 159 RU (extreme risk).

Penetration testing identified seventeen (17) remote administrative vulnerabilities on three (3) hosts, one (1) remote access vulnerability on one (1) host, eighty-seven (87) read-only file system access vulnerabilities on eighty-three (83) hosts, and three (3) local access vulnerabilities on two (2) hosts.  These vulnerabilities would have been sufficient for the testing agency to take over control of the internal network, and gain unlimited access to and control of confidential bank and customer information.  This analysis justifies the “extreme risk” interpretation of the initial RSK measurement.

By correcting the worst vulnerabilities (the seventeen (17) remote administrative vulnerabilities), the RSK measurement could be reduced to 78 RU (very high risk).

L.     Case Study 11: Internet Service Provider

The case study was performed on an Internet service provider.  The testing domain was the provider’s Internet-connected infrastructure, consisting of eight hundred twenty-five (825) hosts.  The initial RSK measurement was 160 RU (extreme risk).

Penetration testing identified twenty-four (24) remote administrative vulnerabilities on twenty-three (23) hosts, twelve (12) remote access vulnerabilities on twelve (12) hosts, eight hundred thirty-nine (839) local administrative vulnerabilities on four hundred eighty-nine (489) hosts, and eleven (11) read-only file system access vulnerabilities on eleven (11) hosts.

All twelve (12) of the remote access vulnerabilities existed on systems that also had local administrative vulnerabilities, which would permit a threat agent to escalate privileges.  The vulnerabilities identified would allow a threat agent to gain control of the ISP’s infrastructure, eavesdrop on traffic, or redirect traffic at will.  This analysis justifies the “extreme risk” interpretation of the initial RSK measurement.

    By correcting the worst vulnerabilities (the twenty-four (24) remote administrative vulnerabilities), the RSK measurement could be reduced to 84 RU (very high risk).

XIV.     Detailed Case Study

The summary case studies demonstrate the value of the RSK horizon measurement as an indicator of information security risk for a variety of testing domains.  Of these, I have selected case study 7 for a detailed analysis of the RSK process.  For the purposes of this case study, parameter a is chosen as 3 and parameter b is chosen as 1/5.

Intelligence gathering identified five (5) hosts with the following vulnerability vectors:

 

 

The “1” values are visible hosts that have no known vulnerabilities, the “40” values are read-only file system access vulnerabilities, and the “50” value is a remote access vulnerability that allows a threat agent to gain access to the host in a non-administrative context.

Using the composite measurement function to obtain the vector of host measurements H, and sorting H in descending numeric order, we obtain:

 

 

The next step is to construct Vmax as if all vulnerabilities existed on a single host, and calculate f(Vmax):

 

 

Now calculate the value of the horizon distribution function g(H):

        

 

    Finally, multiply f(Vmax) and g(H) to calculate the horizon measurement r.  In all cases, the ceiling of the resulting measurement is taken to keep the displayed results in the integer domain:

$$r = \left\lceil\, f(V_{\max}) \cdot g(H) \,\right\rceil = 78 \text{ RU}$$
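
To make the mechanics concrete, the sketched pipeline from Sections IX and X can be run end to end.  The vectors below are one hypothetical assignment of the 1, 40, and 50 values described above, not the paper's actual data, and the sketched f() and g() are reconstructed forms, so the computed r need not reproduce the reported 78 RU:

```python
# Illustrative only: one way to distribute the 1/40/50 values over five
# hosts.  The paper's actual vectors (and exact f/g forms) may differ,
# so the computed r will not necessarily match the reported 78 RU.
vectors = [[50, 1], [40, 1], [40, 1], [40, 1], [1]]

H = sorted((composite(vec) for vec in vectors), reverse=True)  # host measurements
r = horizon_measurement(vectors, a=3, b=1/5)                   # ceil(f(Vmax) * g(H))
print(H, r)
```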

XV.     Conclusions

RSK was intended to supplement existing security testing methodologies and to provide an indicator of information security risk that could be easily understood by a wide range of audiences.  Based on the case studies described in this document, I draw the following conclusions:

RSK accurately assesses the risk level of arbitrary testing domains.  This conclusion holds for all case studies.  In case study 9, vulnerabilities were detected that could not be verified, but there were enough other vulnerabilities to minimize the impact of this error.  However, it is possible for RSK to fail to detect vulnerabilities or to report the existence of non-existent vulnerabilities due to failures of EIGA or AVDBA.

RSK measurements reflect the effects of remedial actions.  This conclusion is supported by all case studies when the theoretical “worst vulnerabilities remedied” data set is processed.  Note that remedial actions only have a substantial impact on the RSK measurement if all or most of the vulnerabilities at a certain risk level are eliminated.

RSK measurements of different testing domains may be compared, even if they belong to different industries.  This is supported by all case studies.  The meaning of the different relative risk levels applies to all measurements presented, regardless of the size of the testing domain.

For greater accuracy, the low impact assumption and the visible properties assumption should be relaxed.  As illustrated in case study 9, a vulnerability may be correlated from gathered intelligence yet prove impossible to verify.  This may result in RSK measurements in excess of the actual risk level of the testing domain.  This failure mode is preferable to missing vulnerabilities, but permitting minimal-impact verification of vulnerabilities, or allowing the testing agency to “unselect” a vulnerability that does not really exist, would improve the accuracy of the RSK measurement.

RSK would benefit from a well-defined vulnerability database with wide accessibility and contributions.  There are many knowledge bases that contain information on vulnerabilities, from mailing lists to newsgroups to companies that specialize in maintaining such information.  The RSK vulnerability database is defined and maintained by net/main infoSecurity Solutions using these sources and others.  An industry-wide vulnerability database would greatly enhance the accuracy and reliability of RSK and of security testing tools in general.  Such a database should have a low cost of access, well-defined characteristics, the ability to accept contributions in real time (or close to it) from sources with established credentials, and the ability for applications (again, restricted to organizations with established credentials) to connect to the database and obtain information from it.



© Copyright 2001, 2002 by net/main infoSecurity Solutions, all rights reserved. This document is protected by United States and international copyright law.  Unauthorized reproduction or distribution of this document by any means may result in criminal or civil legal action.

 

Andrew T. Robinson is the President of net/main infoSecurity Solutions, 145 Newbury Street Second Floor, Portland, Maine 04101 USA (telephone: 207-780-6381, e-mail: atr@nmi.net).