Monday, July 31, 2017

Operationalizing Cybersecurity

Operationalizing, or implementing, cybersecurity is an ongoing effort that continually evolves and grows. Just as organizations can never fully achieve safety, they can never fully achieve cybersecurity; a well-defined organizational cybersecurity strategy is therefore essential for keeping security goals in view. Board members are becoming increasingly aware of the requirement to implement cybersecurity strategies and of the perils faced by organizations that continue to treat cybersecurity as merely an information technology (IT) problem. These motivations are driving board members to take a more active role in defining the organization's cybersecurity strategy.

Defining a cybersecurity strategy
An organizational cybersecurity strategy is the organization's plan for mitigating security risks to an acceptable level. Understanding the business purpose and mission goals of the organization is the first step in defining a cybersecurity strategy. Board members and business leaders define their expectations for services within the business by establishing operating targets and budgets. When aligned correctly, this information provides insight into critical business functions and helps identify the criticality of the resources supporting those functions. For example, if an organization declares that it is releasing a new product this quarter and all focus is placed on completing that project, the resources supporting the new product's development become critical. Many frameworks, such as ISACA's COBIT 5[1], assist organizations in defining and establishing business priorities.

Translating a cybersecurity strategy into a risk management plan
Once an organization understands its business objectives and aligns resources to those objectives, it can develop a security risk management plan. Security risks are not simply a count of the vulnerabilities detected by a vulnerability scanner; they are the areas within the organization where an acting threat could damage business operations. Many risk assessment processes are available to help organizations define their cybersecurity risks. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)[2] and FAIR[3] are risk assessment methodologies that enable organizations to identify and quantify risk to the business. NIST SP 800-30, Guide for Conducting Risk Assessments[4], helps organizations understand how likely a security risk is to occur and the impact or harm it would cause if it did. Organizations can leverage any of these processes, or a combination of them, to define security risk thresholds and expectations for business operations. These thresholds and expectations become the guidance needed to define a risk management plan, which organizations can then use to create a security risk register.

A security risk register is an artifact that aligns the key threats to the organization's business operations (e.g., natural disasters, accidental insiders, malicious external parties) with the weaknesses those threats could exploit to harm the organization. While an exhaustive risk register may contain hundreds of line items for the different ways threats could impact business operations, most organizations can summarize their threats and weaknesses into twenty to thirty key risk areas. This enables organizations to focus on implementing cybersecurity objectives in the areas where key security risks can be mitigated. The register can then be sorted by risk, quantified using the risk assessment methodology the organization selected.
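To make this concrete, here is a minimal sketch, in Python, of what a quantified risk register might look like. Every threat, weakness, and score below is hypothetical, and the likelihood-times-impact scoring is just one simple convention in the spirit of NIST SP 800-30, not a scale the guide mandates.

```python
# Illustrative sketch of a quantified risk register (all entries hypothetical).
# Risk is scored NIST SP 800-30 style: likelihood x impact, each on a 1-5 scale.

risk_register = [
    {"threat": "malicious external party", "weakness": "unpatched internet-facing servers",
     "likelihood": 4, "impact": 5},
    {"threat": "accidental insider", "weakness": "no data-loss controls on email",
     "likelihood": 3, "impact": 3},
    {"threat": "natural disaster", "weakness": "single data center, no failover",
     "likelihood": 1, "impact": 5},
]

for risk in risk_register:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Sort so the highest-scoring risks surface first for mitigation planning.
for risk in sorted(risk_register, key=lambda r: r["score"], reverse=True):
    print(f'{risk["score"]:>2}  {risk["threat"]}: {risk["weakness"]}')
```

Even a toy register like this makes the prioritization conversation concrete: the top rows are where mitigation effort should go first.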

Operationalizing cybersecurity strategies
The NIST Framework for Improving Critical Infrastructure Cybersecurity (CSF)[5] defines the core activities and outcomes of a cybersecurity program. The CSF Core establishes five functions: Identify, Protect, Detect, Respond, and Recover. Organizations can use these functions to establish the security capabilities required to manage cybersecurity to the acceptable risk level defined in the risk management plan.

Cybersecurity strategies are implemented using people, process, and technology. While technology is a critical component of a cybersecurity program, it cannot be the only element. Similarly, cybersecurity policies are only effective if they are followed: policies that address every security risk accomplish little if staff are not trained and regularly reminded of the policies and of what is expected of them in meeting the policies' requirements. Organizations can implement a holistic cybersecurity strategy by using the CSF to define organizational cybersecurity expectations that mitigate security risks below the thresholds established in the risk management plan and captured in the risk register. The CSF refers to this plan as a Target State Profile. An effective target state profile identifies the types of security policy required within the organization and defines the organizational practices needed to implement them.
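As an illustration of how a target state profile can drive gap analysis, here is a minimal sketch comparing a current profile to a target profile across the five CSF functions. The numeric 0-4 scores are an assumption made for the sketch, not a scale the CSF itself defines.

```python
# Hypothetical sketch: comparing a current CSF profile against a target state
# profile. The 0-4 scores per function are illustrative, not a CSF-defined scale.

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

current = {"Identify": 2, "Protect": 3, "Detect": 1, "Respond": 1, "Recover": 2}
target  = {"Identify": 3, "Protect": 3, "Detect": 3, "Respond": 3, "Recover": 3}

for fn in CSF_FUNCTIONS:
    gap = target[fn] - current[fn]
    flag = "<-- gap" if gap > 0 else ""
    print(f"{fn:<9} current={current[fn]} target={target[fn]} {flag}")
```

The flagged functions are where policy, training, or technology investments are needed to reach the target state.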

Implementing a cybersecurity strategy is an ongoing activity, but it is not an impossible one. Organizations must continually evaluate the ever-changing threat landscape alongside their business objectives. A good cybersecurity strategy aligns with organizational business goals and mission objectives; those goals and objectives form the foundation for a risk management plan that defines acceptable security risk levels. Once the organization's security risks are captured in a risk register, it can determine the appropriate level of security required to operate within those risk levels.

[1] What is COBIT 5?, ISACA.

[2] Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), Software Engineering Institute, Carnegie Mellon University.

[3] FAIR, FAIR Institute.

[4] Guide for Conducting Risk Assessments, National Institute of Standards and Technology (NIST) SP 800-30, September 2012.

[5] Framework for Improving Critical Infrastructure Cybersecurity, NIST, February 2014.

Source: RSA Conference

Posted on July 27, 2017

by Tom Conkle

CISSP, Cybersecurity Engineer, and Commercial Services Lead, G2, Inc.

Thursday, July 28, 2016

"Run-time Cyber Economics – Applying Risk-Adaptive Defenses"

G2, Inc. has been involved with OpenC2 for quite some time. The article below is a great read on active cyber defense and OpenC2.

Run-time Cyber Economics – Applying Risk-Adaptive Defenses

Posted by: CyberSecurityChief

Well, it has been a long break since the last article in this series, but I feel duty-bound to write this third article on cybersecurity investment, since I find the possibilities of a “risk-adaptive” security approach compelling. Generally, cyber defenses must be pre-planned, with cost-benefits carefully weighed before investing in new tools to bolster defenses. A risk-adaptive approach, however, can turn cyber investment into a fluid, service-oriented, run-time decision made using well-understood economic principles. Learn how risk-adaptive defenses can raise the quality of an organization's security posture while reducing capex and opex.

Today’s new defense strategies are focused on hunting and mitigating threats. In general, the threat vectors don’t change greatly; however, malware is being packaged and delivered in ways designed to evade detection and deceive users. Some examples of this malware trend can be found here and here. As a reaction to this trend, cyber protection systems are beginning to move away from static signature-based approaches (Intel’s pending sale of McAfee anti-virus is an example of this lack of faith in static signature-based defenses) toward an integrated proactive model based on a range of narrow-aperture collectors feeding big-data behavioral models that can sense anomalies. These sense-making models output alerts to cyber decision-making systems that produce courses of action (COAs). The COAs are implemented by orchestrators, which synchronize detection mechanisms and instigate mitigation services, such as sending updates to next-gen firewalls or signaling a suite of other software-based protection services. These services are designed to respond quickly and stop attacks or prevent data breaches. This is really good, but how do we balance the investment in these different tools against the risk posture and scale of the organization? That is, how do these tools reduce the cyber value-at-risk at a rate that makes investment in them worthwhile? This question was recently highlighted in a “Security Challenges” market landscape report by SDxCentral. When asked to indicate all of their major security challenges, respondents to the SDxCentral survey identified no single overwhelming problem: 49% cited “Lack of visibility,” followed by “Cost effectiveness of security solutions at scale” at 44%. What was also evident from the survey is that organizations lack common measures to quantify cyber risk, curtailing their ability to make clear strategic decisions about optimal cybersecurity investment levels.
Adaptive Cyber Defenses Can Be Effective in Reducing Attacker Dwell Time and Minimizing Loss to Cyber Intrusions

One aspect of cyber security that is being measured is the cost of breaches. Studies have shown that the cost impact to the business grows exponentially the longer an attack goes undetected. This is where a proactive cyber strategy can help. The benefit of a proactive cyber strategy lies in its ability to drastically reduce the “dwell time” of an attacker on compromised platforms by accelerating the OODA (observe, orient, decide, act) loop. This reduction in dwell time inhibits the attacker’s ability to pivot and move laterally across the network to cause more harm and drive up costs. A successful proactive cyber strategy, however, depends on overcoming four main challenges:

1 – As you might surmise, the sense-making process is often a bottleneck: there is too much data and not enough context provided by the collectors. The sense-making process must enrich the collected data at a rate and level of accuracy (i.e., a low false-positive rate) that matches the cyber threat. One method of enrichment is to use one or more (more is preferred) threat intelligence sources. Cyber threat intelligence providers supply Indicators of Attack (IOAs) and/or Indicators of Compromise (IOCs) that direct the sense-making service on what to look for and where to look. A minimal sketch of this kind of IOC enrichment follows.
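Here is a hedged sketch of IOC-based enrichment during sense-making. The indicator values, the `enrich` function, and the event fields are all hypothetical; the point is only to show events being tagged against a threat-intelligence feed so downstream decision-making gets context instead of raw alerts.

```python
# Sketch of IOC-based enrichment during sense-making (all indicators hypothetical).
# Events whose attributes match threat-intelligence indicators are tagged so the
# decision-making stage can prioritize them instead of raw, context-free alerts.

ioc_feed = {
    "ip": {"203.0.113.7", "198.51.100.23"},        # example C2 addresses (doc range)
    "file_hash": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def enrich(event: dict) -> dict:
    matches = [field for field, bad in ioc_feed.items()
               if event.get(field) in bad]
    event["ioc_matches"] = matches
    event["priority"] = "high" if matches else "low"
    return event

alert = enrich({"ip": "203.0.113.7", "user": "jdoe"})
print(alert["priority"], alert["ioc_matches"])   # high ['ip']
```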

2 – The second major challenge is defining COAs. There are two main issues here:

The first issue is that there is no mutually understood, generally accepted, machine-readable, and shareable language among the different IT organizations and the business side involved in COA development and incident response that would allow all sides to truly connect, perform critical impact and root-cause analysis, make faster and more efficient decisions, implement response strategies, and, ultimately, work with less friction. A standards effort working to help in this area is the OpenC2 COA Standardization WG. The OpenC2 working group, a partnership between NSA, DHS, and industry, is initially focused on defining a language at a level of abstraction that will enable command and control of the cyber defense entities that execute the actions, with enough generality to provide flexibility in device implementations and to accommodate future products. This effort, if successful, should help reduce the upfront investment cost of COA development by defining a common language for COAs and for sharing them. An illustrative command in this spirit is sketched below.
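As a sketch only: the command below is loosely modeled on OpenC2's action/target pattern. The specific field names (`action`, `target`, `args`, `ip_addr`, `duration`) are assumptions for illustration, not the normative OpenC2 schema, which was still being defined when this article was written.

```python
import json

# Illustrative command loosely modeled on OpenC2's action/target pattern.
# Field names here are assumptions for the sketch, not the normative schema.
coa_command = {
    "action": "deny",                       # verb: what to do
    "target": {"ip_addr": "198.51.100.23"}, # object the action applies to
    "args": {"duration": 3600},             # hypothetical modifier: block for 1 hour
}

# A common, machine-readable encoding lets a firewall, an EDR agent, or an
# orchestrator consume the same COA without product-specific translation.
print(json.dumps(coa_command, indent=2))
```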

The second major issue is actually identifying the specific courses of action to perform for a given intrusion set, to enable a given set of services that protect a given mission or business system. This issue requires algorithm development. A variety of mathematical theories can be used to model and analyze cybersecurity: resource-allocation problems in network security can be formulated as optimization problems; in dynamic systems, control theory helps formulate the system's dynamic behavior; and game theory provides rich mathematical tools and techniques for expressing security problems. This DTIC report highlights some of the issues and an approach to COA algorithm development. A toy optimization sketch follows.
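To ground the optimization framing, here is a toy formulation of COA selection as a knapsack-style resource-allocation problem: choose mitigations that maximize estimated risk reduction within an operational cost budget. The candidate COAs, their scores, and the greedy heuristic are all assumptions for illustration; a real model would draw its numbers from the organization's risk quantification.

```python
# Toy formulation of COA selection as a resource-allocation (knapsack-style)
# optimization: pick mitigations that maximize estimated risk reduction within
# an operational cost budget. All values are hypothetical.

candidate_coas = [
    ("block C2 IP range",        {"risk_reduction": 8, "cost": 1}),
    ("isolate infected host",    {"risk_reduction": 9, "cost": 3}),
    ("force password resets",    {"risk_reduction": 5, "cost": 4}),
    ("rebuild compromised host", {"risk_reduction": 9, "cost": 7}),
]

def select_coas(candidates, budget):
    """Greedy approximation: best risk reduction per unit cost first."""
    chosen, spent = [], 0
    ranked = sorted(candidates,
                    key=lambda c: c[1]["risk_reduction"] / c[1]["cost"],
                    reverse=True)
    for name, props in ranked:
        if spent + props["cost"] <= budget:
            chosen.append(name)
            spent += props["cost"]
    return chosen

print(select_coas(candidate_coas, budget=5))
# ['block C2 IP range', 'isolate infected host']
```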

3 – The third challenge involves integrating the tools needed to automate the COAs. This challenge is being addressed through community efforts led by NSA, DHS, and the Johns Hopkins Applied Physics Lab. In addition, work on OASIS' STIX provides COA, Incident, Threat, and other related schemas that can be leveraged by tools seeking interoperability across threat data, COAs, and cyber impact.

4 – The fourth major challenge involves culture change: overcoming a lack of confidence in automating cyber-response decision-making. Generally, most organizations will insist on having a human in the loop of the COA decision-making process until confidence in the COA algorithms is well established. A vetting process will also be essential before sharing COAs or accepting shared COAs from other organizations.
Adaptive Defenses Require OODA Loops at Each System [and Business] Layer

As pointed out by Emami-Taba et al., a holistic approach is necessary when implementing adaptive defenses. Depending on the architecture layer, the source of the data to be monitored differs, as do the adaptive cyber decision-making and responses. For example, to detect a cyber attack at the network level, the data monitored can be packet data, network traffic, and so on. Intrusion detection systems are a cyber detection, decision-making, and response mechanism at the network layer, and they can take adaptive actions such as intensifying monitoring efforts when malicious behavior is detected. Likewise, at the application layer, a cyber attack can be detected from various data sources: the system can monitor, for example, the number of transactions by a specific user or a user's access rights to a particular piece of sensitive data. An adaptive access control system may prevent access to the data if the user's behavior appears abnormal. COAs should therefore not be limited to actions in only one layer of the systems being defended, but should cover the system stack from top to bottom. This holistic approach mirrors advances made by tool vendors in their newer approach to malware: security companies are aiming lower in the system stack, essentially running their software in a position where it can observe all activity on the device; examples include Tanium and Bromium. However, it is also important to connect adaptive defenses to the business layer so that mission dependencies can be evaluated, business disruption can be assessed, the value-at-risk can be determined, and appropriate risk mitigation action can be taken. In other words: what is the risk impact to the business of a particular attack and COA response? The answer to this question is what the respondents to the SDxCentral survey were searching for.
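The "intensify monitoring when malicious behavior is detected" adaptation above can be sketched very simply. The layers, baseline rates, and thresholds below are hypothetical; the sketch only shows a per-layer observe-decide-act loop raising its sampling rate as anomaly evidence accumulates.

```python
# Sketch of layer-adaptive monitoring: each layer runs its own observe-decide-act
# loop, and its sampling rate adapts to an anomaly score. All numbers hypothetical.

LAYERS = {"network": 0.2, "application": 0.1}   # baseline sampling fraction

def adapt_sampling(layer: str, anomaly_score: float) -> float:
    """Raise the sampling rate as anomalous behavior accumulates at a layer."""
    base = LAYERS[layer]
    if anomaly_score > 0.8:
        return 1.0                  # full capture while under suspected attack
    if anomaly_score > 0.5:
        return min(1.0, base * 4)   # heightened scrutiny
    return base

print(adapt_sampling("network", 0.65))   # 0.8
```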
Risk-Adaptive Defenses Relate Protections to an Economic Model

“Risk-adaptive” defenses can help provide visibility and governance to cyber defenses, since they can quantify risk and manage the allocation of protections using an economic model. One example of a risk-adaptive approach is Fuzzy MLS, an access control model that, in a limited context, can be used to quantify the risk associated with information access. The ability to quantify risk makes it possible to treat the risk an organization is willing to take as a limited and countable resource. This enables the use of a variety of economic principles to manage the allocation of that resource (risk), with the goal of achieving optimal utilization, i.e., allocating risk in a manner that optimizes the risk-versus-benefit trade-off.

According to Pau-Chen Cheng, an author of Fuzzy MLS, the problem is “the fact that when a security administrator creates the [access control] policy, she is guessing and codifying what risk-benefit trade-offs will be acceptable for information accesses that will happen in the future. Clearly, for an organization with dynamic needs the future risk-benefit trade-offs are not predictable and the guesses made about future risk-benefit trade-offs, encoded in the security policy are likely to be in conflict with the real risk-benefit trade-offs at the time of access.”

The main feature of Fuzzy MLS is that it treats access control as an exercise in risk management, where access control decisions are made on the basis of risk, risk tolerance, and risk mitigation, with risk having its usual connotation of expected damage. Viewed in terms of risk, setting a traditional access control policy amounts to determining a fixed trade-off between the risk of leaking sensitive information and the organization's need to provide that information to its employees so they can perform their jobs. This fixed trade-off sets up a non-adaptive, binary access control decision model in which accesses have been pre-classified as having either acceptable or unacceptable risk, and only the accesses with acceptable risk are allowed. Fuzzy MLS devises a way to compute an estimate of the risk associated with an access by quantifying the “gap” between the subject's value and the object's value. With these quantified estimates, a risk scale can be built such that each access is associated with a point on the scale. With such a scale, the access control model can be made risk-adaptive by adjusting the point of trade-off on the scale as needs and the environment change. Fuzzy MLS goes one step further by expanding this point of trade-off into a region on the scale. An access associated with a point below the lower bound of the region (also called the soft boundary) is allowed; an access associated with a point above the upper bound of the region (the hard boundary) is denied. The region is further divided into bands of risk such that each band is associated with one or more risk mitigation measures or courses of action. An access located in a band is allowed only if the mitigation measures or courses of action associated with that band can be applied to the access. The Fuzzy MLS model thus depicts a risk management system that resembles a fuzzy control system, hence the name “Fuzzy MLS.”
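A minimal sketch of this soft-boundary/hard-boundary decision model follows. The risk computation here (a simple clearance-versus-sensitivity gap) and all boundary and band values are assumptions for illustration; they are not Fuzzy MLS's actual risk formula.

```python
# Minimal sketch of the risk-adaptive decision model described above: risk is
# estimated from the gap between subject clearance and object sensitivity, then
# compared against a soft boundary (allow), a hard boundary (deny), and
# mitigation bands in between. All numbers are illustrative, not Fuzzy MLS's
# actual risk formula.

SOFT_BOUNDARY = 3.0    # below this: allow outright
HARD_BOUNDARY = 8.0    # at or above this: deny outright
MITIGATION_BANDS = [   # (upper edge of band, required mitigation)
    (5.0, "log and alert on the access"),
    (8.0, "require step-up authentication and supervisor approval"),
]

def decide(subject_clearance: float, object_sensitivity: float):
    risk = max(0.0, object_sensitivity - subject_clearance)  # stand-in for the "gap"
    if risk < SOFT_BOUNDARY:
        return "allow", None
    if risk >= HARD_BOUNDARY:
        return "deny", None
    for upper, mitigation in MITIGATION_BANDS:
        if risk < upper:
            return "allow-with-mitigation", mitigation
    return "deny", None

print(decide(2.0, 6.5))   # ('allow-with-mitigation', 'log and alert on the access')
```

Adjusting the boundaries at run time is what makes the model risk-adaptive: as needs and environment change, the same access can move from allowed to mitigated to denied without rewriting the policy.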

One of the keys to a risk-adaptive system such as Fuzzy MLS is coming up with values that can be used to quantify the subject/object gaps and perform risk trade-offs. I suggest that two types of values be assigned to cover two scenarios: 1) for access control scenarios, the value of an asset to the organization could be applied; and 2) for attack scenarios, the value of an asset to an attacker could be applied. In the former case, the value of an asset can be established through well-known processes such as business impact analysis, where the required levels of confidentiality, integrity, and availability can be ascertained and translated into economic terms. In the latter case, a different approach is needed, as a seemingly worthless piece of data from an organizational perspective might be extremely valuable to an attacker profiling and phishing a target. This attacker view of asset value requires intelligence about how attackers rate and target your assets.

The perishability, or shelf life, of an asset should also be determined. In this way, protections can be adaptively changed over the time span in which the value of the asset changes; e.g., M&A plans have a relatively short shelf life, and access protections need to increase around those plans during that period. In the same regard, a vulnerability can be considered an asset, from an attacker's perspective, that has a shelf life. In contrast, detections against weaponizer artifacts are defender assets with a very durable shelf life.
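One way to sketch this idea: let asset value decay over its shelf life and tie the protection level to the current value. The exponential decay curve, thresholds, and protection tiers below are assumptions made for illustration only.

```python
# Sketch of adjusting protection level as an asset's value decays over its
# shelf life (e.g., M&A plans losing sensitivity after the deal closes).
# The decay model and thresholds are assumptions for illustration.

import math

def asset_value(base_value: float, days_elapsed: float, shelf_life_days: float) -> float:
    """Value decays toward zero as the asset ages past its shelf life."""
    return base_value * math.exp(-days_elapsed / shelf_life_days)

def protection_level(value: float) -> str:
    if value > 75:
        return "strict (step-up auth, tight ACLs)"
    if value > 25:
        return "elevated (extra logging)"
    return "baseline"

for day in (0, 30, 120):
    v = asset_value(100.0, day, shelf_life_days=60)
    print(day, round(v, 1), protection_level(v))
```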

To conclude, risk-adaptive cyber defenses can help prioritize and select courses of action by instantiating economic principles that reflect the true mission impact of a mitigation or remediation action.

Source: July 26, 2016

Wednesday, January 27, 2016

G2 invited to speak at the RSA Conference 2016

G2 Inc. was invited to speak at the RSA Conference 2016 for the second year in a row. G2 is most recently known as the prime contractor that supported NIST in developing the Cybersecurity Framework (CSF). G2 cybersecurity engineers Greg Witte and Tom Conkle will present on G2's experiences helping customers across the nation use the CSF.

The session, “Effectively Measuring Cybersecurity Improvement: A CSF Use Case,” highlights G2's experience helping customers use the CSF to achieve measurable, continuous improvement. The session harnesses the momentum established by the 2014 release of the CSF, as organizations work with G2 to leverage the Framework in developing and maintaining effective cybersecurity program outcomes.

This session explains how G2 engineers helped an organization use the CSF as a common language for communicating program goals and activities among stakeholders. Through quantitative measurement of current and planned activities (including references to standards such as ISO 27002), aligned with senior executives' priorities, G2 helped clearly articulate the gaps between the organization's existing cybersecurity program and the one needed to achieve its risk objectives.

The talk will illustrate how Board members were able to quickly understand the identified security deficiencies, enabling resource and planning discussions.

Join G2 in West Room 2014 on Thursday, March 3, 2016 at 9:10am PT to learn more about the CSF case study.

Monday, July 20, 2015

G2 attends IAS Symposium

G2 Inc., an Annapolis Junction, Maryland-based cyber solutions and services organization, recently participated in the Information Assurance Symposium (IAS), sponsored primarily by the Intelligence Community.

This well-attended, timely symposium was hosted at the Washington Convention Center. G2's technical display at the symposium included a live demonstration of a cloud-adaptable, open-source, standards-based Identity and Access Management (IdAM) technology called Enhanced OpenAM. Enhanced OpenAM is built on open-source ForgeRock technology, is DoD/IC ready, is easily configurable, and accommodates pluggable attribute repositories. Based on multiple current DoD implementations, the G2 Enhanced OpenAM demonstration generated significant interest at IAS. More information can be found in the video below.

The video below is from the IRM Summit 2014, where Daniel Stroud, CISSP-ISSAP, MCSE, Identity and Access Management Capabilities Lead at G2, Inc., delivered a presentation on enabling dynamic eCitizen systems in environments with distributed PKI administration, Lightweight Directory Access Protocol (LDAP) directories, and distributed attribute repositories using OpenAM. The session included background on the initial case study and a demonstration.

Wednesday, March 11, 2015

Two G2 experts published by ISACA

We couldn't be more proud of Tom Conkle and Greg Witte, two of G2's subject matter experts on NIST's Cybersecurity Framework.

G2 partnered with ISACA, and Tom and Greg's work was recently published in the publication below.


If you would like a copy for your reference, the guide can be purchased here.

Congratulations to Tom and Greg on their accomplishments, and to the whole G2 Federal and Commercial practice!

Monday, January 26, 2015

Greg Witte To Share His COBIT 5 Knowledge

Connect with fellow IT and business leaders at ISACA's first-of-its-kind COBIT Conference. In addition to creating value for enterprises, the COBIT 5 framework can significantly mitigate risk, as you will learn in the invaluable session “Cybersecurity and COBIT,” where you can leverage the insights of G2's Greg Witte to:

  • Better understand COBIT 5 principles and how they apply to the cybersecurity landscape
  • Learn how COBIT 5 enablers work with common security frameworks
  • Integrate cybersecurity into enterprise risk management strategy using COBIT

Wednesday, January 21, 2015

G2 selected to speak at the 2015 RSA Conference

G2 received word this week that Tom Conkle has been accepted as a speaker at the RSA Conference 2015 in San Francisco. Below is part of Tom's acceptance letter:

"The quality and quantity of submissions for RSA Conference 2015 were at an all-time high, making the selection process extremely competitive.

We are delighted to inform you that your session has been accepted to be part of the RSA Conference 2015 agenda, taking place April 20 - 24, 2015, at The Moscone Center in San Francisco, California. Your session is a valuable contribution to help make this year’s agenda one of the best ever!

Short Abstract: The Cybersecurity Framework (CSF) establishes a common language for describing cybersecurity activities. It is anticipated that in 2015, if voluntary adoption of the framework is not sufficient, industry-specific regulators will leverage the CSF as part of their regulatory oversight processes. This session provides an overview of the CSF and the benefits organizations receive from aligning to it."

We are very proud of Tom and the whole Federal and Commercial Practice here at G2.

Stay tuned, we'll post a video after the event.