Security Controls Model
The Secure Arc Reference Architecture logical model is adapted from the Logical Model of IT Security Controls (page 232, Figure 7-2) in the book Security Metrics: Replacing Fear, Uncertainty, and Doubt, combined with the Common Vulnerability Scoring System (CVSS) specification. The high-level data model in the figure to the left represents the basic relationships between Exposures, Threats and Countermeasures.
The goal is to enable complete end-to-end traceability between Information Assets and the decisions behind the Security Controls that are employed to protect them.
The different domains of the Security Controls Model have different active stages. In the case of the Exposures domain, the Asset Definition and Classification sub-domains are relatively static. You define all of your Information Asset types once, classify them to determine their Value, and from that point on they rarely need to change for your organization. The Vulnerabilities within the Exposures box are really the only part of the Exposures domain that requires active updates and maintenance.
The Threats box is all about what is currently being exploited and actively attacked. The Countermeasure Controls are specifically design-time decisions and alternatives; however, these will also be updated in light of newly identified Vulnerabilities. The Metrics within the Exposures domain should be maintained constantly.
You can click on each area of the diagram above to go to the detailed description of each one. The first step is to define all of the Information Assets in the solution.
At its highest level, the role of a Security Architect is to identify vulnerabilities that may expose an asset to attack, present a number of alternative solutions to the business and make an informed decision on which Security Controls to put in place. These controls may come in a variety of forms, from a segmented, fire-walled infrastructure design down to the cryptographic controls applied to individual connections. Controls may also describe how the entry points to a system are secured using role-based access controls.
Threats
The Threats box primarily represents the tracking of realized attacks.
Exposures
The Exposures box encapsulates the Vulnerabilities that can be exploited on Infrastructure Assets, which in turn expose the Information Assets that they process, transmit or store and are inherently supposed to protect. Each Information Asset has a value to the owning organization, and therefore there is an Impact to the organization if one of those Vulnerabilities is exploited. In a somewhat over-simplified one-liner, the value of the exposed Information Assets is the driver for the selection of the Countermeasures required to protect them.
The Exposures domain is broken down into the following areas: Asset Definition, Asset Classification and Values, Vulnerabilities and Asset Impact.
Everything begins with the Asset Definition phase, so this is a good place to begin.
Vulnerabilities
Part of the assessment of any vulnerability includes its Remediation Level. In the case of a software bug in an off-the-shelf middleware product there may be either a Workaround or an Official Fix. If there is, the Vulnerability itself can be eliminated altogether. Doing so seems like a no-brainer, but for various reasons patching servers is not always a simple or cost-effective solution. Without an appropriate patch management process in place, upgrading a production server can be a long and costly endeavor. In other cases, there may simply be no fix available.
Security Controls
There are four categories of Security Controls that can be applied as Countermeasures:
- Preventative Controls
- Corrective Controls
- Detective Controls
- Deterrent Controls
To quickly put these into context and provide some examples, the following paragraph is taken from the Official (ISC)2® Guide to the CISSP®-ISSEP® CBK®:
Unauthorized access can be prevented with locks, smartcards, passcodes, biometrics and mantraps. Intruders can be deterred with fences and contraband checks (metal detectors or x-ray). Detection of unauthorized access can be performed with CCTV, motion detectors or infrared sensors.
While required, at present we are not detailing Recovery and Compensating controls in relation to the asset Vulnerabilities and their assessments.
Assessment
While each of the different categories of Security Controls above has different goals and addresses different parts of the CVSS assessment of the target Vulnerability, they all have the cost assessment in common.
To keep things as simple as possible, we only deal with three variables when considering the cost of a Security Control:
- Time
- Operational Expenses (OPEX)
- Capital Expenses (CAPEX)
In large corporations, the first one is often the most 'costly' irrespective of the dollar values associated with the CAPEX and OPEX assessments, particularly when launch deadlines are drawing near. The time should be based on how many hours, days or months it will take to deliver, in whatever unit of measurement is appropriate for the project or system.
OPEX needs to take into account how much the Security Control will cost to implement and maintain, typically over the next 5 to 7 years. This will include any ongoing support costs paid to a vendor.
CAPEX is simply the upfront cost of the software and/or hardware.
To help come up with the OPEX implementation cost, just take the Time and multiply it by the rate and number of personnel required to deliver it. For the ongoing costs, make an estimate on how much time per week or month is required to dedicate to it and again, multiply that by the rate and number of personnel required to maintain it.
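As a rough sketch of that arithmetic (the rates, headcounts and durations are made-up placeholders, not recommendations):

```python
# Rough OPEX estimate for a single Security Control, as described above.
# All figures are hypothetical placeholders.

HOURLY_RATE = 120                 # blended hourly rate per person
IMPLEMENTATION_HOURS = 160        # estimated effort to deliver the control
IMPLEMENTERS = 2                  # people needed to deliver it

MAINTENANCE_HOURS_PER_MONTH = 8   # ongoing upkeep per month
MAINTAINERS = 1
SUPPORT_CONTRACT_PER_YEAR = 5000  # vendor support, if any
YEARS = 5                         # typical 5 to 7 year horizon

implementation_cost = IMPLEMENTATION_HOURS * HOURLY_RATE * IMPLEMENTERS
ongoing_cost = (MAINTENANCE_HOURS_PER_MONTH * 12 * YEARS * HOURLY_RATE * MAINTAINERS
                + SUPPORT_CONTRACT_PER_YEAR * YEARS)

opex_estimate = implementation_cost + ongoing_cost
print(f"OPEX over {YEARS} years: ${opex_estimate:,.0f}")
```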
How these are actually broken up will depend entirely on the project, the company and where the Security Controls come from. If everything is vendor supplied and implemented, it may be all one contract and all considered CAPEX. If the whole thing is outsourced to something like Google's Security Services then the whole thing, including ongoing maintenance, may be classified as OPEX.
Countermeasures
The purpose of the Countermeasures is to protect the Information Assets. This can be as simple as putting a Web Server (Infrastructure Asset) behind a firewall to reduce the likelihood of an attack on a potential vulnerability. Doing this doesn't eliminate the Threat, as it is still exposed to administrative staff or other compromised Infrastructure Assets within the same network segment.
Similarly, a network monitoring tool, such as Snort, can be configured to detect Threats and potentially initiate another Countermeasure that decreases the Impact of an attack after it has begun, such as blacklisting the source IP of the attack.
[Diagram: Secure Arc Reference Architecture logical model. A Vulnerability, when exploited, results in an Asset Impact; each Asset has a Value. A Preventative Control eliminates a Vulnerability or reduces the likelihood of it being exploited, a Detective Control discovers an attack and triggers a Corrective Control, and a Corrective Control decreases the resulting Impact. A Deterrent Control reduces the likelihood of exploitation.]
Asset Definition
Everything ties back to Asset management and classification. If you don't know what it is that you're protecting and how valuable it is, you can't know or justify how much you should spend on Security Controls.
Determining the value of an Information Asset is not a trivial task. Our approach attempts to address this by making the process as quantitative as possible.
Asset Types
Infrastructure Assets
Put simply, Infrastructure Assets refer to the individual nodes displaying, transferring, processing and storing the Information Assets in a system. When drawing up the Security Architecture for a solution, each logical representation of a server that is placed on the architectural diagram has a one-to-one relationship with an Infrastructure Asset.
Information Assets
Information Assets are less tangible. They refer to the categories of data that pass through and are stored in the system. These should be considered in relation to Value at Risk, Regulatory, Reputation and Mission classifications (largely sourced from NIST 800-30).
If you need to protect a type of data for any of the above reasons, then it should be defined as an Information Asset. A good approach is to focus on the regulatory compliance needs and the ability to identify the types of assets that have regulatory constraints on them.
The definition of the assets themselves does not include their classification and valuation. That comes in the next step.
Asset Definition
When defining an individual asset, you need to identify the following:
- Who owns it, both from a management and implied business unit perspective
- What other assets of the same type it is dependent on
- What information assets are stored on it (or what it's stored on if it's an Information Asset) and how many
- What information assets pass through it (or what it passes through if it's an Information Asset) and how many
The quantities of information assets persistently and transiently stored are very important later on. These are very quantitative values and easy to come by. Each organization should know just how many Credit Card Numbers they store and how many Customers they have.
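As a sketch of the kind of record this step produces (the field names and figures are illustrative only, not part of the reference architecture; the CRM example assets are borrowed from the Environmental Score walkthrough later in this document):

```python
from dataclasses import dataclass, field

@dataclass
class AssetDefinition:
    """Minimal record of an asset definition, per the checklist above."""
    name: str
    owner: str                                              # management owner
    business_unit: str                                      # implied business unit
    depends_on: list[str] = field(default_factory=list)     # other assets it depends on
    stores: dict[str, int] = field(default_factory=dict)    # Information Asset -> count stored
    transmits: dict[str, int] = field(default_factory=dict) # Information Asset -> count passing through

crm_db = AssetDefinition(
    name="CRM Database",
    owner="Head of Customer Operations",
    business_unit="Sales",
    depends_on=["CRM Application Server"],
    stores={"Personal Data": 300_000},
    transmits={"Personal Data": 300_000},
)
```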
Classification
Asset Classification is a long, drawn-out, complicated task, which is why it is important to identify Information Asset types as opposed to walking through a classification exercise for each element on an Object graph in a Data Model. Stick with types or you'll end up with a never-ending task that begins from scratch on each project.
If you can come up with the appropriate Information Assets at the beginning, you should be able to classify them once and then review them once every year or so.
Classification itself is based on a combination of Magnitude of Impact Definitions table in the NIST 800-30 standard, the ISO17799/27001 and the Collateral Damage Potential ratings from the Environmental Score definitions of the CVSS spec. More on this in the Vulnerability section.
To arrive at the values in the following tables, you need to run through a variation of the following questions for each cell.
- If someone who shouldn't be able to see the Information Asset is able to see 54,9993,556 of them, it may result in the highly costly loss of major tangible assets or resources. Total losses, lost revenue and damage control may exceed $53,893,685.
- If someone who shouldn't be able to change the Information Asset is able to change 54,9993,556 of them, it may result in the highly costly loss of major tangible assets or resources. Total losses, lost revenue and damage control may exceed $53,893,685.
- If someone who should be able to access the Information Asset is unable to access 54,9993,556 of them, it may result in the highly costly loss of major tangible assets or resources. Total losses, lost revenue and damage control may exceed $53,893,685.
The example above is for the Value at Risk, Critical Impact cell in the table below.
Asset Classification is an organization specific process and the values for one may be significantly different from another.
Inferred Value
The inferred Asset Unit Value isn't particularly useful on its own, but can provide a useful comparison to the overall value of each individual Information Asset. The Inferred Asset Value, as the name implies, is derived from the classification values assigned to it. This is simply a matter of getting the average unit value for each impact level and then taking the median across each of those four averages.
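A minimal sketch of that calculation (the unit values are invented placeholders; the real inputs come from your own classification tables):

```python
from statistics import mean, median

# Hypothetical unit values ($ per record) taken from a classification table,
# grouped by impact level for a single Information Asset.
unit_values_by_impact = {
    "Low":      [0.50, 1.00],
    "Medium":   [5.00, 8.00],
    "High":     [20.00, 30.00],
    "Critical": [100.00, 150.00],
}

# Average unit value for each impact level, then the median across those 4 averages.
averages = [mean(values) for values in unit_values_by_impact.values()]
inferred_asset_unit_value = median(averages)
print(f"Inferred Asset Unit Value: ${inferred_asset_unit_value:.2f}")
```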
As such, one of the first tasks of Information Asset Classification is to determine what the Security Requirements are. The values that can be selected for each Security Requirement come directly from the Environmental Score definitions of the CVSS spec.
If needed, the Asset Classifications can be revisited to adjust the Inferred Value relative to each other. For example, if Trade Secrets turn out to be worth far less individually than is known to be the case, the Architect can revisit the Asset Classifications to re-adjust the values to a more suitable combination.
Asset Value
Later, when a Vulnerability in an Infrastructure Asset is being assessed, the Security Requirements for the Information Assets that would be impacted by the exploitation of that Vulnerability will have a direct impact on the overall CVSS score for the vulnerability. If the Vulnerability only results in a breach of Confidentiality on Information Assets that have no or low Confidentiality requirements then the overall CVSS score will be lower.
Security Requirements
Any Vulnerability in an Infrastructure Asset will ultimately be classified as a breach of one or all of the Core Principles of Security in the CIA Triad:
- Confidentiality
- Integrity and
- Availability
Depending on which one of these any given Information Asset is exposed to, the Impact can alter considerably. For example, PCI regulatory compliance is focused almost solely on the Confidentiality of Credit Card data. While a breach of the other two may result in a significant impact to the organization, they are not as important to regulatory compliance.
As mentioned in the Asset Definition section, everything ties back to Asset Classification. If you don't know what it is that you're protecting and how valuable it is, you can't know or justify how much you should spend on Security Controls.
While there will always need to be some subjective estimates involved in valuing Information Assets, the approach taken in the Secure Arc Reference Architecture is intended to make the process as quantitative as possible.
Overall Risk Profile
The final piece of data that can be inferred from the Asset Classification data is the overall risk profile for the organization. By taking the cumulative asset classification values across all Information Assets and then taking the median of each value, you can determine what this particular organisation considers Critical, High, Medium and Low risk from a lost revenue perspective across all of its Information Assets.
As with the Inferred Values, this can be used to feed back into the asset classifications as well. This chart should represent the losses an organization is willing to accept and if it doesn't accurately reflect how the company feels about those numbers, then again, the Asset Classifications should be revisited until the overall Risk Profile fits the organization.
Next Steps
At this point, for each Information Asset in the organisation, we know what is considered a Low, Medium, High and Critical loss to the company from both a volume lost and a revenue lost perspective. The former is going to lead directly into the assessment of individual Vulnerabilities in the next section.
For the remaining cells, refer to the descriptions in the following assessment table and repeat for each Security CIA Requirement.
The key behind this is that it is much easier to make decisions based on quantitative volumes of Information Assets than to pluck a dollar figure out of the air. Every Architect should be well aware of the volume of each type of data in the system or enterprise, especially as this was identified during the Asset Definition phase, and once we have a number we are in a much better position to determine what the dollar impact will be.
If you lose 5 million credit card numbers, that is 5 million customers that need to be contacted. It's 5 million separate people that will need their credit cards replaced with a cost likely to be passed on to your organization by their banks. It's likely to be a fixed fine or increased merchant rates with your credit card merchant. Estimating the dollar impact becomes much easier when you have a quantity to work with.
Access Complexity
The Access Complexity provides an indication as to how difficult it is to get to the vulnerable system and exploit the exposure. For example, if a web server accessible directly from the internet has an unpatched system service with a known vulnerability, it may require the attacker to get shell access with a valid username, password or private key before the vulnerability can be exploited. If all it takes is a large or malicious payload in a URL in a web browser, the Access Complexity will be Low.
The available selections for the Access Complexity are defined in the table to the right.
Integrity Impact
This is a measure of how many or how much of an organization's Information Assets may be modified by unauthorized users as a result of this vulnerability being exploited.
Taking the SQL injection example again, the Integrity Impact resulting from the vulnerability being exploited may take the form of one of the following scenarios:
The injected SQL query will be executed such that
- any and all tables and their data can be modified or destroyed
- only a specific subset of tables can be modified or destroyed by the injected SQL
- the vulnerability does not allow any data to be modified or destroyed as a result of a successful SQL injection
The available selections for Integrity Impact are defined in the table to the right.
Collateral Damage Potential
The Collateral Damage Potential is all about what the impact to the organization is if the vulnerability is exploited. To distinguish this from the Base Score Exploitability and Impact, this is entirely related to what it means to this particular organisation and environment. The Base Score will indicate how easily exploitable it is and what kind of actions a successful attacker can perform, however if the target Infrastructure Asset is not responsible for anything of value, then the severity of the Base Score needs to be weighted down.
At this point we know what Information Assets are exposed and we know how many of them. We have already performed the Information Asset Classification that indicates what the rating is for that volume of those specific Information Assets being exposed and so we can just map the appropriate rating from the Classification table to the Collateral Damage Potential.
Where multiple Information Assets are potentially exposed, the Collateral Damage Potential takes the rating from the highest rated Information Asset.
The CVSS spec explicitly defines the available ratings for the Collateral Damage Potential, however as we are deriving them from the Asset Classification phase we will not have to make any selections here.
Report Confidence
Often vulnerabilities are reported in the media, but aren't confirmed until some time later. Depending on how trusted the source of the vulnerability report is, this will impact the score. As with the other Temporal Metrics, this value may change over time as the reported vulnerability is assessed further and confirmed or denied.
The available selections for Report Confidence are defined in the table to the right.
Target Distribution
The Target Distribution indicates how many of the servers in a given environment are exposed by this vulnerability. At the beginning of the Environmental Score assessment phase, we have already associated the vulnerability with one or more Infrastructure Assets and back at the Asset Definition phase we identified all of the Infrastructure Assets. At this point a simple division will give us the percentage of impacted servers, however we also have dependency information between Infrastructure Assets, so we can expand that to include servers that are indirectly exposed to the vulnerability as well.
Once that is performed, we have a simple percentage of Infrastructure Assets that are exposed to or by this vulnerability and we can then simply slot that into the CVSS definition ratings based on the following table:
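The rating bands come from the CVSS v2 definition of Target Distribution; as a rough sketch of that mapping (verify the thresholds against the spec):

```python
def target_distribution(exposed_hosts: int, total_hosts: int) -> str:
    """Map the proportion of exposed Infrastructure Assets to a CVSS v2
    Target Distribution rating (None, Low, Medium, High)."""
    percentage = 100 * exposed_hosts / total_hosts
    if percentage == 0:
        return "None"      # 0% of systems are at risk
    if percentage <= 25:
        return "Low"       # 1% - 25%
    if percentage <= 75:
        return "Medium"    # 26% - 75%
    return "High"          # 76% - 100%

# e.g. 3 directly or indirectly exposed servers out of 20 in the environment
print(target_distribution(3, 20))  # -> "Low"
```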
Temporal Score Example
As an example, you have a system that has been up and running for some time and has millions of registered users. It handles and stores a reasonable amount of their personal details and credit card numbers. This system is served up by the Apache web server, version 2.0.17.
Vulnerability Definition
Once the assessment of a vulnerability is completed, we will have some concrete values to work from in order to make decisions on what should or shouldn't be fixed, how and when.
The diagrams to the right represent the CVSS metric selections for a specific Vulnerability. Note that the Environmental Score selections are grayed out as they are derived from the associated Assets rather than selected directly. The diagram below shows the resulting CVSS scores and Vectors.
You'll also notice that the Vulnerability Definition has a reference to a Defect ID. This is so that each Vulnerability can be mapped into your issue tracking system to allow it to be managed alongside all other defects. In many cases, security issues, particularly those in infrastructure, are left on a separate 'security to do list' that is not subject to the same kinds of rigors and processes as a typical defect management process.
Up Next
Up to this point we have only had to deal with quantitative numbers that can be easily tracked down, such as how many particular pieces of data are stored in a database, and from this we have derived the Environmental Impact, in line with the CVSS spec, of this particular vulnerability. What we don't have yet is how much it is going to cost. This is always the hard part because it is so difficult to map a potential exposure to potential lost revenue.
The Impact to the organization in business terms is handled in the next section.
Vulnerabilities
Common Vulnerability Scoring System (CVSS)
Please refer to the CVSS Guide for the full CVSS specification, including the formulas used to calculate the scores. This section is intended to provide a relatively high-level overview of CVSS and explain how it fits in with the rest of the Secure Arc Reference Architecture. The content from the CVSS Guide that is replicated below is identified as such and used in accordance with the completely free and open standard position put forward by its custodians.
CVSS is broken down into the following three separate assessment areas, each one contributing different parts of the overall score:
- Base Score
- Temporal Score
- Environmental Score
Exploitability
The Exploitability section asks for a rating of three key areas:
- Access Vector
- Access Complexity
- Authentication
With many Vulnerabilities to address and a finite amount of time and resources, the best way to present these is in the form of a bubble chart.
The vertical axis is tied to the Base Score, the horizontal axis is for the Temporal Score, and the size of the bubble indicates the Environmental Score, or more succinctly, how big a deal it is to this organization. With this chart you can see, at a glance, all of the identified vulnerabilities with a simple means to compare them.
The higher and further to the right, the more urgently they should be fixed, and the bigger the bubble the bigger the impact on the organization if the vulnerability is exploited. In the example chart to the right, the leftmost Vulnerability is less severe and easier to fix, but if it were exploited it would have a much bigger impact on the organization.
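A minimal sketch of such a chart using matplotlib (the scores are invented sample data, purely to illustrate the three axes):

```python
import matplotlib.pyplot as plt

# (base score, temporal score, environmental score) for a few hypothetical vulnerabilities
vulns = {
    "VULN-001": (7.5, 6.5, 8.9),
    "VULN-002": (5.0, 4.1, 9.3),
    "VULN-003": (9.0, 8.7, 3.2),
}

base = [v[0] for v in vulns.values()]
temporal = [v[1] for v in vulns.values()]
env = [v[2] for v in vulns.values()]

plt.scatter(temporal, base, s=[e * 100 for e in env], alpha=0.5)  # bubble size ~ environmental score
for label, (b, t, _) in vulns.items():
    plt.annotate(label, (t, b))
plt.xlabel("Temporal Score")
plt.ylabel("Base Score")
plt.title("Vulnerabilities: urgency vs organizational impact")
plt.show()
```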
In the news, you come across an article highlighting a vulnerability in the Apache web server. Unlike the last notification you received of an urgent list of security vulnerabilities and associated patches, forwarded on via a friend who also received it from a friend, this particular security issue had references to both a CERT Advisory bulletin and a CVE Entry in the National Vulnerability Database on the NIST site, which also includes a reference to an official bulletin from the Apache group itself.
As this is clearly a real and very serious issue, you decide you had better act immediately. In this particular case, the CVSS score has already been completed on the CVE Entry, so only the Temporal and Environmental scores need to be completed.
- Based on the reliability of the vulnerability sources outlined above, the Report Confidence can be immediately set to "Confirmed."
- The CERT Bulletin indicates that an official fix from Apache is available in the form of Apache version 2.0.39; as a result, the Remediation Level can be set to "Official Fix".
- A brief search on Google reveals a product called CoreImpact that automates the exploitation of this particular vulnerability, leading to a "High" Exploitability rating.
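As a rough worked example of how those selections feed the Temporal Score (the base score of 7.5 is a hypothetical value for the CVE entry; the multipliers are the ones published in the CVSS v2 guide, so verify them against the spec):

```python
# CVSS v2 temporal multipliers (from the CVSS v2 guide; verify against the spec)
EXPLOITABILITY = {"Unproven": 0.85, "Proof-of-Concept": 0.90, "Functional": 0.95, "High": 1.00}
REMEDIATION_LEVEL = {"Official Fix": 0.87, "Temporary Fix": 0.90, "Workaround": 0.95, "Unavailable": 1.00}
REPORT_CONFIDENCE = {"Unconfirmed": 0.90, "Uncorroborated": 0.95, "Confirmed": 1.00}

base_score = 7.5  # hypothetical base score taken from the CVE entry

temporal_score = round(
    base_score
    * EXPLOITABILITY["High"]
    * REMEDIATION_LEVEL["Official Fix"]
    * REPORT_CONFIDENCE["Confirmed"],
    1,
)
print(temporal_score)  # 6.5 -- lower than the base score thanks to the official fix
```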
Remediation Level
Put simply, this is an indication of how 'fixable' the vulnerability is. When dealing with a vulnerability in a vendor supplied piece of software, the general process that is typically followed is to quickly implement a workaround solution that may mitigate the risk. Later a temporary patch may be provided by the vendor before eventually they deliver an official fix for the problem. To quote directly from the CVSS v2 Guide: "The less official and permanent a fix, the higher the vulnerability score is."
As the vulnerability moves through this 'lifecycle' the Temporal Score will be reduced.
The available selections for the Remediation Level are defined in the table to the right.
Temporal Score
Once the Base Score for a vulnerability is calculated, there should be no circumstances under which it is modified unless actions are taken to address the vulnerability. The Temporal Score, however, explicitly covers properties of the vulnerability that do change over time. As a general rule, the Temporal Score helps to prioritize what vulnerabilities should be addressed first based on whether exploits are currently in the wild, how easy it is to fix and how confident you are of the source of these details.
The three temporal metrics that need to be assessed are:
- Exploitability
- Remediation Level
- Report Confidence
Security Requirements
This is very distinct from the Base Score Impact defined earlier. The earlier assessment was in regard to whether Confidentiality, Integrity and/or Availability will be compromised if the vulnerability is exploited. This section is specifically in regard to whether each of those actually matters in the context of the system or business in question.
More specifically, whether it is important in the context of the exposed Information Assets.
In relation to CVSS, the Security Requirements in the Environmental Score provide a weighting to the Impact ratings in the Base Score. If the Confidentiality Impact is large and the Confidentiality Requirement is Low, then the Environmental score is reduced. If the Confidentiality Requirement is also high, the Environmental Score will reflect that as well.
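As a rough sketch of how that weighting works in the CVSS v2 equations (the impact and requirement weights below are those published in the v2 guide; defer to the CVSS Guide for the authoritative formulas):

```python
# CVSS v2 weights (from the v2 guide; verify against the spec)
IMPACT = {"None": 0.0, "Partial": 0.275, "Complete": 0.660}
REQUIREMENT = {"Low": 0.5, "Medium": 1.0, "High": 1.51}

def adjusted_impact(conf, integ, avail, conf_req, integ_req, avail_req):
    """AdjustedImpact term of the CVSS v2 environmental equation:
    the Base Score impacts weighted by the Security Requirements."""
    return min(10.0, 10.41 * (1
        - (1 - IMPACT[conf] * REQUIREMENT[conf_req])
        * (1 - IMPACT[integ] * REQUIREMENT[integ_req])
        * (1 - IMPACT[avail] * REQUIREMENT[avail_req])))

# A Complete confidentiality breach matters far more when the
# Confidentiality Requirement is High than when it is Low.
print(adjusted_impact("Complete", "None", "None", "High", "Medium", "Medium"))  # 10.0 (capped)
print(adjusted_impact("Complete", "None", "None", "Low", "Medium", "Medium"))   # ~3.44
```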
The Security Requirements are broken down into the following:
- Confidentiality Requirement
- Integrity Requirement
- Availability Requirement
As before, these values are simply taken as the highest rated Security Requirements for the exposed Information Assets as identified above.
Base Score
The Base Score is a measure of the vulnerability itself, independent of how easy it is to fix and how it impacts any specific organisation or project.
It is broken down into two main areas:
- Exploitability
- Impact
Authentication
The Authentication metric indicates how many times the attacker needs to authenticate to exploit the vulnerability. The strength of the authentication is not considered when performing this assessment.
The available selections for Authentication are defined in the table below.
Availability Impact
The best way to put the Availability principle into a Security context is to think about its opposite, the Denial of Service (DoS) attack. The unavailability of a system or part of a system can be as costly as a direct security breach.
The kind of Value at Risk associated with Availability can include Service Level Agreements, online sales and transactions to customers, partners and suppliers. For organisations such as banks and other financial institutions, this can reach billions of dollars a day.
Taking up the SQL injection example again, the Availability Impact resulting from the vulnerability being exploited may take the form of one of the following scenarios:
The injected SQL query will be executed such that
- all data is destroyed or inaccessible and the system is inoperable
- only a specific subset of data is destroyed or inaccessible and therefore only part of the system will not work properly
- the vulnerability does not have any impact on the availability of the system as a result of a successful SQL injection
The available selections for Availability Impact are defined in the table to the right.
The ability to trace full circle from the classification of an Information Asset, to the threats that face the systems that protect it, to the resulting impact of a breach is key to being able to select and justify the Security Controls applied to a solution.
CVSS is used as the basis for the vulnerability assessment, which focuses on vulnerabilities on Infrastructure Assets. This could be a buffer overflow bug in a web server or it could be a cross site scripting vulnerability in a custom built application.
The primary benefit of using CVSS as opposed to other alternatives, such as STRIDE & DREAD, is that it is a very quantitative model. There is a low likelihood of two separate people coming up with a different rating for the same vulnerability. Where there are discrepancies, there should only be one correct answer.
Access Vector
The Access Vector provides an indication as to how 'close' an attacker needs to get to the vulnerable system in order to exploit the exposure. A cross site scripting problem with an application, for example, may be exploitable from anywhere on the internet. Conversely, a permissions problem with an executable on a database server may only be exploitable if someone has local shell access to the host itself, which isn't exposed to the internet. In the latter case, the Access Vector would either be Local or Adjacent Network.
The available selections for the Access Vector are defined in the table to the right.
Confidentiality Impact
This is a measure of how many or how much of an organization's Information Assets may be accessible to unauthorized users as a result of this vulnerability being exploited.
For example, if a custom built system allows SQL injection via one of the fields presented in a query form it may result in any of the following scenarios:
The injected SQL query will be executed such that
- any and all tables and their data will be displayed to the attacker
- only a specific subset of tables can be accessed by the injected SQL
- the vulnerability does not display any data as a result of a successful SQL injection
The available selections for Confidentiality Impact are defined in the table to the right.
Environmental Score
This is where the magic is.
CVSS defines the Environmental Score as being the only part of the Vulnerability assessment that indicates how big a deal the vulnerability is to your particular organization, business unit or project. Without the Environmental Score, the Base Score and Temporal Scores are unable to answer the 'so what?' question on their own.
The magic in this case is that we don't have to assess the ratings for the Environmental metrics. Instead, we refer back to the Infrastructure Asset definitions and Information Asset Classifications we've performed already, and from there we can derive the ratings. Even better, the actual vulnerabilities themselves could simply be generated by a tool like OpenVAS, which reports its findings in accordance with the CVSS spec. In addition, it will indicate the Infrastructure Asset on which it found the vulnerability.
The first step to perform when initiating the Environmental Score assessment is to determine what Infrastructure Assets the vulnerability applies to. If the vulnerability is on the CRM Application Server, for example, we can immediately see that the CRM Database may also be vulnerable as it is a dependent Infrastructure Asset. A SQL injection vulnerability in the CRM Application may expose data stored in the CRM Database.
We can also see immediately what Information Assets are processed and transferred by the CRM Application and its dependents. In this case the only Information Assets transferred and stored by these two Infrastructure Assets is the Personal Data Information Asset. We can also see that there are 300,000 customers that may be exposed at either point in this scenario.
From here, we can derive the ratings in the following Environmental Score sections:
Impact
In addition to the effort the attacker needs to make in order to exploit the vulnerability, part of the Base Score also measures the impact in terms of the CIA Triad of fundamental security objectives:
- Confidentiality
- Integrity and
- Availability
Be aware that this impact is solely based on whether someone can see, change or inhibit access to your Information Assets. It is explicitly not related to how valuable those assets are or the losses you will encounter if they are exposed. Those details come later in the Environment section.
Exploitability
This provides an indication as to the current state of the exploit in the wild, specifically in relation to whether there are known attacks being used and whether the knowledge or tools required to exploit the vulnerability are widely distributed. The assessment of this property is a moving target.
When the vulnerability is first discovered, there may be no attacks in the wild, but as time goes by and the vulnerability remains unpatched, attacks may become widespread. As the state of the vulnerability's Exploitability changes, the value should be adjusted accordingly.
Building on the SQL injection example, if this was found in a custom built application, the Exploitability assessment would potentially come up in one of the following scenarios:
- the problem was discovered as part of a security code review and there are no known attacks in the wild
- as above, however in addition to being discovered in the code, an example attack has demonstrated its legitimacy
- the vulnerability is already known to be exploited by attackers in the wild and is usually successful
- the vulnerability is able to be exploited automatically by a script, virus or worm, or code is widely available for others to replicate the attack manually
The available selections for Exploitability are defined in the table to the right.
To put this into perspective, if a Vulnerability exposes 5 different Information Assets, the amount of money that could be lost is not simply the amount associated with the most valuable asset. It is the total of all of those assets that will be exposed. Because we're talking about qualitative and subjective dollar values, we need to talk about ranges of losses rather than a particular figure. With the Environmental Score of the Vulnerability based on specific, well known volumes of Information Assets that are exposed, the 'criticality' of the exposures is finite, exact and quantitative. The dollar values are just educated guesses.
As a result, the Impact table should consist of the Vulnerability ID, the volume of Information Assets exposed and the range outlining the potential losses associated with it. To come up with the lower and upper ranges of the potential losses, look at the impact rating that the volume of Information Assets exposed is associated with and take the lower and upper values from the Revenue Classification table.
For example, if the quantity of Personal Data records exposed by a vulnerability results in a Medium-High impact rating, then the dollar impact is from the Medium-High value up to the High value in the Revenue Classification table for that asset.
If there are multiple assets associated with a single Vulnerability, add up all the lower bound values and all the upper bound values and use those.
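A small sketch of that roll-up (the per-asset ranges are placeholders standing in for values read from the Revenue Classification table):

```python
# (lower bound, upper bound) of potential losses for each Information Asset
# exposed by a single vulnerability, taken from the Revenue Classification table.
exposed_asset_ranges = {
    "Personal Data":      (250_000, 1_000_000),    # e.g. Medium-High up to High
    "Credit Card Number": (1_000_000, 5_000_000),  # e.g. High up to Critical
}

lower = sum(low for low, _ in exposed_asset_ranges.values())
upper = sum(high for _, high in exposed_asset_ranges.values())
print(f"Potential loss range: ${lower:,} - ${upper:,}")
```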
Visualizing the Impacts
There are a couple of useful ways to visualise the Impact of the vulnerabilities. As with the Vulnerabilities and their Environmental Score, we can represent all of the Impacts of all Vulnerabilities in a bubble chart. Instead of an Environmental Score metric, we define the bubble size based on the upper bound of the dollar impact of the vulnerability. Consider the size of the bubble to represent the potential size of the impact and the actual value could be anywhere within that scope.
To more concisely describe the ranges and the amounts exposed by each vulnerability, a candlestick chart can be utilized. It's worth highlighting that the largest potential losses also have the largest range of potential losses and the greatest margin for error.
Asset Impact
Up Next
At this stage we have some fairly solid, albeit subjective and qualitative, potential losses associated with each Vulnerability. The next step is to do something about it. The charts presented above should provide the details required to prioritize the issues to be addressed. To actually make a decision on what Countermeasures to put in place, we need to determine how best to reduce the exposure and impact of these Vulnerabilities and compare the cost to implement those Security Controls with the potential losses of not doing so.
This is explained in the Countermeasures phase.
There is only a marginal difference between the charts associated with the Vulnerabilities and the Impact. The key difference is that instead of the volume based Environment score, we're looking at the associated dollar amounts. To get to this point, you need to look back at the Information Asset classification again. In the Vulnerability Environmental score section, we looked at the Information Assets that are impacted and took the highest rated score to assess the Collateral Damage rating. This time, we need to look at the cumulative dollar amounts for each exposed Information Asset.
Preventative Controls
Preventative Controls address the Vulnerability itself, specifically the exploitability of the Vulnerability. Relating this back to the assessment of the Vulnerability, coming up with Preventative Controls can take one of two approaches:
- Apply a fix and the Vulnerability no longer exists
- Apply Security Controls to adjust the exploitability of the Vulnerability
Official Fix
The first approach is clearly the best as it leaves no room for the Vulnerability to be exploited, however this is not always achievable because there may simply be no official fix for the problem or the Vulnerability may actually be part of a business process that needs to be accepted and can therefore only be mitigated.
Even if a fix is available, all options should be presented so that the most economical choice can be selected. An official fix may require the system to be unavailable for days or weeks for regression testing and may have other costs, both of time and money, that need to be factored into the selection.
This leads us to the second approach: mitigating Preventative Controls.
Mitigating Controls
In contrast to an official fix, a mitigating preventative control needs to address one or more of the Exploitability factors of the CVSS assessment of the Vulnerability. These include:
Access Vector - How close the attacker needs to be
Access Complexity - How easy it is to get to the Vulnerability
Authentication - How many times the attacker needs to authenticate to get to the Vulnerability
Each of these factors has very explicit options to choose from. Take a look at the current values associated with each of them and consider what options are available that would allow you to make better selections from that list.
Access Vector
For example, if the Vulnerability in question is a bug in an ssh daemon residing on an internet facing web server, you could potentially reduce the Access Vector from Network to Adjacent or even Local by denying access to the ssh port from outside of the local network with a simple firewall rule change.
This alone would significantly reduce the exploitability of the Vulnerability.
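As a sketch of what such a rule change might look like (a hypothetical iptables rule applied from Python; the interface, subnet and tooling would need to match your environment):

```python
import subprocess

# Drop inbound ssh (tcp/22) unless it originates from the local management subnet,
# reducing the Access Vector for the ssh daemon Vulnerability from Network to Adjacent/Local.
LOCAL_SUBNET = "10.0.0.0/8"  # hypothetical internal range

subprocess.run(
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
     "!", "-s", LOCAL_SUBNET, "-j", "DROP"],
    check=True,
)
```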
Access Complexity
A common but relatively unknown problem when using a Reverse Proxy that maintains its own authenticated session independently of a back-end application server is that, by default, logging out of the 'application' only actually logs you out of the Reverse Proxy. The Application Server session typically still exists and expects to time out; however, the session cookies tying the web browser to the Application Server session are often still present in the browser even after the user has logged out.
In a scenario where a user only logs out of the Reverse Proxy and a second user utilizing the same web browser logs in as themselves, the application server will see the previous user's session cookies and hand over their session to the new user.
This is a good example of a 'race condition' as described in the CVSS spec, where the Vulnerability only presents itself under specific circumstances.
In this case, a simple (although incomplete) solution may be to add some javascript to the exit page that cleans up the Application Server cookies as well, which may allow the Access Complexity to be moved from Low to Medium.
Authentication
A simple example of a business process type Vulnerability may be the need for registered users to be able to change their contact details where those same contact details are also used as a means of confirming a user's identity via a back channel.
The user will need to authenticate in order to get to the profile page, but if they don't need to enter their password again when changing their contact details, then anyone walking past an unlocked computer could change the contact details without challenge. They could then potentially use this to change the victim's password via the 'forgotten password' option, which will utilize the new email address just entered.
Adding a password confirmation to the screen that allows these details to be changed would allow the Authentication selection to be adjusted from single to multiple.
Corrective Controls
Corrective Controls are only initiated as a result of a Detective Control identifying an active exploit of a Vulnerability and triggering them. As a result, Corrective Controls are all about reducing the Impact of the exploited Vulnerability rather than preventing the exploit from happening in the first place.
The key point here is that the attack has already begun and all we can do is limit the severity of it.
Looking at the Impact assessment, it is entirely dependent on the number of Information Assets that are exposed. The dollar Impact itself is effectively a multiple of the quantity of Information Assets exposed and their inferred value. In practice we map the quantity to a dollar range, but the outcome is the same.
The goal of the Corrective Control, therefore, should be to reduce the quantity of Information Assets that are exposed. This will be entirely dependent on the nature of the Vulnerability, however something as simple as terminating the associated users session may be enough to limit the number of records they were able to steal or modify.
Detective Controls
Detective Controls are all about monitoring. This can include IDS and IPS tools, such as Snort, or a configuration management system that will monitor key configuration files for unauthorized changes. The key thing to understand about Detective Controls is that they are reactive. By the time they identify an attack it may already be too late.
When identifying a Vulnerability, part of the Countermeasures assessment should be to determine whether automated monitoring could be configured to detect an attack on that vulnerability. This may be in the form of network monitoring for a specific signature or a custom application recording an audit event that is itself monitored.
These detective controls may simply provide statistics on what did and did not happen and when, or they may also actively initiate some kind of Corrective or Preventative Control to limit the impact of the exploit.
Triggered Preventative Controls may be in the form of shutting down a server if the underlying authorisation server has become unresponsive.
Triggered Corrective Controls may be in the form of a firewall rule change blocking access from the source IP address of the attack. Many sites will block an IP address temporarily if a port scan is detected, which may be a sign that they are looking for vulnerabilities to exploit.
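A toy sketch of a Detective Control triggering a Corrective Control (the alert format is hypothetical; in practice the source IP would come from Snort, another IDS or an application audit event):

```python
import subprocess

def block_source_ip(ip: str) -> None:
    """Corrective Control: temporarily drop all traffic from an attacking host."""
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

def on_alert(alert: dict) -> None:
    """Called by the Detective Control (e.g. an IDS alert handler) when an
    attack signature is matched; limits how many records the attacker can reach."""
    if alert.get("signature") == "sql-injection-attempt":
        block_source_ip(alert["source_ip"])

# Hypothetical alert, as it might be emitted by a monitoring pipeline
on_alert({"signature": "sql-injection-attempt", "source_ip": "203.0.113.42"})
```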
Deterrent Controls
Deterrent Controls are difficult to quantify. The goal of a Deterrent Control is to reduce the likelihood of a Vulnerability being exploited without actually reducing the exposure.
While this doesn't sound like an effective approach straight off the bat, it is important when combined with other types of Security Controls.
Large financial organizations typically reinforce what users should expect from them in every outbound communication. Statements like "We will never ask you for your password" and "We will never include a hyperlink in an email" will be plastered over every email, letter and message box they can put in front of their customers. The goal is to govern their customers' expectations of what they should and should not expect to receive, and therefore help customers identify phishing emails and other scams.
Ultimately the goal is to reduce the likelihood that a phishing attack, which is completely outside of the control of the target company, is successful by increasing the awareness of its customers.
Quantifying the likelihood of something like this working is extremely difficult, particularly when the customer base is large. When the target users are actually staff, various policies and procedures can be put in place to help quantify these things and assess how successful these Deterrent Security Controls have been.
The US Department of Justice regularly sends out elaborate phishing emails to their staff to both determine the success of their internal security awareness programs and also as a means to educate their staff.
Similarly, for internal-facing threats, measures such as blacklists of known malware sites can reduce the likelihood of staff being exposed to these kinds of threats.
There are many tips on the OWASP phishing page on how to address phishing type threats, where your only options are deterrent controls.
The short summary of Deterrent Controls is that they do not attempt to fix the associated Vulnerability; they just attempt to make exploitation occur less frequently.
Up Next
As with all Countermeasures, the hard part is the assessment on the costs.
Copyrights
Secure Arc retains the copyright ownership of original content produced on this site; however, unless otherwise specified, all content of the Secure Arc Security Reference Architecture is available under the Creative Commons Attribution license.
You are free:
- to Share — to copy, distribute and transmit the work
- to Remix — to adapt the work
Under the following conditions:
- Attribution — You should provide a deep link to the page the content was sourced from on the Secure Arc wiki in both unaltered and adapted copies, in such a way that does not imply the endorsement of the adaptation by Secure Arc. Where providing a link is not suitable, an attribution statement should be included without links. This may be the case where the work is not hosted on the web. For example, a statement such as "Based on the Vulnerability Assessment templates in the Secure Arc Reference Architecture" is sufficient.
Content available under Creative Commons Attribution 3.0 License
Security Principles
The following summary is taken from the OWASP website (reproduced in accordance with the Creative Commons 2.5 License):
Application security principles are collections of desirable application properties, behaviors, designs and implementation practices that attempt to reduce the likelihood of threat realization and impact should that threat be realized. Security principles are language independent, architecturally neutral primitives that can be leveraged within most software development methodologies to design and construct applications.
Principles are important because they help us make security decisions in new situations with the same basic ideas. By considering each of these principles, we can derive security requirements, make architecture and implementation decisions, and identify possible weaknesses in systems.
The important thing to remember is that in order to be useful, principles must be evaluated, interpreted, and applied to address a specific problem. Although principles can serve as general guidelines, simply telling a software developer that their software must "fail safely" or that they should do "defense in depth" won't mean that much.
Each Security Principle is documented with a Security Principle Name, an Assertion, a Rationale and Related References.
Assertion
Systems or sub-systems outside the bounds of a receiving component must never be trusted implicitly
Rationale
There are a number of scenarios where this can apply.
- In B2B interactions, partner organisations will not necessarily enforce the same level of security constraints, policies and quality controls as your own and therefore the level of trust attributed to their requests should be questioned.
- In large organisations, the same B2B scenarios above can come into play.
- Within the same system, a request from a User Interface to the downstream services should not be implicitly trusted either. This is in accordance with the Defence in Depth principle and primarily addresses bugs and misconfiguration rather than malicious intent; however, depending on the deployment model used, a Service used by its own Web Interface is an entry point for both intended web traffic and malicious direct traffic, as described in the Minimise Attack Surface principle.
- Where possible, a request should be accompanied by an end user credential that can be validated by the receiving service, and authorisation controls should be enforced based on the end user, not the system the end user is interacting with.
Related References
Further detailed information is available on Wikipedia.
As described in the book, Voice Over IPv6: Architectures for Next Generation VoIP Networks, "Companies typically see the environment comprised of the following zones (also known as domains)." This Design Pattern is based on our own extensions of this common approach and our experience in various organisations.
Silos
Each system resides in a Silo. The Silo cuts across the three primary Logical Security Zones and, in a highly secure environment, Silos are isolated from each other with firewalls. Generally speaking, a lot of large enterprises do not isolate individual systems from each other in this way; doing so will depend on the sensitivity of the assets that require protection, the paranoia level and ultimately the budget available. The primary goal of this is to compartmentalize separate systems to minimize the impact of a compromise of one on another.
If one system needs to handle credit card data and be PCI compliant and another only serves up marketing material, the former will need to be physically isolated from the latter with firewalls between them. The security employed on the marketing website will most likely only be to the extent required to protect publicly available information. If a compromise of one of these servers results in a compromise of one of the credit card processing servers, all the money spent on security of the other system is of little value as the weakest link in the chain isn't actually part of the same system, project or budget.
Logical Security Zone Model Pattern
Structure
The Rules
The rules are simple: Nodes in one Zone can only communicate directly with Nodes in the same or adjacent Zones. When a Node in one Zone needs to communicate with a Node in another Zone that is not adjacent, some form of proxy should be placed in the intermediate Zones.
The allowable communications paths are represented in the diagram to the left.
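A small sketch of how those rules can be checked mechanically (the zone ordering is an assumption based on the zones named in this pattern and should be adjusted to your own model):

```python
# Assumed ordering of Logical Security Zones from least to most trusted,
# based on the zones named in this pattern (adjust to your own model).
ZONES = ["Internet", "Internet DMZ", "Application Zone", "Data Zone"]

def flow_allowed(source_zone: str, target_zone: str) -> bool:
    """A Node may only communicate directly with Nodes in the same or an adjacent Zone."""
    return abs(ZONES.index(source_zone) - ZONES.index(target_zone)) <= 1

print(flow_allowed("Internet DMZ", "Application Zone"))  # True  - adjacent zones
print(flow_allowed("Internet", "Data Zone"))             # False - requires proxies in between
```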
Motivation (Forces)
As the template for most of the Secure Arc Security Reference Architecture, the majority of the Design Patterns are related. These include:
- Authentication Pattern
- Reverse Proxy Pattern
- Embedded Authentication Pattern
- Entitlements Pattern
- Coarse Grained Authorisation Pattern
- Medium Grained Authorisation Pattern
- Fine Grained Authorisation Pattern
Some enterprises have strict rules around the allowable directions in which communications may be invoked. The model above allows bi-directional access between all zones. The second diagram to the left shows a more restrictive model where communications cannot be established from one Zone to a less trusted Zone.
The primary purpose of this architectural pattern is to ensure enterprise systems are designed with the big picture in mind in a secure manner.
This should be used as a guide in at least three points during the Software Development Lifecycle:
- Infrastructure Design
- Application Architecture Design
- Application Component/Interface Design
This provides a template to guide all design decisions in the three areas identified above.
At every point a decision needs to be made about where a component, service or server should reside or how it should communicate with another, it just needs to be dropped in the appropriate zone and all of the rules satisfied.
The pattern consists of a set of pre-defined Logical Zones where servers reside. These typically map to physical networks and subnets.
Level of Trust
The underlying concept behind the zone model is the increasing Level of Trust from the outside into the centre. On the outside is the Internet. There is zero trust here. It's on the Internet that any anonymous attacker or Script Kiddie resides. In the centre is the Data Zone. It's in here that the most sensitive data is stored.
The Rules of the Logical Security Zone Model state that communication between Zones must only originate from an adjacent Zone. Within and between each Zone are countermeasures such as firewalls, URL-based access controls, Mutually Authenticated SSL (MASSL) point-to-point connections and J2EE declarative and programmatic role-based access controls. The granularity of the authorisation level typically increases from outer to inner zones; however, in most cases the connection to the data repository in the Data Zone is made as an application System User rather than as an End User. This means that the finest grained authorisation is actually enforced in the Application Tier.
The Internal DMZ and Staff Intranet are analogs to the Internet DMZ and the Internet. The concentric zone model clearly reflects that the Staff Intranet is basically as untrusted as the Internet and consequently the enterprise systems need to be just as protected from the Staff Intranet as they do from the Internet.
Within the Global State of Information Security report for 2007, the following statement is made:
This year (2007) marks the first time "employees" beat out "hackers" as the most likely source of a security incident.
Many organisations consider the staff intranet to be sufficiently secured, but when you consider contractors, laptops moved between home and work, iPods and other mp3 players potentially infected with viruses plugged into work PCs, and staff turnover, the chances of some form of compromise are very high. A Google search for "disgruntled employee" network security returns around 10,000 results.
There are also some highly public multi-billion dollar cases in the news recently.
Collaboration
Consequences
This model has an impact on the network and firewall design, the structure of an application itself and can impact how the interfaces to Services in a Service Oriented Architecture should be designed. When deciding what servers are to be deployed where, a Logical Zone needs to be selected and considerations as to what other servers and nodes it needs to communicate with will implicitly need to be assessed. The same goes for application components and services.
Ultimately the Logical Security Zone Pattern helps to satisfy the Security Principles identified above. Defence in Depth is clearly enforced at each Compartmentalised section. As a result, the Attack Surface is significantly reduced, especially between systems and as a consequence, a Denial of Service attack on one system should not impact that of another (this is quite subjective).
- Defence in Depth
- Minimise Attack Surface
- Compartmentalise
- Availability
- Do not Trust Services
Silos should define the boundaries of each isolated system, however there are many cases where systems need to communicate with each other, particularly in a Service Oriented Architecture (SOA). As shown above, the Silos cross multiple Zones. The rules shown above apply within the Silos, however when a Node in one Silo needs to communicate with another Node in a different Silo, they both must reside in the same Zone.
The reasoning behind this is quite straightforward. Within a Silo, the Level of Trust increases from the outermost Zone to the innermost Zone, and all of the access controls that have been put in place to establish that trust are tied to that particular system: they depend on its CVSS rating, the budget given to it and the experience of the people who deliver it.
For example, if Silo 1 is extremely locked down and Silo 2 has very little in the way of security and is allowed access to the Data Zone of Silo 1 from its Application Zone, then an attacker could bypass all of the controls in Silo 1 by taking the path of least resistance through Silo 2.
The Logical Security Architecture is primarily, but not entirely, realised as a network design. The network design to the left represents an ideal realisation of the sample system Logical Security Architecture above.
As per the sample system, this is not intended to be an exhaustive network design. It only attempts to highlight the network segmentation via firewalls representing each of the Logical Zones above. Note also that simpler, cheaper realisations are possible by enforcing the logical zone segmentation off the back of one or more firewalls rather than having separate physical firewalls for each one. There is an increased risk in taking this cheaper option as a compromise of that one firewall results in a compromise of all zones.
Applicability
Information Asset Classification
As the Logical Security Zone Model is created for a particular system, all of the Nodes, or Infrastructure Assets, identified should be tagged with the Information Assets that are stored on or pass through that Node.
The details of this procedure and how it fits in with the Secure Arc Reference Architecture are covered in the Asset Definition section.
Implementation
Each node making up a system needs to be dropped into an appropriate Logical Zone. Once selected, the Flows between each Node must be identified and they must follow the Rules. Each Node and Flow needs to be tagged with a unique identifier so that they can be referenced in the various tables, specifically the Infrastructure Assets and everything that references them. By doing this, it is relatively safe to assume that the resulting architecture represents a secure design, at least at the big picture level. The diagram to the right represents a very simple and cut down Logical Security Architecture.
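A rough sketch of how the tagged Nodes and Flows might be captured is shown below; the record names, identifiers (N-01, F-01, IA-07) and fields are purely illustrative, not a prescribed schema.

```java
import java.util.List;

// Hypothetical records for capturing tagged Nodes and Flows so that they can be
// referenced from the Infrastructure Asset tables by their unique identifiers.
record Node(String id, String name, String zone, List<String> informationAssetIds) {}
record Flow(String id, String fromNodeId, String toNodeId, String protocol) {}

class TaggingExample {
    public static void main(String[] args) {
        Node reverseProxy = new Node("N-01", "Reverse Proxy", "Internet DMZ", List.of("IA-07"));
        Node appServer    = new Node("N-02", "Application Server", "Application", List.of("IA-07", "IA-12"));
        Flow f1 = new Flow("F-01", reverseProxy.id(), appServer.id(), "HTTPS");
        System.out.println(f1); // F-01 links N-01 to N-02 over HTTPS
    }
}
```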
Intent
This model should apply to all enterprise systems, regardless of whether they are critical 3-tier systems or static content web servers. Any entry point into an organisation's network is a potential point of attack, and any system deployed in the same environment as another is inherently subject to the security vulnerabilities of the other.
Logical vs Physical
Both the Logical Security Zones and the Silos need not necessarily be realised as physically separate environments. When designing a solution, the logical model should be adhered to regardless of whether each Zone gets its own isolated subnet. When that logical model is then mapped to a physical infrastructure design, an Architectural Decision will need to be made as to how many Logical Zones map to how many physical zones.
During the process of identifying potential Vulnerabilities, the Attack Surface Area will be assessed based on all of the potential entry points into the system. This will include the operating system level access from one host to another across Silos. If this ultimately identifies a substantial risk associated with a compromise of a less secure system in the same Zone, a decision to isolate the Silos should be made.
Demilitarized Zone (DMZ)
In many organisations, the DMZ refers simply to a subnet isolated by firewalls. The definition of a DMZ as used in network security originally came from the military definition, which is essentially an unoccupied area between two opposing forces.
In a network security context, the DMZ is an area between secure networks and untrusted networks. There should be a DMZ between the internet and the protected systems, and between the staff intranet and the protected systems. As described above, sensitive Information Assets and systems need to be protected from the staff intranet almost as much as they do from the internet.
Participants
The participants in this pattern are the Zones and Silos.
Zone: Internet (aka Uncontrolled)
Description: The vast majority of End Users in this Zone are unauthenticated and unidentifiable. This is where both legitimate End Users and Malicious Attackers are found.
Adjacent Zones: Trusted Third Parties, Internet DMZ
Node Types: Web Browsers, Hacker Tools

Zone: Trusted Third Parties (aka Externally Controlled)
Description: This represents a 3rd Party Business Partner site. Business-to-Business (B2B) connections originate and terminate in this Zone. Outside of Service Level Agreements (SLAs), there is little or no control or visibility over the security policies of this environment. While it may be secured, it does not necessarily conform with internal security policies.
Adjacent Zones: Internet, Internet DMZ
Node Types: 3rd Party B2B Services

Zone: Internet DMZ (aka Controlled)
Description: This is the no-man's-land between the untrusted networks and the trusted. All traffic should travel through the Internet DMZ to reach the Trusted Systems and vice versa. The Nodes deployed in this Zone should be kept to a minimum and be as simple as possible.
Adjacent Zones: Internet, Trusted Third Parties, Internal DMZ, Application, Management
Node Types: Web Servers, Reverse Proxies

Zone: Staff Intranet (aka Internally Uncontrolled)
Description: It is within here that employees connect their desktop PCs and their laptops.
Adjacent Zones: Internal DMZ, Internet DMZ
Node Types: Staff PCs, iPods, Disgruntled Employees, Contractor Laptops, Home Laptops, USB Keys, Portable Hard Drives

Zone: Internal DMZ (aka Internally Controlled)
Description: This is the equivalent of the Internet DMZ, but specifically for internal staff. It typically maps to an internal DMZ separating the staff intranet from internal systems.
Adjacent Zones: Staff Intranet, Application
Node Types: Web Servers, Reverse Proxies

Zone: Application (aka Restricted)
Description: In order to access this Zone, an End User must have traversed the Internet DMZ or Internal DMZ and satisfied all of the authorisation constraints required to do so. With the exception of the Silo dedicated to the security sub-system required for authentication in the Internet DMZ, End Users should have been authenticated prior to initiating any requests in the Application Zone.
Adjacent Zones: Internet DMZ, Internal DMZ, Data, Management
Node Types: Web Servers, Application Servers

Zone: Data (aka Secured)
Description: Databases, LDAP Repositories and Access Control Policy Stores should be deployed into this Zone. The Data Zone requires the most effort to compromise, as an Attacker must compromise the access controls at all of the outer Zones before getting into this one.
Adjacent Zones: Application, Management
Node Types: User Repositories, Data Repositories, ACL Policy Stores

Zone: Management (aka Secured Management)
Description: Administrators and monitoring systems need access to all of the Zones. Having a physically separate Management network allows these administrative entry points to be locked down to specific rooms or floors in a building. Within the Management Zone, further controls can limit who may access each of the Zones and how. For example, Database Administrators should not need access to the Internet DMZ or Application Zone.
Adjacent Zones: Controlled (Internet DMZ), Restricted (Application), Secured (Data)
Node Types: Monitoring Tools, SSH Clients, Provisioning Servers
In addition to the Zones, there are the Silos.
Silo(s): Each Silo defines the boundaries of a particular system, spans multiple Zones and may also be nested. Nesting allows various sub-systems to be scoped within an environment, such as Integration Test, User Acceptance Testing and Production. The Silos should be isolated from each other to prevent the compromise of one from impacting the other.
Application Design Patterns
The Application Design Patterns described here are a security subset of patterns that can be used to satisfy various Security Principles, Policies and Standards. Each of the Patterns will have bi-directional links to the Principles, Policies and Standards that they help to satisfy.
The diagram to the right shows the set of Design Patterns, how they relate to each other and where they logically and typically appear in a system. The lines between patterns depict relationships, not communications.
The Patterns covered include:
Authentication Pattern
Reverse Proxy Pattern
Embedded Authentication Pattern
Entitlements Pattern
Coarse Grained Authorisation Pattern
Medium Grained Authorisation Pattern
Fine Grained Authorisation Pattern
Session Validation Pattern
Credential Propagation Pattern
Service Client Pattern
Architectural Design Patterns
The overarching Architectural Design Pattern that forms the basis for the Secure Arc Reference Architecture is the Logical Security Zone Model. The primary purpose of this architectural pattern is to ensure enterprise systems are designed securely with the big picture in mind.
This should be used as a guide at a minimum of three points during the Software Development Lifecycle:
Infrastructure Design
Application Architecture Design
Application Component/Interface Design
There are two types of Design Patterns covered here. The first are Architectural Design Patterns and the second are Application Design Patterns.
Design Patterns
Authentication Pattern
Where any of the Security Principles listed above should be applied.
This is an abstract pattern that has more specialised versions identifying specifically how it can be realised, such as the Reverse Proxy Pattern and the Embedded Authentication Pattern.
The fundamental goal of the Authentication Pattern is to identify the user wishing to perform an action. Once the user has been identified, subsequent authorisation decisions can then be made. The concepts of Authentication and Authorisation are distinctly separate, but typically co-dependent.
See the Reverse Proxy Pattern and the Embedded Authentication Pattern for specialisations of the abstract Authentication Pattern.
The need for Authentication is quite well understood. The following Security Principles require Authentication in order to be satisfied:
Accountability
Least Privilege
Segregation of Duties
Defence in Depth
Minimise Attack Surface
Do not Trust Services
Related Patterns
Where each impact rating for the Information Assets is sufficiently high to warrant a significant investment in any or all of the security principles listed above
Where a large number of in-house, related or co-deployed web or application servers require single sign-on and all reside under the same domain name
Reverse Proxy Pattern
Defence in Depth
Reuse
Minimise Attack Surface
Compartmentalise
Related Patterns
This is a sub-pattern of the abstract Authentication Pattern and an alternative to the Embedded Authentication Pattern.
One of the consequences of using this pattern is the need to apply the Session Validation Pattern.
The Controlled Zone where the Reverse Proxy is logically deployed is detailed in the Logical Security Zones Pattern.
The Reverse Proxy provides a single point of entry (typically via HTTP) to all of the web, application and other servers making up a system. From a Minimise Attack Surface perspective alone, this is a significant security win for an application: any bugs, misconfigurations or other vulnerabilities that may exist in the servers and applications making up the system must be attacked through the Reverse Proxy, which can limit an attacker to attacks over HTTP.
A Reverse Proxy is typically deployed in a DeMilitarized Zone (DMZ), which supports the Compartmentalise and Defence in Depth principles. The same kinds of bugs, misconfigurations and vulnerabilities found in the servers that the Reverse Proxy is protecting can also appear in the Reverse Proxy itself. By placing a firewall between the internet and the Reverse Proxy, and another between the Reverse Proxy and the protected servers, any compromise of the Reverse Proxy itself can be relatively well contained.
Finally, building a custom security solution should only ever be a last resort, especially in very large, high risk systems. Existing security solutions have been out in the wild for many years and are supported by big corporations spending millions of dollars to harden them against many types of attacks that most people wouldn't even think of when starting something from scratch. There are both commercial and open source security solutions available, such as those from Sun (and their Open Source equivalent) and IBM.
Any notion that an individual or team of developers could do better not only opens a system up to the same security problems that the big vendors have probably solved in the first few years of their products being on the market, it also means there is no guarantee that all of the systems within an organisation will be interoperable from a security perspective. This will limit the ability to support single sign-on as well as back-end integration.
In short, if you can decouple security from the application development, you should. A Reverse Proxy takes the responsibility for authentication completely out of the application's domain, and all applications can share the same authentication system. In code, developers need only ask the application container (in a J2EE context) who the user is, rather than determine when to authenticate them, how to authenticate them and how to maintain the integrity of a validated credential.
Reusing a single authentication mechanism across applications is great, reusing an off the shelf one is ideal.
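As a minimal sketch of what this looks like in servlet code, assuming the container (fronted by the Reverse Proxy or another integrated mechanism) has already established the user's identity, the hypothetical servlet below only asks the container who the user is and what roles they hold; the servlet and role names are illustrative.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The application never collects or validates credentials; it simply asks the
// container for the identity established upstream by the Reverse Proxy.
public class AccountServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String user = request.getRemoteUser();               // identity established upstream
        boolean isManager = request.isUserInRole("manager");  // declarative role check
        response.getWriter().println("Hello " + user + (isManager ? " (manager)" : ""));
    }
}
```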
The collaboration diagram is made up of the Participants described below. It doesn't include all nodes in the process, and some details are left out for simplicity and clarity.
Participants
Not included are the systems the Reverse Proxy will interact with to perform the actual authentication of the user, nor the user repository that the Reverse Proxy and the Web and/or Application Servers use to determine the entitlements of the user.
At its highest level, the participants are:
The End Users
The Inbound Firewall
The Reverse Proxy
The Outbound Firewall
The protected Web and/or Application Servers
Implementation
As mentioned above, there are both commercial and open source Reverse Proxy solutions available, such as those from Sun (and their Open Source equivalent) and IBM.
1. A request is initiated by the End User, typically from a Web Browser over HTTP on port 80 or 443.
2. The Inbound Firewall intercepts and inspects the incoming request, ensuring that it has originated from the internet, that it is an HTTP request, that the port requested is either 80 or 443 and that the target is the Reverse Proxy.
3. Assuming all of the validation checks pass, the Inbound Firewall allows the request to propagate through to the Reverse Proxy.
4. The Reverse Proxy looks at the requested URL and checks its ACL to determine whether authorisation is required.
5. If authorisation is required for the requested URL, the Reverse Proxy checks whether the End User is already authenticated.
6. If the End User is not authenticated, the End User is prompted to authenticate and the submitted credentials are validated.
7. If authentication is successful, the Reverse Proxy performs an authorisation check to determine if the End User is allowed to access the requested URL.
8. If the End User is authorised to access the requested URL, the Reverse Proxy proxies the request through to the target Web or Application Server.
9. The Outbound Firewall intercepts and inspects the incoming request, ensuring that it has originated from the Reverse Proxy, that it is an HTTP request, that the port requested is either 80 or 443 and that the target is one of the Web or Application Servers.
10. Assuming all of the validation checks pass, the Outbound Firewall allows the request to propagate through to the Web or Application Server.
11. The Web or Application Server validates that the request was initiated by the Reverse Proxy, either via an embedded credential authenticating the Reverse Proxy itself, a mutually authenticated SSL connection between the Reverse Proxy and the Web or Application Server, or by validating a signed End User credential embedded within the request.
12. Once trust in the origin of the request is established, the Web or Application Server takes the End User credentials provided and translates them into a Web or Application Server specific credential.
13. The Web or Application Server can then perform standard J2EE authorisation checks, as can the Application itself, based on the propagated End User credential established by the Reverse Proxy.
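The sketch below illustrates one way step 11 might be approximated in a servlet filter, assuming a hypothetical X-Proxy-User header and a known proxy address; a mutually authenticated SSL connection or a signed credential, as described above, is the stronger option.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects any request that did not arrive from the Reverse Proxy. The header name
// and proxy address are placeholders, not values prescribed by the pattern.
public class ProxyOriginFilter implements Filter {

    private static final String PROXY_ADDRESS = "10.0.1.10";    // illustrative only
    private static final String USER_HEADER   = "X-Proxy-User"; // illustrative only

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        boolean fromProxy = PROXY_ADDRESS.equals(request.getRemoteAddr());
        String proxyAssertedUser = request.getHeader(USER_HEADER);

        if (!fromProxy || proxyAssertedUser == null) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN); // did not come via the proxy
            return;
        }
        chain.doFilter(req, res);
    }
}
```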
Consequences
There are a number of constraints and consequences introduced by using this approach, some quite critical, that must be addressed at design time to lock down a system.
Security Principles
The Security Principles identified above are satisfied by employing this pattern for the reasons described in the Intent section.
Domain Name
The most prominent consequence that has an impact on the End User is the single web entry point to all systems that a particular Reverse Proxy is protecting. The implication is that all of the web applications will have the same domain name with different base URLs.
Where different domain names are required, a single sign-on solution will be much simpler if they all share the same base domain, as that allows session cookies to be shared across the sub-domains. A cross-domain single sign-on solution is significantly more complex, so if it can be avoided at design time it should be.
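As a minimal sketch of sharing a session cookie across sub-domains, assuming an illustrative example.com parent domain and cookie name:

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class SsoCookieExample {
    // Scopes the single sign-on session cookie to the shared parent domain so that
    // app1.example.com and app2.example.com both present it. The domain and cookie
    // name are illustrative only.
    public static void addSsoCookie(HttpServletResponse response, String sessionToken) {
        Cookie sso = new Cookie("SSO_SESSION", sessionToken);
        sso.setDomain(".example.com"); // shared base domain
        sso.setPath("/");
        sso.setSecure(true);           // only send over HTTPS
        sso.setHttpOnly(true);         // not accessible to client-side script
        response.addCookie(sso);
    }
}
```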
Isolated Sessions
Web and Application Servers typically have their own mechanism for tracking which sessions belong to which end users. This is normally achieved by storing a session cookie in the web browser. When authentication is delegated to a Reverse Proxy, the Reverse Proxy will most likely have its own session cookie stored in the End User's browser.
This can result in a critical security vulnerability if not addressed. If the Reverse Proxy session cookie is removed, either due to a log out or because it was deleted directly via the browser utilities, the Web and Application Server session cookies will stay in the browser if it is not closed.
If a second user comes along and reuses the same browser window to access the same site, they will be prompted to authenticate by the Reverse Proxy and be given a new session cookie, but the Web and Application Servers will see the previous user's cookies still present in the browser and hand that user's session to the new user, along with all of their data and entitlements.
This particular issue is detailed in the Session Validation Pattern.
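A minimal sketch of this kind of check, assuming the Reverse Proxy asserts the user in a hypothetical X-Proxy-User header, is shown below: the application binds the container session to the asserted user on first use and discards the session whenever the asserted user changes.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionValidationExample extends HttpServlet {

    private static final String USER_HEADER = "X-Proxy-User"; // illustrative only

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String proxyUser = request.getHeader(USER_HEADER);
        HttpSession session = request.getSession(true);
        String sessionUser = (String) session.getAttribute("boundUser");

        if (sessionUser == null) {
            session.setAttribute("boundUser", proxyUser);  // first request: bind the session
        } else if (!sessionUser.equals(proxyUser)) {
            session.invalidate();                          // stale session from a previous user
            session = request.getSession(true);
            session.setAttribute("boundUser", proxyUser);
        }
        // ... continue processing with a session guaranteed to belong to proxyUser
    }
}
```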
Each pattern is documented using the following template sections:
Intent: A description of the goal behind the pattern and the reason for using it.
Motivation: A scenario consisting of a problem and a context in which this pattern can be used.
Applicability: Situations in which this pattern is usable; the context for the pattern.
Structure: A graphical representation of the pattern. Class diagrams and Interaction diagrams may be used for this purpose.
Participants: A listing of the classes and objects used in the pattern and their roles in the design.
Collaboration: A description of how classes and objects used in the pattern interact with each other.
Consequences: A description of the results, side effects, and trade-offs caused by using the pattern.
Sample Code: An illustration of how the pattern can be used in a programming language.
Known Uses: Examples of real usages of the pattern.
Related Patterns: Other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns.
Embedded Authentication Pattern
Entitlements Pattern
Coarse Grained Authorisation Pattern
Medium Grained Authorisation Pattern
Fine Grained Authorisation Pattern
Session Validation Pattern
Credential Propagation Pattern
Service Client Pattern
General Disclaimer
Every effort has been made to cite sources when used and abide by their licensing conditions.
As this is a relatively open and collaborative site that accepts content from multiple sources, it is possible that some copyrighted and unauthorised material may appear. Any content found on this site that violates another copyright holder's rights should be brought to the attention of the administrators, who will assess and remove the offending content if necessary.
Security Policies and Standards
Most organisations must comply with various standards, policies and regulations, both internally defined and externally mandated. Not all of these are security related, but those that are typically map back to the Security Principles.
Of all the Standards and Policies that organisations need to comply with, the following are quite typical.
Important Security Standards
Each category below lists the applicable international standards, national or regional standards, and organisational standards or guidelines.
IT Security Management: ISO 13335, ISO 13569, ISO 17799, ISO 27001, ISO 27002, BS 7799-2, NIST Standards, ACSI-33, COBIT Security Baseline, ENV12924, ISF Standard of Good Practice, SAS 70
IT Governance: ISO 38500:2008, COSO Internal Control - Integrated Framework, COBIT, ITIL, BITS
Compliance: Sarbanes-Oxley Act, Privacy Act, Trade Practices Act, Basel II, FFIEC Handbook, Gramm-Leach-Bliley Act, BSA, FACTA, GISRA, CA Bill 1386, PCI DSS, FISMA
Privacy: Directive 95/46 - European Union, ETS no. 108 - Council of Europe, PIPEDA - Canada, Privacy Act 1988 - Australia, Specter-Leahy Personal Data Privacy and Security Act 2005 - USA, Personal Information Protection Act No. 57 - Japan
Risk Management: ISO 27005, AS/NZS 4360, COSO Enterprise Risk Management, M_o_R, NIST Standard 800-30
Security Metrics: ISO 27004, NIST Standards, Web Security Threat Classification, ISECOM, CVSS
Security Evaluation: ISO 15408, ISO 27001, NIST Standards - FIPS, NSA IAM / IEM, PCI DSS
Security Testing: NIST Standard 800-42, OWASP, OSSTMM, CHECK, ISACA, ISSAF, CREST
Technical Standards, Policy and Guidelines
Identification and Authentication
Identification and Authentication: ISO 9798, ISO 9594-8:2001
Identity Management Frameworks: CS1 (JTC 1/SC 27), IdM-GSI
Tokens: EBS 111-1999
Personal Identification Numbers (PIN): ISO 9564, EBS 105-1998
Biometrics: 19092:2008, ANSI X9.84-2001, ANSI INCITS 358-2002, 398-2005, 377-2004, 378-2004, 379-2004, 381-2004, 385-2004, 395-2005, 396-2005, 383-2004, 394-2004, 421-2006, 422-2006, 442-200 (BIAS)
Data Integrity
Message Authentication: ISO 9797, ISO 16609, ANSI X9.71-2000
Hash-functions: ISO 10118
Privacy and Confidentiality
Encipherment
Non-repudiation
Non-repudiation: ISO 13888, ISO 10181-4
Time Stamping: ISO 18014, ANSI X9.95:2005, ETSI TS 101 861-2001
Digital Signatures: ISO 9796, ISO 14888, ANSI X9.31, ETSI TS 101 733, ETSI TR 102 572
Certificates: ANSI X9.55-1997, ETSI TS 101 862-2000
Public Key Infrastructure (PKI): ANSI X9.77, ANSI X9.79-2001, ETSI TS 101 456
Accountability and Audit
Functionality Classes: ISO 10181
Protection Profiles: ISO 15292, ISO 15446, ANSI X9.79
Evaluation Criteria: ISO 13491, ISO 15408, ANSI X9.74
Security Management
Security Management: ISO 13335, ISO 13569, ISO 15816, ISO 15947, ANSI X9.41, BS 7799, ECBS TR 406
Key Management: ISO 11770, ISO 13492, ANSI X9.24-1:2004, ANSI X9.24-2:2006, ANSI X9.42-2001, ANSI X9.44-2000, ANSI X9.63-2001, ECBS TR 405
Certificate Management: ISO 15782, ANSI X9.57-1997, ANSI X9.79-2001, ECBS TR 402-1997, IETF RFC 2527:1999
Trusted Third Party Management: ISO TR 14516, ISO 15945
Security Implementation Standards
Transport Layer: SSL, TLS
Authentication: SAML, WS-Federation
Web Services: WS-Security, WS-Policy, WS-Trust, WS-Privacy, WS-Secure Conversation, WS-Federation, WS-Authorisation
Symmetric Encryption: AES
Government Information Security Reform Act
Information in italics below is referenced from wikia, reproduced in accordance with the Creative Commons License.
The Government Information Security Reform Act (GISRA) of 2000 established information security program, evaluation, and reporting requirements for federal agencies. GISRA required agencies to perform periodic threat-based risk assessments for systems and data. GISRA requires agencies to develop and implement risk-based, cost-effective policies and procedures to provide security protection for information collected or maintained either by the agency or for it by another agency or contractor. GISRA required that agencies develop a process for ensuring that remedial action is taken to address significant deficiencies. GISRA also required agencies to provide training on security awareness for agency personnel and on security responsibilities for information security personnel. GISRA required the agency head to ensure that the agency's information security plan is practiced throughout the life cycle of each agency system. The agency head was responsible for ensuring that the appropriate agency officials evaluated the effectiveness of the information security program, including testing controls.
In 2002, GISRA was replaced and strengthened with FISMA (Federal Information Security Management Act).
Each requirement of the law relating to Information Security is broken down further into more specific sub-requirements that can be mapped back to both the Security Principles that drive them and the Design Patterns that satisfy them.
Compliance
GISRA contains the following elements:
All federal agencies must assess the security of their non-classified information systems
Agencies are to perform Security Assessments and report on the security needs of the systems (Gap Analysis)
Security Reports will be included in the agency's budget for the upcoming fiscal year (OMB)
Funds can be cut for non-compliance
The Act implies that funding will be provided to cover the mitigation of security gaps
Agencies have the opportunity to get the additional funds as long as they can provide a comprehensive Security Assessment that includes viable, Best Practice mitigating solutions
Self Assessment can be performed with the assistance of the NIST 800-26 as a guide.
Documentation
Publications on the Government Information Security Reform Act of 2000 are unavailable at present. The more current FISMA standard should be referred to.
Federal Information Security Management Act
Information in italics below is referenced from wikipedia, reproduced in accordance with the GNU Free Documentation License.
The Federal Information Security Management Act of 2002 ("FISMA", 44 U.S.C. § 3541, et seq.) is a United States federal law enacted in 2002 as Title III of the E-Government Act of 2002 (Pub.L. 107-347, 116 Stat. 2899). The act was meant to bolster computer and network security within the federal government and affiliated parties (such as government contractors) by mandating yearly audits.
FISMA has brought attention within the federal government to cybersecurity, which had previously been much neglected. As of February 2005, many government agencies received extremely poor marks on the official report card.
An effective information security program should include:
Periodic assessments of risk, including the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the organization
Policies and procedures that are based on risk assessments, cost-effectively reduce information security risks to an acceptable level, and ensure that information security is addressed throughout the life cycle of each organizational information system
Subordinate plans for providing adequate information security for networks, facilities, information systems, or groups of information systems, as appropriate
Security awareness training to inform personnel (including contractors and other users of information systems that support the operations and assets of the organization) of the information security risks associated with their activities and their responsibilities in complying with organizational policies and procedures designed to reduce these risks
Periodic testing and evaluation of the effectiveness of information security policies, procedures, practices, and security controls to be performed with a frequency depending on risk, but no less than annually
A process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the organization
Procedures for detecting, reporting, and responding to security incidents
Plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the organization
Each requirement of the law relating to Information Security is broken down further into more specific sub-requirements that can be mapped back to both the Security Principles that drive them and the Design Patterns that satisfy them.
Risk Management Framework
The risk-based approach to security control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations. The following activities related to managing organizational risk (also known as the NIST Risk Management Framework) are paramount to an effective information security program and can be applied to both new and legacy information systems within the context of the system development life cycle and the Federal Enterprise Architecture:
Step 1: Categorize. Categorize the information system and the information resident within that system based on impact. FIPS 199 and NIST SP 800-60 Revision 1 (Volume 1, Volume 2)
Step 2: Select. Select an initial set of security controls for the information system based on the FIPS 199 security categorization and apply tailoring guidance as appropriate, to obtain a starting point for required controls. FIPS 200 and NIST SP 800-53, Revision 2
Step 3: Supplement. Supplement the initial set of tailored security controls based on an assessment of risk and local conditions including organization-specific security requirements, specific threat information, cost-benefit analyses, or special circumstances. NIST SP 800-53, Revision 2 and SP 800-30
Step 4: Document. Document the agreed-upon set of security controls in the system security plan including the organization's justification for any refinements or adjustments to the initial set of controls. NIST SP 800-18, Revision 1
Step 5: Implement. Implement the security controls in the information system. See the appropriate NIST publication in the publications section.
Step 6: Assess. Assess the security controls using appropriate methods and procedures to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system. NIST SP 800-53A
Step 7: Authorize. Authorize information system operation based upon a determination of the risk to organizational operations, organizational assets, or to individuals resulting from the operation of the information system and the decision that this risk is acceptable. NIST SP 800-37
Step 8: Monitor. Monitor and assess selected security controls in the information system on a continuous basis including documenting changes to the system, conducting security impact analyses of the associated changes, and reporting the security status of the system to appropriate organizational officials on a regular basis. NIST SP 800-37 and SP 800-53A
Compliance
FISMA imposes a mandatory set of processes that must be followed for all information systems used or operated by a U.S. federal government agency or by a contractor or other organization on behalf of a federal agency. These processes must follow a combination of Federal Information Processing Standards (FIPS) documents, the special publications SP-800 series issued by NIST, and other legislation pertinent to federal information systems, such as the Privacy Act of 1974 and the Health Insurance Portability and Accountability Act. However, following these mandates only results in "compliance" and not "security". The Compliance process consists of the following:
Determine System Boundaries: NIST SP 800-18 Revision 1 provides guidance on determining system boundaries.
Determine system information types and perform FIPS-199 categorization: NIST SP 800-60 provides a catalog of information types, and FIPS-199 provides a rating methodology and a definition of the three criteria.
Document the system: NIST SP 800-18 Revision 1 gives guidance on documentation standards.
Perform risk assessment: NIST SP 800-30 provides guidance on the risk assessment process.
Select and implement security controls: NIST Special Publication 800-53 Revision 1, Recommended Security Controls for Federal Information Systems, contains the management, operational, and technical safeguards or countermeasures prescribed for an information system.
Certify system: NIST SP 800-53A provides guidance on the assessment methods applicable to individual controls.
Accredit system: NIST SP 800-37 provides guidance on the certification and accreditation of systems.
Continuous monitoring: Guidance on continuous monitoring can be found in NIST SP 800-37 and SP 800-53A.
Documentation
This legal publication is freely available on the internet.