
Rob Witcher

March 3, 2023

Domain 3 of the CISSP certification exam is called Security Architecture and Engineering. It covers the concepts, principles, structures, and standards used to design, implement, monitor, and secure architectures such as systems, applications, operating systems, equipment, and networks, as well as the controls used to enforce appropriate levels of security.

Studying this domain carefully is essential, since it contains a vast amount of information to grasp and understand properly. Here's a complete guide to approaching Domain 3: Security Architecture and Engineering—which comprises 13% of the total marks—and what you need to know to pass the exam.

3.1 Research, implement, and manage engineering processes using secure design principles


Security must be involved in all phases of designing and building a product or system, from beginning to end. It's also important to understand the meaning of the domain's title.

The word architecture implies many components that work together to allow that architecture to be used for its intended purposes.

If we add the word security, that would include security policies, knowledge, and experience that must be applied to protect this architecture to the level of value relating to the individual components and the overall architecture.

This is what is meant by the term security architecture

And finally, the word engineering commonly points to designing a solution by walking through a series of steps and phases to put the components together so they can work in harmony as an architecture.

Determining appropriate security controls

Regardless of the framework, model, or methodology used, the risk management process should be used to identify the most valuable assets, the risks to those assets, and the appropriate, cost-effective security controls to implement.

Some secure design principles are:

  • Threat modeling
  • Least privilege
  • Defense in depth
  • Secure defaults
  • Fail securely
  • Separation of duties
  • Keep it simple
  • Zero trust
  • Trust but verify
  • Privacy by Design
  • Shared responsibility

Secure defaults

Any default settings a system has should be secured to the extent possible so no compromise is facilitated.

Fail securely

If a system or its components fail, they should do so in a manner that doesn't expose the system to a potential attack.
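As a sketch of this principle (the function name and the ACL structure are illustrative, not from any particular library), an access check that fails closed might look like:

```python
def is_allowed(user, resource, acl):
    """Fail securely: any unexpected error results in denial, never in access."""
    try:
        return acl[resource] == user  # simplistic ownership rule for illustration
    except Exception:
        # Missing entry, corrupted ACL, etc. -- fail closed and deny by default
        return False

acl = {"report.txt": "alice"}
print(is_allowed("alice", "report.txt", acl))   # True
print(is_allowed("alice", "missing.txt", acl))  # False: the lookup fails, so access is denied
```

The key design choice is that the error path returns a denial rather than letting the exception expose or default to an open state.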

Keep it simple

Remove as much complexity from a situation as possible and focus on what matters most.

Zero trust

Zero trust is based on the premise that organizations should not automatically trust anything inside or outside their perimeter. Instead, systems and individuals must first be authenticated and authorized before being granted access.

Trust but verify

Trust but verify means being able to authenticate users and perform authorization based on their permissions to perform activities on the network so they can access the various resources.

It also means that real-time monitoring is a requirement. In short, focus on employing complete controls that include better detection and response mechanisms.

Privacy by design

Privacy by design is premised on the belief that privacy should be incorporated into networked systems and technologies by default and designed into the architecture.

Image of privacy by design on cissp domain 3 - Destination Certification

Shared responsibility

Because of increased reliance on third-party services, a corresponding increase in clarity on shared security expectations should exist. The cloud customer and service provider must clearly communicate expectations both ways and define related responsibilities.

To this end, consumers and providers must act on these responsibilities and define clear contracts and agreements.

3.2 Understand the fundamental concepts of security models (e.g., Biba, Star Model, Bell–LaPadula)

What is a model?

A security model represents what security should look like in an architecture being built. Security models have existed and have been used for years. Some of these models include Bell–LaPadula, Biba, Clark–Wilson, and Brewer–Nash (also referred to as the Chinese Wall model).

It's valuable to understand these models and their underlying rules as they govern the implementation of the model.

Concept of security

To ensure the protection of any architecture, it must be broken down into individual components, and each component must be secured to the degree that its value dictates.

Enterprise security architecture

Security architecture involves breaking down a system into its components and protecting each component based on its value.

Three of the most popular enterprise security architectures are:

  • Zachman
  • Sherwood Applied Business Security Architecture (SABSA)
  • The Open Group Architecture Framework (TOGAF)

Security models

Security models are rules that need to be implemented to achieve security. Many security models exist, but most of them are one of two types: lattice-based or rule-based.

A good way to envision a lattice-based model is to think of a ladder: a framework of steps that look like layers going up and down. In other words, a lattice-based model is a layer-based model; it requires layers of security to address the requirements.

Two lattice-based models exist: Bell–LaPadula and Biba. Bell–LaPadula addresses one primary component of the CIA triad: confidentiality. Biba addresses another component: integrity.

All other models are rule-based, meaning specific rules dictate how security operates.

The following table provides a summary of the various lattice-based and rule-based security models:

Layer/Lattice-based models

  • Bell–LaPadula
  • Biba

Rule-based models

  • Information Flow
  • Clark–Wilson
  • Brewer–Nash (Chinese Wall)
  • Graham–Denning
  • Harrison–Ruzzo–Ullman

Layer-based models

Lattice-based security models, like Bell–LaPadula and Biba, can also be thought of as layer-based security models.

Bell–LaPadula

Bell–LaPadula is based on incorporating the necessary rules that need to be implemented to achieve confidentiality.

Image of Bell-LaPadula model on cissp domain 3 -Destination Certification

Biba

Biba focuses on ensuring data integrity.

Image of Biba layer based model on cissp domain 3 - Destination Certification
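The core rules of these two lattice models can be sketched in a few lines of Python (the level names and numeric ordering are illustrative): Bell–LaPadula enforces "no read up, no write down" for confidentiality, while Biba enforces "no read down, no write up" for integrity.

```python
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject, obj):
    """Bell-LaPadula simple security property: no read up."""
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject, obj):
    """Bell-LaPadula *-property: no write down."""
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject, obj):
    """Biba simple integrity property: no read down."""
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject, obj):
    """Biba *-integrity property: no write up."""
    return LEVELS[subject] >= LEVELS[obj]

print(blp_can_read("secret", "confidential"))   # True: reading down is fine
print(blp_can_write("secret", "confidential"))  # False: no write down
print(biba_can_read("secret", "public"))        # False: no read down
```

Notice that the Biba rules are the Bell–LaPadula rules with the comparisons flipped, which is why the two models are often taught together.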

Lipner implementation

What happens if you want to have both confidentiality and integrity? The Lipner implementation is simply an attempt to combine the best features of Bell–LaPadula and Biba regarding confidentiality and integrity. As such, it is not truly a model but rather an implementation of two models.

Rule-based models

Depending on the model, the number and complexity of the rules employed may vary widely, and the model's focus can also change. In addition to looking at some specific rule-based models further below, a basic understanding of information flow models and covert channels should first be covered.

Information flow

If the flow of information can be tracked, it can be tracked throughout its life cycle: from the point of origin, whether collected or created, through storage, use, dissemination, and sharing with others, to its end of life (e.g., archival and destruction). Information flow also helps identify vulnerabilities and insecurities, like covert channels, and serves as the basis for both Bell–LaPadula and Biba.

Covert channels

Covert channels are unintentional communication paths that may lead to the disclosure of confidential information. An example of a storage covert channel exists in most technology architectures. On a laptop, sensitive information could be placed in RAM because a process needs to be used, but when that process finishes, the sensitive data remains in memory. That could become available to other processes that are placed in memory and can read it.

Covert channels are also called secret channels. Two types of covert channels exist, as summarized in the following table:

Storage

The process writes sensitive data to RAM, and the data remains present after the process completes; now, other processes can potentially read the data.

Timing

An online web server responds to a user providing an existing username within three seconds, while it takes one second if the username doesn't exist. That allows the attacker to perform username enumeration.
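One way to close the timing channel in the username example can be sketched as follows (the user store and hashing scheme are illustrative): perform the same amount of work whether or not the username exists, and compare hashes in constant time.

```python
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
DUMMY_HASH = hashlib.sha256(b"dummy placeholder").hexdigest()

def login(username, password):
    # Look up a real hash or a dummy one -- the same work happens either way,
    # so response time no longer reveals whether the username exists.
    stored = USERS.get(username, DUMMY_HASH)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    ok = hmac.compare_digest(stored, supplied)  # constant-time comparison
    return ok and username in USERS
```

The same idea (uniform work plus a constant-time comparison) applies to any secret comparison, not just passwords.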

Clark–Wilson

Clark–Wilson is an important rule-based model focusing only on integrity. Unlike Biba, which only prevents unauthorized subjects from making any changes, Clark–Wilson offers further protection and meets three goals of integrity:

  1. Prevent unauthorized subjects from making any changes (the only one of the three goals that Biba addresses)
  2. Prevent authorized subjects from making bad changes
  3. Maintain consistency of the system

Biba only addresses #1 and therefore falls short of truly addressing security concerns related to protecting all integrity, while Clark–Wilson addresses #1 and then further protects integrity through #2 and #3.

Clark–Wilson achieves each of the goals specifically through the application of the three rules of integrity noted here:

Well-formed transactions

Good, consistent, validated data. Only perform operations that won't compromise the integrity of objects.

Separation of duties

One person shouldn't be allowed to perform all tasks related to a critical function.

Access triple

Subject | Program | Object. A subject cannot directly access an object (e.g., in a database); access must go through a program that enforces access rules.
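The access triple can be sketched as a transformation procedure in Python (the account names, the authorized subject, and the integrity rules are all illustrative): subjects never modify the data directly; they invoke a program that enforces well-formed transactions.

```python
BALANCES = {"acct1": 100}  # the protected data (constrained data items)

def transfer(subject, src, dst, amount):
    """The 'program' in Subject | Program | Object: all access to BALANCES
    goes through this procedure, which enforces a well-formed transaction."""
    if subject != "teller":                        # only authorized subjects
        raise PermissionError("subject may not invoke this procedure")
    if amount <= 0 or BALANCES.get(src, 0) < amount:
        raise ValueError("transaction would corrupt data integrity")
    BALANCES[src] -= amount                        # debit and credit together,
    BALANCES[dst] = BALANCES.get(dst, 0) + amount  # keeping the totals consistent

transfer("teller", "acct1", "acct2", 40)
print(BALANCES)  # {'acct1': 60, 'acct2': 40}
```

Because every change goes through the procedure, even an authorized subject cannot make a "bad" change (goals 2 and 3), which is exactly what Biba alone does not provide.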

Brewer–Nash (The Chinese Wall) model

Brewer–Nash is also known as "The Chinese Wall" model and has one primary goal: Preventing conflicts of interest.

An example of where Brewer–Nash might be implemented is between the Development and Production departments in an organization, as the two departments should not be able to influence each other or even allow access between each other.
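A minimal sketch of the conflict-of-interest rule (the company names and conflict classes are made up): once a subject has accessed one company in a conflict class, access to that company's competitors is denied.

```python
CONFLICT_CLASSES = {"BankA": "banking", "BankB": "banking", "OilCo": "energy"}

def can_access(already_accessed, company):
    """Brewer-Nash: allow access only if the subject has not touched a
    competitor in the same conflict-of-interest class."""
    cls = CONFLICT_CLASSES[company]
    return all(c == company or CONFLICT_CLASSES[c] != cls
               for c in already_accessed)

print(can_access({"BankA"}, "OilCo"))  # True: different conflict class
print(can_access({"BankA"}, "BankB"))  # False: competitor already accessed
print(can_access({"BankA"}, "BankA"))  # True: the same company is still fine
```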

Image of Brewer-Nash model on cissp domain 3 - Destination Certification

Graham–Denning model

Graham–Denning is another lesser-known, rule-based model that specifies rules allowing a subject to access an object.

Harrison–Ruzzo–Ullman model

Like Graham–Denning, Harrison–Ruzzo–Ullman is also a rule-based model that focuses on the integrity of access rights via a finite set of rules available to edit a subject's access rights. It adds the ability to add generic rights to groups of individuals.

Certification and accreditation

Certification is the comprehensive technical analysis of a solution to confirm it meets the desired needs.

Accreditation is management's official sign-off of certification for a predetermined period.

When architectures—especially security architectures—are built, products are often purchased from vendors. Security today often relies on solutions and mechanisms provided by vendors. This fact introduces a potential problem: How do we know vendor solutions actually provide the level of security we think they provide?

Any vendor is going to say they have the best products, the best solutions, and the best architectures. For example, if a firewall needs to be purchased, every firewall vendor will say their firewall is the best one available and will meet our needs perfectly. How can statements like this be confirmed and verified? We need an independent, objective measurement system against which vendor products can be evaluated. Such a system could be used by any organization around the globe to make purchasing decisions without relying on the vendors themselves.

These evaluations, these measurements, could be trusted because they've been created using an independent, vendor-neutral, objective system. These measurement systems do, in fact, exist and are called evaluation criteria systems.

The most well-known evaluation criteria systems are:

  • Trusted Computer System Evaluation Criteria (TCSEC)—also known as the Orange Book
  • The European equivalent of TCSEC called Information Technology Security Evaluation Criteria (ITSEC)
  • An ISO standard called the Common Criteria

Evaluation criteria (ITSEC and TCSEC)

Orange Book/Trusted Computer System Evaluation Criteria (TCSEC)

The first evaluation criteria system created is often referred to as the Orange Book due to the fact the cover of the book is orange. It was written as part of a series of books known as the "rainbow series," published by the US Department of Defense in the '80s. Each book in the series deals with a topic related to security, and the cover of each is a different color, thus the nickname "rainbow series."

Image of Orange Book / Trusted Computer System Evaluation Criteria (TCSEC) - Destination Certification

The classification levels—the criteria—used in the Orange Book are:

  • A1. Verified design
  • B3. Security labels, verification of no covert channels, and must stay secure during start up
  • B2. Security labels and verification of no covert channels
  • B1. Security labels
  • C2. Strict login procedures
  • C1. Weak protection mechanisms
  • D. Failed or was not tested

The Orange Book only measures confidentiality. Yet even by today's standards, and even though many people say it's obsolete, if you're interested only in confidentiality, there's no better system than TCSEC.

In addition, it only measures single-box architectures; it does not map well to networked environments. This is why many European organizations considered the model from a more current perspective and revamped it. They took what they thought to be a good idea and made it better in what is known as the Information Technology Security Evaluation Criteria (ITSEC).

Information Technology Security Evaluation Criteria (ITSEC)

ITSEC measures more than confidentiality and works well in a network environment. Also, when ITSEC was created, ways to measure function and assurance separate from each other were incorporated.

When a product is considered through the lens of ITSEC, two ratings are given. One rating—the "F" levels—is a functional rating, like the ones used in the Orange Book. The other rating—the "E" levels—was introduced as part of ITSEC and refers to levels of assurance. E levels range from E0 to E6, as shown in the following list:

  • E6. Formal end-to-end security tests + source code reviews
  • E5. Semiformal system + unit tests and source code review
  • E4. Semiformal system + unit tests
  • E3. Informal system + unit tests
  • E2. Informal system tests
  • E1. System in development
  • E0. Inadequate assurance

Common Criteria replaced ITSEC in 2005.

Common Criteria

ISO 15408, better known as the Common Criteria, is the most widely used and most popular of the evaluation criteria systems; most products are evaluated using it.

As such, it's critical to understand Common Criteria components, Evaluation Assurance Levels (EAL), and the ramifications if changes to an EAL-rated system take place.

Like the other evaluation systems, the Common Criteria provides confidence in the industry for consumers, security functions, vendors, and others. The Common Criteria is the latest measurement system, and it's also an ISO standard (ISO 15408). It's called the Common Criteria because several countries joined together with a common goal: to create a common measurement system that could be trusted globally.

Common Criteria process

The first component is the Protection Profile (PP). The PP lists the security capabilities that a type or category of security products should possess. For example, there's a Protection Profile for firewalls; it lists the security capabilities that any firewall should contain—for example, two-factor authentication (2FA) capabilities, VPN capabilities, ability to encrypt to 128-bit encryption level, and secure logging, to name a few.

Target of Evaluation (TOE) is the next component. Using the earlier firewall example, if a vendor desires their firewall to be rated according to the Common Criteria, the firewall would be considered the TOE.

The next component, the Security Target (ST), describes—from the vendor's perspective—each of the firewall's security capabilities that match up with the capabilities outlined in the Protection Profile. When the firewall is measured, capabilities like VPN, encryption, two-factor authentication, secure logging, and so on are compared against the standards listed in the Protection Profile and tested extensively. For example, the firewall may perform two-factor authentication very well but lack strong VPN capabilities.

Image of common criteria process on cissp domain 3 - Destination Certification

EAL levels

Despite the table below illustrating multiple EAL levels, EAL7 is not necessarily the best rating for a vendor marketing and selling its product. In fact, most organizations will not purchase a product rated above EAL4. Operating systems are typically rated at EAL3, and firewalls at EAL4.

  • EAL7. Formally verified, designed, and tested
  • EAL6. Semi-formally verified, designed, and tested
  • EAL5. Semi-formally designed and tested
  • EAL4. Methodically designed, tested, and reviewed
  • EAL3. Methodically tested and checked
  • EAL2. Structurally tested
  • EAL1. Functionally tested

If a product is at EAL7, it could become more vulnerable to compromise due to being more complex and harder to maintain. Yes, the product might offer more security features and capabilities, but consumers will likely not use them if they require extensive configuration, administrative skills, and maintenance. This could ultimately leave an organization at greater risk.

Vendors, therefore, must balance the trade-off between functionality and security. Too much of the latter always impacts product speed and administrative overhead, which might also lead to the creation of an expensive product.

A final thing to note is that after a product undergoes an evaluation and is assigned an EAL level, the EAL level for that product will remain the same throughout its lifespan unless a major change in product functionality is introduced. In other words, when a patch or software update to the product is made, the EAL level remains unchanged.

3.3 Select controls based on systems security requirements

Security control frameworks aid with the control selection process by providing guidance based on best practices.

Additionally, as control frameworks offer guidance, the best and most applicable elements of multiple frameworks could potentially be utilized as a part of the control selection process:

  • COBIT
  • ITIL
  • NIST SP 800-53
  • PCI DSS
  • ISO 27001
  • ISO 27002
  • COSO
  • HIPAA
  • FISMA
  • FedRAMP
  • SOX

Rationalizing frameworks

The following figure shows how all these different security frameworks relate to one another:

Image of rationalizing frameworks on cissp domain 3 - Destination Certification

Notice that they overlap, which means that frameworks can span contexts, and as also noted earlier, organizations will often choose to use features from multiple frameworks to meet their needs.


3.4 Understand security capabilities of information systems (IS)

RMC, security kernel, and TCB

Subjects and objects

Before diving into concepts like the RMC and security kernel, it's important to understand subjects and objects, as those concepts are heavily used throughout the following section.

Subject

Active entity. A subject is a person, process, program, or anything similar that actively tries to access an object.

Object

Passive entity. An object is anything that is passively accessed by a subject, like a file, server, process, or hardware component.

Reference Monitor Concept (RMC)

The RMC is simply the concept of a subject accessing an object through some form of mediation that is based on a set of rules, with this access being logged and monitored. The reference monitor concept is prevalent throughout security and is a topic often seen on the exam.

Image of reference monitor concept (RMC) on cissp domain 3 - Destination Certification

RMC features include:

  • Must mediate all access
  • Be protected from modification
  • Be verifiable as correct
  • Always be invoked
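Putting those features together, a toy reference monitor in Python might look like this (the rule table and the subject/object names are illustrative): every access goes through one mediation function, decisions come from a rule set, and every decision is recorded.

```python
RULES = {("alice", "payroll.db"): {"read"}}   # the mediation rules
AUDIT_LOG = []                                # every decision is logged

def mediate(subject, obj, action):
    """Single choke point for access: always invoked, always logged."""
    allowed = action in RULES.get((subject, obj), set())
    AUDIT_LOG.append((subject, obj, action, allowed))
    return allowed

print(mediate("alice", "payroll.db", "read"))   # True
print(mediate("alice", "payroll.db", "write"))  # False: not in the rules
print(len(AUDIT_LOG))                           # 2 -- both decisions were recorded
```

In a real system, "protected from modification" and "verifiable as correct" are properties of the implementation (the security kernel), not something a sketch like this can demonstrate.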

Security kernel

It's important to remember that an implementation of the reference monitor concept is known as a security kernel. The RMC is the abstract concept; any system that actually controls access is an implementation of it, and any such implementation is a security kernel.

Any time a security kernel is implemented, it should demonstrate the three characteristics or properties of the RMC: completeness, isolation, and verifiability, as shown in the following table:

Completeness

Impossible to bypass mediation; impossible to bypass the security kernel.

Isolation

Mediation rules are tamperproof.

Verifiability

Logging and monitoring, and other forms of testing, ensure the security kernel is functioning correctly.

Trusted Computing Base (TCB)

The Trusted Computing Base (TCB) encompasses all the security controls implemented to protect an architecture. It's the totality of protection mechanisms within an architecture. Examples of components within the TCB include all the hardware, firmware, and software processes that make up the security system.

It's worth highlighting that all the components noted below are found in the TCB:

  • Processors (CPUs)
  • Memory
  • Primary storage
  • Secondary storage
  • Virtual memory
  • Firmware
  • Operating systems
  • System kernel

Processors (CPUs)

A central processing unit (CPU) is the brain of a computer; it processes all of the instructions and ultimately solves problems. A CPU constantly iterates through this four-step process:

  • Fetching instructions and data
  • Decoding instructions
  • Executing instructions
  • Storing results

Image of processors (CPUs) on cissp domain 3 - Destination Certification

Processor states

From a security perspective, CPUs operate in one of two processor states: the supervisor or problem state. These states can also be thought of as privilege levels and are simply operating modes for the processor that restrict the operations that can be performed by certain processes.

Process isolation

From a security perspective, process isolation is a critical element of computing, as it prevents processes from interacting with each other and with each other's resources. It is often accomplished using either of the two following methods:

  1. Time-division multiplexing. Time-division multiplexing relates more to the CPU, which determines process isolation. When multiple applications are running, multiple accompanying processes are also running, and the CPU allocates very small slots of time to each process in turn.
  2. Memory segmentation. Memory segmentation is all about separating memory segments from each other to protect their contents, including processes that may be running in those segments. In many cases, it relates more to random-access memory (RAM)—the high-speed volatile storage area found in computer systems. Memory segmentation ensures that the memory assigned to one application is only accessible by that application.

Types of storage

Storage is where data in a computer system can be found. At a high level, two main types of storage exist: primary and secondary storage.

Primary storage

  • Fast
  • Volatile—data is lost when the device is powered off
  • Small size
  • Examples of primary storage: cache, CPU registers, RAM

Secondary storage

  • Slow
  • Non-volatile
  • Large size
  • Examples of secondary storage: magnetic hard drives, optical media, tapes, SSDs

Another related concept is what happens when RAM fills up because many applications are running at the same time. Data related to each program and running process is loaded into RAM, and if RAM fills up, the system will eventually crash. A way to mitigate this is to use what's known as paging, or virtual memory.

Image of virtual memory on cissp domain 3- Destination Certification

System kernel

The system kernel is the core of the operating system and has complete control over everything in the system. It has low-level control over all the fine details and operational components of the operating system. In essence, it has access to everything.

The system kernel and the security kernel are not the same thing. As noted, the system kernel drives the operating system. The security kernel is the implementation of the reference monitor concept.

From a security perspective, it's critical to protect the system kernel and ensure that it is operating correctly, and privilege levels aid in this regard.

Privilege levels

Privilege levels establish operational trust boundaries for software running on a computer. Subjects of higher trust (e.g., the system kernel) can access more system capabilities and operate in kernel mode. Subjects of lower trust (most applications running on a computer) can only access a smaller portion of system capabilities and operate in user mode.

Image of privilege levels on cissp domain 3 - Destination Certification

Ring protection model

The ring protection model is a form of conceptual layering that segregates and protects operational domains from each other. Ring 0 is the most trusted and, therefore, the most secure ring. Firmware and other critical system-related processes run in Ring 0. Ring 3 (user programs and applications), on the other hand, is the least trusted and secure level, where the least access exists to protect the kernel from unwanted side effects like malware infecting the machine.

Image of ring protection model on cissp domain 3 - Destination Certification

Firmware

Firmware is software that provides low-level control of hardware systems; it's the code that boots up hardware and brings it online. One of the challenges with firmware is that it is no longer hard-coded: because it can now be updated and modified, the hardware itself is vulnerable to attacks.

Middleware

The idea of middleware is it's an intermediary; it's a layer of software that enables interoperability (glue) between otherwise incompatible applications. In other words, middleware speaks two languages and can thereby enable communication between two completely different systems that otherwise could not communicate with each other.

Image of middleware on cissp domain 3- Destination Certification

Abstraction and virtualization

Abstraction

Abstraction is a concept that is used extensively in computing. It is the idea that the underlying complexity and details of a system are hidden. Driving a car without needing to understand how the engine works is an everyday example; computing itself is another.

It is used in programming. CPUs, at their core, understand 1s and 0s. From a human perspective, however, 1s and 0s are very hard to understand, and over the years, numerous iterations of programming languages have evolved and abstracted the complexity of computing to a human-readable form.

Virtualization

Carrying the concept of abstraction further, virtualization is the process of creating a virtual version of something to abstract away from the true underlying hardware or software. Specifically, to facilitate virtualization, a hypervisor is employed. A hypervisor serves as a layer of abstraction between the underlying physical hardware and virtual machines (VMs).

Image of virtualization on cissp domain 3 - Destination Certification

Layering/Defense-in-depth

Another important concept is the concept of layered defense or defense-in-depth. What this simply means is the protection of a valuable asset should never rely on just one control. If that control fails, the asset would be unprotected. Instead, multiple control layers should be implemented, and the control at each layer should be a complete control—a combination of preventive, detective, and corrective controls.

Image of layering defense in depth on cissp domain 3- Destination Certification

Trusted platform modules (TPM)

A trusted platform module (TPM) is a piece of hardware that implements an ISO standard, resulting in the ability to establish trust involving security and privacy. In other words, a TPM is a chip that performs cryptographic operations like key generation and storage in addition to platform integrity.

For example, when a machine boots, the TPM can be used to identify if there has been any tampering with critical system components, in which case the system wouldn't boot. So, a TPM is a piece of hardware—usually installed on the motherboard—that incorporates the international standard denoted by ISO/IEC 11889 on computing devices, like desktop and laptop computers, and mobile devices, among others.

In many ways, a TPM is a black box, meaning that commands can be sent to the TPM, but the information stored within the TPM cannot be extracted.

Computers that contain a TPM can create cryptographic keys and encrypt them—using the endorsement key—so only the TPM can be used for decryption. This process is known as binding.

Computers can also create a bound key that is also associated with certain computer configuration settings and parameters. This key can only be unbound when the configuration settings and parameters match the values at the time the key was created. This process is known as sealing.
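Sealing can be illustrated with a software analogy (this is a simulation using an ordinary hash, not a real TPM API; the configuration keys are made up): a secret is tied to a measurement of the platform configuration and is only released when the measurement still matches.

```python
import hashlib

def measure(config):
    # Stand-in for platform measurements: a hash over the configuration
    return hashlib.sha256(repr(sorted(config.items())).encode()).hexdigest()

def seal(secret, config):
    # Tie the secret to the configuration as measured right now
    return {"secret": secret, "measurement": measure(config)}

def unseal(sealed, config):
    # Release the secret only if the current measurement matches
    if measure(config) != sealed["measurement"]:
        raise PermissionError("configuration changed; secret remains sealed")
    return sealed["secret"]
```

A real TPM keeps the secret and the comparison inside the chip, which is what makes the black-box property described above meaningful.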

3.5 Assess and mitigate the vulnerabilities of security architectures, designs, and solution elements

Vulnerabilities in systems

Single point of failure

Let's answer the question of what single point of failure means with the graphic below:

Image of single point of failure on cissp domain 3- Destination Certification

The cloud represents the internet. Below it, the brick wall with the flame represents a firewall. Next, the ball with arrows pointing in every direction is a router. Finally, below the router are several computer systems. What are the single points of failure in this diagram? In this example, the firewall and the router are each considered a single point of failure; if either device fails, the connection to the internet is broken. In other words, a single point of failure means that when a single device or connection fails, it impacts the entire architecture.

Reduce the risk of single point of failure

Single points of failure can become very dangerous for any organization and need to be dealt with accordingly, usually by implementing redundancy. Looking at the previous example, two firewalls and two routers could be installed to create redundancy and mitigate the risk of single points of failure. Each pair can be configured in what is known as "high availability" so that if firewall 1 fails, traffic can be rerouted through firewall 2; if router 1 fails, traffic can be rerouted through router 2.

Bypass controls

Bypass controls are intentional mechanisms for circumventing normal security controls, and as such they are a potential vulnerability or new source of risk. Let's examine this concept through an example: you need to access the administrative settings of your home router, but you can't remember the password you set up the last time you did this. Being able to perform a factory reset of the device would allow you to enter the configuration utility with default credentials and set it up from scratch.

Reduce the risk of bypass controls

Bypass controls are often necessary, so compensating controls should always be implemented alongside them to prevent or mitigate their exploitation.

Ways to mitigate the risk associated with bypass controls include:

  • Segregation of duties
  • Logging and monitoring
  • Physical security
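
As a small illustration of the logging-and-monitoring idea, a bypass path can be wrapped so that every invocation leaves an audit trail. This is a Python sketch; the function names and the reset routine are hypothetical:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("bypass-audit")

def audited_bypass(func):
    """Compensating control: every use of a bypass path is logged so
    it can be monitored and reviewed."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        audit.warning("bypass invoked: %s args=%r", func.__name__, args)
        return func(*args, **kwargs)
    return wrapper

@audited_bypass
def factory_reset(device_id: str) -> str:
    # Hypothetical reset routine restoring default credentials.
    return f"{device_id} restored to factory defaults"

print(factory_reset("router-01"))
```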

TOCTOU or race condition

TOCTOU stands for Time-of-Check Time-of-Use and essentially represents the short window of time between when authorization or access is checked and when that access is actually used. In that short time period, something unintended or malicious can transpire. This is also sometimes known as a race condition.

Reduce the risk of race conditions

To mitigate the risk of race conditions, the frequency of access checks should increase. The more frequent the checks, the greater the frequency of re-authentication, thus reducing the overall risk.
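
The classic file-access example shows both the vulnerable check-then-use pattern and a safer alternative that collapses check and use into one step (a Python sketch):

```python
import os
import tempfile

# Vulnerable pattern: the file can be replaced between the check
# (os.access) and the use (open) -- that gap is the TOCTOU window.
def read_if_allowed_racy(path):
    if os.access(path, os.R_OK):    # time of check
        with open(path) as f:       # time of use
            return f.read()
    return None

# Safer pattern: perform the operation directly and handle failure,
# so there is no gap between check and use.
def read_if_allowed(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("secret")
print(read_if_allowed(tmp.name))  # secret
os.unlink(tmp.name)
```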

Emanations

Emanations represent a valid security concern since any time a device is emanating, valuable data could be available that a properly equipped eavesdropper or system could collect.

Image of emanations on cissp domain 3- Destination Certification

Various ways exist to protect from emanation, such as:

Shielding (TEMPEST)

Proper walls, Faraday cages, and copper-lined envelopes prevent sensitive information from leaking out or being intercepted; TEMPEST is a set of standards specifically aimed at limiting emanations from devices

White noise

Strong signal of random noise emanated where sensitive information is being processed

Control zones

Preventing access or proximity to locations where sensitive information is being processed

Hardening

Hardening is the process of looking at individual components of a system and then securing each component to reduce the overall vulnerability of the system.

Vulnerabilities in systems

To protect all devices, they need to be broken down into components, and each component would need to be secured. Each component within a given device is secured based on value.

Organizational relevance is another term you need to be familiar with, and it indicates how valuable something is to the organization. This is a term you could possibly see on the exam, and you need to remember that it implies value.

Reduce risk in client and server-based systems

Examples of hardening include doing things like disabling unnecessary services on a computer system or uninstalling software that shouldn't be there (like an SFTP server running on a user's endpoint). A service represents a small subset of code running on a system for a particular reason.

Other ways to harden systems include:

  • Installing antivirus software
  • Installing host-based IDS/IPS and firewalls
  • Performing device configuration reviews
  • Implementing full-disk encryption
  • Enforcing strong passwords
  • Taking routine system backups
  • Implementing sufficient logging and monitoring

The most important question to ask is, "What is this system meant to do?" That will guide the hardening effort. If a system is supposed to act as a web server, then it shouldn't have fifty different ports open and services installed, as that heavily increases an attacker's chances of breaching it. Each time a system is deployed, a hardening procedure should be followed, and after each hardening process, the resulting configuration should be verified to confirm the system is working as expected.
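
One hardening verification step can be sketched as a minimal port check: scan for listening services and flag anything not on the approved list. This is a Python illustration only; the port numbers and allowlist are hypothetical:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.2) -> set:
    """Return the subset of `ports` accepting a TCP connection."""
    open_ports = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.add(port)
    return open_ports

def unexpected_services(found_open: set, allowed) -> set:
    """Anything listening that is not on the approved list is a
    candidate for hardening (disable the service, close the port)."""
    return found_open - set(allowed)

# A web server should expose only 80 and 443; 22 and 8080 get flagged.
print(unexpected_services({22, 80, 443, 8080}, [80, 443]))
```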

Risk in mobile systems

Reduce risk in mobile-based systems

Mobile device management (MDM) and mobile application management (MAM) solutions help organizations secure devices and the applications that run on them.

Mobile device management solutions should particularly focus on securing remote access using a VPN and endpoint security, as well as securing applications on the device through application whitelisting.

MDM and MAM can be combined with policy enforcement, application of device encryption, and related policies to adequately protect mobile devices if they are lost or stolen.

  • Policies. One of the best ways to reduce risk related to mobile devices is using policies like Acceptable Use, Personal Computers, BYOD/CYOD (Bring Your Own Device/Choose Your Own Device), and Education, Awareness, and Training.
  • Process related to lost or stolen devices. Typically, this involves notification of IT and security personnel as well as a means by which the device can be remotely wiped.
  • Remote access security. VPN and 2FA capabilities should be enabled by default to prevent a mobile device from being used to connect to a remote network in an insecure manner.
  • Endpoint security. Antivirus/malware, DLP, and similar MDM-provisioned software should be installed on mobile devices just like standard computing equipment. Additionally, the concept of hardening should be employed to minimize the potential attack surface of the devices.
  • Application whitelisting. Organizations should control which applications users may install on their mobile devices through application whitelisting and not allow them to install anything not present on the approved application list.
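
Application whitelisting boils down to a default-deny lookup against the approved list, as in this toy sketch (the application IDs are hypothetical):

```python
# Hypothetical MDM-style allowlist; app IDs are illustrative.
APPROVED_APPS = {"com.example.mail", "com.example.vpn", "com.example.chat"}

def can_install(app_id: str, approved=APPROVED_APPS) -> bool:
    """Default-deny: only apps on the approved list may be installed."""
    return app_id in approved

for app in ("com.example.mail", "com.gamevendor.freegame"):
    print(app, "->", "allow" if can_install(app) else "deny")
```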

OWASP Mobile Top 10

The Open Web Application Security Project (OWASP) Foundation is a community-driven organization dedicated to improving the security of software, including software and applications that run on mobile devices.

Among OWASP's many substantial contributions to the security community are the globally recognized OWASP Top 10 and OWASP Mobile Top 10 lists, which consolidate data from a variety of sources, like security vendors and consultancies, bug bounties, and numerous organizations located around the world. The most recent OWASP Mobile Top 10 is listed here:

  • M1. Improper Platform Usage
  • M2. Insecure Data Storage
  • M3. Insecure Communication
  • M4. Insecure Authentication
  • M5. Insufficient Cryptography
  • M6. Insecure Authorization
  • M7. Poor Client Code Quality
  • M8. Code Tampering
  • M9. Reverse Engineering
  • M10. Extraneous Functionality

For each identified risk, specific details about it can be found, including threat agents, attack vectors, security weaknesses, technical impacts, and business impacts, as well as ways to prevent or mitigate the risk.

Distributed systems

Distributed systems are systems that are spread out and can communicate with each other across a network. The internet is a great example of a distributed system.

Although there is significant value in connecting the systems within an organization and then connecting the organization to the internet, there are also significant risks, such as providing a means for potential attackers to gain access to the corporate network and cause mayhem (data breaches, denial-of-service, ransomware, etc.).

Distributed file systems (DFS) take the concept of distributed systems a step further by allowing files to be hosted by multiple hosts and shared and accessed across a network. DFS software helps manage the files being hosted and presents them to users as if they're stored in one central location.

Grid computing

Grid computing is similar to distributed systems in that it still relates to systems that are connected together, but grid systems are usually connected via a very high-speed connection to serve a greater purpose than simply passing the occasional email or file back and forth.

Image of grid computing on cissp domain 3 - Destination Certification

What's the security risk with grid computing?

The security risks with grid computing are as follows:

  1. Data integrity
  2. Data validation
  3. Inappropriate use of the grid computer

Inference and aggregation

Data warehouse

The idea behind a data warehouse is to perform data analytics from a number of different data sets with the hope of identifying interesting bits of information. A common term related to data warehouses is data island, and it's used in the form of a question: "Where are the data islands located?" As this question alludes, the premise of a data warehouse is that all the data from these islands are brought together in one central location. Once in one location, the data is much easier to analyze.

What are the security risks related to a data warehouse?

A data warehouse can be a single point of failure for a couple of reasons. The first relates to availability. The second relates to the fact that if someone gains unauthorized access to the data warehouse, they could have access to significant amounts of valuable information.

Big data

Data from many different locations are brought into a central repository to be analyzed. On the surface, this sounds very similar to a data warehouse. What's the difference? Three things:

  • Variety means that data can be pulled from a number of different sources; in a big data repository, just about anything can be stored.
  • Volume refers to the size of the data sets. With a data warehouse, storage is typically restricted to the storage capacity of a single system; with big data, storage spans multiple systems.
  • Velocity refers to the fact that data can be ingested and analyzed very rapidly in big data—even faster than is possible with data warehouses.

Examples of big data tools include Hadoop, MongoDB, and Tableau.
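
The distributed-analysis pattern these tools build on can be illustrated with a toy MapReduce-style word count. This is pure Python for illustration; in Hadoop, the chunks would live on separate nodes of a cluster:

```python
from collections import Counter
from functools import reduce

# Toy MapReduce-style word count -- the pattern Hadoop popularized for
# spreading analysis of very large data sets across many machines.
def map_phase(chunk: str) -> Counter:
    """Each node counts words in its local chunk of the data."""
    return Counter(chunk.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    """Partial counts from all nodes are merged into one result."""
    return a + b

# Pretend each chunk lives on a different node of the cluster.
chunks = ["error ok error", "ok ok warn"]
total = reduce(reduce_phase, (map_phase(c) for c in chunks))
print(total["ok"])  # 3
```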

Data mining and analytics

The primary driver behind data warehouses and big data is the desire to identify trends and other interesting insights. Through the analysis of seemingly disparate data, otherwise invisible relationships and little nuggets of valuable information can be gleaned. The processes used to derive these insights are referred to as inference and aggregation.

Aggregation: collecting, gathering, or combining data for the purpose of statistical analysis.

Inference: deducing information from evidence and reasoning rather than from explicit statements.
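
A toy "difference attack" makes unauthorized inference concrete: two individually authorized aggregate queries can be combined to reveal one person's record. This is a Python sketch with hypothetical data:

```python
# Hypothetical salary data; in practice this would sit in a database
# that only answers aggregate (statistical) queries.
salaries = {"alice": 90_000, "bob": 80_000, "carol": 70_000}

def total_salary(db: dict, exclude=()) -> int:
    """An 'authorized' aggregate query: sum over a group."""
    return sum(v for k, v in db.items() if k not in exclude)

everyone = total_salary(salaries)              # authorized aggregate
without_bob = total_salary(salaries, {"bob"})  # also authorized
print(everyone - without_bob)  # 80000 -- bob's salary, inferred
```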

Reduce the risk of inference and aggregation

Inference, especially unauthorized inference, can create a significant risk to an organization. One method to reduce the risk of unauthorized inference is polyinstantiation, which allows information to exist at different classification levels for the purpose of preventing unauthorized inference and aggregation.

Industrial Control Systems (ICS)

Industrial control system (ICS) is a general term used to describe control systems related to industrial processes and critical infrastructure.

Reduce risk in industrial control systems

One of the best ways to protect ICS is keeping them offline, also known as air gapping or creating an air gap. What this simply means is that ICS devices can communicate with each other, but the ICS network is not connected to the internet or even the corporate network in any way. So, even if someone does try to connect to these ICS systems from the internet or corporate network, they'll be unable to do so.

The three primary types of ICSs are Supervisory Control and Data Acquisition (SCADA), Distributed Control System (DCS), and Programmable Logic Controller (PLC).

Patching industrial control systems

Strong configuration management processes, good patch management and backup/archive plans, and so on should be in place and used when and where possible. When patching ICS systems is not possible, additional mitigating steps can be taken to reduce the risk and impact of disruption of critical infrastructure:

  • Implement nonstop logging and monitoring and anomaly detection systems.
  • Conduct regular vulnerability assessments of ICS networks, with a particular focus on connections to the internet or direct connections to internet-connected systems, rogue devices, and plaintext authentication.
  • Use VLANs and zoning techniques to mitigate or prevent an attacker from pivoting to other neighboring systems if the ICS is breached.
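
The anomaly-detection idea can be sketched as a simple statistical check on sensor telemetry. This is a minimal illustration, not a production ICS monitor; the window, threshold, and readings are arbitrary:

```python
from statistics import mean, stdev

def anomalies(readings, window: int = 5, threshold: float = 3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A temperature sensor spikes at index 5 -- worth investigating.
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 95.5, 70.1]
print(anomalies(temps))  # [5]
```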

Internet of Things (IoT)

The Internet of Things (IoT) refers to all the devices, like home appliances, that are connected to the internet. IoT devices, by their nature, are risky. Reducing their risk involves making careful purchase decisions, taking every precaution during installation, and keeping the technology up to date.

Cloud service and deployment models

Cloud computing

A cloud can be a private, public, or hybrid model. It can also allow greater or smaller control to fall on the client or the cloud service provider. It all depends on what the goals are. So many options and variations exist. Some of the most common characteristics of a cloud provider are:

  1. On-demand self-service
  2. Broad network access
  3. Resource pooling
  4. Rapid elasticity and scalability
  5. Measured service
  6. Multitenancy

Cloud service models
There are three primary service models used in the cloud:

  • Software as a Service (SaaS) provides access to an application that is rented—usually via a monthly/annual, subscriber-based fee—and the application is typically web-based.
  • Infrastructure as a Service (IaaS) is an environment where customers can deploy virtualized infrastructure: servers, appliances, storage, and networking components.
  • Platform as a Service (PaaS) is a platform that provides the services and functionality for customers to develop and deploy applications.

Two additional cloud service models that are now pervasively used are:

  • Containers as a Service (CaaS). It allows multiple programming language stacks, like Ruby on Rails or Node.js, to name a couple, to be deployed in containers.
  • Function as a Service (FaaS). It describes serverless and the use of microservices to accomplish business goals inexpensively and quickly.
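
The FaaS model centers on small, stateless functions invoked on demand. Below is a minimal handler sketch in the handler(event, context) shape that AWS Lambda uses; the event fields are hypothetical:

```python
import json

# Minimal serverless-style function in the handler(event, context)
# shape AWS Lambda uses; the event fields here are hypothetical.
def handler(event: dict, context=None) -> dict:
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

resp = handler({"name": "cloud"})
print(resp["body"])  # {"message": "hello, cloud"}
```
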

Image of cloud service models on cissp domain 3 - Destination Certification

Significant responsibility is still often shared between the cloud service provider and the cloud customer. The cloud customer is always accountable for their data and other assets existing in a cloud environment.

Cloud deployment models

Several cloud deployment models exist, and these refer to how the cloud is deployed, what hardware it runs on, and where the hardware is located. Most of the deployment models are intuitive and easy to understand.

| Deployment model | Infrastructure managed by | Infrastructure owned by | Infrastructure located | Accessible by |
| --- | --- | --- | --- | --- |
| Public | Third-party provider | Third-party provider | Off-premises | Everyone (untrusted) |
| Private/Community | Organization or third-party provider | Organization or third-party provider | On-premises or off-premises | Trusted |
| Hybrid | Both organization and third-party provider | Both organization and third-party provider | Both on-premises and off-premises | Both trusted and untrusted |

Protection and privacy of data in the cloud
In addition to implementing strong access controls, strong encryption practices should be used when and where necessary to properly secure this data. This is especially true when an organization makes the initial decision to move from legacy, on-premises infrastructure to that of a cloud provider. In cases like this, best practices indicate that data should be encrypted and secured locally and then migrated to the cloud.
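
Strong encryption itself requires a third-party library, but a standard-library-only sketch can show the companion step of verifying that data survives the migration intact: record a digest locally before the move, then recompute it against the provider's copy afterward. The file contents below are illustrative:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Before migration: record a digest of each asset while it is still
# on-premises (file contents are illustrative).
local_digest = sha256_of(b"customer-records.csv contents")

# After migration: recompute against the copy the provider stores and
# compare, confirming the data arrived unaltered.
cloud_copy = b"customer-records.csv contents"
print(sha256_of(cloud_copy) == local_digest)  # True
```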

Cloud computing roles

Multiple computing roles relate to cloud computing: cloud consumer, cloud provider, cloud partner, and cloud broker.