Cloud Application Security: A Deep Dive into CCSP Domain 4


Rob Witcher

Last Updated On: September 10, 2024

Cloud application security is where things get really interesting in the CCSP journey. We're moving beyond the nuts and bolts of infrastructure and platforms, and diving into the heart of what many organizations care about most: their applications.

In this complex world of cloud environments, securing applications isn't just about writing good code. It's a multifaceted challenge that touches on everything from development practices to user authentication to building security in from the very beginning. We'll navigate these waters together, exploring how to build a robust security posture that can stand up to modern threats.

Ready to take your cloud security skills to the next level? Let's dive into CCSP Domain 4 and unpack the essentials of cloud application security.

4.1 Advocate training and awareness for application security

Those in charge of cloud application security must have a solid awareness of all of the moving parts that they are responsible for. They need to be well-trained in everything from the basics to the highly complex aspects of their roles.

Cloud development basics

Cloud development overlaps significantly with traditional application development. However, since you aren’t building on top of your own hardware, the underlying infrastructure is different, and so are its capabilities, limitations, and security strengths and weaknesses.

One of the main benefits is efficiency. You can sign up for a PaaS (platform as a service) offering and get started on your code straight away. You don’t have to worry about setting up hardware or configuring servers. If you opt to build on top of IaaS (infrastructure as a service) instead, you get a lot more control, but considerably more work is required to set up and manage your systems.

Even so, this effort pales in comparison to building on top of your own hardware. In both cases, scaling is far easier than in traditional development, so you can get up and running without having to worry too much about forecasting usage.

While there are tremendous benefits associated with cloud development, there are also some complications. One aspect is that cloud services can quickly become incredibly complex. Another is that you must share some level of control and responsibility with cloud providers. Employees need to be trained and aware of the many cloud-related pitfalls that they can fall into when working under the cloud paradigm. They also need to be especially careful when moving applications from traditional environments to the cloud.

Common pitfalls

Organizations that use cloud services must have a clear understanding of the various development pitfalls that they can fall into. Some of the major pitfalls include:

  • On-premises systems may not transfer to the cloud easily.
  • Confusion over who is responsible.
  • Lack of a secure development framework.
  • Interoperability and vendor lock-in.
  • Insufficient support or budget from senior management.
  • Lacking skills, knowledge and training.
  • Misunderstanding the different risk profiles.
  • Complexity.
  • Multitenancy.

Common cloud vulnerabilities

Due to the complexity of cloud development, there are many different vulnerabilities and security risks that cloud developers need to be aware of. The OWASP Top 10 web application vulnerabilities are:

  1. Broken access control
  2. Cryptographic failures
  3. Injection
  4. Insecure design
  5. Security misconfiguration
  6. Vulnerable and outdated components
  7. Identification and authentication failures
  8. Software and data integrity failures
  9. Security logging and monitoring failures
  10. Server-side request forgery

OWASP also keeps track of mobile application security risks in its OWASP Mobile Top 10, which are:

  1. Improper credential usage
  2. Inadequate supply chain security
  3. Insecure authentication/authorization
  4. Insufficient input/output validation
  5. Insecure communication
  6. Inadequate privacy controls
  7. Insufficient binary protections
  8. Security misconfiguration
  9. Insecure data storage
  10. Insufficient cryptography

The SANS Institute’s Top 25 Software Errors are:

  1. CWE-787 Out-of-bounds write
  2. CWE-79 Improper neutralization of input during web page generation (cross-site scripting)
  3. CWE-89 Improper neutralization of special elements used in an SQL command ('SQL Injection')
  4. CWE-416 Use after free
  5. CWE-78 Improper neutralization of special elements used in an OS command ('OS command injection')
  6. CWE-20 Improper input validation
  7. CWE-125 Out-of-bounds read
  8. CWE-22 Improper limitation of a pathname to a restricted directory ('path traversal')
  9. CWE-352 Cross-site request forgery (CSRF)
  10. CWE-434 Unrestricted upload of file with dangerous type
  11. CWE-862 Missing authorization
  12. CWE-476 NULL pointer dereference
  13. CWE-287 Improper authentication
  14. CWE-190 Integer overflow or wraparound
  15. CWE-502 Deserialization of untrusted data
  16. CWE-77 Improper neutralization of special elements used in a command ('command injection')
  17. CWE-119 Improper restriction of operations within the bounds of a memory buffer
  18. CWE-798 Use of hard-coded credentials
  19. CWE-918 Server-side request forgery (SSRF)
  20. CWE-306 Missing authentication for critical function
  21. CWE-362 Concurrent execution using shared resource with improper synchronization ('race condition')
  22. CWE-269 Improper privilege management
  23. CWE-94 Improper control of generation of code ('code injection')
  24. CWE-863 Incorrect authorization
  25. CWE-276 Incorrect default permissions

4.2 Describe the Secure Software Development Life Cycle (SDLC) process

The software development life cycle (SDLC) is a process for developing high-quality software in an efficient and cost-effective manner. The secure SDLC brings security into the picture, integrating it from the earliest stages to ensure that the software meets its functional requirements while providing adequate security.

Business requirements

You need to consider the business requirements before you start developing your application. If you don’t start by thinking about what the business actually needs, it’s unlikely that the software you produce will adequately fulfill its requirements. We often refer to the process of considering business requirements as validation.

Phases and methodologies

The Cloud Security Alliance developed three “meta-phases” as a descriptive model to help us understand the secure software development lifecycle:

Secure design and development

This meta-phase ranges from training and developing organization-wide standards to actually writing and testing code.

Secure deployment

This meta-phase includes security and testing activities when moving code from an isolated development environment into production.

Secure operations

This meta-phase includes securing and maintaining production applications, including external defenses such as web application firewalls (WAF) and ongoing vulnerability assessments.

There are many different methodologies and models you can use for secure software development. The ideal one is dependent on what you are trying to achieve and what your priorities are. In the subsequent sections, we will be discussing two of the most common choices, waterfall and Agile.


The waterfall model

An adjusted version of the waterfall model is:

  • Initiation (planning and management approval)
  • Requirements analysis
  • Design
  • Development
  • Testing
  • Deployment

There are two further phases that can be added on to form the system life cycle (SLC). These are:

  • Operation
  • Disposal

Both the software development life cycle and the system life cycle are shown in the following diagram:

Image of system life cycle - Destination Certification

The Agile model

In contrast to the discrete steps of the waterfall model, the Agile model is a set of 12 principles:

The 12 Agile principles

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Business people and developers must work together daily throughout the project.

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

Continuous attention to technical excellence and good design enhances agility.

Simplicity—the art of maximizing the amount of work not done—is essential.

The best architectures, requirements, and designs emerge from self-organizing teams.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Immutable infrastructure

Immutable infrastructure is infrastructure that can’t be changed. Immutable infrastructure can include virtualized components such as boundary firewalls, routers, switches, and various workloads. These immutable workloads include virtual machines and containers that have been configured so that no changes are possible. Both immutable infrastructure and immutable workloads are often created by building systems that don’t allow changes, or by enforcing automated processes that overwrite any unauthorized changes.

Image of deploying immutable workloads - Destination Certification

If we wanted to deploy a server as an immutable VM (virtual machine), we would take the server configuration files and source code and build them into an image. We would then store that image in the repository, as seen in the diagram above.

Once it has been built, it would need to be tested. Upon completion of testing, it’s termed a master image, which is also stored in the repository. You can then deploy copies of the master image into production. If these immutable servers required changes, you would have to rebuild an image with the appropriate changes, put it into testing, deploy it, and then throw out the old immutable VM.

Infrastructure as code (IaC)

Infrastructure as code allows us to deploy infrastructure using software commands. Instead of using interactive configuration tools and commands, IaC allows the usage of machine-readable definition files to make changes to architectures and their configurations. Since cloud customers are using virtualized forms of their provider’s underlying physical resources, customers can configure their infrastructure through code. We use configuration files to specify the details.

With just a few lines of code, a cloud customer can launch the infrastructure they need. When they need to instantiate a new load balancer or create a new VM, all the customer needs to do is send API calls, because it’s all virtualized. This infrastructure can be made immutable, which is a good way to lock it down and secure your environment.
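
As a rough illustration, here is a minimal sketch of API-driven provisioning in Python using the AWS boto3 library. The AMI ID, region, and tag values are placeholders, and a real deployment would typically use a declarative IaC tool plus proper credential management.

```python
import boto3

# The desired state captured as data rather than manual console clicks
# (placeholder values throughout)
web_server = {
    "ImageId": "ami-0123456789abcdef0",     # placeholder image ID
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-frontend"}],
    }],
}

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(**web_server)  # a single API call launches the VM
print(response["Instances"][0]["InstanceId"])
```

Because the definition lives in version control alongside application code, infrastructure changes can go through the same review and testing practices as any other code change.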

DevOps and DevSecOps

DevSecOps is based on the DevOps approach, which involves integrating software development (Dev) and IT operations (Ops). This can make the software development lifecycle more responsive, resulting in the delivery of quality software at a faster pace, with more frequent releases. DevSecOps brings in security (Sec) as well, decentralizing security practices and making delivery teams responsible for security controls in their software. DevSecOps brings security practices into the entirety of the software development lifecycle, relying heavily on automation to secure code from the initial stages all the way through to testing, deployment, and delivery. The following diagram provides a visualization of the DevOps approach, while the DevSecOps manifesto is included in the table.

Image of DevSecOps cycle - Destination Certification

The DevSecOps Manifesto principles

Leaning in over always saying “no”

Data & security science over fear, uncertainty and doubt

Open contribution & collaboration over security-only requirements

Consumable security services with APIs over mandated security controls & paperwork

Business driven security scores over rubber stamp security

Red & blue team exploit testing over relying on scans & theoretical vulnerabilities

24x7 proactive security monitoring over reacting after being informed of an incident

Shared threat intelligence over keeping info to ourselves

Compliance operations over clipboards & checklists

Continuous integration, continuous delivery (CI/CD)

CI/CD stands for continuous integration, continuous delivery, although sometimes sources switch out “delivery” for “deployment”. Continuous integration involves automating many of the steps for committing code to a repository, as well as automating much of the testing. This allows code changes to be frequently integrated into the shared source code and ensures that a substantial amount of testing is performed with little manual effort.

Continuous delivery also involves automating the integration and testing of code changes, but it includes delivery as well, automating the release of these validated changes into the repository. Continuous deployment takes things a step further and automatically releases the code changes into production so that they can be used by customers. With continuous deployment, code changes can be automatically put into production without further human intervention, as long as they pass through all of the testing and there are no issues. If there is an error in any of these steps, the changes will get sent back to the developer. The diagram highlights how these three processes overlap:

Image of Continuous integration, continuous delivery and continuous deployment - Destination Certification
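
To make the gating between stages concrete, here is a minimal, hypothetical pipeline sketch in Python; the branch name, test command, and deploy script are assumptions, and real pipelines are normally defined in a CI/CD platform's own configuration format.

```python
import subprocess
import sys

def run(stage: str, command: list[str]) -> None:
    """Run one pipeline stage; if it fails, stop and hand the change back to the developer."""
    if subprocess.run(command).returncode != 0:
        sys.exit(f"{stage} failed - change returned to the developer")

run("integrate", ["git", "merge", "--no-ff", "feature-branch"])  # hypothetical branch name
run("test", ["pytest", "-q"])                                    # automated test suite
run("deliver", ["git", "push", "origin", "main"])                # validated change into the shared repo
run("deploy", ["./deploy.sh", "production"])                     # hypothetical script (continuous deployment)
```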

4.3 Apply the Secure Software Development Life Cycle (SDLC)

Now that we have outlined the software development life cycle, we can dive into some of the security considerations in more depth.

Cloud-specific risks

The huge contrast between traditional architectures and the cloud means that there are a range of cloud-specific risks. Some of these include:

  • Due to the shared responsibility model, cloud customers will share both control and responsibility with their provider.
  • Under the cloud model, customers generally have less visibility.
  • The management plane offers centralized control that isn’t available under traditional architectures. However, because it has so much access and power, it must be carefully secured.
  • Public clouds have multitenancy. Although cloud providers should enforce logical isolation between tenants, this is still a risk that doesn’t exist in on-premises infrastructure.

Threat modeling

Threat modeling involves systematically identifying, enumerating and prioritizing the threats that relate to an asset. This allows us to assess the risk to a given asset, as shown in the diagram below. There are many different threats that can impact assets with wide-ranging value and sensitivity. We need a systematic way to identify and prioritize threats so that we can effectively allocate our limited resources toward mitigation.

The components of risk

Image of components of risk - Destination Certification

The STRIDE model

Spoofing (violates authentication)

Spoofing of user identity involves an attacker circumventing authentication by leveraging a user’s personal information or replaying steps of the authentication process.

Tampering (violates integrity)

Tampering with data involves making unauthorized changes to user or system data.

Repudiation (violates non-repudiation)

Repudiation refers to the ability to deny something. If a system is designed with adequate non-repudiation controls a user cannot take an action and then plausibly deny their activity later on.

Information disclosure (violates confidentiality)

Information disclosure involves exposing information to unauthorized parties.

Denial of service (violates availability)

Denial of service involves making a system unusable or unavailable.

Elevation of privilege (violates authorization)

Elevation of privilege is where someone escalates their privileges to access systems and resources that they are unauthorized to access.

The DREAD model

The STRIDE model helps you to identify and categorize threats, while the DREAD model aims to help you determine the severity of a threat:

Damage potential

The maximum amount of damage that the threat could pose.

Reproducibility

This measures how difficult an attack is to reproduce.

Exploitability

This is a measure of how much skill, energy and resources are required for the attack.

Affected users

This is the portion of users that would be affected.

Discoverability

This metric is an estimation of how likely it is that an attacker will discover the vulnerability.
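
One common convention (not the only one) is to rate each DREAD category on a 0-10 scale and average them into a single severity score, as in this small sketch:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average the five DREAD ratings (each assumed to be on a 0-10 scale)."""
    return (damage + reproducibility + exploitability + affected_users + discoverability) / 5

# Hypothetical ratings for a single threat
print(dread_score(damage=8, reproducibility=7, exploitability=6,
                  affected_users=9, discoverability=5))  # 7.0
```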

The PASTA model

The Process for Attack Simulation and Threat Analysis (PASTA) model is a more recent development. Its steps are outlined in the following table:

Define objectives

This stage involves identifying business and security objectives as well as conducting a business impact analysis.

Define technical scope

This includes:

  • Defining assets.
  • Understanding the scope of required technologies, including dependencies and third-party infrastructures.

Application decomposition

This step involves analyzing the use cases, actors, assets, data sources and trust boundaries. It also involves creating data flow diagrams.

Vulnerability mapping

This stage involves mapping vulnerabilities to assets.

Attack modeling

This phase is where you build an attack tree, as well as map attack nodes to vulnerability nodes.

Risk and impact analysis

In the final stage, you both quantify and qualify the business impact, analyze the residual risks, identify mitigation strategies and develop countermeasures.

The ATASM model

The ATASM model is a high-level process for threat modeling highlighted in the following table:

Architecture

This step involves understanding:

  • Both the logical and component architecture of the system.
  • All communication flows and the locations of all data, both in storage and in transit.

Threats

The threats step involves:

  • Listing each of the possible threat agents for the system.
  • Writing down all of the possible goals of these threat agents.
  • Listing the typical attack methods of the threat agents.
  • Writing down the system level objectives of the threat agents for each of these attack methods.

Attack Surfaces

Attack surfaces provide both the A and the S of the acronym. This stage of the process involves:

  • Decomposing the architecture to expose every attack surface.
  • Applying the attack methods identified in the prior step to each attack surface.
  • Filtering out threat agents if there are no attack surfaces for their usual attack methods.

Mitigations

The mitigations step involves:

  • Writing down all existing security controls for each attack surface.
  • Filtering out attack surfaces that are already protected appropriately.
  • Adding security controls to mitigate the remaining security issues.
  • Establishing defense in depth.

Avoid common vulnerabilities during development

We introduced many of the most common vulnerabilities when we discussed the OWASP Top 10 and the SANS Top 25 in 4.1 Common Cloud Vulnerabilities. Now we will delve into some of the most important ones in more depth.

Cross-site scripting (XSS)

There are three major types of cross-site scripting (XSS):

Stored (persistent)

Injected code is stored on the server and sent to subsequent website visitors.

Reflected (non-persistent)

Injected code is passed to a vulnerable server via URL and reflected to the victim. The URL is often sent to the victim through phishing emails. This is the most common type of XSS attack.

DOM (Document Object Model)

The DOM environment in the victim’s browser is modified and malicious code is injected. DOM attacks are pretty rare, so we won’t bother delving into them any further.

Stored XSS Attack

Image of stored XSS attack - Destination Certification

Reflected XSS Attack

Image of reflected XSS attack - Destination Certification
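
To make the reflected case concrete, here is a minimal sketch of a vulnerable handler using Flask (a hypothetical example application); the fix is to encode user input before reflecting it back into the page.

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    query = request.args.get("q", "")
    # Vulnerable: raw user input reflected straight into the HTML response, so a
    # crafted URL like /search?q=<script>...</script> runs in the victim's browser
    # return f"<p>Results for {query}</p>"

    # Safer: escape the input so it is rendered as text, not executed as markup
    return f"<p>Results for {escape(query)}</p>"
```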

Cross-site request forgery (CSRF)

The core of a CSRF attack is illustrated in the following diagram. It involves:

  • An attacker tailoring a link that can direct a user into submitting an unwanted action.
  • The attacker sending a link to the victim.
  • The victim clicking on the link which sends a request to the website.
  • The website processing the request. It assumes that the request is legitimate because it originates from the victim’s web browser. However, the victim must be logged into their account at the time in order for this to work.
Image of a CSRF attack - Destination Certification
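
A common mitigation is a per-session anti-CSRF token that the attacker cannot guess. The sketch below assumes a hypothetical Flask application; production apps would normally rely on a framework's built-in CSRF protection.

```python
import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-random-secret"  # placeholder

@app.route("/transfer", methods=["GET", "POST"])
def transfer():
    if request.method == "GET":
        # Issue an unpredictable token tied to the session and embed it in the form
        session["csrf_token"] = secrets.token_hex(32)
        return (f'<form method="post">'
                f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'
                f'<button>Send</button></form>')

    # A forged cross-site request will not carry the expected token, so reject it
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "Transfer accepted"
```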

Below are the important differences between XSS and CSRF attacks:

Cross-site scripting (XSS):

  • An unwanted action is performed on the user’s browser.
  • The user’s browser runs malicious code.
  • The user’s browser is exploited.

Cross-site request forgery (CSRF):

  • An unwanted action is performed on a trusted website.
  • The website’s server executes a command from the user’s browser.
  • The website’s server is exploited.

Insecure direct object referencing

Insecure direct object reference vulnerabilities occur when applications fail to check that a user is authorized to access the object referenced by their supplied input. The following image shows an example of a malicious user exploiting an insecure direct object reference vulnerability on a website.

Image of insecure direct object referencing - Destination Certification
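
In code, the fix is an explicit authorization check on every object reference a user supplies. This is a minimal sketch assuming a hypothetical Flask endpoint and an in-memory data store:

```python
from flask import Flask, session, abort, jsonify

app = Flask(__name__)
app.secret_key = "replace-with-a-random-secret"  # placeholder

# Hypothetical records keyed by the identifier exposed in the URL
INVOICES = {101: {"owner": "alice", "total": 42}, 102: {"owner": "bob", "total": 7}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # Authorization check: honor the user-supplied reference only if the record
    # actually belongs to the logged-in user; otherwise simply changing the ID
    # in the URL would expose someone else's data
    if invoice["owner"] != session.get("username"):
        abort(403)
    return jsonify(invoice)
```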

SQL injection

To comprehend SQL injection, you first need an understanding of Structured Query Language (SQL), which is a language used for communicating with databases. SQL injection is a method of attack that utilizes SQL commands and can be used for modification, corruption, insertion, or deletion of data in a database.

Image of SQL injection - Destination Certification

The image above illustrates what SQL injection looks like. In this case, imagine a webserver with a database residing behind it. The website associated with this configuration is a dynamic website, meaning that web pages can be created dynamically using data from the database, based upon user requests and interaction with the website.

Due to the dynamic nature of these websites, a persistent connection to the database is required, but a web user should never be able to directly interact with the back-end database. However, SQL injection makes this possible.

A simple login screen is used so that when a person enters their username and password, the database will be queried for the corresponding information, and if it is valid, the user should be authenticated. Using SQL injection, however, neither a correct nor incorrect username is entered into the “Username” field. Instead, SQL code is entered as shown in the image above.

The first part of this code—aaa—is just text and could be replaced by any other text, as can the bbb entry in the password field. However, everything else following the aaa in the username field (‘ OR 1=1 --) constitutes the SQL injection string.

Once this information is entered, the web server will formulate the request into SQL code and send it to the database server, asking if this username and password exist in the database. The first SQL statement below the login box shows how this request will be perceived by the back-end SQL database.

Because of the apostrophe (‘) at the end of aaa, the database server treats aaa as the end of the username and then searches for the username aaa, which probably does not exist. Next, OR 1=1 is treated as an SQL statement, which when analyzed yields “true.” In essence the interpreter executes a logical OR query, which is true if either of the conditions accompanying it are considered true. aaa doesn’t exist (resulting in a false state); however, 1 always equals 1, so that returns a true state. Finally, within SQL, the use of “--” signifies that everything that follows it is a comment and would be ignored by the SQL interpreter.

Because of the above SQL command, the attacker can successfully authenticate and gain access to the system behind the login screen. This example highlights one very important thing: the web server passed unvalidated information directly to the database server. Unvalidated data should never be passed directly from a web server to a database server.
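
The sketch below reproduces the aaa' OR 1=1 -- example against an in-memory SQLite database and shows the standard fix: parameterized queries, so the driver treats user input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'correct-horse')")

username, password = "aaa' OR 1=1 --", "bbb"

# Vulnerable: user input is concatenated directly into the SQL statement
query = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
print(conn.execute(query).fetchall())   # returns a row even though the credentials are wrong

# Safer: parameterized query - the input can no longer change the query's structure
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, (username, password)).fetchall())   # returns []
```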

Buffer overflow

A buffer overflow happens when information sent to a storage buffer exceeds the capacity of the buffer. At a high level, applications accept input, process it, and provide output. When designing applications, buffers—temporary memory storage areas—are included to handle the input, processing, and output functionality. Buffer sizes are often determined ahead of time and don’t dynamically change. If an application somehow sends more information than a buffer can handle, it results in an overflow condition.

The fact that buffers can’t dynamically change size can be exploited. An attacker could create a situation where overflow data that contains executable code is placed into a storage area where the code is then executed. The first image shows two buffers, with one allocated to program A and another allocated to program B. The attacker then exploits a buffer overflow vulnerability and the data for program A overflows into the buffer for program B, as shown in the second image. If this overflow data contains malicious code, it could be executed in program B.

Two Buffers Storing Programs and Instructions

Image of Two Buffers Storing Programs and Instructions - Destination Certification

An Attacker's Data Overflows into Buffer B

Image of An Attacker's Data Overflows into Buffer B - Destination Certification

Secure coding

Two common resources for secure coding are the OWASP Application Security Verification Standard (ASVS) and the Software Assurance Forum for Excellence in Code (SAFECode).

OWASP Application Security Verification Standard (ASVS)

The Open Worldwide Application Security Project (OWASP) publishes the Application Security Verification Standard (ASVS). It aims to help organizations build and maintain secure applications, as well as help consumers and vendors align their security requirements with security offerings. It sets out three separate application security verification levels:

  • ASVS Level 1 – This is completely penetration testable by humans, and it is for low assurance levels.
  • ASVS Level 2 – This is the recommended level for most apps that contain sensitive data and require reasonable protections.
  • ASVS Level 3 – This level is for applications that require a very high level of trust, such as those that contain medical data or that process high-value transactions.

Software Assurance Forum for Excellence in Code (SAFECode)

The Software Assurance Forum for Excellence in Code (SAFECode) is a body made up of some of the biggest tech players like Microsoft and Dell. It provides a venue for tech experts and business leaders to share their insights on programs for effective software security. SAFECode’s publication, Fundamental Practices for Secure Software Development aims to help organizations begin or improve their software assurance programs.

Software configuration management and versioning

Software configuration management (SCM) is critical for managing software changes. Good SCM helps to maintain the integrity of software, reducing the chances of bugs and other undesired behavior. Version control is a critical part of the SCM process. If errors occur in updates, it allows us to roll back to the last known functional configuration.

Another important aspect is establishing baselines, which are formally reviewed and approved specifications. When effective software configuration management practices are followed, they can minimize mistakes and security issues, as well as increase overall productivity.


4.4 Apply cloud software assurance and validation

Whether we run our apps on-premises or in the cloud, we need to be able to assure our stakeholders that they are built with an appropriate level of security. Software assurance and validation are critical parts of demonstrating the security of our apps.

Before we implement a security control, we want to make sure that it meets its functional requirements and we also want a mechanism to grant us assurance that it delivers on these requirements. When we want to provide assurance to our stakeholders about the overall security of our software and architectures, we use security assessment and testing.

Functional and non-functional testing

Functional tests involve testing whether software functions as it is supposed to. These include things like basic functionality, usability, and accessibility. Non-functional testing, by contrast, examines qualities such as performance, reliability, and security rather than specific features.

Security testing methodologies

Before we deploy our software into production, it must be carefully tested. Proper testing can help us determine potential security issues before they are exploited.

Methods and tools

We should use both manual and automated testing when developing software. Manual testing means a person or a team of people are performing the tests. Automated testing means that test scripts and batch files are being automatically manipulated and executed by software. Thorough testing employs both manual and automated approaches to produce a multitude of outcomes and to achieve the best results.

Code review and access to source code

Code review and access to source code can be considered from two perspectives when testing:

  • Without access to the source code (also known as black-box testing).
  • With access to the source code (also known as white-box testing).

Static application security testing (SAST), dynamic application security testing (DAST) and fuzz testing

Static application security testing (SAST) is conducted when an application is not running. It involves examining the underlying source code, which makes it a type of white-box testing, because the code is visible.

Dynamic application security testing (DAST) involves testing a running application and it focuses on the application and system as the underlying code executes. In contrast to static testing, dynamic testing is a form of black-box testing, because the code is not visible.

Fuzz testing is a form of dynamic testing that is premised on chaos. Fuzz testing involves throwing randomness at an application to see how it responds and what might cause it to break.

There are two major types of fuzz testing:

Mutation (dumb fuzzers)

  • The input to an application is randomly changed by flipping bits or appending additional random input.
  • This is often referred to as dumb fuzzing as the fuzzer has no understanding of the input structure.

Generation (intelligent fuzzers)

  • New input to an application is generated from scratch based on an understanding of the file format or protocol.
  • This is often referred to as smart or intelligent fuzzing because the fuzzer must understand the input structure.
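
As a rough illustration of the mutation approach, this sketch flips random bits in a seed input and feeds the result to a stand-in parser (a hypothetical target); unhandled exceptions or crashes are the findings a fuzzer looks for.

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Dumb/mutation fuzzing: randomly flip a few bits in the input."""
    buf = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)
    return bytes(buf)

def parse(data: bytes) -> None:
    """Stand-in for the code under test (hypothetical)."""
    data.decode("utf-8")  # raises on many malformed inputs

seed = b'{"user": "alice", "role": "admin"}'
for i in range(1000):
    sample = mutate(seed)
    try:
        parse(sample)
    except Exception as exc:
        # An unhandled exception on fuzzed input is worth investigating
        print(f"iteration {i}: {type(exc).__name__} on {sample!r}")
        break
```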

Below are the key differences between static application security testing, dynamic application security testing and fuzz testing.

Static application security testing (SAST):

  • White box.
  • Examines code.

Dynamic application security testing (DAST):

  • Black box.
  • Examines the application itself.

Fuzz testing:

  • A form of dynamic testing.
  • Works under the premise of testing random and chaotic inputs.

Interactive application security testing (IAST)

Another type of testing known as interactive application security testing (IAST) combines elements of both SAST and DAST. Testing is performed as the application is running (DAST) with access to the code (SAST).

Software composition analysis (SCA)

Software composition analysis (SCA) uses automation to find open-source software within code. The aim is to gain insight into the quality of the code, evaluate its security and check that any relevant licenses are being complied with. Open-source software and licensing can be incredibly complex, and it’s easy to overlook critical issues.

Test types

When a system is running, it’s possible to test it as if a user was using it. The system can be tested a few different ways, such as by positive testing or negative testing, which are explained in the table below. Another type of testing is abuse (also known as misuse) case testing, which we cover further down the page under Abuse case testing.

Positive testing

Focuses on the response of a system, based upon normal usage and expectations.

Negative testing

Focuses on the response of a system when normal errors are introduced.
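
Here is a small sketch of what positive and negative tests can look like in practice, using pytest and a hypothetical input-validation function:

```python
import pytest

def parse_port(value: str) -> int:
    """Hypothetical function under test: convert a string into a valid TCP port."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

def test_positive_normal_usage():
    # Positive test: expected input produces the expected result
    assert parse_port("443") == 443

@pytest.mark.parametrize("bad_value", ["0", "70000", "abc", "-1"])
def test_negative_error_handling(bad_value):
    # Negative tests: normal errors are rejected rather than silently accepted
    with pytest.raises(ValueError):
        parse_port(bad_value)
```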

Vulnerability assessment and penetration testing

Purpose of a vulnerability assessment

Vulnerability assessments and penetration testing (better known as pen testing) are important topics when discussing vulnerabilities and threats. As a quick review, a vulnerability can be defined as a weakness that exists in a system, while a vulnerability assessment is an attempt to identify vulnerabilities in a system.

Before you can begin a vulnerability assessment or a pen test, you need to know which assets exist. Next, the threats these assets face must be identified.

Vulnerability assessment vs. penetration testing
Image of Vulnerability assessment vs. penetration testing - Destination Certification

The image above shows how vulnerability assessments and penetration testing compare. Vulnerability assessments are examinations that look for security weaknesses and evaluate security controls. Vulnerability assessments can be automated and are relatively fast. A report documenting the findings and recommendations is compiled at the end of a vulnerability assessment.

Penetration tests are similar to vulnerability assessments, but they tend to involve more manual work. They involve finding vulnerabilities and then trying to exploit them to prove they are exploitable. The steps of a penetration test are:

Image of the phases of penetration testing - Destination Certification

Testing Techniques

Vulnerability assessments and penetration testing can be conducted in many different ways. Some of the main variations include perspective, approach, and knowledge.

Perspective

Perspective refers to where the assessment or test is being performed from. Is the assessment or test coming from an internal (inside the corporate network) or from an external (out on the Internet) perspective?

Approach

Testing can also be categorized as either blind or double-blind:

Blind testing

The assessor is given little to no information about the target being tested. It could be limited to just the name of the company or an IP address. The assessor is blind to network details and must use reconnaissance and enumeration techniques to gain more visibility about the target. With a blind approach, members of the target company’s IT and security operations teams will likely know that some type of test is coming and can be better prepared to respond to alerts.

Double-blind testing

A double-blind approach goes one step further. In addition to the assessor being given little to no information about the target company, the target company’s IT and security operations teams will not know of any upcoming tests. This type of approach tests the assessor’s ability to identify vulnerabilities and other weaknesses as well as the target team’s ability to respond. Usually only the senior management will be aware of an upcoming double-blind test, because they will be the ones who commissioned it.

Knowledge

Knowledge refers to how much insight or information an assessor has about a target, as shown in the table below.

Zero knowledge (black box)

The assessor has zero knowledge, similarly to the blind approach noted above. It is also known as black-box testing because the assessor doesn’t have visibility into the details.

Partial knowledge (gray box)

The assessor is given some information about the target network but not the full set that a white-box test would have. It lies somewhere in between a white-box and a black-box test. This is why it’s called gray-box testing.

Full knowledge (white box)

The assessor is given full knowledge (including items like IP addresses, network diagrams, information about key systems, and perhaps even password policies). This is also known as white-box testing.

Quality assurance (QA)

Software quality assurance (SQA) is the process of making sure that software engineering abides by relevant standards and meets compliance obligations. In essence, it involves ensuring that software is high quality, and it relies on following best practices for all of the processes surrounding software development. Testing and auditing form crucial parts of SQA.

Abuse case testing

Abuse case testing is about testing features to determine if attackers are able to use them in an unintended way. Abuse case testing is also sometimes called misuse case testing. It can involve fuzz testing, stress testing, controlled denial-of-service attacks and more.


4.5 Use verified secure software

We always want our software to be reliable and of the highest quality. In the cloud, we use a lot of third-party software, so we need ways of ensuring its reliability as well.

Securing application programming interfaces (APIs)

Many of the web applications we use today are built from disparate components that talk to and work with each other. They communicate through a set of standards known as application programming interfaces or APIs. An API can be seen as a collection of tools, routines, protocols and standards for building software that interacts with web-based applications and services.

Application Programming Interfaces (APIs)

The two most common API formats are Representational State Transfer (REST) and Simple Object Access Protocol (SOAP):

Representational State Transfer (REST):

  • Newer.
  • More flexible and lighter-weight alternative to SOAP.
  • HTTP-based.
  • Easy to learn and use.
  • Fast in processing.
  • Output can take several forms, including CSV, JSON, RSS, and XML.
  • Has caching support.

Simple Object Access Protocol (SOAP):

  • Older, originally developed by Microsoft.
  • More rigid and standardized.
  • XML-based. SOAP messages are encoded as XML documents, and they feature an envelope which consists of an optional header, plus a body.
  • Extensible through use of WS standards.
  • Strong error handling.
  • Does not support caching.
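
For a sense of how lightweight REST is in practice, here is a minimal client call using Python's requests library; the endpoint URL and bearer token are placeholders.

```python
import requests

response = requests.get(
    "https://api.example.com/v1/users/42",                # placeholder endpoint
    headers={"Authorization": "Bearer <access-token>",    # placeholder credential
             "Accept": "application/json"},
    timeout=10,
)
response.raise_for_status()   # treat HTTP error codes as failures
print(response.json())        # REST APIs commonly return JSON
```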

Supply-chain management

Every organization will rely on a range of suppliers to provide the underlying services that it uses. All of these third-party services present risks to an organization, and these risks must be appropriately managed. Organizations must include these suppliers in their risk assessments.

Prior to choosing a vendor, an organization should determine its business goals and requirements. This should include listing out the required business functions, as well as the security and compliance needs. Once an organization fully understands its requirements, it can look for suitable vendors. Each of these vendors must be carefully assessed and the contracts scrutinized to ensure that they meet the needs. Once an appropriate vendor has been found, it needs to be monitored to ensure that it is living up to its side of the contract.

Third-party software management

Your organization will need to carefully manage all of the third-party software that it uses. This includes reviewing potential suppliers to ensure that they can meet your needs, such as frequent updates, appropriate security controls, and adequate compliance provisions. You should also evaluate things such as the vendor’s long-term stability—you do not want to choose a vendor that will suddenly go out of business (vendor lock-out). You also want to watch out for vendors that make it really hard to migrate away to another service (vendor lock-in).

Validated open-source software

When we want to use open-source software in business environments, the software must be validated first. ISO 9000 defines validation as “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled”. In simpler terms, validation means that there is assurance that the software meets the needs of the customer. This means that for any open-source software we use in the business context, we need assurance that it will meet our needs—functional, security and compliance.


4.6 Comprehend the specifics of cloud application architecture

Cloud application architecture is often quite different from traditional, monolithic architecture. With technologies like IaaS, PaaS, microservices, serverless and containers, we can develop faster and scale much more easily.

Supplemental security components

In this section, we will be covering some of the important security components that help to secure our cloud applications.

Proxy

One way to provide strong security across a network is through the use of proxies. A proxy is a device that acts on behalf of something else, commonly a user or an application. In the context of a network, as shown in the diagram below, a proxy helps facilitate the connection between a client and a server, because it is better equipped to manage and direct outgoing and incoming traffic.

A proxy or proxy server is an intelligent application or a piece of hardware that acts as an intermediary and is placed between clients and a server. Proxies provide enhanced security because they can filter requests and block any traffic that resolves to known malicious destinations, which can help to keep users away from dangerous sites.

Image of a proxy - Destination Certification

Web application firewall (WAF)

A web application firewall (WAF) is somewhat like a reverse proxy, because it sits between the client and the server, as shown in the image below. It acts as a firewall for HTTP applications. WAFs apply rules to HTTP traffic, which can help block common web-application attacks like SQL injection or XSS attacks.

Image of file activity monitoring - Destination Certification

Database activity monitoring (DAM)

Database activity monitoring (DAM) generally sits on the network between the client and the database server, as shown in the image below. It inspects both the requests going to the database, and the responses coming back, with the ability to block anything that it detects as suspicious. DAM is able to monitor, log and analyze database access, which makes it incredibly useful for blocking malicious activity, as well as for investigations after the fact.

Image of database activity monitoring - Destination Certification

File activity monitoring (FAM)

File activity monitoring (FAM) generally sits in front of the file server and intercepts requests, as shown in the diagram below. If it detects anything suspicious, it can block access. This can help you prevent things like data exfiltration.

Extensible Markup Language (XML) firewalls

Extensible markup language (XML) firewalls are application layer firewalls that can help protect applications against XML-based attacks. These attacks can include things like XSS and SQL injection. XML firewalls can help to stop these attacks through a mix of filtering and validation, as well as rate-limiting.

API gateways

API gateways sit between a client and the backend services. They serve as API management tools, acting as a reverse proxy that allows you to decouple your backend implementation from the client. An API gateway can receive a request from a client, then break it down into multiple requests that are each sent off to the relevant backend API. The image below shows an API gateway.

Image of API gateway - Destination Certification

Cryptography

Cryptography is the study and application of securing information, generally through techniques like encryption, hashing and digital signatures. Cryptography can provide our information with the following properties:

Confidentiality

Keeping our data confidential basically means keeping it a secret from everyone except for those who we want to access it.

Integrity

If data maintains its integrity, it means that it hasn’t become corrupted, tampered with, or altered in an unauthorized manner.

Authenticity

Authenticity basically means that a person or system is who it says it is, and not some impostor. When data is authentic, it means that we have verified that it was actually created, sent, or otherwise processed by the entity who claims responsibility for the action.

Non-repudiation

Non-repudiation essentially means that someone can’t perform an action, then plausibly claim that it wasn’t actually them who did it.

Major components involved in cryptography include:

Symmetric-key encryption

This is a simple and efficient form of encryption that only uses one key for both encryption and decryption. We use it to provide confidentiality to our data in both transit and storage. One of the most common examples is the Advanced Encryption Standard (AES).

Public-key encryption (also known as asymmetric-key encryption/cryptography)

Asymmetric encryption is a little more complex because it uses two separate keys, a public key for encryption and a private key for decryption. It’s much less efficient, so we mostly use it for securely exchanging symmetric keys and for digital signatures. Examples include RSA and elliptic-curve cryptography (ECC).

Hashing

Cryptographic hash functions are deterministic one-way algorithms that take arbitrary-length inputs and always produce fixed-length outputs, which are known as hashes. It is not feasible to compute the original input from the hash of a secure cryptographic hash function, like SHA2-256. Hashes can be used for verifying the integrity of data, and in digital signatures.

Digital signatures

Digital signatures combine public key encryption with hashing, giving us a way to verify the integrity and authenticity of information, as well as provide non-repudiation.

Certificates

Digital certificates feature an entity’s basic information and their public key. They are signed by trusted bodies known as certificate authorities (CAs). CAs are responsible for verifying an entity’s identity—if an entity’s digital certificate is signed by a reputable CA, then we can assume that the entity is legitimate.

Encryption basics

In cryptography lingo, the information we want to encrypt is called plaintext when it is in its normal, unprotected state. Once it has been encrypted, we call it ciphertext. We encrypt data with encryption algorithms, which basically involves putting all of our plaintext in a blender, alongside a key. The key is a special piece of additional information that we must keep secret. If anyone finds it out, they can use it to decrypt the information.

Once we encrypt our plaintext through an encryption algorithm alongside the key, it becomes ciphertext, which is a seemingly random jumble of meaningless characters. If we used a secure encryption algorithm like the Advanced Encryption Standard (AES), it can only be decrypted by putting it back through the same algorithm in reverse, alongside the key. Only those with access to the key can decrypt the information. This means that securely encrypted data is confidential as long as the key doesn’t fall into the wrong hands.

Symmetric-key encryption

With symmetric-key encryption, we use a single key to both encrypt and decrypt the data, as shown in the diagram:

Image of symmetric key encryption - Destination Certification
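
As a brief sketch, this is what symmetric encryption with AES looks like using the Python cryptography library (AES in GCM mode, with one shared key for both encryption and decryption):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the single shared secret
nonce = os.urandom(12)                      # must be unique per message under the same key

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"payroll run for March", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # the same key decrypts
assert plaintext == b"payroll run for March"
```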

The advantages and disadvantages of symmetric key encryption include:

Advantages

  • Fast and efficient.
  • Strong.

Disadvantages

  • It does not provide a way to securely communicate with other parties unless you have previously established a secure channel.
  • It scales poorly.
  • It does not allow you to verify integrity, authenticity, or non-repudiation.

Public-key encryption (asymmetric)

Public-key encryption was a huge step forward in cryptography, because it solved something known as the key distribution problem: in order for symmetric-key encryption to be useful, you need a preexisting secure channel to establish a shared key. If you try to send your recipient a key over email or a phone line that could be tapped, the key could very easily fall into the hands of an attacker, which would mean that the attacker could decrypt all of your future communications.

One method of solving the key distribution problem is to use public-key encryption algorithms like RSA. One of the major differences that sets it apart from symmetric-key encryption is that instead of just having a single key for both encryption and decryption, there are two separate but matching keys. One of these is known as the public key, which you can share openly. The other is the private key, which must be kept secret. The use of public-key encryption is shown in the following diagram:

Image of the encryption and decryption processes for public-key encryption - Destination Certification
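
A minimal sketch of public-key encryption with RSA, using the Python cryptography library: anyone holding the public key can encrypt, but only the private-key holder can decrypt. In practice this is typically used to protect small payloads such as symmetric keys.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()       # safe to share openly

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"shared AES key goes here", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"shared AES key goes here"
```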

Below are the advantages and disadvantages of public-key encryption:

Advantages

  • It solves the key distribution problem.
  • It scales better than symmetric cryptography.
  • It can help to provide mechanisms for verifying integrity, authenticity and non-repudiation.

Disadvantages

  • It is slow and inefficient.

Cryptographic hashing

Cryptographic hash functions can be used to verify integrity on their own, but when they are used as part of digital signatures, they can also verify integrity, authenticity and non-repudiation.

Cryptographic hash functions are one-way deterministic functions that can take on inputs of any length and always produce fixed length outputs, as shown in the image below. One of the most common cryptographic hash functions is SHA2-256, but there are a number of other useful cryptographic hash functions, including the SHA-3 family.

Image of cryptographic hash function - Destination Certification
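
A quick sketch with Python's hashlib shows the deterministic, fixed-length behavior described above:

```python
import hashlib

digest1 = hashlib.sha256(b"Quarterly report v1").hexdigest()
digest2 = hashlib.sha256(b"Quarterly report v1").hexdigest()
digest3 = hashlib.sha256(b"Quarterly report v2").hexdigest()

assert digest1 == digest2                    # deterministic: same input, same hash
assert digest1 != digest3                    # any change to the input changes the hash
assert len(bytes.fromhex(digest1)) == 32     # output is always 256 bits (32 bytes)
```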

In order for cryptographic hash functions to be useful and secure, they must fulfill each of the following properties:

Cryptographic hash functions must:

Be fast to compute.

Be deterministic.

Be able to take on variable-length inputs, whether the inputs are just a few bits or a huge file.

Always produce fixed-length outputs. For example, the output of SHA-256 is always 256 bits.

Be collision resistant. It needs to be infeasible to find two separate inputs that both result in the same hash.

Be preimage resistant, which is more commonly referred to as the one-way property. While it should be quick and easy to take an input and compute the hash, it should be infeasible to run the computation in reverse and find the original input from a given hash.

Be second preimage resistant. This basically means that if we begin with any specific message (preimage), it is infeasible to find another message (preimage) that also results in the same hash.

Digital signatures

Digital signatures play a crucial role in our online security. They play very similar roles to our handwritten signatures, but they use a lot more math instead. Digital signatures provide integrity, authenticity and nonrepudiation by combining the peculiar features of cryptographic hash functions and public-key encryption algorithms. Let’s say that Alice wants to send Bob a message. She wants Bob to be able to verify that the message arrived intact (that it maintained its integrity), and that the message he receives is truly from her (that it is authentic). Her communication with Bob is important, so she does not want hackers impersonating her or changing her messages.

Image of digital signature process - Destination Certification
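
As a rough sketch using the Python cryptography library, Alice signs with her private key (the library hashes the message with SHA-256 under the hood) and Bob verifies with her public key; verification fails if either the message or the signature has been altered in transit.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Pay invoice #1234"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = alice_private.sign(message, pss, hashes.SHA256())   # Alice signs

try:
    # Bob verifies using Alice's public key
    alice_private.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid: integrity and authenticity confirmed")
except InvalidSignature:
    print("signature invalid: message or signature was altered")
```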

Digital certificates

One thing that we haven’t explained yet is how Alice and Bob find each other’s public keys. One of the most common ways is through digital certificates. Digital certificates are issued through trusted bodies known as certificate authorities (CAs). Major CAs include DigiCert and GlobalSign.

CAs play an important role in verifying an individual’s identity and binding it to a public key. The first step is identity proofing, where Alice, Bob, or anyone else will essentially send their identification documents to a trusted CA. The CA then looks at these documents and verifies them.

If the CA looks at Alice’s identification documents and verifies that Alice is truly Alice, and not some impostor, then the CA will use its private key to sign a digital certificate for Alice, as shown in the figure below. When Alice wishes to communicate securely with someone online, she can share her digital certificate with them. That person will look at Alice’s digital certificate, see that it has been signed by a reputable CA like DigiCert, and then assume that this must be the real Alice. Similarly, Alice would examine her communication partner’s certificate to see whether it had been signed by a reputable CA. Each party can therefore trust the other’s public key that is listed on the certificate.

Image of the process of creating a digital certificate - Destination Certification

Kerckhoffs’ principle

Kerckhoffs’ principle states that a cryptosystem should be secure, even if the system is public knowledge. The intention is that even if your adversary knows exactly how your system works, your system should still be secure as long as the keys have not been compromised.

Key management

Key management includes key generation, key storage, key distribution and key disposal. We also discuss key management in the cloud.

Key generation

Due to the sensitivity of keys, they must be secured at each stage of their lifecycle. The first step is key generation, which generally involves cryptographically secure pseudorandom number generators (CSPRNGs) or key derivation functions (KDFs).
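
In Python, for example, a key can be drawn from the operating system's CSPRNG via the secrets module, or derived from a passphrase with a KDF such as PBKDF2; the passphrase and iteration count below are placeholders.

```python
import hashlib
import secrets

# A 256-bit key straight from the OS CSPRNG
key = secrets.token_bytes(32)

# Or derive a key from a passphrase with a KDF (PBKDF2-HMAC-SHA256 here)
salt = secrets.token_bytes(16)
derived = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)

print(len(key), len(derived))   # 32 32
```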

Key storage

Key storage also must be performed carefully. Cryptographic keys must be stored at a security level at least as high as the data they protect. Two common storage options:

Trusted Platform Module (TPM)

A TPM is a dedicated cryptoprocessor that is designed to securely generate and store keys on endpoints, such as smartphones, laptops and other devices.

Hardware security module (HSM)

An HSM is useful for generating and storing keys at the enterprise level. HSMs are also hardware devices, but they can be attached to the network, rather than being limited to a single endpoint.

Key storage in the cloud

There are three major components for encrypting data:

  • The data
  • The algorithm
  • The keys

We don’t want to keep all of these components in the same place. If an attacker finds the ciphertext and the keys all in the one place, they’ve hit the jackpot and will easily be able to gain access. The encryption algorithms we use are generally open source, so we can’t keep these hidden from attackers. Keys are pretty small, while our data is often quite large, so it doesn’t make sense to move the data somewhere else. This means that our best option is to store and securely manage the keys elsewhere. The major options include:

Internally managed

This is also referred to as instance-managed key storage. It means that the keys are stored locally on either the VM or the container.

Externally managed

Externally managed keys are stored elsewhere and not in the VM or the container. One example is in an on-premises hardware security module (HSM). Another is a third-party escrow service (see below).

Escrow (third-party managed)

Escrow is a type of externally managed key storage, but the keys are stored and managed with a trusted third-party as opposed to an on-premises solution.

Key distribution

Sometimes we will need to share keys or move them from where they were initially generated or stored. In the past, we would often have to do this out-of-band. With public-key cryptography, it’s easy to establish new secure channels, so we can send keys anywhere we want. However, we must verify the identity of the recipient to make sure an attacker isn’t impersonating them.
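For example, a symmetric key can be wrapped with the recipient’s public key before it is sent, as in this minimal sketch (the Python cryptography package is assumed, and the file name is hypothetical).

```python
import secrets

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

symmetric_key = secrets.token_bytes(32)

# Load the recipient's public key (hypothetical file name).
with open("bob_public_key.pem", "rb") as f:
    bob_public_key = serialization.load_pem_public_key(f.read())

# Only Bob's private key can unwrap this, so the wrapped key can travel over
# an untrusted network. Verifying that the public key really belongs to Bob
# (e.g., via his certificate) is still essential.
wrapped_key = bob_public_key.encrypt(
    symmetric_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
```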

Key disposal and destruction

At the end of a key’s life cycle, we must dispose of it appropriately. In the cloud, the best option for this is usually cryptoshredding, which we covered in detail in Domain 2.7, Cryptographic erasure (cryptoshredding).

Encryption throughout the data lifecycle

We covered the data lifecycle (shown below) in Domain 2.1. Encryption can be used in a variety of ways throughout these phases. We encrypt data all of the time in storage, whether it is through full-disk encryption or encrypted folders. For data in use, encryption is tricky. Homomorphic encryption allows us to perform computations on encrypted data, but it takes an enormous amount of compute, and it isn’t a very mature technology yet.

The cloud data life cycle phases - Destination Certification

You may want to implement DRM or IRM so that data can only be shared in an authorized manner; encryption provides the backbone for these technologies. We also use a range of encryption technologies when transporting data, such as TLS or IPsec.

For archiving, we use the same encryption technologies that we mentioned for storage. As we have discussed earlier in the book, encryption is also important for data destruction, especially in the cloud. Cryptoshredding is often our best option for securely destroying data.

Sandboxing

Sandboxes are safe areas where unknown or untrusted code can be isolated, run and tested. They allow us to determine whether the code is malicious without putting our systems at risk. Sandboxes are often used by malware analysts who run potentially malicious code in them to try and identify indicators of compromise, as well as gain an in-depth understanding of how the code operates.

Analysts can perform dynamic heuristic analysis to observe whether suspicious code attempts to self-replicate, overwrite files, remain in memory after executing, or perform other undesirable activity. We can also use sandboxes to run experiments away from our production environment, without fear of causing wider damage. If problems occur, they won’t have impacts elsewhere.

Many email services open attachments in sandboxed environments, due to the large number of attacks that begin this way. This allows them to detect malicious code before a user accidentally runs it locally on their machine. We can also integrate sandboxing capabilities into our IDS/IPS systems.

Application virtualization and orchestration

We discussed virtualization and orchestration in Domain 3.1. Microservices and containers were also discussed in Domain 3.1.


4.7 Design appropriate identity and access management (IAM) solutions

Identity and access management (IAM) is a crucial part of security. If we want to ensure that only authorized parties are able to access our sensitive systems and data, then we need ways to effectively identify and authenticate users, and only grant them access to authorized systems. Our IAM systems should restrict all other parties from access.

Access control

Access control is the collection of mechanisms that work together to protect the assets of an organization while still allowing authorized subjects to have controlled access to objects.

Access control enables management to:

  • Specify which users can access the system.
  • Specify which resources they can access.
  • Specify which operations they can perform.
  • Provide individual accountability—the system should know who is doing what.

Access control principles

The fundamental access control principles listed below are important to understand because they help to reduce risks. By limiting individuals to only the access and information that they need in order to be effective, but nothing more, we can limit the damage that any one individual is capable of. If their account becomes compromised by an attacker, these principles also help to restrict the potential impacts.

  • Need to know – Limiting information and access to sensitive assets so that it is only granted to those who strictly need it for their work.
  • Least privilege – Designing and managing systems so that each individual or entity has access to the minimum amount of authorizations and system resources necessary to carry out their job.
  • Separation of duties – Ensuring that no single user has sufficient privileges to be able to abuse a system on their own.

Need to know

The concept of need to know means that access to an asset is only given to those that absolutely need access, based on job function. This can be applied in many ways. Imagine a law enforcement agent being undercover and investigating a case. Their true identity doesn’t need to be known to anyone apart from their direct supervisor and a handful of agents involved in the case. We can apply the same principles to our systems and ensure that employees only know the information they strictly require.

Least privilege

This concept means that you are only given the level of access (permissions) that is absolutely required for you to perform what you have been authorized to do. Some companies put themselves at immense risk by having overprivileged accounts where several people (if not the whole company) have local administrator permissions on their machines. This goes against least privilege. Most people in a company don’t need local administrator permissions, so to apply the principle of least privilege, a group policy should give everyone standard accounts, apart from a handful of administrators.

Separation of duties and responsibilities

Separation of duties and responsibilities refers to the concept that one person should not be responsible for all aspects of a critical process. Separation of duties is often employed in areas of an organization where money is received or disbursed. For example, when a new vendor is added to an accounts payable system, one person might enter the vendor information and another person might confirm the validity or accuracy of the information. These two steps can help to prevent fake vendors from being created in a system. In addition, when the vendor is paid, one person might enter the invoice and payment information, another person might generate the check, and yet another could confirm the check amount against the invoice before signing it. This shows how separation of duties can help to prevent fraud.

As another example, developers of software should not be the same people who push applications to production. There needs to be a separation of duties in place, so that proper testing, validation, and approval can be conducted to prevent errors.

Access control implementations

Access control applies at all levels of an organization and covers all types of assets, including:

  • Facilities
  • Systems and devices
  • Information
  • Personnel
  • Applications

Access control systems

  • The focus of access control is controlling a subject’s access to an object through some form of mediation.
  • Mediation is based upon a set of rules.
  • All activity is logged and monitored to provide accountability and gain assurance that things are working properly.

We use a model known as the reference monitor concept (RMC), which involves placing a rules-based decision-making tool in between subjects and objects to mediate access. The tool also needs to log and monitor all activity for the sake of accountability and assurance. The RMC is implemented in the security kernel. The figure below depicts the RMC and its various components.

Image of reference monitor concept RMC - Destination Certification
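A toy sketch can make the mediation idea concrete. The rule set and names below are purely illustrative, and a real reference monitor is implemented in the security kernel rather than in application code.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Rules: which subjects may perform which operations on which objects.
rules = {("alice", "read", "marketing_reports"), ("bob", "write", "payroll_db")}

def mediate(subject: str, operation: str, obj: str) -> bool:
    """Grant or deny access, and record the decision for accountability."""
    allowed = (subject, operation, obj) in rules
    audit_log.info("%s requested %s on %s -> %s",
                   subject, operation, obj, "GRANTED" if allowed else "DENIED")
    return allowed

mediate("alice", "read", "marketing_reports")   # granted and logged
mediate("alice", "read", "rnd_research")        # denied and logged
```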

There are four access control services that are important to understand: identification, authentication, authorization, and accountability. Before a user can log in to their account for the first time, their account needs to be created. This process is referred to as registration. An important security aspect of registration is identity proofing, which is the process of confirming or establishing that somebody is who they claim to be before they are given access to a valuable resource or asset.

image of identification, authentication, authorization and accountability - Destination Certification

Identification

Identification is the process of a user asserting or claiming their identity. One of the most familiar examples of this is someone typing in their username. When you type in user123 you are essentially claiming “I am user123”.

User identities should be unique, with each user having their own identity and no shared accounts. The identities should not be descriptive of the role. You do not want administrators having accounts called “admin”, because such an easy-to-guess username makes a hacker’s job even simpler.

Identity is a little complex. Each of us is an entity. But so are our servers, laptops and processes. Each of these entities can have multiple different identities, which can vary according to context, such as work or personal. On top of this, we have identifiers, which are often just usernames like user123. Things like our legal names, ID cards, and fingerprints can also act as identifiers in certain contexts. We also have attributes, which can be things like role, department, location, etc.

Image of entities, identities and attributes - Destination Certification

Authentication

Authentication involves a system verifying that an entity is who it claims to be. Note that when we were talking about identification, we didn’t mention the user entering their password. This is because the password plays the role of authentication. This works under the assumption that passwords are secret pieces of information that only the legitimate user will know. If a user supplies the correct username and password, a system will assume that they are legitimate, so it will grant them access.

Of course, passwords aren’t foolproof. Some people use weak passwords, reuse them across accounts, or they can fall into the hands of an attacker by other means. This is why sensitive systems don’t rely solely on passwords for security. They will often rely on another factor, such as a fingerprint or a one-time code as an additional security layer.

We can divide up types of authentication into three separate factors:

  • Knowledge – Something you know. Examples include PINs, passwords, and answers to security questions.
  • Ownership – Something you have. Examples include hardware tokens and phones running authentication apps.
  • Inherence – Something you are. These are characteristics like your face scan, iris, fingerprint, voice print, etc.

Some authentication factors can also fill the role of identification in certain scenarios. Many people access their phones simply with their fingerprint or face scan. They do not have to separately identify themselves by typing in a username.

To increase our security, we generally want to use multiple factors, as in multi-factor authentication. This often involves something you know, like a password, as well as another factor like the code from an authenticator app or a fingerprint. Multi-factor authentication should involve two or more different factor types, as opposed to two of the same factor type, such as a PIN and a password (both knowledge factors).
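As a simple illustration of an ownership factor layered on top of a password, the sketch below uses the third-party pyotp package to generate and verify a time-based one-time password (TOTP); the enrolment and verification flow shown is simplified.

```python
import pyotp

# Shared secret, enrolled once into the user's authenticator app
# (usually presented as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login, after the password (knowledge factor) is checked, the server
# verifies the current 6-digit code from the user's device (ownership factor).
code_from_user = totp.now()          # simulated here; normally typed by the user
print(totp.verify(code_from_user))   # True while the code is still valid
```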

Authorization

The third step is authorization. Once a user has identified and authenticated themselves and the system is confident of who they are, its next job is to determine whether the user should be granted access to the resource they are requesting.

As we mentioned earlier, we want to follow the principle of least privilege, so we only want users to be able to access the resources they strictly need to do their jobs, and nothing else. Therefore, the system should be locked down so that a user has a fairly strict list of permissions that the system will grant.

Let’s consider an example. Alice works in marketing, so she is authorized to access all of the marketing systems that are relevant to her work. If she requests access to the latest marketing reports, the system will check her authorizations, and then grant her access. If she asks to access the top-secret research from R&D, the system will again look up her authorizations, see that she does not have permission, and then deny access.

Accounting

The final step is accounting. Accounting involves logging and monitoring user access. Having individual user accounts and recording their access allows us to determine who accessed what and when they did it. We can use this information to track down what went wrong during a cyber incident and find out who was responsible. It also acts as a deterrent: if users know that their access is tracked, they are less likely to abuse the system because they know they are unlikely to get away with it.

User access review

Once an account has been registered for a user and the user is granted access to facilities, systems, and other resources, that doesn’t mean that the access should remain forever. All user access should be reviewed on a periodic basis by the owner of the asset, because the owner is in the best position to conduct the review and confirm that continued user access is appropriate. Additionally, user access reviews can mitigate access or privilege creep.

Image of user access review - Destination Certification
How often should user access reviews be performed?

User access reviews should be conducted at least annually. However, special circumstances may require the reviews to be conducted more frequently, such as if a user changes roles, if they leave the company, or if there are concerns about their admin or super user privileges.

In the case of a user changing roles, their access should be reviewed at the time of the change. New access should be granted as needed, and any access that is not needed should be removed. When someone leaves the company (through voluntary or involuntary termination) that user’s access should be reviewed, and in most cases, all access should be removed. Administrative and super user account access should be reviewed more frequently, perhaps even as often as weekly. This is due to the large amount of power and access that these users can have.

Privileged user management

Privileged user management, also known as privileged access management, is a tool for monitoring, detecting and preventing suspicious or unauthorized access to privileged resources. Privileged users have a lot more access than normal users, so they can also do a lot more damage. Therefore, we want to monitor them extra carefully to keep our organizations safe.

Single sign-on

The concept of single sign-on (SSO) is best illustrated through an example. Let’s say a company has two applications that a given user has to use throughout their workday. Without single sign-on, they would enter their login details to application A, do some work, and then later have to log in again to access application B to do some other work.

Under single sign-on, the user just logs in once, and they can then access both applications without having to do any further logins. The core of SSO is that a user logs in once and is then authorized to access multiple systems.

Users typically love SSO because it removes a little friction from their lives. One immediate advantage is that they may be more likely to use a single stronger password to log in as opposed to a bunch of weaker passwords for accessing multiple systems.

A big disadvantage of SSO is that it involves centralized administration, and centralized administration represents a single point of failure from both an availability and a confidentiality perspective. If an SSO system is compromised, an attacker potentially has access to everything. Similarly, if the system goes down, users have access to nothing.

The SSO Process

The SSO process - Destination Certification
  1. A user sends a login request to an application.
  2. If the user has not already logged in or authenticated, the application will essentially say, “I don’t know who you are right now,” and redirect the user back to the authentication server, saying, “You’re not currently authenticated, I don’t know who you are, you need to go and authenticate.”
  3. The user will identify and authenticate themselves to the authentication server. Once identified and authenticated, the user will be given some type of ticket or token.
  4. The user is directed back to the application, and the ticket or token is presented for authorization to the application.
  5. If the application grants authorization, the user will be able to access the application.
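As a toy illustration of steps 3 to 5, the sketch below uses the third-party PyJWT package to stand in for the ticket or token: the authentication server signs a short-lived token, and the application validates it instead of handling the user’s password itself. The shared secret and claims are illustrative; production SSO typically relies on asymmetric keys and standards such as SAML or OpenID Connect.

```python
import time

import jwt  # PyJWT

SIGNING_KEY = "shared-secret-known-to-auth-server-and-apps"  # illustrative only

# Step 3: after the user authenticates, the authentication server issues a token.
token = jwt.encode(
    {"sub": "user123", "exp": int(time.time()) + 600},  # valid for 10 minutes
    SIGNING_KEY,
    algorithm="HS256",
)

# Steps 4-5: the application validates the token's signature and expiry,
# then makes its authorization decision without asking for a second login.
claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
print("Authenticated user:", claims["sub"])
```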

Pros

  • Better user experience.
  • Users may create stronger passwords.
  • Timeout and attempt thresholds are enforced.
  • The centralized administration makes it easier to manage.

Cons

  • There is a single point of failure for both compromise and availability.
  • Unique and legacy systems can be difficult to include.

One of the most commonly used protocols for single sign-on is Kerberos.

Federated identity

In contrast with single sign-on (SSO), where a user authenticates one time and gains access to multiple systems in the context of an organization, federated identity management (FIM) allows a user to authenticate one time and gain access to multiple disparate systems, including systems in other domains. FIM is also sometimes referred to as federated access. Under FIM a user can gain access to company-owned systems as well as systems outside of the organization’s control—as long as these organizations are part of the same federation.

Let’s take a closer look at federated identity management through an example. When a person travels via airplane, they must go through a security checkpoint before proceeding to the departure gate. Passing through this checkpoint means the traveler is in a secure zone. After they arrive at the destination airport, the person is still in a secure zone, because the new airport trusts the security check that was performed at the original airport. This highlights one of the most important aspects of federated access—trust relationships between different entities. In this example, the two airports are owned and operated by different organizations, but they share a trust relationship.

Let’s look at federated access in the context of the logical world. With many websites today, two or more options are often available when creating an account. One option is to create an account using a unique username and password; another is to create an account using an existing account from a major platform like Facebook or Google.

For this example, let’s imagine that a user wants to create a new account on Pinterest, but they would prefer to do so using their existing Google account. The user visits Pinterest, and they’re given the option to create an account or log in with Google (among several choices). However, Pinterest and Google are unrelated companies.

They choose to log in via Google, and a small window pops up asking them to provide their Google username and password. This step is Google authenticating the user. Despite Google and Pinterest being separate companies, they are in a federation together, and the two have a trust relationship. Even though the user is authenticating through Google, because Pinterest trusts Google, Pinterest accepts Google’s authentication of the user as valid and allows the user to enter its own systems.

Federated identity management systems involve three major components. First is the user, also referred to as the principal. The user is the person who wants to log in or access the system. Second is the identity provider. The identity provider is the entity that owns the identity and performs the authentication. In the example above, Google is the identity provider. Third is the relying party, sometimes called the service provider. In the example noted above, Pinterest is the relying party. Federated identity management relies on a trust relationship between the three entities.

Three Major Components of Federated Identity Management

Three Major Components of Federated Identity Management - Destination Certification

Federated identity standards

Several major protocols enable federated access, with Security Assertion Markup Language (SAML) being one of the most important to understand. WS-Federation, OpenID, and OAuth are the others that should be known at a high level.

Diagram of major federated access standards - Destination Certification

The diagram above shows four federated access standards and whether they provide authentication, authorization, or both.

WS-Federation (like SAML) offers authentication and authorization functionality. Like most federated access standards, the primary goal is enabling the authentication and authorization of federated identities. WS-Federation was created by a consortium of companies, including IBM, Microsoft, and Verisign, and it was codified as a standard by OASIS.

OpenID and OAuth are complementary protocols that often work together. OpenID provides the authentication component, and OAuth provides the authorization component. In its simplest form, OpenID allows a user to use an existing account to identify and authenticate to multiple disparate resources—websites, systems, etc.—without the need to create new passwords for each resource. With OpenID, a user password is given only to the user’s identity provider—Microsoft, for example—and the identity provider confirms the user’s identity to the sites the user visits. OAuth is the standard that allows users to be authorized to access resources. Both OpenID and OAuth are open standards. While they can work independently of each other, they’re often deployed together, because of the richer functionality they provide as a unit.
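To make the OAuth side a little more concrete, the sketch below shows the typical authorization-code exchange from the relying party’s perspective, using the Python requests package. The endpoint URL, code, and client credentials are hypothetical.

```python
import requests

# After the identity provider authenticates the user and redirects back with a
# one-time authorization code, the relying party exchanges it for an access token.
response = requests.post(
    "https://idp.example.com/oauth2/token",   # hypothetical token endpoint
    data={
        "grant_type": "authorization_code",
        "code": "one-time-code-from-redirect",
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    },
    timeout=10,
)
access_token = response.json()["access_token"]  # used on subsequent API calls
```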

Security Assertion Markup Language (SAML)

Image of the SAML process - Destination Certification

SAML provides two capabilities: authentication and authorization.

  1. First, the user (principal) must authenticate via the identity provider. If the user is not logged in and requests access to a service (offered by the service provider), the request will get bounced to the identity provider, where the user can authenticate.
  2. The identity provider will authenticate the user through their login details, at which point the user will be issued a SAML assertion ticket. One critical fact to note here: the SAML assertion ticket does not contain the username and password of the user. Rather, as the name suggests, the ticket contains assertion statements that the service provider—the relying party—can use for authorization purposes or to determine the level of authorization granted to the user.
  3. Once the SAML assertion ticket is provided to the user, the user will pass it on to the service provider. The service provider is going to read the assertion statements contained within the SAML ticket and make an authorization decision.

Similarly to Kerberos, SAML uses tickets or tokens, which are basically just synonyms. The critical thing to note is that they contain assertions or statements about the user—username, role, level of access, etc. Assertions are written in a language called Extensible Markup Language (XML), which is a way of communicating in a manner that is machine and human-readable.
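To illustrate what assertion statements look like, here is a heavily simplified, unsigned assertion and the kind of values a relying party might read from it. Real SAML assertions are digitally signed and far more verbose; the attribute shown is illustrative only.

```python
import xml.etree.ElementTree as ET

assertion_xml = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Subject><NameID>alice@example.com</NameID></Subject>
  <AttributeStatement>
    <Attribute Name="role"><AttributeValue>marketing</AttributeValue></Attribute>
  </AttributeStatement>
</Assertion>
"""

ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(assertion_xml)
name_id = root.find("saml:Subject/saml:NameID", ns).text
role = root.find(".//saml:Attribute[@Name='role']/saml:AttributeValue", ns).text
print(name_id, role)  # the service provider bases its authorization decision on these
```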

How is IAM different in the cloud?

One of the major challenges of IAM in the cloud is that you often have to provision the same user on dozens, or even hundreds of separate cloud services. This can get complicated and costly, but tools like federated identity management help to smooth things over.

Identity providers (IdP)

Identity providers are third parties that provide authentication services. We discussed them in the Single sign-on and Security Assertion Markup Language (SAML) sections.

Single sign-on (SSO)

We discussed Single sign-on earlier in 4.7 because it is easier to understand federated identity management if we cover SSO first.

Multi-factor authentication (MFA)

We discussed MFA earlier in 4.7 under the Authentication subheading.

Cloud access security broker (CASB)

Cloud access security brokers (CASBs) enforce security policies between users and cloud providers. They can help to keep cloud security consistent across services, apps and devices. This allows organizations to enforce their security policy in a flexible manner that’s more suited to the modern cloud environment.

They can help to give organizations greater risk visibility and allow them to restrict employee access based on their location and other attributes. They can also help to detect threats like malware and can even play a role in data loss prevention.

Secrets management

We discussed secrets management as part of Domain 4.6 under the Key management heading. Secrets management includes key generation, key storage, key distribution, and key disposal.


CCSP Domain 4 key takeaways

4.1 Advocate training and awareness for application security

Cloud development basics

  • Cloud environments can often help you get started faster than traditional environments.
  • However, development in the cloud can often lead to high levels of complexity.

Common pitfalls

  • Cloud development has a wide variety of pitfalls, including the requirement of new skills, vendor lock-in, and confusion over responsibility.

Common cloud vulnerabilities

  • The OWASP Top 10 web application security risks.
  • The OWASP Mobile Top 10 security risks.
  • The SANS Top 25 dangerous software errors.

4.2 Describe the Secure Software Development Life Cycle (SDLC) process

Business requirements

  • Validation is the process of considering business requirements to ensure you are building the right product.

Phases and methodologies

  • The phases of the software development life cycle include:
  • Initiation.
  • Requirements analysis
  • Design
  • Development
  • Testing
  • Deployment
  • Waterfall and Agile are two common software development methodologies.

Immutable infrastructure

  • Immutable infrastructure is infrastructure that can’t be changed.
  • Immutable workloads are workloads that have been configured so that no changes are possible.

Infrastructure as code (IaC)

  • Infrastructure as code is virtualized infrastructure that we can deploy through software commands.

DevOps and DevSecOps

  • DevOps allows for faster software deliveries.
  • DevSecOps incorporates security into the DevOps approach.

Continuous integration, continuous delivery (CI/CD)

  • Continuous integration involves automating many of the processes of adding code to a repository and testing it.
  • Continuous delivery involves automating much of the integration, testing and delivery of code changes.
  • Continuous deployment goes a step further and includes automatically releasing code changes into production.

4.3 Apply the Secure Software Development Life Cycle (SDLC)

Cloud-specific risks

  • Cloud architectures have specific risks, including confusion of responsibility and lack of visibility.

Threat modeling

  • STRIDE = Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
  • DREAD = Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.
  • PASTA = Process for Attack Simulation and Threat Analysis.
  • ATASM = Architecture, Threats, Attack Surfaces, and Mitigations.

Avoid common vulnerabilities during development

  • There are many vulnerabilities, but some of the most common include:
  • XSS
  • CSRF
  • SQL injection
  • Insecure direct object references
  • Buffer overflows

Cross-site scripting (XSS)

  • XSS attacks come in three variants: stored, reflected and DOM.
  • XSS attacks exploit the user’s browser.

Cross-site request forgery (CSRF)

  • CSRF attacks exploit the trust that a web server places in an authenticated user’s browser.
  • They rely on unwanted commands being issued from the victim’s browser.

Insecure direct object referencing

  • These attacks occur when apps expose references to internal objects (such as record IDs in a URL) without checking that the user is authorized to access them.

SQL injection

  • Structured Query Language (SQL) is a language used for communicating with databases.
  • SQL injection is a method of attack that utilizes SQL code and commands.
  • Input validation is the best method to prevent SQL injection attacks from being successful.

Buffer overflow

  • Buffer overflows are a common problem in applications. They happen when information sent to a storage buffer exceeds the capacity of the buffer.
  • Buffer overflow vulnerabilities can be exploited to elevate privileges or execute malicious code.
  • Address space layout randomization (ASLR) can be used to protect against buffer overflows.
  • Parameter/bounds checking is another way to protect against buffer overflows.

OWASP Application Security Verification Standard (ASVS)

  • ASVS Level 1 – Completely penetration testable by humans.
  • ASVS Level 2 – The recommended level for most apps with sensitive data that requires reasonable protection.
  • ASVS Level 3 – For apps that require a high level of trust.

Software Assurance Forum for Excellence in Code (SAFECode)

  • SAFECode is a consortium that helps organizations begin or improve their software assurance programs.

Software configuration management and versioning

  • SCM helps to manage software, making it easier to maintain its integrity.

4.4 Apply cloud software assurance and validation

Functional and non-functional testing

  • Functional tests cover things like basic functionality, usability and accessibility.
  • Non-functional tests include testing performance, stress, scalability and load.

Security testing methodologies

  • Manual testing – Done by a person with their hands on the keyboard.
  • Automated testing – Done by an automated tool, like code scanning or vulnerability assessment software.

Static application security testing (SAST), dynamic application security testing (DAST) and fuzz testing

  • Static application security testing (SAST) looks at the underlying source code of an application while the application is not running. It is considered white-box testing because the code is visible.
  • Dynamic application security testing (DAST) examines an application and system as the underlying code executes. DAST is considered black-box testing because the code is not visible.
  • Fuzz testing is a form of dynamic testing that is premised upon chaos: it feeds random or malformed input into an application to see how it responds.

Interactive application security testing (IAST)

  • IAST is performed while the application is running and it can access the code.

Software composition analysis (SCA)

  • SCA can help you check for licensing, security and code quality issues in open-source code.

Vulnerability assessment and penetration testing

  • Vulnerability testing techniques tend to be automated and can be performed in minutes, hours, or a few days. Penetration testing techniques tend to be manual and can take several days, depending on the complexity involved.
  • Penetration testing stages include: reconnaissance, enumeration, vulnerability analysis, exploitation, reporting
  • Testing perspectives include: internal (inside a corporate network) and external (outside a corporate network).
  • Testing approaches include: blind (tester knows little to nothing about the target) and double-blind (tester knows little to nothing about the target. Internal security teams and response teams do not know the test is coming).
  • Testing knowledge includes:
  • Zero, or black box (similar to the blind approach, where the tester knows nothing about a target).
  • Partial, or gray box (tester has some information about a target).
  • Full, or white box (tester has significant knowledge about a target).

Quality assurance (QA)

  • The SQA process ensures that software meets the appropriate standards.

Abuse case testing

  • Abuse case testing involves testing features to figure out if attackers can use them in an unintended manner.

4.5 Use verified secure software

Securing application programming interfaces (APIs)

  • Application programming interfaces (APIs) provide a way for applications to communicate with each other. APIs act as translators.
  • Two of the most common API formats are Representational State Transfer (REST) and Simple Object Access Protocol (SOAP).
  • APIs should be secured along with other components of an application. Security can include authentication and authorization mechanisms, TLS encryption for data traversing insecure channels, API gateways, and data validation, among others.

Supply-chain management

  • Third-party providers must be carefully assessed to ensure that they meet the business function, security and compliance requirements of an organization.

Third-party software management

  • Managing third-party software includes evaluating providers to ensure you find the right fit.
  • Third-party software also requires appropriate licensing.

Validated open-source software

  • Validation involves ensuring that software meets the needs of the customer.

4.6 Comprehend the specifics of cloud application architecture

Proxy

  • Proxy servers act as intermediaries between the client and the server.
  • They can be used to filter requests to malicious destinations.

Web application firewall (WAF)

  • WAFs apply rules to HTTP traffic and can be used to block attacks like SQL injection and XSS.

Database activity monitoring (DAM)

  • Typically sits between the client and a database server.
  • Allows you to set policies for activities to block.

File activity monitoring (FAM)

  • FAM intercepts requests going to the file server.
  • It can block requests that it deems suspicious.

Extensible Markup Language (XML) firewalls

  • XML firewalls can help to block XSS, SQL injection, and various XML-based attacks.

API gateways

  • API gateways act as interfaces that can manage multiple APIs.
  • They can also fulfill a security role, performing tasks like rate-limiting and logging.

Cryptography

  • Cryptography is a field that revolves around securing information.
  • Confidentiality – The property that data is kept secret from unauthorized parties.
  • Integrity – The property that data hasn’t become corrupted or tampered with.
  • Authenticity – The property that an entity is truly who it claims to be.
  • Non-repudiation – The property that an entity can’t plausibly deny an action they were responsible for.

Encryption basics

  • Plaintext – Unencrypted data.
  • Ciphertext – Encrypted data.
  • Key – A secret piece of information used in the encryption process.

Symmetric-key encryption

  • Symmetric-key encryption uses a single key for both encryption and decryption.
  • It’s fast and strong.
  • It does not scale well, can’t solve the key distribution problem and does not have provisions for integrity, authenticity and non-repudiation.

Public-key encryption

  • Public-key encryption uses separate keys for encryption and decryption.
  • It’s slow and inefficient.
  • It solves the key distribution problem.
  • Alongside hashing, it can be used in digital signatures for integrity, authenticity and non-repudiation.

Hybrid cryptography

  • Hybrid cryptosystems combine different types of cryptography to utilize their varying benefits.

Cryptographic hashing

  • Cryptographic hash functions are one-way deterministic algorithms.
  • They can be combined with public-key encryption to make digital signatures.

Digital signatures

  • Digital signatures are formed through a combination of public-key cryptography and hashing.
  • They can be used to verify the integrity and authenticity of data, as well as to provide non-repudiation.

Putting all of the pieces together

  • By combining these different types of cryptography, we can bring confidentiality, integrity, authenticity and non-repudiation to our communications. This allows us to control access, so that only authorized parties who have the key can access our information.

Key generation

  • Key generation is the first step of a key’s life cycle.

Key storage

  • Whoever has access to our keys has access to our data, so keys must be stored securely.
  • A TPM generates and stores keys on the device.
  • An HSM is a piece of hardware that provides a hardened repository for key storage and management.

Key storage in the cloud

  • We should store our keys away from our data. Common options include:
  • Internally managed (on the instance).
  • Externally managed (on an HSM or in a third-party key-escrow service).

Cryptography in clouds

  • There are many different places where we can deploy encryption in the cloud.
  • The difficult part is deciding where we should put it, to get the right balance between security and performance.

Encryption throughout the data lifecycle

  • Encryption can be deployed in various ways throughout the cloud lifecycle, from IRM for sharing, to cryptoshredding for destruction.

Storage-level encryption

  • Storage-level encryption involves encrypting the entire contents of a hard drive.

Database-level encryption

  • We can apply encryption to the data in our databases through:
  • The application
  • A proxy
  • The database
  • File encryption

Sandboxing

  • A sandbox is a safe area where untrusted code can be isolated and run.

4.7 Design appropriate identity and access management (IAM) solutions

Access control

  • Access control is a concept that refers to the collection of mechanisms that work together to protect organizational assets while allowing authorized subjects to have controlled access to objects.
  • Fundamental access control principles include:
  • Need to know
  • Least privilege
  • Separation of duties
  • Access control is applicable at all levels of an organization and covers all types of assets.

Identification

  • Identification is the process of claiming an identity.

Authentication

  • Authentication is the process of proving an identity.
  • The three authentication factors are something you know, something you are, and something you have.
  • You should use multi-factor authentication for all critical accounts.

Authorization

  • Authorization involves the system checking whether a known user is allowed access to a given resource.
  • The system grants or denies access based on this evaluation.

Accounting

  • Accounting is the process of monitoring and logging individual access.
  • Accounting acts as a deterrent and allows organizations to investigate who is responsible if they detect malicious behavior.

User access review

  • Account access review is an ongoing process, regardless of the type of account (user, system, service).
  • Account access review frequency should be based upon the value of resources and associated risks.
  • Privileged accounts should be reviewed more frequently.

Single sign-on

  • Single sign-on refers to authenticating one time and being able to access multiple systems.
  • A disadvantage of single sign-on is that it involves centralized administration, which represents a single point of failure.
  • Kerberos is one of the major single sign-on protocols, and it provides accounting, authentication, and auditing services.

Federated identity

  • Single sign-on refers to one-time authentication to gain access to multiple systems in one organization.
  • Federated identity management (FIM) refers to one-time authentication to gain access to multiple systems, including systems across separate domains and those associated with other organizations.
  • Federated identity management (FIM) relies on trust relationships established between different entities.
  • FIM trust relationships include three components: principal/user, identity provider, relying party.
  • Principal/user = the person who wants to access a system.
  • Identity provider = the entity that owns the identity and performs the authentication.
  • Relying party = the service provider.

Federated identity standards

  • Important federated identity management protocols include: Security Assertion Markup Language (SAML), WS-Federation, OpenID (for authentication) and OAuth (for authorization).
  • SAML is frequently used in federated identity management (FIM) solutions, and it provides authentication and authorization.
  • OpenID and OAuth are open-standard federated access protocols that provide authentication via OpenID and authorization via OAuth.
  • SAML assertions are written in a language called XML, or Extensible Markup Language. XML is a way of communicating in a manner that is machine and human readable.

Cloud access security broker (CASB)

  • CASBs help to enforce security policies in the cloud.

Preparing for CCSP Domain 4 Exam

Cloud Application Security is the cornerstone of Domain 4, and it's where theory meets practice in the CCSP certification. This domain challenges you to think beyond traditional security measures, focusing on the unique aspects of protecting applications in cloud environments. From understanding the intricacies of secure software development lifecycles to grasping the nuances of identity and access management in the cloud, Domain 4 covers a wide range of critical topics.

In this section, we'll break down what you need to focus on for the exam and point you towards resources that will sharpen your cloud application security skills. Let's get you ready to tackle Domain 4 with confidence.

CCSP Domain 4 exam expectations: What you need to know

Important reminder: Below are some of the topics that are likely to appear in Domain 4 of the CCSP exam. But don't be lulled into a false sense of security. Cloud Application Security is vast, and the exam reflects this. Consider our list a starting point, not a finish line.

Success demands more than targeted study. It requires a holistic understanding of how various concepts interconnect. As you prepare, challenge yourself to go beyond memorization. Analyze how each topic fits into the bigger picture of cloud security. Remember, your goal isn't just to pass an exam, but to become a proficient cloud security professional.

4.1 Advocate training and awareness for application security

  • Understand the various cloud development pitfalls.
  • The OWASP Top 10 web application security risks.
  • The OWASP Mobile Top 10 security risks.
  • The SANS Top 25 dangerous software errors.

4.2 Describe the Secure Software Development Life Cycle (SDLC) process

  • What is validation?
  • The phases of the software development lifecycle.
  • The Agile principles.
  • What are immutable workloads and immutable infrastructure?
  • What is infrastructure as code?
  • The stages of the DevSecOps lifecycle.
  • The DevSecOps principles.
  • The difference between continuous integration, continuous delivery and continuous deployment.

4.3 Apply the Secure Software Development Life Cycle (SDLC)

  • The major risks associated with cloud architectures.
  • The threats that make up the STRIDE model.
  • How to use the DREAD model.
  • How to use the PASTA model.
  • The steps of the ATASM model.
  • The XSS attack variants.
  • How to prevent XSS attacks.
  • How to prevent CSRF attacks.
  • How to prevent insecure direct object reference vulnerabilities.
  • Be able to recognize an example of an SQL injection attack.
  • How to mitigate a SQL injection attack.
  • Be able to recognize SQL commands and codes.
  • Understand what is meant by the term buffer overflow and how a buffer overflow works.
  • Understand how a buffer overflow can be mitigated through ASLR.
  • The three ASVS levels.
  • The importance of versioning.

4.4 Apply cloud software assurance and validation

  • The difference between functional and non-functional tests.
  • The difference between black-box and white-box testing.
  • The differences between SAST, DAST and fuzz testing.
  • The differences between positive and negative testing.
  • What is the purpose of a vulnerability assessment?
  • Understand the difference between vulnerability assessment and penetration testing.
  • Understand the steps involved in penetration testing.
  • The difference between internal and external testing.
  • The difference between blind and double-blind testing.
  • Understand the types of testing perspectives, approaches, and knowledge.
  • What is abuse case testing?

4.5 Use verified secure software

  • Understand what the term application programming interface means and what function an API provides to applications.
  • Understand the two most common API formats and the characteristics of each.
  • Understand the techniques commonly used to secure APIs.
  • The key ways to mitigate risks of third-party suppliers.
  • The important third-party software management considerations.
  • The potential pitfalls of open-source software.

4.6 Comprehend the specifics of cloud application architecture

  • What are proxies and what type of intelligence do they have?
  • The types of attack that WAFs can help block.
  • The uses of DAM.
  • The pros and cons of symmetric-key encryption.
  • What is the key distribution problem?
  • What is public-key encryption?
  • The pros and cons of public-key encryption.
  • Why do we use hybrid cryptosystems?
  • The important properties of cryptographic hash functions.
  • The difference between a TPM and an HSM.
  • The difference between internally managed and externally managed keys.
  • Where we can use encryption in the cloud.
  • The ways that we can encrypt data throughout the data lifecycle.
  • The options that each service model has for data encryption.
  • The different ways that we can encrypt data in our databases.
  • What is a sandbox?

4.7 Design appropriate identity and access management (IAM) solutions

  • Understand the fundamental access control principles and how they might be applied.
  • What is the RMC?
  • The different authentication factors.
  • What is multi-factor authentication?
  • Why should user access reviews be conducted?
  • How often should access reviews be conducted?
  • Which accounts should be reviewed most frequently?
  • Understand the underlying premise, as well as the pros and cons of single sign-on.
  • Understand the basis of federated identity management (FIM) and the three components that make up any federated access system.
  • Understand the importance of SAML and its relationship to federated identity management (FIM).

Resource recommendations

Cloud application security is an expansive field, constantly evolving with new technologies and threats. To truly grasp CCSP Domain 4, you need more than just theoretical knowledge - you need a comprehensive, all-encompassing study strategy. This means diving deep into concepts, understanding their real-world applications, and continuously challenging your understanding.

To help you build this robust skillset in this domain, we recommend the following resources:

  • Destination Certification CCSP MasterClass: Our MasterClass is designed to tackle the expansive nature of Domain 4. It features visual MindMaps that help you connect complex Cloud Application Security concepts, making it easier to grasp the big picture. The MasterClass also adapts to your progress, focusing your study time on areas where you need the most improvement within this domain. This adaptive learning ensures efficient and effective preparation. Plus, with our 1-on-1 mentoring calls (available in Preferred and Premier plans), you'll get personalized help to make sense of challenging topics and ideas in cloud application security.
  • Destination CCSP: The Comprehensive Guide: This guide excels in breaking down complex Cloud Application Security topics into digestible concepts. It's particularly useful for visualizing the interconnections between different aspects of Domain 4, making it easier to grasp the big picture of cloud application security.
  • Destination Certification CCSP App: Perfect for reinforcing your Domain 4 knowledge on-the-go. The app's flashcards and practice questions are specifically tailored to test your understanding of key Cloud Application Security concepts, helping you identify and address any gaps in your knowledge.

FAQs

How much of the CCSP exam does Domain 4 typically cover?

This domain covers 17% of the CCSP exam, making it a significant portion of the test.

Does the exam format change for Domain 4 questions compared to other domains?

The format remains consistent across all domains. However, Domain 4 questions may include more technical scenarios due to the nature of the content.

Are there any prerequisites I should focus on before diving into Domain 4 study materials?

A solid understanding of web applications and general cybersecurity principles will provide a strong foundation for Domain 4 study. Familiarity with at least one major cloud platform is also helpful.

Destination Certification: Your Key to CCSP Domain 4 Success

Cloud Application Security is a vast and complex field, constantly evolving with new technologies and threats. Domain 4 of the CCSP exam reflects this complexity, challenging candidates to demonstrate a comprehensive understanding of securing applications in cloud environments.

Navigating this expansive domain can be daunting, but Destination Certification is here to guide you through. Our CCSP MasterClass is tailored to tackle the intricacies of Domain 4 head-on. With our adaptive learning system, we focus on strengthening your weak areas in Cloud Application Security, ensuring efficient and effective preparation. Our visual MindMaps help you connect complex concepts, while our proven exam strategies prepare you for Domain 4's difficult questions. Plus, with personalized mentoring, you'll get expert guidance to clarify challenging topics.

Ready to master CCSP Domain 4 and ace your exam? Enroll in our CCSP MasterClass today and experience the Destination Certification advantage. Your path to becoming a certified cloud security professional starts here!

Image of Rob Witcher - Destination Certification

Rob is the driving force behind the success of the Destination Certification CISSP program, leveraging over 15 years of security, privacy, and cloud assurance expertise. As a seasoned leader, he has guided numerous companies through high-profile security breaches and managed the development of multi-year security strategies. With a passion for education, Rob has delivered hundreds of globally acclaimed CCSP, CISSP, and ISACA classes, combining entertaining delivery with profound insights for exam success. You can reach out to Rob on LinkedIn.

