CCSP Domain 3 - Cloud Infrastructure Components MindMap
Transcript
Introduction
Hey, I’m Rob Witcher from Destination Certification, and I’m here to help YOU pass the CCSP exam. Today, we’ll be covering some of the major topics related to cloud infrastructure components in Domain 3. The aim is to help you understand how they all interrelate.

This is the first of seven videos for Domain 3. I have included links to the other MindMap videos in the description below. These MindMaps are a small part of our complete CCSP MasterClass.
Cloud Infrastructure Components
There are a ton of components that need to work seamlessly together to deliver cloud services. These components form the backbone of cloud computing and enable all of the various -as-a-services we use: infrastructure, platform, software, databases, functions, containers, blockchain, machine learning as a service, and the list goes on and on.
Major cloud components
So, we’re going to talk through the major components required to deliver the services. At a high level, the major groupings of components are compute, network and storage.
So I’ll walk you through compute, network, and storage, and talk about the components used in each.
Compute
We’ll start with compute.
Compute is essentially the ability to run code. As I mentioned as part of the resource pooling characteristic in the very first MindMap, you don’t typically have direct access to a physical server and physical CPUs in the cloud. So, how do you access compute–how do you run your code and applications?
You have access to various types of virtualized computing options such as virtual machines, containers and serverless.
Virtual Machines
Let’s start with virtual machines. A virtual machine is an operating system and some applications running on top of a layer of abstraction instead of running directly on the physical hardware.

This is a good example of where a picture is probably worth a thousand words. As you can see in this diagram here at the bottom, we have the “Compute Node”. This is essentially the physical server comprising a CPU, a bunch of RAM, a network interface card, etc. It’s the physical hardware. In a traditional computer the operating system would then be running directly on top of the hardware, but in a virtual machine we have a layer of abstraction between the hardware and the OS. What is providing this layer of abstraction? The hypervisor. The hypervisor is a piece of software. I like to think that the job of the hypervisor is to be a giant liar. Here’s why: an operating system expects to have total and complete control of all the underlying hardware, because a key job of an operating system is to control all the underlying hardware in a coordinated fashion. If we wanted to run two or more operating systems simultaneously on the same underlying hardware, it would never work, because each OS would be trying to have total and complete control and they would therefore conflict with each other.
With virtualization, we do this all the time by creating a layer of abstraction between the hardware and the OS. The layer of abstraction is the hypervisor, the giant liar. The hypervisor is simulating the underlying hardware to each OS. So each operating system thinks it is controlling the underlying hardware, but is actually just controlling virtual hardware that the hypervisor is simulating.
This setup therefore allows us to run multiple operating systems and their applications simultaneously on one physical server–one compute node.
As you can see in the diagram here, we have three virtual machines. Watch out–on the exam, virtual machines might also be referred to as instances or guests.
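To make the "giant liar" idea concrete, here is a toy Python sketch. This is purely illustrative: none of these class or attribute names come from any real hypervisor, and a real hypervisor works at the hardware-instruction level, not as a simple resource bookkeeper. The point is just that each guest is handed virtual hardware it believes is exclusively its own, all mapped onto one physical compute node.

```python
# Toy sketch of the hypervisor idea -- NOT a real hypervisor.
# Each guest OS sees its own "dedicated" virtual hardware, while
# everything actually shares one physical compute node.

class ComputeNode:
    """The physical server: the CPUs and RAM that really exist."""
    def __init__(self, cpus, ram_gb):
        self.cpus = cpus
        self.ram_gb = ram_gb

class Hypervisor:
    """Simulates dedicated hardware for each VM it hosts."""
    def __init__(self, node):
        self.node = node
        self.vms = []

    def create_vm(self, name, vcpus, vram_gb):
        # Each guest believes these resources are exclusively its own.
        vm = {"name": name, "vcpus": vcpus, "vram_gb": vram_gb}
        self.vms.append(vm)
        return vm

node = ComputeNode(cpus=16, ram_gb=128)
hv = Hypervisor(node)
hv.create_vm("guest-1", vcpus=4, vram_gb=16)
hv.create_vm("guest-2", vcpus=4, vram_gb=16)
# Two "instances" (guests) now run side by side on one compute node.
print(len(hv.vms))  # → 2
```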
Containers
Next, let’s talk about containers. Containers are highly portable code execution environments that run within an operating system, sharing and leveraging resources of that operating system.
The basic idea here is that you can package your code inside a container. Much like an intermodal shipping container.
The cool thing about a shipping container, is once you’ve stuffed whatever you want to inside you can now move that container around the world via ship, truck, train, whatever you want, without ever having to re-pack the container. It’s highly portable.
The same idea applies with a container that you can put your code in. You stuff your code in the container and then you can run that container on your Mac laptop, an on-premises Linux server, Microsoft Azure, Google Cloud–containers make your code highly portable.
To sum it up: Containers encapsulate an application and its dependencies–such as libraries, configurations, and other binaries–into a single, lightweight unit. This approach ensures that the application runs consistently across different computing environments, from development to testing and production.

Just like with virtual machines, there is a layer of abstraction that a container runs on top of. It’s called the container engine. I’ll talk about container engines a bit later in this MindMap.
A major difference between containers and virtual machines is where this layer of abstraction is located.
With virtual machines, the abstraction is between the hardware and the operating system. With containers, the abstraction is between the operating system and the containers.

As you can see here, we can run multiple containers on top of the container engine. A big benefit of this is efficiency and portability. From a portability perspective, the container engine abstracts away the specifics of the underlying operating system. This allows exactly the same container to run on a Windows Server, a Mac laptop or a Linux server. The specific operating system doesn’t matter, which makes containers highly portable.
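A simple way to picture this is a container image as a self-contained bundle. The sketch below is illustrative only: the "image" is just a Python dict, and the app and dependency names are made up. One caveat worth knowing: containers actually share the host's OS kernel, so in practice a Linux container on a Mac or Windows machine runs inside a lightweight Linux VM managed by the container engine, which is why the portability still holds from the developer's point of view.

```python
# Illustrative only: a container image bundles the app together with
# its dependencies and configuration, so the same unchanged image can
# run on any host that has a container engine. Names are invented.

container_image = {
    "app": "billing-service",
    "code": "app.py",
    "dependencies": ["libssl", "python3.12", "requests"],
    "config": {"PORT": "8080"},
}

def run(image, host_os):
    # The engine abstracts the host away; the image itself never changes.
    return f"{image['app']} running on {host_os}"

# Exactly the same image, three different hosts:
for host in ["macOS laptop", "Linux server", "cloud VM"]:
    print(run(container_image, host))
```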
Serverless Computing
The final type of virtualized compute we are going to talk about is serverless. Serverless encompasses a whole bunch of services that allow you to run applications in the cloud without having to manage any infrastructure. A subset of serverless is functions as a service. Before we talk about FaaS though, I think it’s helpful to understand a fundamental concept about FaaS–we can separate an application into a bunch of interconnected functions.

Take a look at this diagram here. A more historical way of developing an application is to create a monolithic application. This is where all the functions of the application–all the coloured shapes–are packaged together into a large executable. A disadvantage of this approach is that if the usage of the application grows, it can be difficult to scale the performance of the application. Often, only certain functions or groups of functions need to be scaled.
Enter the next approach: Microservices. The idea here is you take the same application, the same functionality, but you break the application down into groupings of functionality called microservices. One of the advantages of this is that it’s now easier to scale the application as you can focus on scaling whichever microservices you need to.
Taking this decomposition a step further, we arrive at functions as a service. Again, the same application, providing the same functionality, but we’ve broken the application down into individual functions which talk to each other via API calls.
This is a fundamental aspect of FaaS: Simple functions are written and stored in the cloud, and these functions can be called as much or as little as desired. As a developer you basically don’t have to think about the infrastructure at all. If you want to call a function a million times a second or once every three weeks, it is exactly the same function. And if you never call the function you pay nothing.
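Here is a minimal, hypothetical FaaS-style handler to illustrate the idea. Real platforms (AWS Lambda, Azure Functions, Google Cloud Functions) each define their own handler signatures; this sketch just shows the essential shape: a small, stateless function the platform invokes per event, whether that's once every three weeks or a million times a second.

```python
# Hypothetical FaaS-style handler sketch -- not any real platform's API.
# The function is stateless: everything it needs arrives in the event,
# and the platform (not the developer) decides when and how often it runs.

def handler(event):
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

# Simulating two invocations; the function is identical either way:
print(handler({"name": "CCSP student"}))  # → {'status': 200, 'body': 'Hello, CCSP student!'}
print(handler({}))                        # → {'status': 200, 'body': 'Hello, world!'}
```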
Immutable workloads
The final piece we’ll discuss here related to compute is immutable workloads.
The core idea of immutable workloads is in the name: Immutable, which means unchanging over time or unable to be changed. You create workloads (VMs, containers, microservices, etc.) where the underlying environment, configuration, and application code cannot be changed once deployed. Why? To make them way more secure and reliable.
Since the environment does not change, it's easier to reproduce and debug issues, as each instance behaves exactly the same way.
Immutable workloads reduce the chances of configuration drift, where the environment becomes inconsistent due to ad-hoc changes. This leads to a more predictable and reliable environment.
With no modifications allowed after deployment, immutable workloads minimize the risk of unauthorized changes or configurations that could introduce vulnerabilities. So they are way more secure.
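The immutable pattern can be sketched in a few lines of Python using a frozen dataclass. This is an analogy, not how cloud platforms implement it: the point is that an in-place change is rejected, so the only way to "change" a workload is to deploy a replacement.

```python
# Sketch of the immutable-workload pattern: configuration is frozen at
# deploy time; to change anything you deploy a new instance rather than
# mutating the running one.

from dataclasses import dataclass, replace, FrozenInstanceError

@dataclass(frozen=True)
class Workload:
    image: str
    version: str

v1 = Workload(image="web", version="1.0")

try:
    v1.version = "1.1"                    # in-place change: not allowed
except FrozenInstanceError:
    v2 = replace(v1, version="1.1")       # deploy a replacement instead

print(v1.version, v2.version)  # → 1.0 1.1
```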
Super cool.
Network
Onwards! The next major group we’ll talk about now: Network components.
Dedicated isolated networks
There’s a lot of different network traffic flying around within the cloud, and it’s extremely important for cloud providers to separate traffic based on function and purpose. This segregation enhances security, performance, and manageability. You absolutely do not want some random user out on the Internet to be able to access the super secure network that cloud providers use to manage all their cloud infrastructure. So, let's talk through the three (or, in a converged model, two) different isolated networks in the cloud.
Service
The service network, often called the front-end or application network, handles user and application traffic. This is where clients interact with services such as web servers, application servers, and other business applications. Put simply, the service network is what customers have access to.
Storage
The storage network carries data traffic between compute nodes and the storage systems they rely on, such as a SAN. Keeping storage traffic on its own network helps ensure both the performance and the security of data moving to and from storage.
Management
Now, the really critical network from a security perspective, is the management network. Cloud service providers use it to manage all of the IT infrastructure, including administrative access to servers, network devices, hypervisors, firmware updates, configuration and monitoring systems. The management network must always be a dedicated isolated network with the utmost security.
Networking Models
Let me explain what I meant earlier when I said we might have two or three different isolated networks in the cloud.
Non-converged
In a non-converged network model, you will have three dedicated isolated networks, the three I just described: service, storage and management.
Converged
However, in a converged network model you will merge two of these networks together: the service and the storage networks. You would never, ever, want to converge your management network. As I said, and it bears repeating: The management network must always be a dedicated isolated network.

Here’s a diagram to help you visualize this. Again, the management network will always be isolated. In a non-converged network, the service network (shown as the LAN network), and the storage network (shown here as the SAN network) are separate networks.
In a converged model, the service and storage networks are merged into a single converged network.
Virtual Networks
Next up, let’s talk about how we can use virtualization to logically segment our network, and achieve some really cool security benefits with SDNs.
VLAN
VLAN–virtual local area networks–allow you to logically segment a network. Put another way, you can segment a network through software instead of having to physically segment a network by buying and configuring new network hardware. A VLAN can include a subset of the ports on a single switch, or subsets of ports on multiple switches, thus allowing systems to be logically segmented into groups. Network segmentation has a lot of security benefits and VLANs can be a good way of achieving segmentation efficiently and economically.
SDN


Software defined networks are a massive leap forward in virtualization beyond just simple VLANs. An SDN allows you to create multiple completely virtualized, software-controlled networks on top of a physical network. SDNs provide far greater flexibility to reconfigure a network rapidly by centralizing all the control of the virtualized network. SDNs are a critical part of what makes the cloud work. Fundamentally, SDNs provide abstraction for network topology, network flow, and network protocols.
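The "centralized control" idea behind SDN can be sketched as a controller that holds the network's brain and pushes forwarding rules down to simple switches. All class and method names below are illustrative, not any real SDN controller's API (real SDNs use protocols such as OpenFlow for this controller-to-switch communication).

```python
# Illustrative SDN sketch: a central controller installs forwarding
# rules into every switch, so the whole network can be reconfigured
# rapidly from one place.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}   # destination -> output port, set by controller

class Controller:
    """Central point of control for the virtualized network."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_flow(self, dest, out_port):
        # One call reconfigures every switch: rapid, centralized change.
        for sw in self.switches:
            sw.flow_table[dest] = out_port

ctrl = Controller()
s1, s2 = Switch("edge-1"), Switch("edge-2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.push_flow(dest="10.0.0.5", out_port=3)
print(s1.flow_table, s2.flow_table)  # → {'10.0.0.5': 3} {'10.0.0.5': 3}
```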
Storage
Next, the final major group we’ll talk about: Storage components.
D2: Cloud Data Storage
I’m not going to talk about all the storage components again as I dedicated a whole MindMap to them, in Domain 2: the Cloud Data Storage MindMap where I discussed volume storage, object storage, raw-disk, ephemeral, and all that fun stuff.
Infrastructure as Code (IaC)
So that brings us to the last piece I’m going to explain related to all these compute, network and storage components: infrastructure as code (IaC).
Infrastructure as code (IaC) is a very cool idea. Fundamentally, everything in the cloud is essentially controlled via API calls. You can deploy a new virtual firewall and completely configure the firewall via API calls, you can instantiate a new virtual machine or a new container with whatever software you want running on it via API calls, and you can attach some storage to your newly created VM by, again, API calls. Like I said, you can create and control essentially everything via API calls.
Now, here is where infrastructure as code (IaC) comes in. You can create code that will deploy your infrastructure. Infrastructure as code (IaC) allows infrastructure to be defined and deployed with version-control in the same way you can develop software code.
You can effectively turn the deployment and management of your cloud infrastructure into a software development project.
Why? Because this enables a huge amount of automation of your infrastructure management, reducing manual errors, speeding up processes, and enhancing consistency across environments.
Just like application code, infrastructure code can be stored in version control systems like Git, enabling rollbacks, code reviews, and tracking of changes over time.
It allows for consistent configuration of environments across development, testing, and production, ensuring that infrastructure behaves the same way regardless of the environment.
Environments can be recreated on demand from the code, which is particularly useful for scaling applications or disaster recovery. The list goes on–you can even combine infrastructure as code with the immutable workloads that we discussed earlier. Super super cool.
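The declare-and-apply loop at the heart of IaC can be sketched in a few lines. This is a minimal illustration with invented resource names, not how real tools work internally; tools like Terraform and CloudFormation operate on the same principle of making the actual environment converge on a declared, version-controlled state.

```python
# Minimal IaC sketch: infrastructure is declared as data (which lives
# in version control), and an "apply" step makes the environment match
# the declaration. In reality each creation would be a cloud API call.

desired = {
    "vm-web-1": {"type": "vm", "size": "small"},
    "fw-edge":  {"type": "firewall", "rules": ["allow 443"]},
}

def apply(desired_state, current_state):
    """Create anything declared but missing; return the resulting state."""
    new_state = dict(current_state)
    for name, spec in desired_state.items():
        if name not in new_state:
            new_state[name] = spec
    return new_state

env = apply(desired, current_state={})
print(sorted(env))  # → ['fw-edge', 'vm-web-1']
```

Because `apply` only creates what's missing, running it again against the same declaration changes nothing, which is exactly the repeatable, consistent behavior described above.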
Virtualization
What underpins all of this is the pervasive use of virtualization just about everywhere. Most of the compute, network and storage components are virtualized. Virtualization is a foundational technology in the cloud that plays a critical role in enabling the scalability, flexibility, and efficiency of cloud services. So let’s dig into virtualization.
Virtualization is a technology that creates virtual versions of physical hardware, such as servers, storage devices, or networks.
Abstraction
Abstraction is a broader concept that hides the underlying complexity of a system, exposing only the necessary details to the user or other systems. Abstraction simplifies how resources are managed and accessed by masking the low-level operations and presenting a simplified interface or model.
Virtualization enables abstraction. Virtualization creates virtual instances of hardware resources, which allow cloud providers to abstract the physical details from end-users.
Hypervisors
Hypervisors, also known as virtual machine monitors (VMMs), are software or firmware that enable virtualization by allowing multiple virtual machines (VMs) to run concurrently on a single physical host.
There are two types of hypervisors that you need to know about:
Type 1: Hardware
Type 1 hypervisors are also known as hardware or bare-metal hypervisors. They run directly on the host's hardware without relying on an underlying operating system. They have direct access to the physical hardware, making them highly efficient.
Type 2: Operating System
Type 2, or operating system, hypervisors run on top of an existing operating system like any other software application. This setup makes them less efficient and less secure than type 1 hypervisors, as they have to go through the host OS to access hardware, and a vulnerability in the host OS will impact the security of the hypervisor and any VMs running atop it.
Type 2 hypervisors are easier to install and manage and they are useful for testing, development, demos, and that sort of thing.
Type 1 hypervisors are much better for running systems in production securely and efficiently.

Here’s a helpful diagram that depicts the difference between type 1 and type 2 hypervisors.
Containerization Engine
Another key piece of virtualization software: Containerization engines. Containerization engines are software platforms that allow applications to be packaged, deployed, and run in isolated environments called containers.
So simply put, containerization engines are what containers run on top of.
Management Plane
And that brings us to the last but absolutely not least item of this MindMap: the management plane.
The management plane is what enables the management of the cloud. The management plane is the set of tools, interfaces, and services used to control, manage, and configure resources in the cloud environment.
The management plane provides the interface and tools through which administrators manage the cloud infrastructure, services, and applications.
The security of the management plane is of paramount importance. If the management plane were taken over by an attacker, they could do basically whatever they wanted in the cloud. So lock it down. Hard.
The management plane is super important, so we’re going to dedicate the whole next video to it.

And that is an overview of cloud infrastructure components in Domain 3, covering the most critical concepts you need to know for the exam.

If you found this video helpful you can hit the thumbs up button and if you want to be notified when we release additional videos in this MindMap series, then please subscribe and hit the bell icon to get notifications.
I will provide links to the other MindMap videos in the description below.
Thanks very much for watching! And all the best in your studies!