How to Minimize Cloud Security Risk with a Multi-layered Approach

The cloud comes with a heightened risk profile due to the nature of cloud computing — the security of a cloud customer’s data is only as good as the cloud provider’s measures. Therefore, any enterprise looking at using cloud services internally or in its lines of business must continually evaluate and competently manage the security risks associated with the cloud.

Cloud computing offers companies the opportunity to provide their customers with a best-in-class service without having to spend large amounts of capital on infrastructure. It also offers companies the chance to scale their internal operations without a lot of capital expenditure.

However, the move to the cloud comes with a heightened risk profile due to the nature of cloud computing — the security of a cloud customer’s data is only as good as the cloud provider’s measures. That lack of control can be disquieting because it’s a business vulnerability a company can never fully mitigate. Therefore, it’s vital that any enterprise looking at using cloud services internally or in its lines of business continually evaluates and competently manages the security risks associated with the cloud.

In this article, we’ll break down how to identify – and understand the complexities of – the biggest cloud security risks, including how to manage the human element, where to start taking action on cloud security and how to build an extensible security infrastructure based on well-tuned tools and processes. It details steps IT pros can take to implement a multi-layered security approach that takes advantage of today’s technology offerings and organizational best practices to reduce exposure to cloud-based security risks.

The four steps detailed in this report on cloud security risk are:
Step One: Define the Business Risks of Cloud Security Failure
Reputation and Market Value
Exposure of Personally Identifiable Information
Loss of Intellectual Property
Revenue Loss
Step Two: Understand Organizational Risks
The Human Element
Data Exposure
Hacking
Step Three: Determine Where to Start Actioning Security
Plus: Remote Access VPNs Have Ransomware on Their Hands
Footprint of a Malware Attack
Negative Impacts of VPN
Making The Case For a New Approach
Step Four: Build an Extensible Infrastructure for Your Organization
Find Someone to Take Responsibility
Log and Audit Everything
Create Security Groups and Use Role-Based Access Control
Use Encryption in the Cloud
Implement Firewalls and Isolation of Purposes
Require Virtual Private Networks (VPNs) for Cloud Access
Use Network Intrusion Detection and Prevention
Limit Administrative Access
Don’t Leave Your Privacy Showing
Keep Duties Separate
Be Careful Exposing Data Buckets
Change Management
Be Prepared for Incidents
Scope Out the Cloud Provider’s Disaster Recovery Services
Engage a Third Party to See the Things You Can’t
Summary

Step One: Define the Business Risks of Cloud Security Failure

Before one can address security risks, it’s necessary to first understand the impact of a security failure and how to mitigate the issue where possible. Some of the key business issues a company should consider as risks include:

Reputation and Market Value

When a company is compromised, there is a degree of loss of faith in the company and its ability to secure data. This can lead to customers going elsewhere and the brand being tainted. One only need look at companies historically impacted in this way, like the British landline, broadband, TV and mobile services provider TalkTalk. After a 2015 cybersecurity breach made the news, a survey by research firm Alva Group found that TalkTalk had sustained lasting reputational damage among its customers, and its market capitalization dropped as investors judged the company to be a risky investment.

Exposure of Personally Identifiable Information

Personally Identifiable Information (PII) is very much a hot button issue in security, especially now that regulators are scrutinizing how companies collect, store and use PII as part of their business operations. Loss of information that can identify individuals is regarded as a huge failure of the company. If the company cannot prove it undertakes the appropriate security measures, it risks both significant fines as well as the loss of potential customers.

Loss of Intellectual Property

If a competitor or even a nation-state gains the processes, product designs and strategies a company relies upon for its business operations, the stolen data provides a road map to effectively beat the company in terms of products and performance.

Revenue Loss

Ultimately, a security lapse will lead to financial repercussions. If a key system is unavailable, revenue generation can slow or even stop. This can have many repercussions across the company and, in the most extreme circumstances, cause the company to fold due to being unable to meet its short-term commitments. Another aspect of financial risk to consider: In addition to losing business due to systems being down or customers fleeing en masse, a company that exposes its customers to a security breach could face regulatory fines.

Step Two: Understand Organizational Risks

Once an enterprise has assessed how systemic security risks could affect its business, it needs to look internally to see how to mitigate the following:

The Human Element

Most incidents result from human action or inaction. People do make mistakes and they always will.

For a lot of companies, the move to the cloud – opening an account with a cloud vendor – is where the first mistakes happen. Setting up a cloud-based account usually requires a credit card, so: Whose card? Which department? What information is associated with the account? Are there personnel who can access and change the terms of the account without supervision? If a user or administrator sets up the cloud environment based on a company credit card, they become the de-facto owner of the account.

If that individual resigns or is let go but still has access to the environment, they present a significant risk. This is akin to failing to disable an administrator’s account and VPN access before they are let go.

There have been documented cases of people who set up the initial cloud environment being let go and, because the company did not control the access and ownership of the cloud environment, trying to blackmail their previous employers. If a company doesn’t meet the demands, it risks data or account deletion. If the deletion threat is carried out, it removes the company’s immediate ability to generate income.

While there are civil and criminal responses to this scenario, the long-term consequences don’t immediately offset the issue nor the risk. And while the cloud vendors may help, it won’t be an instant fix and it could likely cost more money.

The mitigation for this is to have a cloud environment properly set up by the IT department with payment via company bank transfer in the first instance. This also prevents the issue of people forgetting to renew credit cards or having them stolen, missing invoices and items just “dropping through the cracks.” It also makes it clear that the company, rather than any one individual, owns the account.

The master account needs to be secure and should not be used for daily tasks. Creating new admin accounts is a best practice. To access the master account, there should be approval required from two individuals of seniority. That helps prevent any rogue power user from trying to steal away the account.

At the same time, one of the easiest security options to implement also returns some of the best protection: two-factor authentication.

Two-factor authentication provides an additional security layer: something the user has (usually a hardware token or device generating a time-sensitive rolling code via the Time-based One-Time Password, or TOTP, algorithm) to complement something the user knows (such as a password), entered alongside the username.

Google, an extremely high-profile target, reported that once users enabled two-factor authentication with hardware security keys, the number of successful phishing attempts dropped to zero. Microsoft similarly reported that the vast majority of compromised accounts it sees did not use two-factor authentication.
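The TOTP scheme described above is small enough to sketch with nothing but the standard library. This is an illustrative implementation of RFC 6238 (with RFC 4226’s dynamic truncation), not production authentication code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Illustrative RFC 6238 TOTP (SHA-1); real deployments use a vetted library."""
    counter = for_time // step                      # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the server and the authenticator device share the secret, both compute the same rolling code for the current time step, so the code proves possession of the token without the secret ever crossing the wire.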

Data Exposure

Out of the box, the major cloud providers offer a relatively secure default environment. It is when changes are made that issues start to occur. One very common example is the exposure of S3 buckets that can hold extremely sensitive data.

Administrators who don’t fully understand access control can easily expose the bucket to the world. It is an extremely common issue. Most companies worry about a “classic” security breach more than inadvertent data leaks. In doing so, they miss one of the biggest issues that cloud computing suffers from today, i.e. automated scanning for improperly secured data silos that can then be sold or exploited, leaving the company to pick up the pieces.
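As an illustration of how this kind of exposure can be checked for, the sketch below inspects an S3-style bucket policy document for statements that grant access to everyone. The field names follow the AWS policy JSON format; a real audit would also need to examine ACLs and Block Public Access settings:

```python
def find_public_statements(policy: dict) -> list:
    """Flag bucket policy statements that allow access to any principal ("*")."""
    public = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if stmt.get("Effect") == "Allow" and wildcard:
            public.append(stmt.get("Sid", "<unnamed>"))
    return public
```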

Data ransom is also becoming very popular, especially where the data in question is commercially or otherwise sensitive.
Another inadvertent data loss issue: confidential (proprietary) data being stored in publicly accessible repositories. While the data (source code, proof of concept, etc.) may have value, the private keys are sometimes accidentally left in public view. With those keys, it becomes trivial to consume cloud-based resources on someone else’s tab.

The answer to this is relatively simple: have a non-public repository with access and roles restricted to just those who need them. Programmers should be educated not to include private keys in the source code! A lot of these repositories are set up by developers as a stop-gap for their code and then become permanent. It is considered best practice to keep key stores separate from the code so that such scenarios don’t happen.
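One practical safeguard is a simple pre-commit scan for credential-shaped strings. The sketch below is illustrative only; the two patterns shown are a tiny subset of what dedicated secret scanners cover:

```python
import re

# Two illustrative patterns; dedicated scanning tools ship with far more.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str):
    """Return (pattern name, matched text) pairs for credential-shaped strings."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wiring a check like this into a pre-commit hook catches keys before they ever reach the repository, which is far cheaper than rotating them afterwards.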

Hacking

Hacking is what most companies fear. Placing everything on the public cloud does increase the risk of hacking, especially for high-profile companies. One only needs to look at Travelex as an example of what can happen when an environment is not secure. The company was held for ransom by hackers in January 2020, and recovering from the malware, which was delivered via remote desktop vulnerabilities, took several weeks.

The attacks can take many forms and the attacker could be an ex-employee or even a nation-state if the stakes are high enough.

Step Three: Determine Where to Start Actioning Security

Security as a whole can be broken down into three distinct zones. These are:
Confidentiality: Ensuring that only the appropriate people can see information
Integrity: Ensuring that the information is correct
Availability: Ensuring the information is available when needed

These three zones are also known as the CIA security triad. All security issues that a company faces will somehow be related to one of these three facets.

Now that we have familiarized ourselves with the potential issues and the problems they can bring, let’s focus on how to protect against these issues.

With cloud environments, both the risks and how those risks are shared have changed. Cloud-based stacks are a shared responsibility: the cloud provider ensures the hypervisors are patched against risks like the recent Intel CPU attacks, the customer ensures the services they build are secure and, perhaps more importantly, the programmers are responsible for ensuring their code is as secure as possible. One example of the risk programmers bring: if they pull in plugins and add-ons at build or deployment time, they are assuming those components haven’t had any vulnerabilities disclosed since they were first rolled out.

There are obvious mitigations, including using locally controlled, secured versions of external libraries. While such attacks are currently limited, they are becoming ever more popular.

It would also be remiss to not mention the slight differences between infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). These offer varying levels of control: IaaS means the business is responsible for everything above the hypervisor (i.e. operating system, application, network, and firewall). PaaS, on the other hand, is managed by the cloud vendor and many clients consume that service. It can help in several ways, such as with cost, the lack of hands-on interaction required by the customer, no patching to manage and a highly secure environment designed for multi-customer use.

Purchasing a SaaS environment takes this one step further and the whole stack is both the responsibility of and managed by the cloud vendor or other third parties (such as a SaaS vendor that uses a cloud provider to host their application that they provide to end-users and businesses).

One note of caution here: Sometimes contracts are written that exclude the use of such shared environments. Check your contractual requirements before committing to any specific configuration. This step should not be overlooked in the rush to the cloud.

The thirty-thousand-foot view is that no one system or technology in isolation will prevent a potential attack. A well-designed system will have multiple overlapping layers of defense against such a problem. This will help any enterprise avoid being in a “hard on the outside, soft on the inside” category. In that configuration, it only takes one exploit to effectively crack open the infrastructure.

Plus: Remote Access VPNs Have Ransomware on Their Hands

When remote access VPNs were first introduced 30 years ago, they were pretty awesome. Remote access from anywhere was a concept that was forward-thinking and game-changing. But VPNs were created during a time when most apps were running in the data center, which could easily be secured with a bunch of network security appliances.

However, the world has changed as internal apps have moved to the cloud. You have to deliver a great experience, which is what users expect, with the knowledge that 98 percent of security attacks stem from the internet. And this is more critical than ever with a sudden influx of users working from home.

Remote access VPNs require servers to be exposed to the internet and users to be placed onto the corporate network through static tunnels that drive holes through firewalls. Now the very same technology built to protect businesses has left them vulnerable to modern malware and ransomware attacks. So how exactly does this happen?

Footprint of a Malware Attack

An article published on Medium described how the Sodinokibi ransomware gets introduced via a VPN. Let’s take a high-level look at the typical process for how malware is introduced to a network through a VPN vulnerability:

  1. Cybercriminals scan the internet for unpatched remote access VPN servers.
  2. Remote access to the network is achieved (without valid usernames or passwords).
  3. Attackers view logs and cached passwords in plain text.
  4. Domain admin access is gained.
  5. Lateral movement takes place across the entire network.
  6. Multifactor authentication (MFA) and endpoint security are disabled.
  7. Ransomware (ex. Sodinokibi) is pushed to network systems.
  8. The company is held up for ransom.

Negative Impacts of VPN

Many organizations still feel that remote-access VPNs are necessary. And, in some cases, they may very well be. But, more often, VPNs are opening the network to the internet and, as a result, the business to increased risk.

Patching is often slow or forgotten: Remembering, and even finding time, to patch VPN servers is plain difficult. Teams are asked to do more with less, often creating a human challenge that leads to security vulnerabilities.

Placing users on the network: This is perhaps the genesis of all the issues related to remote-access VPNs. For VPNs to work, networks must be discoverable. This exposure opens the organization to attack.

Lateral risk at exponential scale: Once on the network, malware can spread laterally, despite efforts to perform network segmentation (which is a complex process in itself). As mentioned above, this can also lead to the takedown of other security technologies, such as MFA and endpoint security.

The business’s reputation: Your customers trust that you will protect their information and provide the best level of service to them. To do this, businesses must be able to protect themselves. News of a ransomware attack has a detrimental impact on your brand reputation.

Making The Case For a New Approach

The negative impacts of VPN have led to a search for an alternative solution. Gartner says that this buzz has created a world where, “By 2023, 60% of enterprises will phase out most of their remote access virtual private networks (VPNs) in favor of zero-trust network access (ZTNA).”
If you are considering alternative methods, such as ZTNA, keep these points in mind when positioning it to your executives:

Minimize business risk: ZTNA allows for access to specific business applications (based on policy) without the need for network access. Also, there is no infrastructure ever exposed, so ZTNA removes the visibility of apps and services on the internet.

Reduce costs: ZTNA can often be fully cloud-delivered as a service, which means there are no servers to purchase, patch or manage. This is not limited to just the VPN server. The entire VPN inbound gateway can now be smaller or fully removed (external firewall, DDoS, VPN, internal firewall, load balancer, etc.).

Deliver a better user experience: Given the increased availability of cloud ZTNA services when compared to limited VPN inbound appliance gateways, remote users are provided with a faster and more seamless access experience regardless of application, device or location.
Moving from VPN to ZTNA can also help with the new work-from-home initiatives that organizations around the world are rolling out. Enabling your entire workforce to work from home would strain any VPN infrastructure, and expanding it would be costly and time-consuming. Employing a ZTNA solution allows your at-home workforce to securely access all of the apps they need without the hassles and limitations of VPNs.

In the meantime, don’t forget to patch your VPN servers!

Step Four: Build an Extensible Infrastructure for Your Organization

In addition to a multi-layered defense, IT pros must also be sure they’re enforcing consistency in development and implementation approaches, the tools used to access the cloud and the data it’s hosting, and the end-user deployment. Here are some of the best practices to build an extensible, strong base framework.

Find Someone to Take Responsibility

It may sound obvious, but there needs to be someone with the ultimate say in what happens at the strategic level – hence the rise of the chief information security officer (CISO) and chief security officer (CSO), who must understand not only the technical aspects of cybersecurity but also the strategic vision of the company. This person should be able to guide the company through the myriad of security threats, from both offensive and defensive perspectives. Smaller companies that cannot justify a C-level position should identify an individual to oversee cybersecurity.

Log and Audit Everything

Often, logs provide the first inkling of indicators of compromise (IOCs). Without trustworthy logs, it can be extremely difficult to understand the what, when and how of an incident. Auditing a large environment may sound difficult to manage, but there is a myriad of tools that can help separate the important items that need attention from the bulk of the data.

Logging and auditing are a requirement for certification and for access to some vendor programs, such as the credit card companies’ Payment Card Industry (PCI) compliance. More advanced logging and detection enable a whole new raft of tools known as Security Information and Event Management (SIEM) tools. SIEM tools are designed to detect anomalies in the infrastructure more accurately than any human could, along with discrete trends that would otherwise be missed. There are many vendors to choose from, each offering a different take on logging and reporting.
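As a toy illustration of the kind of rule a SIEM evaluates over log data, the sketch below flags source IPs with a burst of failed logins. The event field names (`src_ip`, `action`, `result`) are assumptions for the example:

```python
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Return source IPs with at least `threshold` failed login events."""
    failures = Counter(
        e["src_ip"] for e in events
        if e.get("action") == "login" and e.get("result") == "fail")
    return sorted(ip for ip, count in failures.items() if count >= threshold)
```

A production SIEM applies hundreds of correlation rules like this one, across time windows and event sources, and forwards the hits to analysts as alerts.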

It is also important to note that these logs can be used in the prosecution of rogue staff or other individuals, so the whole set of logs must be verifiably correct and properly audited for access.

Create Security Groups and Use Role-Based Access Control

A rudimentary mistake some administrators make is to directly assign control of objects like virtual machines, configurations and load balancers. This is just bad, full stop. Although it may work, it makes managing the entire enterprise difficult and reduces the ability to monitor and enforce.

Direct assignment of rights introduces complexity and as people start and leave the company, it can be burdensome. Create security groups and use role-based access control (RBAC) to assign rights.
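The principle can be sketched in a few lines: rights attach to roles and users attach to roles, so joiners and leavers only ever require group membership changes, never per-object permission edits. The role and permission names here are illustrative:

```python
# Rights attach to roles; users attach to roles.
ROLE_PERMISSIONS = {
    "db-admin": {"db:read", "db:write", "db:backup"},
    "auditor": {"db:read", "logs:read"},
}
USER_ROLES = {
    "alice": {"db-admin"},
    "bob": {"auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

When bob leaves, removing him from `USER_ROLES` revokes everything at once; no virtual machine, configuration or load balancer needs to be touched.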

Use Encryption in the Cloud

Encryption protects data from being intercepted or read by those that shouldn’t have access. Encryption in transit (i.e. utilizing the application with the data moving between the networks) is a mature offering. Data at rest, however, is equally important. If data is encrypted at rest it means that should the data be stolen, it has little use to the thief as it is encrypted. (This is assuming that the encryption keys are kept secure.)

Within cloud environments, key management is something that most tier-one cloud vendors provide as a service. It is professionally managed by specialists who fully understand the potential risks involved, rather than being implemented by hard-pressed in-house administrators who may not fully understand the requirements and implementation of cryptography.

Most vendors provide what is called a KEK, or Key Encryption Key, which gives the company a security mechanism while allowing the service provider to deliver encryption services.
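The KEK pattern is a form of envelope encryption: each object is encrypted with its own data encryption key (DEK), and the DEK is then wrapped with the KEK held inside the key management service. The sketch below shows the flow; the XOR keystream cipher is a deliberately toy stand-in so the example runs on the standard library alone (real systems use AES-GCM via the provider’s KMS):

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR keystream stand-in. NOT real cryptography; for illustration only."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(kek: bytes, plaintext: bytes):
    dek = os.urandom(32)                     # fresh data encryption key per object
    ciphertext = toy_cipher(dek, plaintext)  # data is encrypted with the DEK
    wrapped_dek = toy_cipher(kek, dek)       # the DEK is wrapped with the KEK
    return wrapped_dek, ciphertext           # both are stored; the KEK stays in the KMS

def envelope_decrypt(kek: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = toy_cipher(kek, wrapped_dek)       # unwrap the DEK first
    return toy_cipher(dek, ciphertext)
```

The benefit is that rotating or revoking the KEK re-secures every object without re-encrypting the underlying data, and the KEK itself never leaves the key management service.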

Implement Firewalls and Isolation of Purposes

Firewalls can help restrict access between different parts of an enterprise infrastructure, like making sure that the only systems that can contact an enterprise’s cloud assets are the authorized servers.

Such configurations help minimize exposure and slow down the “land and laterally expand” phase of a system exploitation because access is restricted to known ports for specific purposes. A very trivial example of this is isolating (or rather, grouping) identical services together.

Using firewalls and restricting access helps reduce the ability to exploit systems (poor configuration and patching, for example). It may not stop the hacker completely, but it helps reduce the initial access any hacker has.

As the number of servers and infrastructure grows, centralized firewall management becomes essential. While hosts come with firewalls, managing each one individually is not only time-consuming but has the potential to be misconfigured on an individual basis.

It should go without saying that there should be a logical separation of server types with only the minimal amount of access required for normal operation.

Require Virtual Private Networks (VPNs) for Cloud Access

Depending on the company’s configuration, cloud-to-cloud or client-to-cloud VPNs may be required. An appropriate example: if a company is consuming a SaaS application, it would want to restrict access to the platform. A site-to-site VPN helps ensure the privacy of the traffic, and access from outside the VPN would not be possible if everything is configured correctly.

Use Network Intrusion Detection and Prevention

Network intrusion detection is a passive technology that logs and reports findings via appropriate avenues – a SIEM system, for example. On the other hand, a network Intrusion Prevention system can autonomously react to unexpected changes in the traffic or other signatures. There is a certain amount of advanced monitoring technology built into these devices that can track and learn good and bad traffic flows or suspicious activity. It can then act to prevent access and send alerts as appropriate.

Limit Administrative Access

The infrastructure as a whole needs to be maintained. Just as with on-site infrastructure, IT managers need to make sure that only those that need access are granted it, as well as make sure the privileges granted are enough to perform the required duties. For example, there is no need for a database administrator to be logging into the server build portal to requisition several additional servers. Similarly, managers should heavily scrutinize service accounts, audit them for usage and change passwords often.

Account lifecycles are important. Old, forgotten accounts can be a serious issue. The company NordVPN recently experienced such a failure as they had not removed a testing account from one of the servers.

Don’t Leave Your Privacy Showing

When an attacker tries to exploit a system following the “cyber kill chain” (a professional hacker, rather than the infamous “script kiddies” that take any low hanging fruit), the first thing they will do is to perform reconnaissance on the company. There is a multi-billion-dollar market in the tools that can help with this, including specialized search engines such as “Shodan” that can be used to gather intelligence without even going near the targeted company. Many other tools of a similar nature can gather similar information.

While security by obscurity is not a valid posture, taking steps to reduce the amount of information leaked can help make an attacker’s job that much more time-consuming.

Some examples of this include:

  1. Ensure domain zone transfer is disabled. This hinders people from being able to map out an enterprise’s tech environment by understanding the DNS layout. It doesn’t make it impossible to do, but why make it easy for the attacker?
  2. Don’t divulge domain registration details. Use a professional company that allows proxy registration. That way it makes it more difficult to understand the particulars of ownership.
  3. Turn off or turn down logs and systems that can give away more information than they should. An example of this is that, by default, Apache will return its version details to the browser when reporting a page not found. This can then be cross-referenced to find vulnerabilities in that specific Apache version.
  4. Contrary to the spirit of an ever-connected world, it makes sense for administrators to avoid giving away too much information. An example: the NSA was targeting a certain database that belonged to a company of interest outside of US jurisdiction. It was able to penetrate the system because it could use social media – cross-referenced with popular IT help sites – to ascertain, with a high degree of certainty, the details of the software the company was running.
  5. In a similar vein, a company’s IT is there for company use. Cutting user access to externally hosted cloud services dramatically reduces the number of phishing attempts as well as unauthorized data exfiltration. Most internal data thefts are quite unsophisticated and center on the use of personal email or services such as Box and Dropbox.
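For the Apache example in item 3 above, the fix is a pair of configuration directives. A minimal hardening fragment (shown here for httpd.conf) stops the server from advertising its exact version:

```apache
# Send only "Apache" in the Server header, with no version or OS details
ServerTokens Prod
# Don't append server version information to error pages and generated listings
ServerSignature Off
```

Equivalent settings exist for most web servers and proxies; the goal in each case is the same, denying the attacker an easy cross-reference against known vulnerabilities.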

Keep Duties Separate

New administrators often fail to understand how important it is to keep duties separate. Administrator accounts should only be used for managing the infrastructure. An account with too many rights is a security risk.

Be Careful Exposing Data Buckets

While most companies fear a hack or ransomware attack, by far the riskiest proposition is data loss due to incorrectly secured data buckets.

Change Management

Often, uncontrolled changes are the root of security issues. Managing changes means they are logged and documented, recording both what will happen and how to back the changes out. Changes should be made in a consistent and well-documented way; without this tracking, systems can deviate from accepted security norms.

Be Prepared for Incidents

One of the least-actioned issues is being prepared for an inevitable security challenge. Having a well-designed and tested incident response plan can make a huge difference. Good security response plans will have well-defined actions, key personnel details and a template to follow, should the worst occur.

Incidents are not just security-related. They can cover many varied threats including, for example, having the infrastructure in flooding or earthquake zones. While this may not be as much of a concern, it can still happen.

Scope Out the Cloud Provider’s Disaster Recovery Services

A cloud service provider’s disaster recovery (DR) offerings should be well-defined and frequently tested. DR should only cover key systems that need to be available. It is up to the business to define those key services. It is also extremely important to test the DR plan because often there can be dependencies that aren’t documented, even in the best environments.

As part of being prepared, most cloud vendors will offer DR and failover to other data centers as part of this offering. While it may be costly, it is a serious requirement for any business. Don’t conflate DR with cloud-based system backups either. Each has its place and it should not be a one-or-the-other choice.

A lot of companies neglect to look at this type of scenario and then when a disaster scenario inevitably hits, the company itself has to scramble to address the disaster at hand while their cloud provider says, “We never said that was covered.”

Fortunately, most cloud vendors as well as other vendors offer DR between cloud environments and come with the functionality to build and test the failover steps required.

Engage a Third Party to See the Things You Can’t

Although most administrators feel they can adequately cope with security, a second set of eyes is always considered extremely good practice. Engaging a well-regarded third-party auditing and security company can help highlight potential issues that the administrator may not see.

Sometimes these external verifications are required as part of contractual obligations.

Summary

Moving to the cloud comes with its own unique set of security issues. Those issues can be addressed by careful use of complementary tools beyond those the vendors provide; these tools should be focused on helping any IT pro create a multi-layered security apparatus that reduces exposure to cloud-based security risks. The way to think about security at a high level is that most cloud vendors provide a general level of security tools, but advanced security is not their core business.

The technologies are only one part of the solution. When paired with a good, well-documented and well-implemented security plan, the business will be in a good place to be protected against the myriad of potential threats, and no single failure will cause a business-impacting loss of reputation or finances.

It should also be noted that if a company can prove it has a proactive and well-defined security plan, technology and processes, it can reduce the inconvenience, the potentially large fines and the embarrassment that a security incident causes.

Source: Zscaler