A practical introduction to hybrid cloud security
Mikhail Krechetov
cloud infrastructure cybersecurity expert

The Russian corporate sector has entered a cloudy phase. Even banks and government agencies are no longer afraid of clouds and are looking for the most convenient and profitable ways to use them. Mikhail Krechetov, an expert on cloud infrastructure cybersecurity at STEP LOGIC, spoke with TAdviser about how the concepts of corporate information security are changing, and which security methods and tools are coming to the fore.

On the main trends in the evolution of corporate IT infrastructures

Corporate IT infrastructures are evolving to give digitalization processes a reliable, flexible, scalable, high-performance IT foundation. An important sign of the times is that companies are no longer afraid of clouds. How does this affect corporate IT infrastructure?

MIKHAIL KRECHETOV: Yes, the traditional IT infrastructure is changing. Data processing centers (DPCs) are evolving from the traditional model to the hybrid cloud model of the future. Moreover, this process did not start today—the "as a Service" suffix entered circulation in the mid-2000s, and the pandemic accelerated the universal transition to the cloud.

The hybrid cloud today is a way to solve problems primarily related to the lack of in-house capacity. In addition, introducing a hybrid cloud can significantly reduce costs, because an in-house DPC is always more expensive. But most importantly, moving to the cloud supports the principle of global accessibility. A classic DPC depends on geography: the location of redundant DPCs and the communication channels between them. However, "geographic" disaster tolerance, even within the same region, does not fully guarantee that an IT service will keep serving its business purpose without interruption. That is why the main driver for moving to the cloud is global availability and business process backup. The only obstacle so far is the prejudice that there are no effective mechanisms to protect the new technologies.

Isn't that true?

MIKHAIL KRECHETOV: That's not true. The very evolution of classic DPCs has gradually pushed the implementation of the "Infrastructure as a Service" concept. In fact, the physical DPC has long been a private cloud. When moving to the hybrid cloud, customers who have managed to implement such a concept are unlikely to change their approach to understanding their own infrastructure. In a way, it is easier for them now, because they have long understood and accepted that a fully software-defined infrastructure requires different approaches to security. It is really hard for those who still think of a DPC as some kind of "mastodon" made of hardware and cables that must be controlled, going lower and lower on the scale of process detail without any attempt at abstraction.

On the Zero Trust concept

How does this affect the information security tasks that, as always, all companies must address?

MIKHAIL KRECHETOV: By and large, the concept of security has not changed much, so there has been no abandonment of traditional approaches in protecting cloud environments. All the basic information security mechanisms are the same for physical and cloud infrastructures alike: perimeter firewalls, protection against external intrusions, information security event analytics, endpoint protection.
But there are new risks, the most important of which is global accessibility. We don't know which device is currently connected to our network. If, say, it was plugged in 30 minutes ago, anything could have happened during that half hour in an uncontrolled space. That is why cloud security today is impossible without implementing the Zero Trust principle: trust no one, let outsiders in only after prior verification, and grant access only to those resources that are explicitly specified. By the way, last August the US National Institute of Standards and Technology (NIST) released a regulatory document (SP 800-207, Zero Trust Architecture) that describes the sequence of steps for implementing Zero Trust. Unfortunately, automated solutions that implement the "zero-trust" model are still quite expensive, and this hinders adoption of the concept at the global level.
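
To make the principle tangible, here is a minimal Python sketch of a zero-trust policy decision point. It is purely illustrative—the request fields, user names, and resource names are hypothetical, and a real implementation along the lines of NIST SP 800-207 would evaluate far richer context:

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_verified: bool   # device posture checked by an endpoint agent
    mfa_passed: bool        # enhanced (multi-factor) authentication succeeded
    resource: str

# Explicit allowlist: access is granted only to resources that are
# explicitly specified for a given identity ("deny by default").
ALLOWED = {
    "alice": {"crm-frontend", "reporting-db"},
    "svc-backup": {"backup-storage"},
}

def is_allowed(req: AccessRequest) -> bool:
    """Zero Trust: every request is verified, with no implicit trust."""
    if not (req.device_verified and req.mfa_passed):
        return False  # trust no one without prior verification
    return req.resource in ALLOWED.get(req.user, set())

print(is_allowed(AccessRequest("alice", True, True, "crm-frontend")))    # True
print(is_allowed(AccessRequest("alice", True, False, "crm-frontend")))   # False: no MFA
print(is_allowed(AccessRequest("alice", True, True, "backup-storage")))  # False: not listed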

What is the essence of these solutions?

MIKHAIL KRECHETOV: In general, there are three main approaches to implementing the Zero Trust model. The first approach is through microsegmentation. In fact, STEP LOGIC has been saying this for the last 5–6 years: separate the sheep from the goats, that is, control east-west traffic flows (horizontal communications) within your infrastructure. Then you can, in effect, create a separate firewall for each application.
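
To make this concrete, here is a hedged sketch of what microsegmentation boils down to: an explicit per-application allowlist of east-west flows, with everything else denied. The application names and flow tuples are invented for illustration:

# East-west (horizontal) flows allowed inside the infrastructure.
# Anything not listed is denied: in effect, a firewall per application.
EAST_WEST_POLICY = {
    ("web-frontend", "app-server", 8443),
    ("app-server", "orders-db", 5432),
}

def flow_permitted(src_app: str, dst_app: str, dst_port: int) -> bool:
    return (src_app, dst_app, dst_port) in EAST_WEST_POLICY

# A lateral-movement attempt from the web tier straight to the database
# is blocked, even though both live on the same data center network.
print(flow_permitted("web-frontend", "app-server", 8443))  # True
print(flow_permitted("web-frontend", "orders-db", 5432))   # False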

The second approach is through global identity: verifying credentials and access rights each time a subject accesses an object.

The third approach is a combined one: implementing Zero Trust with a special tool based on network technologies. The most effective method, however, is a combination of all these approaches—for example, using the Cisco Secure Workload solution (formerly known as Tetration) together with the Cisco Identity Services Engine (ISE) identity platform with Google's two-factor authentication attached. This is the most elaborate Zero Trust model to date.

In general, the Cisco Secure Workload deployment model is multifaceted. The most secure option is a software sensor embedded in every component of the virtual infrastructure (e.g., a virtual machine). This is called an agent-based scheme. Sometimes, however, agents cannot be run on certain components, for reasons that are organizational rather than technical. That is when the so-called agentless scheme comes to the rescue: hardware sensors that analyze traffic by integrating with infrastructure components such as application delivery controllers (ADCs) or even AWS VPC.

Cisco Secure Workload can automatically detect and map relevant policies and applications. In the solution, this process is called application dependency mapping (ADM). It is just one of the steps of building a Zero Trust Architecture (ZTA); beyond that, the platform can be characterized as a practical tool for organizing microsegmentation and the zero-trust model in strict accordance with the requirements described in the standards.
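
Conceptually, application dependency mapping aggregates observed flows into a reviewable dependency map that can then be turned into segmentation policy. The following sketch is not the Cisco Secure Workload API—just a Python illustration of the idea on hypothetical flow records:

from collections import defaultdict

# Observed flow records, e.g. exported by software or hardware sensors:
# (source application, destination application, destination port)
observed_flows = [
    ("web-frontend", "app-server", 8443),
    ("web-frontend", "app-server", 8443),
    ("app-server", "orders-db", 5432),
    ("app-server", "legacy-dns", 53),  # an undocumented dependency surfaces
]

def build_dependency_map(flows):
    """Aggregate flows into a per-application dependency map,
    which can then be reviewed and turned into segmentation policy."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[src].add((dst, port))
    return dict(deps)

for app, targets in build_dependency_map(observed_flows).items():
    print(app, "->", sorted(targets))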

Indeed, quite an expensive set of technologies...

MIKHAIL KRECHETOV: That's right. But without automated means, it is impossible to understand what is going on not only inside cloud infrastructures but even inside classic DPCs. If you dive into this abyss of data manually, you will never implement Zero Trust. You can spend a few years and then still find an undocumented object—to be honest, every customer has plenty of them. In our experience, the main reason for the slow adoption of adequate IS tools is the lack of understanding of the integrity and connectivity of traffic flows.

One must apply specialized tools that can separate the flow from the process. This means controlling not only the network availability between two applications but also the user: whether they have sufficient rights for this network communication. Without reference to these factors, we cannot provide reliable protection.

On global availability provided by the cloud and allocation of responsibility

Global accessibility, which is the "trump card" of the cloud—is it also the most significant risk?

MIKHAIL KRECHETOV: Yes, it's a very serious aspect. This can be judged by the regulations adopted in the West, where data availability issues are monitored much more closely than in Russia. Recently, for example, the requirements for identifying the subject of access have been tightened considerably. Why is that?

The explanation is simple: attackers most often rely on a technique involving stolen credentials. Moreover, the more privileged the account, the better for the attacker's purposes. In most cases, it is not even user credentials that are used but rather service accounts—forgotten ones whose passwords have not been changed in years. According to IS analysts, this was the most popular attack vector in 2020–2021.
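
A minimal sketch of the hygiene check this implies—flagging service accounts whose passwords have not been rotated for years. The account inventory and the one-year threshold below are illustrative assumptions:

from datetime import date

MAX_PASSWORD_AGE_DAYS = 365  # illustrative rotation policy

# Hypothetical account inventory: (name, is_service_account, last password change)
accounts = [
    ("alice", False, date(2021, 5, 10)),
    ("svc-backup", True, date(2016, 3, 1)),   # forgotten for years
    ("svc-report", True, date(2021, 1, 20)),
]

today = date(2021, 9, 1)
for name, is_service, changed in accounts:
    age = (today - changed).days
    if is_service and age > MAX_PASSWORD_AGE_DAYS:
        # Exactly the kind of account attackers hunt for.
        print(f"ALERT: service account {name!r} password unchanged for {age} days")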

And given that employees of various companies have gone remote and now work from different locations, it is extremely difficult to discern what is really going on: is this a legitimate user, has a device with these credentials already been compromised, or is an intruder hiding behind someone else's account conducting reconnaissance or destructive activity on the IT infrastructure? Without special tools, the administrator cannot tell anything for sure, because there is no understanding of what this particular user is doing at this point in time.

There is another specific aspect of cloud structures—responsibility for access to data and services is shared between two parties. That affects the IS features of hybrid environments, doesn't it?

MIKHAIL KRECHETOV: Yes, absolutely. Moreover, I think the delineation of responsibility in the cloud is the real scourge of security today. It is a very difficult and confusing process. Why? In your own DPC, the maintenance service, IS, cables, turnstiles, and so on form a controlled area that can be described in detail. But when we start moving parts of services to the cloud, a new layer appears—the cloud provider, which is responsible for the availability of the service. There doesn't seem to be a problem: in the classic IaaS paradigm, the provider's responsibility does not rise above the hardware and virtualization platform, and in the platform (PaaS) paradigm, above the application level. That is, in terms of infrastructure, all levels are defined. But for information security, everything is much more complicated.

Only those who can control all the processes and bring them together can boast a high level of security. In cloud infrastructures, that is naturally impossible: after all, we have no guarantee that the outside provider leasing the resource has no vulnerabilities at the lower level.

How can this be addressed?

MIKHAIL KRECHETOV: In my opinion, the help of a Managed Security Service Provider (MSSP) is needed here. This is a third-party organization that, on the one hand, cooperates with the provider of computing resources and, on the other, stays in contact with the customer and can help them build the right security model for their specific circumstances. This greatly simplifies the migration of services to the cloud and the company's future life in a hybrid environment. I would say the MSSP acts as a kind of long-term integrator providing security services.

How can a company properly "calculate" what combination of its own IS solutions and external services will be most effective?

MIKHAIL KRECHETOV: First of all, it depends on the size and, oddly enough, the maturity of the customer. Ideally, the customer should independently, perhaps with the help of external auditors, define its information security policy and clearly describe the security objectives of the cloud migration. If they can formulate these independently, they will be able to implement the appropriate tools.

If there are problems at this first stage, it is certainly better to immediately engage an external security provider who can properly formulate policies, goals, and objectives, and convey to the customer all the possible risks and the methods of mitigating them. After all, today not all large companies, even those with well-developed IS departments, are well versed in the security issues of cloud environments.

On the blurred perimeter becoming visible

"Poor visibility" of corporate resources and systems, in terms of processes taking place in the cloud, a blurred perimeter of the protected landscape—can this be fixed?

MIKHAIL KRECHETOV: DPCs are switching to software-defined perimeters en masse. Whoever has made that transition will be able to move to the cloud, figuratively speaking, in two minutes: the service simply migrates to the new platform and lives there. Naturally, perimeter visibility suffers from this. But with the implementation of Zero Trust and identity management, we can clearly define all the flows, external and internal. Then the visibility of the hybrid environment will be as good as or even better than in a classic DPC, where it was impossible to go deeper in detail than the "north-south" line (Enterprise DC—Cloud DC).

What does it look like for a remote user?

MIKHAIL KRECHETOV: Suppose we have a remote user who connects to a corporate resource. Previously, it was unknown what processes they were triggering within the infrastructure. Now Zero Trust tracks all the details: the user's geographical movement (they could have gone on vacation to a tropical island), the connection to a particular resource, the launch of a particular application. A detailed security matrix lets you determine whether this interaction is allowed, or whether they should not be running this application at all. And why are they on a tropical island, anyway? We never used to think about this, but today's security mechanisms provide that information.
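
To illustrate, a minimal Python sketch of such a contextual check. The Session fields, country codes, and step-up response are hypothetical; a real Zero Trust platform correlates far more signals (device posture, time of day, behavioral baselines):

from dataclasses import dataclass

@dataclass
class Session:
    user: str
    country: str          # where the user is connecting from right now
    usual_countries: set  # locations seen for this user historically
    application: str
    allowed_apps: set     # per-user application matrix

def evaluate(session: Session) -> str:
    """Combine identity, location, and application context per request."""
    if session.application not in session.allowed_apps:
        return "deny: application not permitted for this user"
    if session.country not in session.usual_countries:
        # Not necessarily malicious (vacation?), but worth stepping up.
        return "step-up: unusual location, require re-authentication"
    return "allow"

s = Session("alice", "MV", {"RU"}, "crm-frontend", {"crm-frontend"})
print(evaluate(s))  # step-up: unusual location, require re-authentication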

It turns out that if you don't pay attention to security issues in the cloud, the visibility of the perimeter is reduced to almost zero. But if we build it up correctly, we end up with information about all the interactions.

On the safe use of the cloud for service development

Nowadays, many companies use the cloud as a platform for deploying services. What role do development security issues play in the overall picture of IS in hybrid environments?

MIKHAIL KRECHETOV: The security of the continuous development and deployment process, denoted by the acronym CI/CD (Continuous Integration, Continuous Delivery), is one of the cornerstones of hybrid cloud IS. Both developers and DevOps engineers can release into production a service that has not been sufficiently vetted from a security point of view.

Are we back to the eternal argument between developers and security specialists? One needs speed, the other a thorough check?

MIKHAIL KRECHETOV: Yes, it's not easy to find the right balance. That is why it is important to focus on the security of the development process itself. This saves a lot of the money that would otherwise be spent on overlay security tools.

If, at the stage of service development and deployment, we take into account all the risks and mitigate the possible threats, then we can limit overlay IS tools to a certain circuit only—say, a critical front-end application. But the back-end security issues must be fully resolved.

I believe that, in terms of development security, the future lies in a shift of the security community toward application development, code quality control, and the early stages of development. It is no coincidence that over the past couple of years the IT industry's attention has focused on analyzing application source code for security: the industry has seen the costs of overlay security in the cloud skyrocket.

And going back to the first question, we can now name four aspects that reflect the technological novelty of clouds compared to traditional IT infrastructures: delineation of responsibility, microsegmentation, global identity, and protection of DevOps and CI/CD.

On the security of containerized environments

The method of implementing IT services in containerized environments is very popular today. What new threats and risks does this undoubtedly convenient mechanism pose?

MIKHAIL KRECHETOV: This is probably the only IS segment where "blind spots" still remain. Containers are hosted on the same server, under the same operating system, and share the same network interface. How do you distinguish one container from another? There is no universal solution yet. We are learning to use the existing IS solutions, such as the already mentioned Cisco Secure Workload platform. The goal is to use horizontal visibility control to limit the possibility of threats spreading within the container environment itself.
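
One way to think about the problem: on a shared host, the source IP alone no longer identifies a workload, so visibility tools must attribute each flow to a container identity. A hypothetical Python sketch (the container ids, labels, and namespaces are invented):

# On one host, all containers may share the node's IP address, so flows
# must be attributed to container identities rather than to the host.
# Hypothetical inventory: container id -> (application label, namespace)
containers = {
    "c1a2": ("payments", "prod"),
    "d3b4": ("payments", "test"),
    "e5c6": ("reporting", "prod"),
}

def attribute_flow(container_id: str, dst_app: str) -> str:
    app, ns = containers.get(container_id, ("unknown", "unknown"))
    # The segmentation decision is made on workload identity, not the shared IP.
    if ns != "prod":
        return f"deny: {app}/{ns} may not reach prod service {dst_app}"
    return f"allow: {app}/{ns} -> {dst_app}"

print(attribute_flow("c1a2", "orders-db"))  # allow: prod workload
print(attribute_flow("d3b4", "orders-db"))  # deny: test workload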

The second way to minimize risk is through developers. This was discussed earlier. Security technologies have begun to explore territory that was previously the sole domain of DevOps: they are learning how to properly confine the container's space, and so on. The stage of accumulating practical experience is now underway, and in the foreseeable future it should result in a dedicated concept of container security. That is when we will be able to say with confidence that the difference between classic and cloud infrastructure has completely disappeared—everything has become a cloud.

On the options for a practical approach to Zero Trust

A full Zero Trust implementation is quite expensive. Are there more economical approaches to solving problems of this class?

MIKHAIL KRECHETOV: If customers already have control between applications at the lower level, it is easier for them to implement the Zero Trust concept. For the rest, microsegmentation technologies are the easiest and cheapest way to separate applications within the infrastructure itself. A great approach is to combine microsegmentation with a robust vulnerability management platform.

And if microsegmentation is only being planned, so that we essentially have a "black box"—an application together with its infrastructure—then you can use external tools to scan it and identify potential gaps. Admittedly, although such platforms are not as expensive as a full Zero Trust solution, there is a big "but": the eternal struggle for performance, over which the IT department may oppose various scanning processes. This thread of misunderstanding goes back to the early 2000s. So I would advise applying both microsegmentation and vulnerability management at the same time. In that case, we don't just run a scan, which essentially gives us a bare listing of vulnerabilities; we get a platform that has scanned for vulnerabilities, compiled a summary report, and stored it in a database, so that the next run reflects the dynamics of vulnerabilities and their fixes. In other words, it implements the process of vulnerability management and, crucially, allows IT and IS to work in harmony.
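
As a toy illustration of the difference between a one-off scan and a managed process, the sketch below diffs two consecutive scans to show the dynamics of vulnerabilities and fixes; the hostnames and CVE identifiers are invented:

# Results of two consecutive scans: host -> set of findings.
scan_previous = {
    "app-server": {"CVE-2021-0001", "CVE-2021-0002"},
    "orders-db": {"CVE-2020-0042"},
}
scan_current = {
    "app-server": {"CVE-2021-0002", "CVE-2021-0099"},  # 0001 fixed, 0099 new
    "orders-db": set(),                                 # all fixed
}

def vulnerability_dynamics(prev, curr):
    """Report fixed and newly introduced findings per host."""
    for host in sorted(set(prev) | set(curr)):
        before, after = prev.get(host, set()), curr.get(host, set())
        print(host,
              "| fixed:", sorted(before - after),
              "| new:", sorted(after - before))

vulnerability_dynamics(scan_previous, scan_current)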

You know, I would generally recommend starting the transition to a new IS management system by building vulnerability management processes. That way, we will understand whether it is worth spending money on vulnerability scanning at all. Maybe the team has super-developers and DevOps engineers who avoid IS errors from the start? In practice this does not happen, but you will immediately see the problem areas that can be reliably covered without overlay products.

In general, three basic blocks are needed to create a secure hybrid environment that is IT-friendly: secure code development, secure DevOps processes, and classic cloud security.

On clouds and regulatory requirements

Are the new hybrid structures reflected in the IS regulator's documents?

MIKHAIL KRECHETOV: Unfortunately, there is no single document yet that regulates the use of security features in hybrid infrastructures. By the way, there isn't one in the West, either. The only regulator requirement we must strictly comply with is the storage of all data within the Russian Federation. But this is where the activity of the cloud provider itself matters. For example, Yandex.Cloud, seeking to conquer the market, obtained the relevant FSTEC (Federal Service for Technical and Export Control) attestations in the spring, confirming that its infrastructure can host both government information systems (GIS) and components of systems that process citizens' personal data. Rostelecom is headed in this direction as well.

In general, in the absence of specific regulatory requirements for the hybrid cloud, one has to rely on the requirements formulated for GIS, critical information infrastructure (CII), and personal data regulation, as well as on FSTEC orders with their new threat model. All this is basically enough for a take-off, but more is needed. For example, the requirements for the strength of enhanced authentication are still not clearly defined.

The domestic regulatory framework has one peculiarity: it tells us WHAT to do, but almost never HOW to do it. This is why our documents—for example, the order on the protection of CII—can be applied to absolutely any environment, whether a hybrid or a classic DPC, and that opens up a lot of room for creativity. At the same time, the question is always there: are we doing it right?

In my opinion, the domestic regulatory framework is slowly moving toward the American approach to regulatory documentation: in each case, it clearly spells out what to do, how, and why. We do not yet have the "why" aspect at all, but the "how" section is already beginning to take shape in the new documents. Perhaps someday we will see a cloud security matrix established in domestic regulatory documents.

On implemented IS projects in hybrid environments and lessons learned

At present, there is an active accumulation of experience in securing hybrid environments. What challenges are STEP LOGIC customers facing in this regard?

MIKHAIL KRECHETOV: The most common problem is customers' lack of understanding of their network flows. Here we help to advance the concept of microsegmentation and to separate the flows from each other. Another popular request is preparation for moving to the cloud. The first stage is usually the implementation of global access and vulnerability management systems, followed by protection of the internal environment—microsegmentation.

What companies or industries are most likely to undertake such projects?

MIKHAIL KRECHETOV: First of all, it is the financial sector. It is critical for banks to get a fail-safe, highly available infrastructure. More often than not, they build a kind of symbiosis: private clouds on someone else's computing power. There is also tangible demand for cloud computing in the aircraft industry.

In fact, for banks, moving to the cloud is not an end in itself. Initially, with a huge number of employees switching to remote work, the task of building a reliable service emerged. The primary idea was microsegmentation; scattering services across clouds began only after the rudiments of cloud structures had been implemented in major DPCs.

In addition, many Russian customers use rented equipment in external DPCs, and many of them are building up and hybridizing their infrastructures this way. At some point they encounter the visibility problem and come to us for a solution.

What advice do you have for companies thinking about hybrid cloud?

MIKHAIL KRECHETOV: First, the customer must analyze their assets, on their own or with the help of outside contractors, and eliminate the shadow component—Shadow IT. This is very important for the subsequent implementation of zero-trust protection.

Shadow IT is a big problem. Whenever we implement microsegmentation on a customer's infrastructure, a service is bound to pop up that is very old, can't be turned off, can't be moved, and nobody knows what to do with it, even though it is obviously a potential security hole.

For example, there was a case where an unregistered DNS service had lived quietly for several years at one of our customers. This unknown system was discovered when we started the microsegmentation process and saw a large volume of DNS requests. Of course, we tried to shut this information flow down... Further investigation showed that it was the main service forwarding DNS requests—one that nobody knew about and for which no records existed. Obviously, it was a shock! That is why the asset table must be re-created: update it and find all the "skeletons in the closet." Unfortunately, they exist in every infrastructure, especially one involved in software development.

The second step is painstaking work to revise the existing risk model, because with the introduction of cloud technology it can change completely. While much attention used to be paid to the risk of service availability disruption, in a cloud design with its principle of global availability the probability of such disruption is close to zero, and privacy risks come to the fore.

The next step is the formation of specific security measures, which depend on the risk assessment conducted earlier. You can build a Secure Access Service Edge (SASE) gateway that provides fast, secure, unified access for all users to all applications, or limit yourself to implementing a zero-trust model—but always with enhanced authentication. After that, you can start migrating to the prepared, secure cloud, either in-house or with the help of a contractor.

Many companies use public clouds to test new services. Will this experience facilitate a quicker transition to a hybrid environment?

MIKHAIL KRECHETOV: Companies are used to experimenting with some kind of test pre-production environment in the cloud. In particular, the "big three" hyperscalers—AWS, Google Cloud, and Microsoft Azure—offer such opportunities. Chinese colleagues have their own cloud platform geared toward the provision of services. Of course, once cloud mechanisms have been tested there, transferring the experience gained to your hybrid infrastructure is easy. However, there is one important aspect of this situation that, unfortunately, our companies often miss.

Our regulatory framework is silent on this, but the NIST guidelines are explicit: minimize the real data you use in someone else's public cloud. Unless you have specifically built security mechanisms there—which is unlikely, since this is only about testing—you should not, in any case, use the full set of confidential information that you have.
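
As a sketch of the data minimization this implies, the following Python fragment masks direct identifiers before records ever reach a test environment; the record fields and masking rules are purely illustrative:

import hashlib

def mask_record(record: dict) -> dict:
    """Replace direct identifiers with irreversible placeholders so a
    test environment never holds the full set of confidential data."""
    masked = dict(record)
    # Stable pseudonym: the same user maps to the same token across test runs.
    masked["name"] = "user-" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["phone"] = "***"  # drop entirely
    masked["email"] = masked["name"] + "@example.invalid"
    return masked

real = {"name": "Ivan Petrov", "phone": "+7 900 000-00-00", "email": "ivan@corp.ru"}
print(mask_record(real))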

It is no coincidence that this requirement is highlighted in bold: recently there has been a noticeable increase in credential theft. And this problem is rooted in development processes: a company's service inadvertently stays open in the cloud—for example, without requiring authentication—and draws a set of user accounts from a real database. But in general, experimenting with clouds, even foreign ones, is useful, because the experience is easily transferable to any Russian cloud provider.

In other words, the transition to a hybrid environment should not be considered a separate IT project?

MIKHAIL KRECHETOV: That's right. IS development is ongoing, and the transition to the cloud has not been a bucket of cold water suddenly thrown over one's head. We approached it systematically, and we keep trying to move forward within a somewhat different paradigm. The perimeter is blurring more and more, and in the future there will simply be no in-house DPCs in their current sense. They will become a place to store key confidential information, while all the data that should be highly accessible will be scattered across the global information environment.

And if every company applies the approaches we talked about today, then there will be no chance for attackers and hacker groups.

Source: TAdviser
