
How cyber security can adapt to the cloud


20 July 2016

Martin Stemplinger, Strategic Security Deals Lead, BT


IT architecture paradigms and approaches change regularly, and security must adapt — rethinking the associated risk as well as processes and policies.

The changing state of IT.

IT is in a constant state of flux, evolving daily, and one of the biggest recent jumps in that evolution is the move to the cloud. This transformation is highlighted by the prevalence of buzzwords such as ‘cloud-native applications’, ‘continuous deployment’, ‘infrastructure as code’ and ‘microservices architecture’.

Here’s just a taste of how the cloud has transformed IT so far:

  • Changes and deployments have increased massively in frequency. In 2014, for example, Amazon deployed 50 million changes — that’s more than one change deployed every second of every day.
  • Applications are no longer monolithic pieces of software running on a few dedicated instances. The microservices approach composes an application from independent pieces that collaborate to deliver its functionality. As a result, the amount of inter-process and inter-machine communication increases dramatically and may change dynamically at any time.
  • As part of cloud deployments, servers are no longer long-lived, well-known systems, but short-lived, anonymous instances that are started and stopped automatically as needed. You may have come across the ‘pets vs. cattle’ meme that describes this fundamental change in thinking about servers. The connection between a particular server or container and an IP address is transient at best: a given instance may change its IP address frequently and, conversely, a given IP address may be attached to many different instances within a short time (see the sketch after this list).
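
To make the IP problem concrete, here is a minimal Python sketch of time-aware IP attribution. The lease records and instance IDs are invented for illustration; the point is that the same IP address resolves to different instances depending on when an event occurred, which is exactly the lookup a security tool needs.

```python
from datetime import datetime

# Hypothetical lease records; in practice this data would come from your
# cloud provider's API or your orchestrator's event stream.
leases = [
    # (ip_address, instance_id, attached_from, detached_at)
    ("10.0.1.17", "i-0a1b2c", datetime(2016, 7, 1, 9, 0), datetime(2016, 7, 1, 11, 30)),
    ("10.0.1.17", "i-9f8e7d", datetime(2016, 7, 1, 11, 45), datetime(2016, 7, 2, 8, 0)),
]

def instance_at(ip, when):
    """Return the instance that held `ip` at time `when`, if any."""
    for lease_ip, instance, start, end in leases:
        if lease_ip == ip and start <= when < end:
            return instance
    return None  # IP was unattached, or our records have a gap

# Two log entries, three hours apart, point at the same IP
# but at two different instances:
print(instance_at("10.0.1.17", datetime(2016, 7, 1, 10, 0)))  # i-0a1b2c
print(instance_at("10.0.1.17", datetime(2016, 7, 1, 13, 0)))  # i-9f8e7d
```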

This creates a problem for security.

Generally speaking, these cloud-based changes are for the best. But there is one problem: they sometimes stand in stark contrast to time-proven approaches that form the basis of many security policies, technologies and procedures.

For example, the introduction of the cloud makes the following security procedures quite difficult:

  • We want to control changes and evaluate beforehand the impact of any change on the environment and on our security posture.
  • We want to perform security code reviews once before the application goes live.
  • We want to perform vulnerability scans and ethical hacking exercises at a cadence that leaves time to carefully assess the results and correct any findings.
  • We want to keep an inventory that maps applications and their criticality to the servers they run on.
  • We want to know exactly which servers, with which specific IP addresses, are allowed to talk to which other servers.
  • We want to correlate events in our security information and event management (SIEM), based on IP addresses and static context information.

More change, more risk.

Viewed superficially, then, risk increases dramatically in a cloud-based environment, and we might be tempted to fall back on saying no to change. But that is not really an option; it is a recipe for becoming obsolete.

It’s much better to rethink technology, processes and procedures and adapt them to the new reality. Doing so not only improves the acceptance of security but may, in fact, improve overall security.

Regain control the agile way.

Frequent deployment only works because each change is small, easily rolled back and performed automatically. This actually gives us a better audit trail: we can establish not only when a change happened, but exactly what was changed. If we integrate this information into our security data lake, troubleshooting and incident response become faster, and unintended changes are much easier to recognise.
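
As a rough illustration, the deployment pipeline could emit a structured change record for every automated change. The field names and the send_to_data_lake() helper below are assumptions for this sketch, not any specific product's API.

```python
import json
from datetime import datetime, timezone

def build_change_record(service, version, commit, deployer):
    """Describe one automated deployment as a structured event."""
    return {
        "event_type": "deployment",
        "service": service,
        "version": version,
        "commit": commit,          # exact content of the change
        "deployed_by": deployer,   # human or pipeline identity
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def send_to_data_lake(record):
    # Placeholder: a real pipeline would write to the data lake's
    # ingestion endpoint (object store, message queue, HTTP collector, ...).
    print(json.dumps(record))

send_to_data_lake(build_change_record(
    service="payment-api", version="1.4.2",
    commit="3fa9c21", deployer="ci-pipeline"))
```

With records like these in the data lake, an analyst investigating an incident can answer “what changed just before this?” in seconds.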

Integrating automated vulnerability scans into the continuous deployment pipeline lets us build application and network security requirements into the development lifecycle. It provides timely feedback to both the developers and the security team, allowing them to understand the risk involved in deploying this particular version of the software.
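
One way to wire this in is a simple gate that reads the scanner's report and blocks the pipeline when findings exceed a severity threshold. The JSON report format below is an assumption; adapt it to whatever your scanner actually emits.

```python
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"  # block the deployment on high or critical findings

def gate(report_path):
    """Return True if the scan report contains no blocking findings."""
    with open(report_path) as f:
        report = json.load(f)
    blocking = [
        finding for finding in report.get("findings", [])
        if SEVERITY_ORDER.get(finding.get("severity", "low"), 1)
           >= SEVERITY_ORDER[FAIL_AT]
    ]
    for finding in blocking:
        print(f"BLOCKING {finding['severity']}: {finding.get('title', '?')}")
    return len(blocking) == 0

if __name__ == "__main__":
    # A non-zero exit code stops the pipeline before the deployment step.
    sys.exit(0 if gate(sys.argv[1]) else 1)
```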

In this way we can overcome the barriers between security teams on the one hand and everybody else in IT on the other. Regular ethical hacking exercises remain valuable for understanding which parts of the integrated security controls need improvement.

Microservices increase the complexity of the environment. This complexity not only makes it harder to evaluate whether changes in communication patterns are benign or suspicious, but also makes it much harder to understand the business impact an attack may have.

Collaboration is key.

Adapting to the challenges that microservices bring requires closer collaboration between security teams and DevOps teams: helping them understand the current state of the applications (what does this combination of services do, and is it intended?) and continuously ingesting this information into a data lake so it can feed into analytics. On the positive side, a security flaw found within one service is much easier to fix, because the impact is smaller and the services are only loosely coupled.
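
As a sketch of what that collaboration could produce, consider comparing observed service-to-service calls against the topology the DevOps team declares as intended. Both data structures here are invented for illustration; in practice the intended map might come from deployment manifests and the observed pairs from flow logs or tracing.

```python
# Topology declared by the DevOps team: which calls are intended.
intended = {
    ("web-frontend", "order-service"),
    ("order-service", "payment-api"),
    ("order-service", "inventory-db"),
}

# Pairs actually seen on the wire during the last analysis window.
observed = {
    ("web-frontend", "order-service"),
    ("order-service", "payment-api"),
    ("web-frontend", "inventory-db"),  # not declared: benign change or attack?
}

# Anything observed but not intended is exactly the triage question above.
for src, dst in sorted(observed - intended):
    print(f"unexpected call path: {src} -> {dst}")
```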

Maybe the biggest shift in mindset required is in network security. We must let go of the assumption that our data centres consist of a fixed number of servers; that these servers provide certain applications and have fixed IP addresses; and that they have well-defined communication paths between single entities, allowing us to statically configure firewall rules that permit communication between these well-defined entities.

Building new firewalls.

Many network security policies still mandate that every firewall rule contain only a single source and destination address. In future, we need to base firewall rules on the type of server, its general network characteristics and patterns of inter-service communication. Rethinking this risk may also mean moving to more advanced ‘east-west firewalls’ that can base their rulesets on information from the underlying container and virtualisation infrastructure, as sketched below.
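
Here is a minimal sketch of what such a rule could look like, keying on role tags supplied by the container or virtualisation layer rather than on transient IP addresses. The tag names and rule format are assumptions for illustration.

```python
# Rules match on instance metadata (role tags), not on single IP addresses.
rules = {
    # (source role, destination role, destination port)
    ("web-tier", "app-tier", 8080),
    ("app-tier", "db-tier", 5432),
}

def is_allowed(src_meta, dst_meta, port):
    """Decide based on roles, not on the instances' transient IPs."""
    return (src_meta["role"], dst_meta["role"], port) in rules

# Two fresh instances that have never been seen before are still covered,
# because the rule keys off their role tags:
src = {"instance_id": "i-new-01", "role": "web-tier"}
dst = {"instance_id": "i-new-02", "role": "app-tier"}
print(is_allowed(src, dst, 8080))  # True
print(is_allowed(src, dst, 5432))  # False: the web tier may not reach the DB port
```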

A ‘fluid’ environment like this may seem daunting to security people tasked with spotting suspicious behaviour or attack attempts. To succeed, it becomes imperative to feed this fast-changing context information into the security repositories and use it for analysis. That, in turn, requires that analysts receive support from automated algorithms and machine learning to detect the security incidents they want to uncover.
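
As a deliberately simple example of that automated support, the sketch below builds a frequency baseline of communication pairs from historical flow records and flags pairs that are rare or entirely new. A real deployment would use far richer features and proper machine-learning models; the data here is invented.

```python
from collections import Counter

# Historical flow records: (source service, destination service) pairs.
history = [
    ("web", "app"), ("web", "app"), ("app", "db"),
    ("web", "app"), ("app", "db"), ("app", "cache"),
]
baseline = Counter(history)

def is_anomalous(pair, min_count=2):
    """Flag pairs seen fewer than `min_count` times in the baseline."""
    return baseline.get(pair, 0) < min_count

today = [("web", "app"), ("app", "db"), ("db", "internet")]
for pair in today:
    if is_anomalous(pair):
        # Surface for analyst review instead of drowning them in raw flows.
        print(f"anomalous pair for review: {pair[0]} -> {pair[1]}")
```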

Find out more.

Find out how BT solutions can help you regain control of your security by heading to our dedicated web page.