Cloud Computing – A Potpourri of Adaptive Defenses

Cloud adoption is growing by leaps and bounds. How can you argue with the ubiquity and elasticity that cloud computing offers? And with the price of cloud services in freefall and no bottom in sight, there doesn’t seem to be anything stopping this train.

Oh, wait a minute, what about security? Well, the versatility of the cloud creates some attractive new options for adaptive security capabilities, while also providing a platform for new innovations. Here are 10 ways that active cyber defenses can help protect the cloud and, in turn, the cloud can benefit active cyber defenses.

10 Adaptive Cloud Defenses

The very nature of the cloud’s scalability and elasticity makes it adaptive, but how does that apply to adaptive security? Cloud computing offers several unique capabilities from an Active Cyber Defense perspective. These capabilities include:

  • Big Data analytics for deep insight into cyber events and information flows to detect suspicious anomalies and discover threats
  • Remote live forensics
  • Cloud-based threat intelligence services and honeynets/sinkholes
  • Virtual appliances to automate your incident response COAs
  • Virtual desktops to greatly simplify patching and provide adaptable virtual spaces to operate securely as the risk level changes.

At the same time, adaptive defenses can assist in securing your cloud environment. I will cover examples for:

  • Digital identity
  • Data self-protection
  • Hypervisor and VM protection
  • Cloud gateways and brokers
  • Secure enclaves in the cloud.

I will cover the first five today and catch the next five in a follow-up article. Also, I would be remiss not to mention the variety of cloud security frameworks being researched or worked on by folks at places like the Cloud Security Alliance (Software-Defined Perimeter – see here) and NIST (Cloud Computing Security Reference Architecture – see here). I will examine their approaches to adaptive security in a future article as well.

Big Data – Big Insight

Many large enterprises are flooded with cyber sensor data coming from all directions – log data from servers, IDS and IPS data, netflows and packet captures, endpoint sensor data, firewall alerts, DNS alerts, alerts from outages, and more. In addition, these enterprises may subscribe to one or more threat intelligence sources, which pile on even more data that needs to be dissected and correlated with other events going on in the enterprise. SIEM tools have traditionally been used to make sense of all this data, but many of them cannot keep up with the volumes of information that large enterprises must now process and store. Plus, much of this data is unstructured, making it difficult to perform the types of correlations that SIEM tools are good at.

So enter the cloud and Big Data. The evolution of Big Data tools is enabling security analytics to add a level of context and awareness to security incidents that was previously impossible to achieve with traditional SIEM tools. The value of Big Data analytics lies not just in efficiently collecting and storing large data sets, but in making sense of the data over time. Remember the OODA loop – the engine powering active cyber defenses (see article)? Well, Big Data lives in the Orient portion of the loop. Big Data analytics can put enterprises in a better position to predict attacks by comparing current network states to “normal” baselines of network activity. Today’s network traffic comes in different formats, at much greater volume and speed, and data moves around and across networks very differently due to virtualized workloads – new baselines of what is normal versus abnormal are only practical with Big Data analytics.
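
To make the baselining idea concrete, here is a minimal sketch of that kind of analysis using PySpark. The bucket paths and column names are purely illustrative assumptions – treat it as a picture of the approach, not a production pipeline.

```python
# Minimal sketch: build a per-host "normal" baseline from historical netflow
# records and flag hosts whose current volume deviates sharply from it.
# Bucket paths and column names (src_ip, bytes) are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("netflow-baseline").getOrCreate()

# Historical flows define what "normal" looks like for each source host
history = spark.read.parquet("s3://example-bucket/netflow/history/")
baseline = (history
            .groupBy("src_ip")
            .agg(F.mean("bytes").alias("mean_bytes"),
                 F.stddev("bytes").alias("std_bytes")))

# Compare today's flows against the baseline; flag anything beyond 3 sigma
today = spark.read.parquet("s3://example-bucket/netflow/today/")
anomalies = (today.join(baseline, "src_ip")
             .filter(F.col("std_bytes") > 0)
             .withColumn("zscore",
                         (F.col("bytes") - F.col("mean_bytes")) / F.col("std_bytes"))
             .filter(F.abs(F.col("zscore")) > 3))

anomalies.select("src_ip", "bytes", "zscore").show()
```

The same pattern scales from one laptop to a large cluster, which is exactly the elasticity argument for doing this work in the cloud.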

And don’t just take my word for it. According to the Cloud Security Alliance report Big Data Analytics for Security Intelligence (see here), “Big Data tools have the potential to provide a significant advance in actionable security intelligence by reducing the time for correlating, consolidating, and contextualizing diverse security event information, and also for correlating long-term historical data for forensic purposes.”

Big Data technologies are able to keep pace with the ever-growing amount of cyber event data through elastic compute and storage capabilities, and by enabling more flexible algorithms for analysis. Other unique adaptive features of these technologies are their ability to achieve real-time analysis of streaming data and their ability to munge large amounts of unstructured data.
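
For the streaming side, here is a minimal sketch using Spark Structured Streaming to keep a rolling per-host traffic tally from a live stream of flow events. The broker address, topic name, and field names are illustrative assumptions, and the job needs the Spark Kafka connector package.

```python
# Minimal sketch: rolling per-host traffic volume from a live stream of flow
# events using Spark Structured Streaming. Broker address, topic name, and
# field names are illustrative assumptions; requires the Spark Kafka connector.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType, TimestampType

spark = SparkSession.builder.appName("streaming-flows").getOrCreate()

schema = StructType([
    StructField("src_ip", StringType()),
    StructField("bytes", LongType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker.example.internal:9092")
       .option("subscribe", "netflow-events")
       .load())

# Kafka delivers raw bytes; decode the JSON payload into typed columns
flows = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("f"))
         .select("f.*"))

# Tumbling 5-minute windows of per-host volume, tolerating 10 minutes of lateness
windowed = (flows
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes"), "src_ip")
            .agg(F.sum("bytes").alias("total_bytes")))

# Stream the running tallies to the console; a real job would compare them
# against stored baselines and raise alerts on large deviations
query = windowed.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```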

The CSA report highlights several Big Data use cases for adaptive security:

  • Big Data tools were trained to identify malware-infected hosts in an enterprise network and the malicious domains accessed by the enterprise’s hosts. The results showed that high true positive rates and low false positive rates can be achieved with minimal ground truth information (that is, having limited data labeled as normal events or attack events used to train anomaly detectors) – see the sketch after this list.
  • Big Data tools were used to identify infected hosts participating in a botnet using netflow records.
  • Big Data tools were used to sift through massive amounts of data in search of anomalies that would be indicative of an Advanced Persistent Threat (APT). By looking for small deviations from the usual patterns of users, and correlating these anomalies to attack patterns, attack indicators, and likely targets, the tools were able to ferret out APTs, exposing these stealthy attacks that could not be identified with more traditional detection methods.
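
As a rough illustration of the first use case – flagging suspicious hosts with minimal ground truth – here is a sketch of an unsupervised anomaly detector built with scikit-learn. The feature names and input file are hypothetical; the point is that no labeled attack data is required to surface outlying hosts for review.

```python
# Minimal sketch: unsupervised anomaly detection over per-host features,
# with no labeled attack data required. Feature names and the CSV path
# are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

# One row per host: e.g. DNS query count, distinct domains contacted,
# bytes out, failed connection attempts (all hypothetical features).
features = pd.read_csv("host_features.csv", index_col="host")

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(features)

# -1 marks hosts the model considers anomalous; these become candidates
# for deeper review (e.g. possible beaconing to malicious domains).
labels = detector.predict(features)
suspects = features[labels == -1]
print(suspects.index.tolist())
```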

Big Data tools can also be used to partition large datasets for analysis by other tools such as SIEM tools and Splunk, thereby reducing the need to retrain cyber analysts on the Big Data tools. Data provenance tools need to be added to the analytics mix as well. As Big Data expands the sources of data it can use, the trustworthiness of data sources needs to be verified.
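
For example, a partition carved out of a large flow dataset could be handed to an existing SIEM through Splunk’s HTTP Event Collector, as sketched below. The HEC URL, token, and column names are illustrative assumptions.

```python
# Minimal sketch: carve a partition (one subnet's flows) out of a large
# dataset and forward it to Splunk via the HTTP Event Collector (HEC).
# The HEC URL, token, and column names are illustrative assumptions.
import json
import pandas as pd
import requests

flows = pd.read_parquet("netflow_daily.parquet")
partition = flows[flows["src_ip"].str.startswith("10.1.")]

HEC_URL = "https://splunk.example.internal:8088/services/collector/event"
HEADERS = {"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"}  # placeholder token

for record in partition.to_dict(orient="records"):
    payload = {"event": record, "sourcetype": "netflow"}
    # verify=False only because lab HEC endpoints often use self-signed certs;
    # validate certificates in any real deployment
    requests.post(HEC_URL, headers=HEADERS, data=json.dumps(payload),
                  verify=False, timeout=5)
```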

Offloading Security to the Cloud

Security-as-a-service is not a new concept when it comes to the cloud. However, when it comes to cyber forensic services, using cloud-based services is fairly new and unique. The issue with cyber forensics to date is that it is highly labor-intensive, requiring specialized tools and skills. There are also key issues with preserving the chain of custody of evidence if prosecution is warranted, and cloud environments can make this difficult due to the transient nature of virtual machines and their multi-tenancy. Still, cloud-based cyber forensics provides several advantages, since many organizations lack the deep skills needed for this type of work. If prosecution is out of the question, then leveraging these services can be very useful. Also, you don’t have to be operating in the cloud to use them.

One example of this capability is Google Rapid Response (GRR). GRR is an open source incident response framework intended to provide a scalable solution for remote live forensics. As everyone is aware, Google publicly disclosed in 2010 that it had been the subject of a targeted attack, commonly referred to as “Operation Aurora.” From this experience, Google quickly realized how nascent the incident response industry and its tools were. Once the smoke cleared, the Google team began investing heavily in augmenting its own security capabilities and in reducing incident response to a search problem.

GRR is now fully supported and open-sourced on GitHub. GRR consists of an agent probe that is deployed to a target system and a server infrastructure that can manage and talk to the agent. One of the key focus areas of Google’s work was identifying the artifacts that need to be collected for forensic analysis, and providing the tools to enable live collection and timeline analysis of these artifacts in a trusted manner (timestamped, hashed, and securely transported). Artifacts are different from indicators of compromise (IOCs): artifacts, also referred to as cyber observables, reflect a stateful property of an object (e.g., the presence of a mutex) or a measurable event (e.g., the creation of a registry key on a host), while an IOC is a description of an observation that may be related to an intrusion – the who, what, where, when, how, and sometimes why. IOCs tie to observables and artifacts.
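
To give a feel for what “timestamped, hashed, and securely transported” means in practice, here is a conceptual sketch of how an agent-style collector might package a file-based artifact with integrity metadata. This is not GRR’s actual API – the function, paths, and fields are illustrative assumptions.

```python
# Conceptual sketch (not GRR's actual API): packaging a file-based forensic
# artifact so it arrives timestamped, hashed, and ready for timeline analysis.
# Paths and field names are illustrative assumptions.
import hashlib
import json
import os
import time

def collect_artifact(path):
    """Read a file-based artifact and wrap it with integrity metadata."""
    with open(path, "rb") as f:
        data = f.read()
    stat = os.stat(path)
    return {
        "path": path,
        "collected_at": time.time(),                 # collection timestamp
        "mtime": stat.st_mtime,                      # artifact's own timeline data
        "size": stat.st_size,
        "sha256": hashlib.sha256(data).hexdigest(),  # tamper-evidence hash
    }

# Example: a handful of artifacts a live-forensics flow might request
artifacts = [collect_artifact(p) for p in ["/etc/passwd", "/var/log/auth.log"]
             if os.path.exists(p)]
print(json.dumps(artifacts, indent=2))
# In a real deployment the agent would send this over an authenticated,
# encrypted channel to the server infrastructure for correlation.
```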

Using GRR, a cloud-based security and forensics center can instruct each agent probe to collect events and raw traffic, send them back for deep analysis, and generate new security rules. These new security rules can be enforced by collaborative Unified Threat Management (UTM) tools, and the feedback events from those rules can be returned to the security center. Through this type of closed-loop control, a collaborative, cloud-enterprise network security management system can identify and address new distributed attacks more quickly and effectively.

Another security-as-a-service that seems to be popping up everywhere is cloud-based cyber threat intelligence. I covered this in an article on Intel-Based Defenses – see link. What is interesting about cloud-based threat intelligence services is how the cloud helps to efficiently enable the use of honeynets, honeypots, and sinkholing for cyber threat intelligence processing and sharing: discovering and tracking cyber threat actors, collecting TTPs, and monitoring botnets, as well as proactively denying botnet herders access to their bots. The leading organization for honey-based technologies is the Honeynet Project – see http://www.honeynet.org/. It provides a wide variety of open source capabilities that can be leveraged.

The cloud’s elasticity and scalability provide many options for exercising honey-based technologies. One example is the use of low-interaction honeypots such as honeyd. Honeyd is a small daemon that creates virtual hosts on a network. The hosts can be configured to run arbitrary services, and their personality can be adapted so that they appear to be running certain operating systems. For example, Honeyd can appear to the attacker to be a Cisco router, a Windows web server, or a Linux DNS server.

There are several advantages to emulating different operating systems. First, the honeypot can better blend into existing networks if it has the same appearance and behavior as production systems. Second, you can target specific attackers by providing systems and services they often target, or you can target specific systems and services you want to learn about. Anytime Honeyd sees a connection attempt to an unused IP, it intercepts the connection and interacts with the attacker, pretending to be the victim. The cloud also allows you to constantly shift the profile of victim systems by provisioning different virtual victim hosts.
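
To show the low-interaction idea in miniature, here is a sketch of a tiny honeypot in Python (not honeyd itself): it listens on an unused port, presents a fake service banner, and logs every connection attempt. The port and banner are illustrative assumptions.

```python
# Minimal sketch of a low-interaction honeypot (not honeyd): listen on an
# unused port, present a fake service banner, and log connection attempts.
# The port and banner are illustrative assumptions.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222               # pretend to be an SSH service
BANNER = b"SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n"

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(5)

while True:
    conn, addr = server.accept()
    # Record who knocked and when; in practice this feeds threat intelligence
    print(f"{datetime.datetime.utcnow().isoformat()} connection from {addr[0]}:{addr[1]}")
    try:
        conn.sendall(BANNER)
        data = conn.recv(1024)             # capture whatever the attacker sends first
        if data:
            print(f"  first bytes: {data[:80]!r}")
    finally:
        conn.close()
```

In the cloud, many such listeners with different banners and ports can be spun up and torn down on demand, which is exactly how the victim profile can be constantly shifted.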

Clouds are also useful in enabling honeynets by virtue of their platform-as-a-service or infrastructure-as-a-service offerings. A honeynet is an architecture: an entire network of computers designed to be attacked. The idea is to create a highly controlled network, one where all activity is controlled and captured. Within this network you place your intended victims – real computers running real applications. The bad guys find, attack, and break into these systems on their own initiative. When they do, they do not realize they are within a honeynet, and all of their activity is captured without their knowledge. This is done by inserting kernel modules on the victim systems that capture all of the attacker’s actions. At the same time, the honeynet controls the attacker’s activity using a Honeywall gateway. This gateway allows inbound traffic to the victim systems but controls the outbound traffic using intrusion prevention technologies. This gives the attacker the flexibility to interact with the victim systems while preventing the attacker from harming other, non-honeynet computers.

Using Virtualization to Adapt

Cloud service providers offer virtual workload templates that can automate Courses of Action (COAs) for defensive responses and mitigation actions. For example, AWS CloudFormation supports templates and workflows that have open APIs and can be integrated with enterprise tools like Active Directory. These templates can be parameterized and come with enterprise features such as role-based access controls, governance, and automatic tagging of resources. Parameters are a way for users to specify unique or sensitive values in the properties of stack resources. This parameterization comes in handy since security needs change with the threat context, and CloudFormation allows you to change the set of resources that make up a security stack. So when your threat context changes or you need to spin up a new security tool or policy, you can do so through changes in parameters to your virtual security stack.
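
As a minimal sketch of that idea, the snippet below uses boto3 to push a parameter change to an existing CloudFormation stack. The stack name ("security-stack") and parameter ("ThreatLevel") are hypothetical; your own template would define its parameters and the resources they control.

```python
# Minimal sketch: adapting a parameterized security stack by pushing a new
# parameter value to CloudFormation. The stack name and "ThreatLevel"
# parameter are hypothetical examples, not part of any real template.
import boto3

cfn = boto3.client("cloudformation")

response = cfn.update_stack(
    StackName="security-stack",
    UsePreviousTemplate=True,               # keep the same template body
    Parameters=[
        {"ParameterKey": "ThreatLevel", "ParameterValue": "elevated"},
    ],
    Capabilities=["CAPABILITY_IAM"],        # needed if the stack creates IAM roles
)
print(response["StackId"])

# Wait until the changed security stack has finished rolling out
waiter = cfn.get_waiter("stack_update_complete")
waiter.wait(StackName="security-stack")
```

The same call could just as easily swap in an extra sensor appliance or a tighter set of security group rules, depending on what the template keys off that parameter.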

In a similar fashion, virtual desktop infrastructure (VDI) can create trusted, tailored virtual spaces where users can select or create different environments for different activities, satisfying a variety of threat and mission scenarios. Desktop virtualization is leading to the desktop being disassembled: intrusion detection and prevention, applications, and user personas can be discretely managed and stored, only to be recomposed via a VM orchestrator and the network into the familiar workspace for each user at log-in. Checking and patching vulnerabilities and making updates are easier as well with VDI, since a single gold image only needs to be scanned and patched and then replicated as users need it.

As you can see, there are many ways that cloud environments can help to enable proactive, predictive, and adaptive defenses for your enterprise. Let me hear from you if you have some more ideas. Next time we’ll examine the five ways that active cyber defenses can help provide better cloud security.

Thanks for reading and keep adapting.