Cloud Threat Detection: To Agent or Not to Agent?

The shift towards cloud and cloud-native application architectures represents an evolutionary step forward from older paradigms. The adoption of containers, Kubernetes, and serverless functions, along with the use of cloud-based infrastructure, introduces a new set of risks and security challenges — as well as new opportunities. These go well beyond application security and security posture management, spanning from the build phase all the way to the application run phase.

Three areas for cloud-native security

One particular area of focus for security defenses is actively monitoring the security of cloud-based applications and cloud workloads, often referred to as runtime security.

We can break down cloud-based runtime security into three main categories:

1. Cloud environment security

The cloud environment is where we provision the infrastructure and services to run our applications. Running applications often involves provisioning compute, networking, and storage resources, as well as access credentials to external elements such as data, external services, or secrets. This is the foundation that our cloud applications are built on, and it is a critical first step in ensuring their integrity.

2. Workload and workload orchestration security

Operating modern cloud-native applications often means leveraging a container orchestration platform, and in recent years Kubernetes has been the go-to choice. Leveraging orchestration infrastructure like Kubernetes requires attention from a risk and threat perspective. For example, theft of Kubernetes credentials, or credential compromise resulting from an application breach, can be detected by continuously analyzing the Kubernetes audit log. Another example is detecting malware that exploits inherent weaknesses in the DNS protocol through network security analysis of workload (Pod) communications.
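
As a concrete illustration of the audit-log example above, the sketch below shows a minimal Kubernetes audit policy that records the metadata of requests touching Secrets, service accounts, and token reviews, which is the kind of raw signal a runtime detection tool can analyze for credential theft. It is a generic sketch rather than any vendor's recommended policy; the rule levels and the catch-all are choices you would tune for your own cluster.

```yaml
# Minimal audit policy (audit.k8s.io/v1) focused on credential-related activity.
# Passed to the API server via --audit-policy-file; log analysis happens elsewhere.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"          # log each request only once, at completion
rules:
  # Record who read or changed Secrets and service accounts (metadata only,
  # so secret values never land in the audit log).
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "serviceaccounts", "serviceaccounts/token"]
  # Record token review requests.
  - level: Metadata
    resources:
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Drop everything else to keep the example focused.
  - level: None
```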

3. Application security

If the cloud environment is the vehicle in which we operate and run our workloads, and containerized workloads are the vehicle for our applications, then OS processes are where our application logic actually runs. Cloud functions are another example of typically short-lived processes that carry application logic. Protecting applications is a long-standing challenge in its own right. It includes application API security, memory protection, data protection, and network isolation and control, and can be achieved using multiple techniques, some of which are only practically possible through the use of security agents.

Security agents defined

Security agents are specialized software deployed on an application workload or application endpoint to perform specific security functions, such as network monitoring, process-level monitoring, memory monitoring, file system monitoring, and system API call monitoring. They may be involved in preventive actions, detection actions, or security forensics data collection.

For example, we can deploy security agents to virtual machines (cloud instances) to provide host-level security. We can use security agents in containerized environments like Kubernetes, where one security agent monitors and secures the Kubernetes Pods as well as the Kubernetes node itself. We can also have embedded security agents that monitor and secure serverless functions such as Lambda, or even security agents that provide process-level and API-level security.
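
To make the deployment model concrete, here is a minimal sketch of how an agent-based tool is commonly rolled out on Kubernetes: one agent Pod per node via a DaemonSet, with host-level access so it can observe both node and Pod activity. The namespace, image name, and exact privileges are hypothetical placeholders; a real vendor agent documents its own requirements.

```yaml
# Hypothetical per-node security agent deployed as a DaemonSet (one Pod per node).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: security-agent                # placeholder name
  namespace: security                 # assumed, pre-existing namespace
spec:
  selector:
    matchLabels:
      app: security-agent
  template:
    metadata:
      labels:
        app: security-agent
    spec:
      hostPID: true                   # lets the agent observe host and container processes
      hostNetwork: true               # lets the agent observe node-level network traffic
      containers:
        - name: agent
          image: registry.example.com/security-agent:1.0   # hypothetical image
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: host-root
              mountPath: /host
              readOnly: true          # read-only view of the node's filesystem
      volumes:
        - name: host-root
          hostPath:
            path: /
```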

Agentless security is an approach that leverages security signals obtained via cloud APIs, such as network flows, DNS flows, cloud API access audit logs, or application API access logs. Collecting data from those security signals incurs a lower operational cost than agent-based security, but it can come with some limitations. For instance, in application security, the agentless approach has fewer security signals to analyze, and may not support some threat detection techniques such as process system call analysis.

Should I use agents to secure my cloud applications?

So should you be using agents, or not? The answer really boils down to how wide and deep a detection and protection net you want to cast, and how many skilled personnel you have available to deploy and operate the various security controls and respond to security incidents.

Agents provide a greater level of detail and are generally your best bet for preemptive prevention through fine-grained, policy-based controls such as network segmentation. However, they also require additional effort and overhead to manage the agents themselves, including regular updates and configuration.

The agentless approach is excellent at correlating, segmenting, and analyzing data from various workloads, as it does not rely on sharing resources with the monitored workloads. That said, you’re going to sacrifice depth of coverage at certain layers of the stack in exchange for relatively lower operational overhead, because agentless approaches rely on cloud provider APIs, which are less granular than what host-, workload-, or process-level agents can collect.

So to achieve comprehensive security and balance operational overhead, the recommendation is typically to leverage both technologies.

You’ll likely want to use an agentless approach to get fast and wide coverage of both risks and threats, or in places where agents cannot be deployed, such as hosted container environments like AWS Fargate or Cloud Functions. Another example would be assessing software vulnerabilities and detecting persistent malware, which can be achieved with either technology, but with different time to detection.

Conversely, agents can be used in environments like Kubernetes, where the operational overhead of running them is relatively low and the granularity of containerized workloads calls for finer-grained, deeper security controls.

The decision of where to use an agent-based approach depends on what you’re trying to secure. If you’re looking to get real-time visibility into all of your cloud resources and workloads, establish a single source of “good” across your multiple cloud platforms, prioritize risk across your environments, and measure compliance against organizational and industry standards and best practices, an agentless approach like InsightCloudSec is a great choice.

Kubernetes misconfiguration may lead to data hacks

According to a study by security firm Cyble, about 900,000 Kubernetes clusters are exposed to attackers because of misconfiguration errors. As a result, they are subject to indiscriminate malicious scans and could at any time become easy data access points for cybercriminals.

Some companies do not engage professional staff to defend their IT assets, and that is proving to be a windfall for attackers, the research says. Because Kubernetes administers containerized applications, misconfigurations can prove fatal to these open-source systems, Cyble says.

Usually, this kind of infrastructure is deployed in production environments to present physical and virtual machines behind a uniform application programming interface (API). If such environments are not configured according to the playbook, they can contain vulnerabilities that lead to data exfiltration and hacking attempts.

Surprisingly, Cyble mentioned in its report that most of the exposed Kubernetes clusters were falling prey to data breaches due to the use of default settings. Because attackers are familiar with such settings, they can easily take control of an environment, steal data, or cause downtime.

Updating Kubernetes to the latest version, removing debugging tools from production environments, and replacing default passwords can help avoid such misconfiguration blunders.

Limiting exposed ports and tightening permissions can also help secure Kubernetes clusters.

Note: Threat actors are using automated scanners to check for vulnerabilities in Kubernetes environments, and most of the vulnerable environments were found in the United States, Germany, and China.


Is Your Kubernetes Cluster Ready for Version 1.24?

Kubernetes rolled out Version 1.24 on May 3, 2022, as its first release of 2022. This version is packed with some notable improvements, as well as new and deprecated features. In this post, we will cover some of the more significant items on the list.

The Dockershim removal

The new release has caught the attention of most users, mainly due to the official removal of Dockershim, the built-in Container Runtime Interface (CRI) shim for Docker in the kubelet codebase, which had been deprecated since v1.20.

Docker is essentially a user-friendly abstraction layer, created before Kubernetes was introduced. Docker isn’t compliant with the CRI, which is why Dockershim was needed in the first place. However, given the maintenance overhead and the weak points involved in bridging Docker and containerd, it was decided to remove Dockershim entirely, encouraging users to adopt other CRI-compliant runtimes.

Docker-produced images can still run on all other CRI-compliant runtimes, as long as worker nodes are configured to support those runtimes and any node customizations are properly updated based on the environment and runtime requirements. The release team also published an FAQ article dedicated entirely to the Dockershim removal.

Better security with short-lived tokens

A major update in this release is the move away from secret-based service account tokens. This is a big step toward improving the overall security of service account tokens, which until now remained valid for as long as their respective service accounts existed.

Now, with a much shorter lifespan, these tokens are significantly less susceptible to abuse, making it harder for attackers to gain access to the cluster and to leverage attack vectors such as privilege escalation and lateral movement.
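
For readers who want to see what short-lived, bound tokens look like in practice, the sketch below mounts a projected service account token with an explicit expiration into a Pod; the kubelet rotates the token before it expires. The expiration value and the container image are illustrative choices, not requirements of the release.

```yaml
# Pod that consumes a short-lived, bound service account token
# instead of a long-lived, secret-based token.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: default
  containers:
    - name: app
      image: nginx:1.21                        # illustrative image
      volumeMounts:
        - name: bound-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: bound-token
      projected:
        sources:
          - serviceAccountToken:
              path: api-token                  # token file name inside the mount
              expirationSeconds: 3600          # rotated by the kubelet before expiry
```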

Network Policy status

Network Policy resources are implemented differently by different Container Network Interface (CNI) providers and often apply certain features in a different order.

This can lead to a Network Policy not being honored by the current CNI provider — worst of all, without notifying the user about the situation.

In this version, a new status subresource is added to NetworkPolicy that allows users to receive feedback on whether a NetworkPolicy and its features have been properly parsed, and helps them understand why a particular feature is not working.

This is another great example of a feature that benefits both developers and operations teams, alleviating the pain so often involved in troubleshooting Kubernetes network issues.

CSI volume health monitoring

Container Storage Interface (CSI) drivers can now load an external controller as a sidecar that will check for volume health, and they can also provide extra information in the NodeGetVolumeStats function that Kubelet already uses to gather information on the volumes.

In this version, the volume health information is exposed as kubelet VolumeStats metrics. The kubelet_volume_stats_health_status_abnormal metric carries a persistentvolumeclaim label and reports a value of 1 if the volume is unhealthy, or 0 otherwise.
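
If you scrape kubelet metrics with a Prometheus Operator-based stack (an assumption; adapt this to your own monitoring setup), a rule along the following lines can turn the new health metric into an alert. The rule name, namespace, and threshold duration are illustrative.

```yaml
# Illustrative alert on the CSI volume health metric exposed by the kubelet.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: csi-volume-health
  namespace: monitoring                # assumes a "monitoring" namespace
spec:
  groups:
    - name: volume-health
      rules:
        - alert: PersistentVolumeClaimUnhealthy
          expr: kubelet_volume_stats_health_status_abnormal == 1
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "PVC {{ $labels.persistentvolumeclaim }} reports an abnormal volume health status"
```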

Additional noteworthy changes in Kubernetes Version 1.24

A few other welcome changes affect the kubelet, Kubernetes’ primary agent that runs on each node. Dockershim-related CLI flags were removed along with Dockershim itself. Furthermore, the Dynamic Kubelet Configuration feature has been officially removed in this version, after being deprecated in earlier releases; this removal aims to simplify the code and improve its reliability.

Furthermore, the newly added kubectl create token command, introduced under the authentication special interest group (SIG Auth), allows easier creation and retrieval of service account tokens for Kubernetes API access and access control management.

This new command significantly improves automation throughout CI/CD pipelines and will accelerate role-based access control (RBAC) policy changes, as well as the hardening of TokenRequest endpoint validations.

Lastly, a useful addition for cluster operators is the ability to authoritatively identify Windows Pods at the API admission level. This can be crucial for managing Windows containers, as it allows better security policies and constraints to be applied based on the operating system.
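
The mechanism behind this is the os field in the Pod spec, which lets the API server and admission-time policy treat a Pod authoritatively as a Windows workload. A minimal sketch, with an illustrative image and node selector:

```yaml
# Pod that declares its operating system, enabling OS-specific policy enforcement at admission.
apiVersion: v1
kind: Pod
metadata:
  name: windows-workload
spec:
  os:
    name: windows                                # authoritative OS declaration
  nodeSelector:
    kubernetes.io/os: windows                    # schedule onto Windows nodes
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022   # illustrative Windows base image
      command: ["cmd", "/c", "ping", "-t", "localhost"]       # keep the container running for the demo
```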

The first release of 2022 mainly introduces improvements aimed at providing helpful feedback to users, reducing the attack surface, and improving security posture all around. The official removal of Dockershim support will push organizations and users to adapt and align with infrastructure changes, moving forward with new technology developments in Kubernetes and the cloud in general.



InsightCloudSec Supports the Recently Updated NSA/CISA Kubernetes Hardening Guide

The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) recently updated their Kubernetes Hardening Guide, which was originally published in August 2021.

With the help and feedback received from numerous partners in the cybersecurity community, this guide outlines a strong line of action towards minimizing the chances of potential threats and vulnerabilities within Kubernetes deployments, while adhering to strict compliance requirements and recommendations.

The purpose of the Kubernetes hardening guide

This newly updated guide comes to the aid of multiple teams — including security, DevOps, system administrators, and developers — by focusing on the security challenges associated with setting up, monitoring, and maintaining a Kubernetes cluster. It brings together strategies to help organizations avoid misconfigurations and implement recommended hardening measures by highlighting three main sources of compromise:

  • Supply chain risks: These often occur during the container build cycle or infrastructure acquisition and are more challenging to mitigate.
  • Malicious threat actors: Attackers can exploit vulnerabilities and misconfigurations in components of the Kubernetes architecture, such as the control plane, worker nodes, or containerized applications.
  • Insider threats: These can be administrators, users, or cloud service providers, any of whom may have special access to the organization’s Kubernetes infrastructure.

“This guide focuses on security challenges and suggests hardening strategies for administrators of National Security Systems and critical infrastructure. Although this guide is tailored to National Security Systems and critical infrastructure organizations, NSA and CISA also encourage administrators of federal and state, local, tribal, and territorial (SLTT) government networks to implement the recommendations in this guide,” the authors state.

CIS Benchmarks vs. the Kubernetes Hardening Guide

For many practitioners, the Center for Internet Security (CIS) is the gold standard for security benchmarks; however, their benchmarks are not the only guidance available.

While the CIS is compliance gold, the CIS Benchmarks are very prescriptive and usually offer minimal explanations. In creating their own Kubernetes hardening guidelines, it appears that the NSA and CISA felt there was a need for a higher-level security resource that explained more of the challenges and rationale behind Kubernetes security. In this respect, the two work as perfect complements — you get strategies and rationale with the Kubernetes Hardening Guide and the extremely detailed prescriptive checks and controls enumerated by CIS.

In other words, CIS Benchmarks offer the exact checks you should use, along with recommended settings. The NSA and CISA guide supplements these by explaining the challenges and recommendations, why they matter, and how potential attackers approach each area. In version 1.1, the updates include the latest hardening recommendations needed to protect and defend against today’s threat actors.

Breaking down the updated guidance

As mentioned, the guide breaks down the Kubernetes threat model into three main sources: supply chain, malicious threat actors, and insider threats. This model reviews threats within the Kubernetes cluster and beyond its boundaries by including underlying infrastructure and surrounding workloads that Kubernetes does not manage.

Via a new compliance pack, InsightCloudSec supports and covers the main sources of compromise for a Kubernetes cluster, as mentioned in the guide. Below are the high-level points of concern, and additional examples of checks and insights, as provided by the InsightCloud Platform:

  • Supply chain: This is where attack vectors are more diverse and hard to tackle. An attacker might manipulate certain elements, services, and other product components. It is crucial to continuously monitor the entire container life cycle, from build to runtime. InsightCloudSec provides security checks to cover the supply chain level, including:

    • Checking that containers are retrieved from known and trusted registries/repositories
    • Checking for container runtime vulnerabilities
  • Kubernetes Pod security: Kubernetes Pods are often used as the attacker’s initial execution point. It is essential to have a strict security policy in place to prevent or limit the impact of a successful compromise. Examples of relevant checks available in InsightCloudSec include the following (a consolidated Pod manifest sketch appears after this list):

    • Non-root containers and “rootless” container engines
      • Reject containers that execute as the root user or allow elevation to root.
      • Check the Kubernetes container configuration to use the securityContext runAsUser field with a non-zero user ID, or set runAsNonRoot to true.
      • Deny container features frequently exploited to break out, such as hostPID, hostIPC, hostNetwork, allowedHostPath.
    • Immutable container file systems
      • Where possible, run containers with immutable file systems.
      • Kubernetes administrators can mount secondary read/write file systems for specific directories where applications require write access.
    • Pod security enforcement
      • Harden applications against exploitation using security services such as SELinux®, AppArmor®, and secure computing mode (seccomp).
    • Protecting Pod service account tokens
      • Disable the secret token from being mounted by using the automountServiceAccountToken: false directive in the Pod’s YAML specification.
  • Network separation and hardening: Monitoring the Kubernetes cluster’s networking is key, since it carries the communication among containers, Pods, services, and external components. These resources are not isolated by default and could therefore allow lateral movement or privilege escalation if they are not properly separated and encrypted. InsightCloudSec provides checks to validate that the relevant security policies are in place:

    • Namespaces
      • Use namespaces to separate resources by team, application, or environment. Pods and services in different namespaces can still communicate with each other unless additional separation is enforced.
    • Network policies
      • Set up network policies to isolate resources and explicitly allow only the traffic an application needs (a default-deny policy sketch follows this list).
    • Resource policies
      • Use resource requirements and limits.
    • Control plane hardening
      • Set up TLS encryption.
      • Configure control plane components to use authenticated, encrypted communications using Transport Layer Security (TLS) certificates.
      • Encrypt etcd at rest, and use a separate TLS certificate for communication.
      • Secure the etcd datastore with authentication and role-based access control (RBAC) policies. Set up TLS certificates to enforce Hypertext Transfer Protocol Secure (HTTPS) communication between the etcd server and the API servers. Using a separate certificate authority (CA) for etcd may also be beneficial, as etcd by default trusts all certificates issued by its root CA.
    • Kubernetes Secrets
      • Place all credentials and sensitive information in Kubernetes Secrets, encrypted at rest, rather than in configuration files (an encryption-at-rest configuration sketch follows this list).
  • Authentication and authorization: Authentication and authorization are probably the primary mechanisms for restricting access to cluster resources. Several supported configurations, such as RBAC controls, are not enabled by default. InsightCloudSec provides security checks that cover the activity of both users and service accounts, enabling faster detection of any unauthorized behavior:

    • Prohibit automatic mounting of service account tokens by setting automountServiceAccountToken to false.
    • Anonymous requests should be disabled by passing the --anonymous-auth=false option to the API server.
    • Start the API server with the --authorization-mode=RBAC flag. Leaving a permissive authorization mode such as AlwaysAllow in place allows all authorization requests, effectively disabling authorization and limiting the ability to enforce least privilege for access.
  • Audit logging and threat detection: Kubernetes audit logs are a goldmine for security, capturing attributed activity in the cluster and helping verify that configurations are properly set. The security checks provided by InsightCloudSec ensure that security audit tools are enabled in order to keep track of any suspicious activity:

    • Check that the Kubernetes native audit logging configuration is enabled.
    • Check that seccomp: audit mode is enabled. The seccomp tool is disabled by default but can be used to limit a container’s system call abilities, thereby lowering the kernel’s attack surface. Seccomp can also log what calls are being made by using an audit profile.
  • Upgrading and application security practices: Security is an ongoing process, and it is vital to stay up to date with upgrades, updates, and patches not only in Kubernetes, but also in hypervisors, virtualization software, and other plugins. Furthermore, administrators need to make sure they uninstall old and unused components as well, in order to reduce the attack surface and risk of outdated tools. InsightCloudSec provides the checks required for such scenarios, including:

    • Promptly applying security patches and updates
    • Performing periodic vulnerability scans and penetration tests
    • Uninstalling and deleting unused components from the environment
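
To tie the Pod security items above together, the sketch below consolidates them into a single hardened Pod manifest: a non-root user, no privilege escalation, an immutable root filesystem, the runtime’s default seccomp profile, no host namespaces, and no auto-mounted service account token. It is a generic illustration of the settings discussed in the guide, not an InsightCloudSec policy; the names and image are placeholders.

```yaml
# Hardened Pod sketch reflecting the guide's Pod security recommendations.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                    # placeholder name
spec:
  automountServiceAccountToken: false   # do not mount the service account token
  hostPID: false                        # avoid host namespaces frequently used for breakouts
  hostIPC: false
  hostNetwork: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                    # non-zero (non-root) user ID
    seccompProfile:
      type: RuntimeDefault              # enforce the runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true    # immutable container filesystem
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: scratch
          mountPath: /tmp               # writable scratch space only where needed
  volumes:
    - name: scratch
      emptyDir: {}
```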
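
For the network policy item above, a common baseline is a default-deny policy per namespace, after which only the traffic an application actually needs is explicitly allowed. A minimal sketch, assuming a namespace named app-team:

```yaml
# Default-deny NetworkPolicy: selects every Pod in the namespace and allows no
# ingress or egress until more specific policies are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-team                   # assumed namespace
spec:
  podSelector: {}                       # empty selector matches all Pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```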
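
Finally, encrypting Secrets at rest in etcd, as called out in the control plane and Secrets items above, is configured on the API server through an EncryptionConfiguration file referenced by the --encryption-provider-config flag. A minimal sketch; the key name is arbitrary and the key material is a placeholder you must generate yourself (for example, a random 32-byte value, base64-encoded):

```yaml
# Encrypt Kubernetes Secrets at rest in etcd using an AES-CBC provider.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder; generate your own key
      - identity: {}                    # fallback so existing unencrypted data stays readable
```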

Stay up to date with InsightCloudSec

Announcements like this catch the attention of the cybersecurity community, who want to take advantage of new functionalities and requirements in order to make sure their business is moving forward safely. However, this can often come with a hint of hesitation, as organizations need to ensure their services and settings are used properly and don’t introduce unintended consequences to their environment.

To help our customers continuously stay aligned with the new guidelines, InsightCloudSec already ships with a new compliance pack that provides additional coverage and support based on insights introduced in the Kubernetes Hardening Guide.

Want to see InsightCloudSec in action? Check it out today.
