Rapid7 Extends Cloud Security Capabilities with Updates to Exposure Command

The cloud has become the backbone of modern innovation, powering everything from AI to remote work. But as organizations embrace the cloud, they also face an ever-expanding and increasingly complex attack surface. With purpose-built harvesting technology providing real-time visibility into everything running across multi-cloud environments, Exposure Command from Rapid7 ensures teams have an up-to-date inventory, mapping their cloud attack surface and enriching asset data with risk and business context.

To ensure teams can keep up with the torrid pace of innovation and overcome increased complexity, Rapid7 remains dedicated to investing in advancing the cloud security capabilities available within Exposure Command. To that end, we’ve made a few significant updates across AI resource coverage, third-party CNAPP enrichment and more. Let’s dive right in.

Extending coverage for securing AI/ML development in the cloud

AI and machine learning (ML) are transforming industries, but the speed of adoption can often leave organizations vulnerable. AI/ML workloads often process sensitive or proprietary data, requiring robust protections to ensure compliance with ever-evolving regulations. Safeguarding these environments isn’t just about securing the infrastructure; it’s about understanding the unique workflows and ensuring compliance at every step.

These workloads also introduce unique risks, such as model poisoning attacks or vulnerabilities in APIs, creating new vectors for data exfiltration and service disruption. Additionally, the dynamic nature of cloud-hosted AI services presents challenges in maintaining secure configurations as resources scale elastically, potentially exposing sensitive endpoints or misconfigured setups.

To that end, Exposure Command has expanded support for critical AI services like Amazon Comprehend and Polly, AWS’s natural language processing and text-to-speech services. This provides comprehensive visibility across an organization’s attack surface, aligning AI-specific risks with broader enterprise priorities.

Shifting left and securing the software supply chain

Developers are at the forefront of modern cloud environments, making “shift-left” strategies essential for effective security. By addressing risks during development rather than after deployment, teams can eliminate vulnerabilities before they become costly issues.

Exposure Command now offers more robust Infrastructure-as-Code (IaC) scanning and deeper CI/CD integration, with Terraform and CloudFormation support across hundreds of resource types. For development teams, integrations like GitLab, GitHub Actions, AWS CloudFormation, and Azure DevOps bring security checks directly into their workflows. Whether it’s identifying misconfigurations in AWS Glue Catalogs or assessing risks in SES configurations, these tools help teams secure their code without breaking their stride.
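
To make the idea concrete, here is a minimal sketch of the kind of misconfiguration check an IaC scanner performs, run against a CloudFormation template parsed into a Python dict. The function name and logic are illustrative assumptions, not Exposure Command’s implementation:

```python
# Hypothetical sketch of an IaC-style check: flag S3 buckets in a
# CloudFormation template that lack a full public access block.
# Illustrative only -- not Exposure Command's actual scanning engine.

def find_public_buckets(template: dict) -> list:
    """Return logical IDs of S3 buckets missing a full public access block."""
    findings = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        block = resource.get("Properties", {}).get(
            "PublicAccessBlockConfiguration", {})
        required = ("BlockPublicAcls", "BlockPublicPolicy",
                    "IgnorePublicAcls", "RestrictPublicBuckets")
        if not all(block.get(flag) for flag in required):
            findings.append(logical_id)
    return findings

template = {
    "Resources": {
        "LogsBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
        "SafeBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "PublicAccessBlockConfiguration": {
                    "BlockPublicAcls": True, "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True, "RestrictPublicBuckets": True,
                }
            },
        },
    }
}
print(find_public_buckets(template))  # ['LogsBucket']
```

Run as a step in CI, a check like this fails the build before a public bucket ever reaches production, which is the essence of shifting left.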

Bridging the hybrid cloud gap with native and third-party CNAPP connectors

For many organizations, the challenge isn’t just securing the cloud – it’s securing everything holistically. Hybrid environments that span on-prem systems and multiple cloud providers can create silos, leading to gaps in visibility and risk management. To tackle this, we’ve integrated InsightCloudSec data directly into Surface Command, empowering security teams with a unified view of their entire attack surface in one place.

But we didn’t stop at consolidating our own native CNAPP capabilities. Teams now get out-of-the-box integrations with popular cloud security tools like Wiz and Orca as well as CSP-native services like AWS Inspector, all making it easier than ever to identify risks across cloud-native and hybrid environments. Everything can now be seen in one place – from endpoint vulnerabilities to cloud misconfigurations and overly permissive roles – allowing for faster action with clarity and precision.

Tackling virtual desktop risks with custom registry keys

With the rise of remote work, virtual desktop infrastructures (VDIs) like AWS Workspaces have become essential. Yet, their dynamic nature makes tracking vulnerabilities a challenge. Exposure Command addresses this with features like custom registry keys for golden images, ensuring you can trace a risk back to its source and effectively prioritize remediation.

Commanding the cloud attack surface

The challenges of securing modern environments aren’t going away. Attack surfaces will continue to expand, threats will grow more sophisticated, and organizations will face increasing pressure to innovate securely.

Keep an eye out for more updates coming soon as we continue to invest in helping organizations effectively manage exposures from endpoint to cloud.

Rapid7 Extends AWS Support to Include Coverage for Newly Launched Resource Control Policies (RCPs)

In today’s cloud-first world, security and innovation go hand-in-hand. Rapid7 is excited to announce our support for Amazon Web Services’ (AWS) new Resource Control Policies (RCPs), a powerful tool designed to bolster security controls for organizations using AWS infrastructure. As a launch partner for this feature, Rapid7’s Exposure Command now extends its capabilities even further, helping organizations set precise, scalable guardrails within their AWS environments.

The need for strong guardrails in the cloud

Cloud platforms like AWS have transformed business agility by enabling rapid development, fast deployments, and real-time scalability. Yet, as organizations increase their reliance on cloud infrastructure, they face a heightened risk landscape. Rapid development cycles and AI-driven cloud services often result in more identities, permissions, and resources—all of which can lead to excessive access and increased risk.

The need for stringent guardrails has never been more urgent. Without them, organizations risk unintentionally exposing data or resources as they rapidly scale operations.

AWS addresses this challenge with two main types of policies:

  • Service Control Policies (SCPs): Manage access at the principal level (such as IAM users and roles), setting maximum permissions across the organization.
  • Resource Control Policies (RCPs): Limit access directly at the resource level, with special utility for restricting external access across the AWS environment.
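
To sketch the difference in enforcement point, here is what an RCP might look like, modeled on AWS’s published examples and expressed as a Python dict (the organization ID is a placeholder). It denies S3 access to any principal outside the organization, regardless of what individual resource policies allow:

```python
# Illustrative RCP, based on AWS's published examples: deny S3 access to
# principals outside the organization. "o-exampleorgid" is a placeholder.
example_rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyS3AccessOutsideOrg",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {
                "aws:PrincipalOrgID": "o-exampleorgid",
            }
        },
    }],
}
```

Because the policy attaches at the resource side, it acts as a backstop: even an over-permissive bucket policy cannot grant access to an external principal.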

Building on broad and deep AWS coverage with support for RCPs

Exposure Command supports AWS RCPs with features that enhance security posture and operational insight. A centralized view of RCP use within the organization enables teams to monitor the usage and governance of these policies, while cloud and security teams can easily search, inspect, and understand the impact of RCPs on cloud resources, making proactive adjustments guided by best-practice recommendations for adopting RCPs.

This RCP support further extends the robust identity analysis capabilities offered by Exposure Command and InsightCloudSec, enabling organizations to automatically refine permissions organization-wide, uncovering and addressing overly permissive roles or unused access. By doing so, security teams can implement and effectively scale least-privilege access (LPA) adherence across AWS resources, enhancing security without compromising agility.

Exposure Command and InsightCloudSec support broad AWS coverage that extends well beyond RCPs and SCPs, encompassing a suite of tools to secure AWS cloud resources:

  • Real-Time Visibility into AWS accounts, services, and resources.
  • Vulnerability Management for proactive scanning, identification, and remediation across cloud assets.
  • Context-Driven Risk Prioritization to address the highest-impact vulnerabilities based on risk, exploitability, and blast radius.
  • Automated Remediation for rapid policy updates and resource configurations.
  • Extensive and rapidly-expanding support for foundational AI/ML services from AWS to securely configure and track AI services usage with support for services including AWS Bedrock, SageMaker, Kendra, Comprehend, Polly and more.

Ready to take command of your AWS security?

As organizations embrace the cloud’s full potential, maintaining robust security while supporting rapid growth is critical. Rapid7’s Exposure Command, now with AWS RCP support, empowers security teams to adopt a zero-trust approach while maintaining the agility and flexibility that cloud environments demand. Together with AWS, we’re committed to helping organizations reduce risk, ensure compliance, and innovate confidently in the cloud.

Interested in learning more about RCPs and our expanded AWS support? Be sure to swing by booth #697 at AWS re:Invent to chat and see the Command Platform in action!

Modernizing Your VM Program with Rapid7 Exposure Command: A Path to Effective Continuous Threat Exposure Management

In today’s threat landscape, where cyber-attacks are increasingly sophisticated and pervasive, organizations face the daunting challenge of securing a constantly expanding attack surface. Traditional vulnerability management (VM) programs, while necessary, are no longer sufficient on their own. They often struggle to keep pace with the dynamic nature of threats and the complexity of modern IT environments.

This is where continuous threat exposure management (CTEM) comes into play – an approach that shifts the focus from merely identifying vulnerabilities to understanding and mitigating exposures across the entire attack surface.

Implementing a continuous threat and exposure management process

CTEM is a term originally coined by Gartner, which defined it as, “a five-stage approach that continuously exposes an organization's networks, systems, and assets to simulated attacks to identify vulnerabilities and weaknesses.”

The five stages of CTEM as defined by Gartner are:

  • Scoping: This involves understanding the full threat landscape by incorporating tools like external attack surface management (EASM) and network scanning. However, it emphasizes the need to think in terms of business context, focusing on crown jewels, critical applications, and understanding what matters most to the organization.
  • Discovery: This stage focuses on discovering assets and profiling the associated risks. It requires visibility into both cloud and on-premises environments and extends beyond identifying vulnerabilities to include coverage gaps, misconfigurations, and other security risks.
  • Prioritization: Since not all risks can be addressed simultaneously, this phase involves prioritizing issues based on a combination of factors like severity, exploitability, and potential business impact, to determine what should be tackled first.
  • Validation: This stage emphasizes investing in tools that help validate security controls and map potential attack paths. It includes the use of breach and attack simulation (BAS) tools, continuous assessment services, controls monitoring, and attack path mapping to test the effectiveness of existing defenses.
  • Mobilization: The final stage is about taking action. It includes both automation of responses and fostering cross-organizational alignment to ensure that remediation efforts are executed effectively and in sync with business priorities.
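
The five stages above can be sketched as one pass of a continuous loop. Everything here (data shapes, stage logic, the sample assets) is a toy placeholder; in practice each stage is backed by real tooling such as EASM, scanners, BAS, and ticketing integrations, plus organizational process:

```python
# Toy sketch of one pass through the five CTEM stages. All data and logic
# are illustrative placeholders, not a real implementation.

def ctem_cycle(assets: list) -> list:
    # 1. Scoping: keep only assets tied to critical business functions.
    scoped = [a for a in assets if a["business_critical"]]
    # 2. Discovery: keep assets with identified exposures
    #    (vulns, misconfigurations, coverage gaps).
    discovered = [a for a in scoped if a["exposures"]]
    # 3. Prioritization: rank by worst-case severity x exploitability.
    ranked = sorted(
        discovered,
        key=lambda a: max(e["severity"] * e["exploitability"]
                          for e in a["exposures"]),
        reverse=True)
    # 4. Validation: keep exposures a simulated attack confirms reachable.
    validated = [a for a in ranked
                 if any(e["reachable"] for e in a["exposures"])]
    # 5. Mobilization: hand the ordered worklist to remediation owners.
    return validated

assets = [
    {"name": "billing-db", "business_critical": True,
     "exposures": [{"severity": 9.8, "exploitability": 0.9,
                    "reachable": True}]},
    {"name": "dev-sandbox", "business_critical": False,
     "exposures": [{"severity": 7.5, "exploitability": 0.8,
                    "reachable": True}]},
]
print([a["name"] for a in ctem_cycle(assets)])  # ['billing-db']
```

The point of the sketch is the ordering of the filters: business scoping happens first, so effort is never spent validating or mobilizing around assets that don’t matter to the organization.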

This framework helps organizations continuously manage and reduce their exposure to threats in a way that is strategic and aligned to the business.

The role of exposure assessment platforms (EAPs) in CTEM

Exposure assessment platforms (EAPs) are essential to a successful CTEM program. They continuously identify and prioritize exposures, such as vulnerabilities and misconfigurations, across a broad range of asset classes. By consolidating and contextualizing data from various sources, EAPs provide a more comprehensive view of an organization’s risk landscape. This enables security teams to prioritize remediation efforts based on factors such as asset criticality, business impact, and the likelihood of exploitation.

Gartner's insights into EAPs underscore their importance in modern cybersecurity strategies. By delivering a centralized view of high-risk exposures, EAPs empower organizations to take decisive actions to prevent breaches. They also enhance operational efficiency by offering a unified dashboard that tracks the lifecycle of vulnerabilities and other exposures.

How Rapid7 Exposure Command supports modern vulnerability management programs

Exposure Command is designed to bridge the security-visibility gap many organizations face. By integrating the capabilities of an EAP into a comprehensive security platform, Exposure Command enables organizations to modernize their VM programs and align them with the principles of CTEM.

Exposure Command can help organizations achieve this transformation in a few ways:

  • Consolidated view of exposures from the inside out and outside in: Exposure Command provides a single, consolidated view of all assets and identified exposures from an internal and external perspective, including vulnerabilities, misconfigurations, and other risk signals. This unified view reduces the overhead associated with managing multiple tools and platforms, enabling security teams to focus on what matters most: mitigating the most critical threats.
  • Vendor-agnostic approach: As organizations adopt a CTEM approach, they require tools that can evaluate a wide range of exposure telemetry, including security control configurations. Exposure Command excels in this area by leveraging data from existing endpoint and network investments to create a more accurate, situational picture of the organization’s risk landscape. This holistic view is crucial for making informed decisions about where to focus remediation efforts.
  • Contextualized risk prioritization: Traditional VM programs often rely on CVSS scores to prioritize vulnerabilities, which can lead to misaligned efforts. Exposure Command, however, incorporates threat intelligence, asset criticality, and business impact into its risk-prioritization algorithms. This results in a more accurate and actionable understanding of which exposures pose the greatest risk to the organization.
  • Identify exploitability and potential for lateral movement: Exposure Command provides the contextual asset enrichment that enables effective threat detection, investigation, and response. The platform showcases how an attacker might exploit vulnerabilities and provides guidance on how to prevent such incidents.
  • Automated response workflows and deep ecosystem integration: One of the key benefits of Exposure Command is its ability to automate and streamline workflows. By integrating with existing security tools and platforms, Exposure Command can automatically ingest and analyze exposure data, reducing the manual effort required to maintain a VM program. This automation not only improves efficiency but also ensures security teams have access to the most up-to-date information. Increasingly, teams must also contend with non-patchable systems; here, deep integration and bi-directional workflows enable more effective mobilization across teams, delivering actionable feedback to the people in the organization who can execute the necessary remediation actions.
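
As a toy illustration of the contrast with CVSS-only ranking, consider how two findings reorder once threat intelligence, asset criticality, and exposure are factored in. The weights and field names below are hypothetical, chosen only to show the shape of the idea:

```python
# Toy contextual-risk scoring: the same CVSS score lands very differently
# once context is applied. Weights and fields are hypothetical.

def contextual_risk(finding: dict) -> float:
    score = finding["cvss"]
    if finding["actively_exploited"]:       # threat intelligence signal
        score *= 1.5
    score *= finding["asset_criticality"]   # 0.5 (low) .. 1.5 (crown jewel)
    if finding["internet_facing"]:          # exposure / blast radius
        score *= 1.2
    return round(score, 1)

low_context = {"cvss": 9.0, "actively_exploited": False,
               "asset_criticality": 0.5, "internet_facing": False}
high_context = {"cvss": 7.2, "actively_exploited": True,
                "asset_criticality": 1.5, "internet_facing": True}

print(contextual_risk(low_context))   # 4.5  -- high CVSS, little context
print(contextual_risk(high_context))  # 19.4 -- lower CVSS, real risk
```

A CVSS-only queue would work the 9.0 first; with context applied, the actively exploited, internet-facing finding on a critical asset jumps to the top.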

While the benefits of Exposure Command are clear, it’s important to recognize that its effectiveness is tied to the maturity of the organization’s CTEM processes. If these processes are broken or immature, the value of Exposure Command may be limited. However, by adopting an outcome-driven approach that scopes the most critical aspects of the business and correlates asset context with dynamic risk ratings, organizations can maximize the benefits of Exposure Command.

Furthermore, the platform’s ability to integrate with a wide range of tools ensures it can enhance existing security programs, rather than requiring a complete overhaul. This makes it an ideal solution for organizations looking to modernize their VM programs and adopt a more proactive approach to threat management. As you look to strengthen your organization’s cybersecurity posture, consider how Rapid7 Exposure Command can help you bridge the security visibility gap and take a more proactive approach to managing your threat landscape.

Black Hat 2024: Key Takeaways and Industry Trends

What a week! As Hacker Summer Camp shifts into the rearview, it’s time to reflect on what we learned and the people we had the pleasure of meeting out in Las Vegas. As always, the cybersecurity community at Black Hat 2024 was buzzing with the latest innovations and insights from their favorite vendors, industry speakers, and training sessions. There was no shortage of information covered throughout the week, and with the sheer volume of it, it can be hard to catch everything. In this post, I’ll do my part by summarizing some of the key themes and takeaways from the event. So, with that, let’s get right to it.

  1. The rise of advanced threats: AI and machine learning at the forefront. One of the most striking themes at Black Hat 2024 was the sophistication of modern cyber threats. This year, sessions highlighted how attackers are leveraging artificial intelligence (AI) and machine learning (ML) to lower the barrier to entry, increase the scale and impact of attacks and circumvent traditional controls. From deepfake technology used in phishing schemes to AI-driven automated attacks, the industry is witnessing a new era of cyber threats that require equally advanced defensive strategies and continuous learning to ensure security teams keep pace with emerging trends and threat vectors.
  2. Zero trust and identity: the gradual shift towards never trust, always verify. Zero Trust was a major focal point at this year's event. Experts and vendors alike emphasized the importance of adopting a Zero Trust approach to cybersecurity. This model, which operates on the principle of “never trust, always verify,” aims to minimize trust within and outside the network. The shift towards Zero Trust reflects the growing need for more robust security frameworks that can handle today’s complex threat environment.
  3. Software supply chain security: extending your defense beyond the perimeter. Software supply chain attacks were a hot topic, underscoring the need for organizations to extend their security measures beyond their immediate environment. Black Hat 2024 reinforced the importance of securing not just your own systems but also those of your vendors, partners and the software dependencies that modern applications are built on. Discussions centered on strategies for improving supply chain resilience, shifting security visibility and gates earlier in the development lifecycle and the role of continuous monitoring in mitigating these risks over time.
  4. Emerging technologies: navigating the new cybersecurity landscape. Black Hat 2024 showcased numerous emerging technologies and their implications for cybersecurity. Sessions explored the security challenges associated with Generative AI, blockchain, the Internet of Things (IoT) and Quantum Computing. As these technologies evolve, they bring both new opportunities and new risks, making it crucial for security professionals to stay informed and prepared.
  5. Training and awareness: building a culture of security. Many sessions emphasized the critical role of security training and awareness programs. With human error often cited as a leading cause of security incidents, organizations are increasingly focusing on educating their employees and fostering a culture of security awareness. Training programs that address current threats and promote best practices are becoming integral to comprehensive security strategies.

Keynote sessions did not disappoint

The keynote sessions at Black Hat are always one of my personal favorite parts, and this year was no exception. While there were a number of sessions I found insightful and well worth the watch, one in particular that stood out was Thursday’s Fireside chat with Moxie Marlinspike, the Founder of Signal, and Jeff Moss, the Founder of Black Hat and member of the U.S. Department of Homeland Security Advisory Council. During the session they covered a range of topics, but chief among them was the future of privacy and the balance between privacy and security.

Product launches: Surface Command and Exposure Command unveiled

Beyond rich discussions and cutting-edge presentations, we made some significant waves with the launch of Surface Command and Exposure Command, two exciting new product offerings designed to unify your attack surface and deliver effective hybrid risk management. We covered these new products a little more in-depth here, but to recap:

Surface Command: unifying your attack surface

Surface Command offers a unified view of both internal and external attack surfaces, breaking down data silos and providing a comprehensive picture of your environment. This tool helps organizations identify and address vulnerabilities more effectively.

Exposure Command: prioritizing critical threats with precision

Exposure Command extends these capabilities by enriching asset data with high-fidelity risk context, enabling teams to prioritize and address the most critical threats with greater precision.

These launches are a testament to Rapid7’s commitment to advancing cybersecurity and providing our customers with the tools they need to stay ahead of potential threats, and represent the next chapter in our mission to enable security teams to take command of their attack surface.

What’s Next for Rapid7?

Black Hat 2024 was a microcosm of the dynamic and rapidly evolving nature of the cybersecurity landscape. The insights gained and the innovations showcased will undoubtedly influence the industry’s approach to security in the coming years. As we move forward, the lessons from Black Hat and the invaluable direct feedback will inform our strategy and drive the development of new capabilities to meet the ever-changing demands of our customers and the industry at large.

As we wrap up our experiences from Black Hat 2024, it's clear that the cybersecurity landscape is evolving rapidly, with new threats and technologies shaping the way we approach security. The insights gained from the event, along with the direct feedback from industry peers, will be instrumental in guiding our strategy at Rapid7. We're excited to continue innovating and leading the charge in helping organizations take command of their attack surfaces. Stay tuned as we build on these insights to deliver even more powerful solutions in the coming months.

Casting a Light on Shadow IT in Cloud Environments

The term “Shadow IT” refers to the use of systems, devices, software, applications, and services without explicit IT approval. This typically occurs when employees adopt consumer products to increase productivity or just make their lives easier. This type of Shadow IT can be easily addressed by implementing policies that limit use of consumer products and services. However, Shadow IT can also occur at a cloud infrastructure level. This can be exceedingly hard for organizations to get a handle on.

Historically, when teams needed to provision infrastructure resources, this required review and approval of a centralized IT team—who ultimately had final say on whether or not something could be provisioned. Nowadays, cloud has democratized ownership of resources to teams across the organization, and most organizations no longer require their development teams to request resources in the same manner. Instead, developers are empowered to provision the resources that they need to get their jobs done and ship code efficiently.

This dynamic is critical to achieving the promise of speed and efficiency that cloud, and more specifically DevOps methodologies, offer. The tradeoff, however, is control. This paradigm shift means that development teams are spinning up resources without the security team’s knowledge. The adage “you can’t secure what you can’t see” comes into play here: you’re now blind to the potential risk these resources could pose to your organization if they were configured improperly.

Cloud Shadow IT risks

Blind spots: As noted above, since security teams are unaware of Shadow IT assets, security vulnerabilities inevitably go unaddressed. Dev teams may not understand (or may simply ignore) the importance of cloud security updates, patching, and other maintenance for these assets.

Unprotected data: Unmitigated vulnerabilities in these assets can put businesses at risk of data breaches or leaks if cloud resources are accessed by unauthorized users. Additionally, this data will not be protected by centralized backups, making it difficult, if not impossible, to recover.

Compliance problems: Most compliance regulations include requirements for processing, storing, and securing customers’ data. Since businesses have no oversight of data stored on Shadow IT assets, they can fall out of compliance without even knowing it.

Addressing Cloud Shadow IT

One way to address Shadow IT in cloud environments is to implement a cloud risk and compliance management platform like Rapid7’s InsightCloudSec.

InsightCloudSec continuously assesses your entire cloud environment, whether in a single cloud or across multiple clouds, and can detect changes to your environment—such as the creation of a new resource—in less than 60 seconds with event-driven harvesting.

The platform doesn’t stop at visibility, however. Out of the box, users get access to 30+ compliance packs aligned to common industry standards like NIST and the CIS Benchmarks, as well as regulatory frameworks like HIPAA, PCI DSS, and GDPR. Teams can also tailor compliance policies to their specific business needs with custom packs, which allow you to set exceptions and/or add policies that aren’t included in the frameworks you choose, or are required, to adhere to.

When a resource is spun up, the platform detects it in real-time and automatically identifies whether or not it is in compliance with organization policies. Because InsightCloudSec offers native, no-code automation, teams are able to build bots that take immediate action whenever Shadow IT creeps into their environment by either adjusting configurations and permissions to regain compliance or even deleting the resource altogether if you so choose.
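
The bot pattern described above can be sketched as an event handler: when a resource-creation event arrives, evaluate it against policy and either flag or remediate. The event shape, policy, and actions below are simplified stand-ins for what an InsightCloudSec bot configures without code:

```python
# Illustrative sketch of an event-driven remediation bot. The event shape,
# policy check, and actions are hypothetical simplifications.

def handle_event(event: dict) -> str:
    """Evaluate a resource-creation event and flag or remediate it."""
    resource = event["resource"]
    if resource["type"] == "s3_bucket" and resource.get("public", False):
        if event.get("auto_remediate", True):
            resource["public"] = False  # adjust configuration in place
            return "remediated: public access removed"
        return "flagged: bucket is public"
    return "compliant"

event = {"action": "create",
         "resource": {"type": "s3_bucket", "name": "team-data",
                      "public": True}}
print(handle_event(event))  # remediated: public access removed
```

The design choice worth noting is the `auto_remediate` switch: teams often start in flag-only mode to build confidence, then turn on automatic remediation once the policy is trusted.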

To learn more, check out our on-demand demo.

Aligning to AWS Foundational Security Best Practices With InsightCloudSec

Written by Ryan Blanchard and James Alaniz

When an organization is moving their IT infrastructure to the cloud or expanding with net-new investment, one of the hardest tasks for the security team is to identify and establish the proper security policies and controls to keep their cloud environments secure and the applications and sensitive data they host safe.

This can be a challenge, particularly when teams lack the relevant experience and expertise to define such controls themselves, often looking to peers and the cloud service providers themselves for guidance. The good news for folks in this position is that the cloud providers have answered the call by providing curated sets of security controls, including recommended resource configurations and access policies to provide some clarity. In the case of AWS, this takes the form of the AWS Foundational Security Best Practices.

What are AWS Foundational Security Best Practices?

The AWS Foundational Security Best Practices standard is a set of controls intended as a framework for security teams to establish effective cloud security standards for their organization. This standard provides actionable and prescriptive guidance on how to improve and maintain your organization’s security posture, with controls spanning a wide variety of AWS services.

If you’re an organization that is just getting going in the cloud and has landed on AWS as your platform of choice, this standard is undoubtedly a good place to start.

Enforcing AWS Foundational Security Best Practices can be a challenge

So, you’ve now been armed with a foundational guide to establishing a strong security posture for your cloud. Simple, right? Well, it’s important to be aware before you get going that actually implementing and operationalizing these best practices can be easier said than done. This is especially true if you’re working with a large, enterprise-scale environment.

One of the things that make it challenging to manage compliance with these best practices (or any compliance framework, for that matter) is the fact that the cloud is increasingly distributed, both from a physical perspective and in terms of adoption, access, and usage. This makes it hard to track and manage access permissions across your various business units, and also makes it difficult to understand how individual teams and users are doing in complying with organizational policies and standards.

Further complicating the matter is the reality that not all of these best practices are necessarily right for your business. There could be any number of reasons that your entire cloud environment, or even specific resources, workloads, or accounts, should be exempt from certain policies — or even subject to additional controls that aren’t captured in the AWS Foundational Security Best Practices, often for regulatory purposes.

This means you’ll want a security solution that has the ability to not just slice, dice, and report on compliance at the organization and account levels, but also lets you customize the policy sets based on what makes sense for you and your business needs. If not, you’re going to be at risk of constantly dealing with false positives and spending time working through which compliance issues need your teams’ attention.

Highlights from the AWS Foundational Security Best Practices Compliance Pack

There are hundreds of controls in the AWS Foundational Security Best Practices, and each has been included for good reason. In the interest of time, this post won’t detail all of them, but will instead highlight a few controls that address issues that unfortunately pop up far too often.

KMS.3 — AWS KMS Keys should not be unintentionally deleted

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the cryptographic keys used to encrypt and protect your data. It’s possible for keys to be inadvertently deleted. This can be problematic, because once keys are deleted they can never be recovered, and the data encrypted under that key is also permanently unrecoverable. When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to correct an error or reverse the decision to delete. To help avoid unintentional deletion of KMS keys, the scheduled deletion can be canceled at any point during the waiting period and the KMS key will not be deleted.

Related InsightCloudSec Check: “Encryption Key with Pending Deletion”
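
The logic behind such a check can be sketched over key metadata. The field names loosely mirror what AWS KMS returns from `DescribeKey`; the code itself is illustrative, not the product’s:

```python
# Illustrative check: surface KMS keys scheduled for deletion so teams can
# cancel the deletion during the mandatory waiting period. Field names
# loosely mirror AWS KMS DescribeKey metadata.

def keys_pending_deletion(keys: list) -> list:
    """Return IDs of keys in the PendingDeletion state."""
    return [k["KeyId"] for k in keys if k["KeyState"] == "PendingDeletion"]

keys = [
    {"KeyId": "1234abcd", "KeyState": "Enabled"},
    {"KeyId": "5678efgh", "KeyState": "PendingDeletion"},
]
print(keys_pending_deletion(keys))  # ['5678efgh']
```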

S3.1 — S3 Block Public Access setting should be enabled

As you’d expect, this check focuses on identifying S3 buckets that are available to the public internet. One of the first things you’ll want to be sure of is that you’re not leaving your sensitive data open to anyone with internet access. You might be surprised how often this happens.

Related InsightCloudSec Check: “Storage Container Exposed to the Public”

CloudFront.1 — CloudFront distributions should have origin access identity enabled

While you typically access content from CloudFront by requesting the specific object — or objects — you’re looking for, it is possible for someone to request the root URL instead. To avoid this, AWS allows you to configure CloudFront to return a “default root object” when a request for the root URL is made. This is critical, because failing to define a default root object passes requests to your origin server. If you are using an S3 bucket as your origin, the user would gain access to a complete list of the contents of your bucket.

Related InsightCloudSec Check: “Content Delivery Network Without Default Root Object”
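To illustrate, the sketch below flags distributions with no default root object configured. The `DefaultRootObject` field mirrors the real CloudFront `DistributionConfig`, but the function is a simplified illustration of the check, not the product’s implementation:

```python
# Illustrative check: find CloudFront distributions that would pass
# root-URL requests straight through to the origin because no
# default root object is set.

def missing_default_root_object(distributions):
    """Return the IDs of distributions lacking a default root object."""
    return [
        d["Id"]
        for d in distributions
        if not d.get("DistributionConfig", {}).get("DefaultRootObject")
    ]

dists = [
    {"Id": "E1", "DistributionConfig": {"DefaultRootObject": "index.html"}},
    {"Id": "E2", "DistributionConfig": {"DefaultRootObject": ""}},
]
print(missing_default_root_object(dists))  # ['E2']
```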

Lambda.1 — Lambda function policies should prohibit public access

As with the publicly accessible S3 buckets highlighted earlier, Lambda functions can also be configured in a way that allows public users to access or invoke them. Keep an eye out to make sure you’re not inadvertently giving people outside your organization access to, and control of, your functions.

Related InsightCloudSec Check: “Serverless Function Exposed to the Public”
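As an illustration, the sketch below scans a Lambda resource policy for statements that allow everyone to act on the function. The policy shape follows the standard IAM policy document format; the check itself is a simplified stand-in for the real control:

```python
# Illustrative check for Lambda.1: a resource policy is public if any
# Allow statement names the wildcard principal.

def is_publicly_invocable(policy):
    """True if any Allow statement grants access to everyone."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        ):
            return True
    return False

public_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "lambda:InvokeFunction"}
    ]
}
print(is_publicly_invocable(public_policy))  # True
```

A real scanner would also need to weigh `Condition` blocks, which can legitimately narrow a wildcard principal; this sketch deliberately ignores them.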

CodeBuild.5 — CodeBuild project environments should not have privileged mode enabled

By default, Docker containers are denied access to host devices. Enabling privileged mode removes that restriction, granting a build project's Docker container access to all devices and the ability to manage objects such as images, containers, networks, and volumes. To avoid unintended access to or deletion of critical resources, privileged mode should never be enabled unless the build project is actually used to build Docker images.

Related InsightCloudSec Check: “Build Project With Privileged Mode Enabled”
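A quick sketch of the check, assuming project dictionaries shaped like the CodeBuild API’s project configuration (`environment.privilegedMode`); the helper is illustrative only:

```python
# Illustrative check for CodeBuild.5: list projects whose build
# environment has privileged mode turned on.

def privileged_projects(projects):
    """Return the names of projects running with privileged mode."""
    return [
        p["name"]
        for p in projects
        if p.get("environment", {}).get("privilegedMode", False)
    ]

projects = [
    {"name": "app-build", "environment": {"privilegedMode": False}},
    {"name": "image-build", "environment": {"privilegedMode": True}},
]
print(privileged_projects(projects))  # ['image-build']
```

A flagged project like `image-build` isn’t automatically a violation; the follow-up question is whether it actually builds Docker images.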

Continuously enforce AWS Foundational Security Best Practices with InsightCloudSec

InsightCloudSec allows security teams to establish and continuously measure compliance against organizational policies, whether they’re based on service provider best practices like those provided by AWS or tailored to specific business needs. This is accomplished through the use of compliance packs. A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework or industry or provider best practices. The platform comes out of the box with 30+ compliance packs, including a dedicated pack for the AWS Foundational Security Best Practices.

InsightCloudSec continuously assesses your entire AWS environment for compliance with AWS’s recommendations, and detects non-compliant resources within minutes after they are created or an unapproved change is made. If you so choose, you can make use of the platform’s native, no-code automation to remediate the issue — either via deletion or by adjusting the configuration or permissions — without any human intervention.

If you’re interested in learning more about how InsightCloudSec helps continuously and automatically enforce cloud security standards, be sure to check out our bi-weekly demo series that goes live every other Wednesday at 1pm EST!

Adapting existing VM programs to regain control

Stop me if you’ve heard this before. The scale, speed, and complexity of cloud environments — particularly when you introduce containers and microservices — have made the lives of security professionals immensely harder. It may sound trite, but the reason we keep hearing this refrain is that, unfortunately, it’s true. In case you missed it, we discussed how cloud adoption creates a rapidly expanding attack surface in our last post.

One could argue that no subgroup of security professionals is feeling this pain more than the VM team. From elevated expectations, processes, and tooling to pressured budgets, the scale and complexity have made identifying and addressing vulnerabilities in cloud applications, and the infrastructure that supports them, a seemingly impossible task. During a recent webinar, Rapid7’s Cindy Stanton (SVP, Product and Customer Marketing) and Peter Scott (VP, Product Marketing) dove into this very subject.

Cindy starts off this section by unpacking why modern cloud environments require a fundamentally different approach to implementing and executing a vulnerability management program. The highly ephemeral nature of cloud resources, with upwards of 20% of your infrastructure spun down and replaced on a daily basis, makes maintaining continuous, real-time visibility non-negotiable. Teams are also being tasked with managing exponentially larger environments, often consisting of tens of thousands of instances at any given moment.


To make matters worse, it doesn’t stop at the technical hurdles. Cindy breaks down how ownership of resources, and responsibility for addressing vulnerabilities once they’re identified, has shifted. With traditional approaches, a centralized group (typically IT) owned and was ultimately responsible for the integrity of all resources. Today, the self-serve, democratized nature of cloud environments makes it extremely difficult to track who owns a given resource or workload and who is ultimately responsible for remediating an issue when one arises.


Cindy goes on to outline how drastically remediation processes need to shift when dealing with immutable infrastructure (i.e., containers), and how that requires a shift in mindset as well. Instead of playing whack-a-mole with vulnerabilities in production workloads, containers introduce a fundamentally different approach: patches and updates are made to base images — often referred to as golden images — and new workloads are then built from the hardened image, rather than the existing workload being patched and retained. As Cindy so eloquently puts it, “the ‘what’ I have to do is relatively unchanged, but the ‘how’ really has to shift to adjust to this different environment.”


Peter follows up Cindy’s assessment of how cloud impacts and forces a fundamentally different approach to VM programs by providing some recommendations and best practices to adapt your program to this new paradigm as well as how to operationalize cloud vulnerability management across your organization. We’ll cover these best practices in our next blog in this series, including shifting your VM program left to catch vulnerabilities earlier on in the development process. We will also discuss enforcing proper tagging strategies and the use of automation to eliminate repetitive tasks and accelerate remediation times. If you’re interested in learning more about Rapid7's InsightCloudSec solution be sure to check out our bi-weekly demo, which goes live every other Wednesday at 1pm EST. Of course, you can always watch the complete replay of this webinar anytime as well!

Cloud IAM Done Right: How LPA Helps Significantly Reduce Cloud Risk

Today almost all cloud users, roles, and identities are overly permissive. This leads to repeated headlines and forensic reports of attackers leveraging weak identity postures to gain a foothold, and then moving laterally within an organization's modern cloud environment.

This has become a prevalent theme in securing the cloud, where identity and access management (IAM) plays a much larger role in governing access than it does in traditional infrastructure. However, the cloud was built for innovation and speed, often with little consideration as to whether the access granted is appropriate. The end result is an ever-growing, interconnected attack surface that desperately needs to be pared down.

To govern and minimize IAM risk in the cloud, organizations need to adopt the principle of least privilege access (LPA). Rapid7 is pleased to announce the release of LPA Policy Remediation as part of its InsightCloudSec product line. If you're not familiar, InsightCloudSec is a fully integrated cloud-native security platform (CNSP) that enables organizations to drive cloud security forward through continuous security and compliance. The platform provides real-time visibility into everything running across your cloud environment(s), detects and prioritizes risk signals (including those associated with IAM policies, privileges, and entitlements), and provides native automation to return resources to a known-good state whenever compliance drift is identified.

With the release of LPA Policy Remediation, InsightCloudSec enables customers to take action when overly permissive roles or unused access is detected, automatically modifying the existing policy to align granted permissions with actual usage. Any actions that aren't utilized over a 90-day period will be excluded from the new policy.
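The trimming logic described above can be sketched as follows. The document shape follows standard IAM policy format; the generator is a simplified illustration of the idea, not the product's actual implementation:

```python
# Illustrative least-privilege trimming: keep only the granted
# actions that were actually used in the lookback window
# (e.g., the last 90 days), and emit a right-sized policy.

def trim_policy(granted_actions, used_actions):
    """Drop any granted action with no recorded usage."""
    kept = sorted(set(granted_actions) & set(used_actions))
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": kept, "Resource": "*"}
        ],
    }

granted = ["s3:GetObject", "s3:PutObject", "s3:DeleteBucket"]
used = ["s3:GetObject", "s3:PutObject"]
print(trim_policy(granted, used)["Statement"][0]["Action"])
# ['s3:GetObject', 's3:PutObject']
```

A production version would also scope `Resource` and preserve conditions; the point here is simply the intersection of granted and observed permissions.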

Permissions can't become a point of friction for developers

In today's world of continuous, fast-paced innovation, being able to move quickly and without friction is a key ingredient to delivering for customers and remaining competitive. As a result, developers are often granted “godlike” access to cloud services in an effort to eliminate any chance they'll hit a roadblock later on. Peeling that back is a daunting task.

So how do you do that? Adopt the principle of least privilege access, which holds that a user should be given only the privileges needed to perform their function or task. If a user does not need a specific permission, they should not have it.

Identity LPA requires dynamic assessment

The first step in executing on LPA is to provide evidence to your dev teams that there is a problem to be solved. When first collaborating with your development partners, a clear report of which permissions users have leveraged and which they have not can help move the discussion forward. If “Sam” has not used [insert permission] in the past 90 days, does Sam really need this permission?

InsightCloudSec tracks permission usage and provides reporting over time across all your clouds, making it a handy tool for commencing the discussion and laying the groundwork for continuous evaluation of the delta between used and unused permissions. This is critical: while unused permissions may seem benign at first glance, they play a significant role in expanding your organization's attack surface.

Effective cloud IAM requires prioritization

The continuous evaluation of cloud user activity compared to the permissions they have been previously granted will give security teams visibility into what permissions are going unused, as well as permissions that have been inappropriately escalated. This then provides a triggering point to investigate and ultimately enforce the principle of least privilege.

InsightCloudSec can proactively alert you to overly permissive access. This way security teams are able to continuously establish controls, and also respond to risk in real time based on suspicious activity or compliance drift.

Like with most security problems, prioritization is a key element to success. InsightCloudSec helps security teams prioritize which users to focus on by identifying which unused permissions pose the greatest risk based on business context. Not all permissions issues are equal from a risk perspective. For example, being able to escalate your privileges, exfiltrate data, or make modifications to security groups are privileged actions, and are often leveraged by threat actors when conducting an attack.

Taking action

Ultimately, you want to modify the policy of the user to match the user's actual needs and access patterns. To ensure the insights derived from dynamically monitoring cloud access patterns and permissions are actionable, InsightCloudSec provides comprehensive reporting capabilities (JSON, report exports, etc.) that help streamline the response process to harden your IAM risk posture.

In an upcoming release, customers will be able to set up automation via “bots” to take immediate action on those insights. This will streamline remediation even further, reducing dependency on manual intervention and, in turn, the likelihood of human error.

When done right, LPA significantly reduces cloud risk

When done right, establishing and enforcing least-privilege access enables security teams to identify unused permissions and overly permissive roles and report them to your development teams. This is a key step in demonstrating the opportunity to reduce an organization's attack surface and improve its risk posture. Limiting high-risk permissions to the users who truly need them helps reduce the blast radius in the event of a breach.

InsightCloudSec's LPA Policy Remediation module is available today and leverages all your other cloud data for context and risk prioritization. If you're interested in learning more about InsightCloudSec, and seeing how the solution can help your team detect and mitigate risk in your cloud environments, be sure to register for our bi-weekly demo series, which goes live every other Wednesday at 1pm EST.

No Damsels in Distress: How Media and Entertainment Companies Can Secure Data and Content

Streaming is king in the media and entertainment industry. According to the Motion Picture Association’s Theatrical and Home Entertainment Market Environment Report, the global number of streaming subscribers grew to 1.3 billion in 2021. Consumer demand for immediate digital delivery is skyrocketing. Producing high-quality content at scale is a challenge media companies must step up to on a daily basis. One thing is for sure: Meeting these expectations would be unmanageable left to human hands alone.

Fortunately, cloud adoption has enabled entertainment companies to meet mounting customer and business needs more efficiently. With the high-speed workflow and delivery processes that the cloud enables, distributing direct-to-consumer is now the industry standard.

As media and entertainment companies grow their cloud footprints, they’re also opening themselves up to vulnerabilities threat actors can exploit — and the potential consequences can be financially devastating.

Balancing cloud security with production speed

In 2021, a Twitch data breach showed the impact cyberattacks can have on intellectual property at media and entertainment companies. Attackers stole 128 gigabytes of data from the popular streaming site and posted the collection on 4chan. The released torrent file contained:

  • The history of Twitch’s source code
  • Programs Twitch used to test its own vulnerabilities
  • Proprietary software development kits
  • An unreleased online games store intended to compete with Steam
  • Amazon Game Studios’ next title

Ouch. In mere moments, the attackers stole a trove of sensitive IP, along with insight into Twitch’s own security testing. How did they manage this? By exploiting a single misconfigured server.

Before you think, "Well, that couldn’t happen to us," consider that cloud misconfigurations are the most common source of data breaches.

Yet, media and entertainment businesses can’t afford to slow down their adoption and usage of public cloud infrastructure if they hope to remain relevant. Consumers demand timely content, whether it’s the latest midnight album drop from Taylor Swift or breaking news on the war in Ukraine.

Media and entertainment organizations must mature their cloud security postures alongside their content delivery and production processes to maintain momentum while protecting their most valuable resources: intellectual property, content, and customer data.

We’ve outlined three key cloud security strategies media and entertainment companies can follow to secure their data in the cloud.

1. Expand and consolidate visibility

You can’t protect what you can’t see. There are myriad production, technical, and creative teams working on a host of projects at a media and entertainment company – and they all interact with cloud and container environments throughout their workflow. This opens the door for potential misconfigurations (and then breaches) if these environments aren’t carefully tracked, secured, or even known about.

Here are some key considerations to make:

  • Do you know exactly what platforms are being used across your organization?
  • Do you know how they are being used and whether they are secure?

Most enterprises lack visibility into all the cloud and container environments their teams use throughout each step of their digital supply chain. Implementing a system to continuously monitor all cloud and container services gives you better insight into associated risks. Putting these processes into place will enable you to tightly monitor – and therefore protect – your growing cloud footprint.

How to get started: Improve visibility by introducing a plan for cloud workload protection.

2. Shift left to prevent risk earlier

Cloud, container, and other infrastructure misconfigurations are a major area of concern for most security teams. More than 30% of the data breaches studied in our 2022 Cloud Misconfigurations Report were caused by relaxed security settings and misconfigurations in the cloud. These misconfigurations are alarmingly common across industries and can cause critical exposures, as evidenced in the following example:

In 2021, a server misconfiguration on Sky.com (a UK-based media company) revealed access credentials to a production-level database and IP addresses to development endpoints. This meant that anyone with those released credentials or addresses could easily access a mountain of proprietary data from the Comcast subsidiary.

One way to avoid these types of breaches is to prevent misconfigurations in your Infrastructure as Code (IaC) templates. Scanning IaC templates, such as Terraform, reduces the likelihood of cloud misconfigurations by ensuring that any templates that are built and deployed are already vetted against the same security and compliance checks as your production cloud infrastructure and services.
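To make the idea concrete, here is a minimal, illustrative check run against Terraform plan JSON (as produced by `terraform show -json`), flagging S3 bucket ACLs declared public before anything is deployed. The resource paths mirror real plan output, but this is a sketch of the concept, not a production IaC scanner:

```python
# Illustrative IaC check: walk a Terraform plan's planned values and
# flag aws_s3_bucket_acl resources that declare a public ACL, so the
# misconfiguration is caught in CI rather than in production.

def public_bucket_acls(plan):
    """Return (resource address, acl) pairs for public bucket ACLs."""
    findings = []
    root = plan.get("planned_values", {}).get("root_module", {})
    for res in root.get("resources", []):
        if res.get("type") == "aws_s3_bucket_acl":
            acl = res.get("values", {}).get("acl")
            if acl in ("public-read", "public-read-write"):
                findings.append((res.get("address"), acl))
    return findings

plan = {
    "planned_values": {
        "root_module": {
            "resources": [
                {
                    "type": "aws_s3_bucket_acl",
                    "address": "aws_s3_bucket_acl.assets",
                    "values": {"acl": "public-read"},
                }
            ]
        }
    }
}
print(public_bucket_acls(plan))
# [('aws_s3_bucket_acl.assets', 'public-read')]
```

Wired into a CI step that fails the build on any finding, a check like this turns the “vet templates before deployment” advice above into an enforced gate.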

By leveraging IaC scanning that provides fast, context-rich results to resource owners, media and entertainment organizations can build a stronger security foundation while reducing friction across DevOps and security teams and cutting down on the number of 11th-hour fixes. Solving problems in the CI/CD pipeline improves efficiency by correcting issues once rather than fixing them over and over again at runtime.

How to get started: Learn about the first step of shifting left with Infrastructure as Code in the CI/CD pipeline.

3. Create a culture of security

As the saying goes, a chain is only as strong as its weakest link. Cloud security practices won’t be as effective or efficient if an organization’s workforce doesn’t understand and value secure processes. A culture of collaboration between DevOps and security is a good start, but the entire organization must understand and uphold security best practices.

Fostering a culture that prioritizes the protection of digital content empowers all parts of (and people in) your supply chain to work with secure practices front-of-mind.

What’s the tell-tale sign that you’ve created a culture of security? When all employees, no matter their department or role, see it as simply another part of their job. This is obviously not to say that you need to turn all employees, or even developers, into security experts, but they should understand how security influences their role and the negative consequences to the business if security recommendations are avoided or ignored.

How to get started: Share this curated set of resources on cloud security for media and entertainment companies with your team.

Achieving continuous content security

Media and entertainment companies can’t afford to slow down if they hope to meet consumer demands. They can’t afford to neglect security, either, if they want to maintain consumer trust.

Remember, the ultimate offense is a strong defense. Building security into your cloud infrastructure processes from the beginning dramatically decreases the odds that an attacker will find a chink in your armor. Moreover, identifying and remediating security issues sooner plays a critical role in protecting consumer data and your intellectual property and other media investments.

Want to learn more about how media and entertainment companies can strengthen their cloud security postures?

Read our eBook: Protecting IP and Consumer Data in the Streaming Age: A Guide to Cloud Security for Digital Media & Entertainment.

Shift Left: Secure Your Innovation Pipeline

There’s no shortage of buzzwords in the tech world. Some are purely marketing spin. But others are colloquial ways for the industry to talk about complex topics that have a massive impact on how organizations and teams drive innovation and work more efficiently. Here at Rapid7, we believe the “shift left” movement very much falls in the latter category.

Because we see shifting left as so critical to an effective cloud security strategy, we’re kicking off a new blog series covering how organizations can seamlessly incorporate security best practices and technologies into their existing DevOps workflows — and, of course, how InsightCloudSec and the brilliant team here at Rapid7 can help.

What does “shift left” actually mean?

For those who might not be familiar with the term, “shift left” is closely tied to DevOps methodologies. The idea is to “shift” tasks that have traditionally been performed by centralized, dedicated operations teams earlier into the software development life cycle (SDLC). In the case of security, this means weaving security guardrails and checks into development and fixing problems at the source, rather than waiting to do so at deployment or in production.


Historically, security was centered around applying checks and scanning for known vulnerabilities after software was built as part of the test and release processes. While this is an important step in the cycle, there are many instances in which this is too late to begin thinking about the integrity of your software and supporting infrastructure — particularly as organizations adopt DevOps practices, resources are increasingly provisioned declaratively, and the development cycle becomes a more iterative, continuous process.

Our philosophy on shift left

One of the most commonly cited concerns we hear from organizations attempting to shift left is the potential to create a bottleneck in development, as developers need to complete additional steps to clear compliance and security hurdles. This is a crucial consideration, given that accelerating software development and increasing efficiency is often the driving force behind adopting DevOps practices in the first place. Security must catch up to the pace of development, not slow it down.

Shift left is very much about decentralizing security to match the speed and scale of the cloud, and when done poorly, it can erode trust and be viewed as a gating factor to releasing high-quality code. This is what drives Rapid7’s fundamental belief that in order to effectively shift security left, you need to avoid adding friction into the process, and instead embrace the developer experience and meet devs where they are today.

How do you accomplish this? Here are a few core concepts that we at Rapid7 endorse:

Provide real-time feedback with clear remediation guidance

The main goal of DevOps is to accelerate the pace of software development and improve operating efficiency. In order to accomplish this without compromising quality and security, you must make sure that insights derived from your tooling are actionable and made available to the relevant stakeholders in real time. For instance, if an issue is detected in an IaC template, the developer should be immediately notified and provided with step-by-step guidance on how to fix the issue directly in the template itself.

Establish clear and consistent security and compliance standards

It’s important for an organization to have a clear and consistent definition of what “good” looks like. A well-defined set of security and compliance controls establishes a common standard for the entire organization, making compliance and risk easier to measure and report. Working from a single, centrally managed policy set makes it that much easier to ensure teams are building compliant workloads from the start, limiting the time wasted repeatedly fixing issues after they reach production. A common standard for security that everyone is accountable for also builds trust with the development community.

Integrate seamlessly with existing tool chains and processes

When adding any tools or additional steps into the development life cycle, it is critically important to integrate them with existing tools and processes to avoid adding friction and creating bottlenecks. This means that your security tools must be compatible with existing CI/CD tools (e.g., GitHub, Jenkins, Puppet, etc.) to make the process of scanning resources and remediating issues seamless, and to enable developers to complete their tasks without ever leaving the tools they are most comfortable working with.

Enable automation by shifting security left

Automation can be a powerful tool for teams managing sprawling and complex cloud environments. Shifting security left with IaC scanning allows you to catch faulty source templates before they’re ever used, allowing teams to leverage automation to deploy their cloud infrastructure resources with the confidence that they will align to organizational security standards.

Shifting cloud security left with IaC scanning

Infrastructure as code (IaC) refers to the ability to provision cloud infrastructure resources declaratively, by writing code in the same development environments used to write the software it is intended to support. IaC is a critical component of shifting left, as it empowers developers to write, test, and release software and infrastructure resources programmatically in a highly integrated process. This is typically done through pre-configured templates based on policies determined by operations teams, making development a shared and reproducible process.

When it comes to IaC security, we’re primarily talking about integrating the process of checking IaC templates to be sure they won’t result in non-compliant infrastructure. But it shouldn’t stop there. In a perfect world, the IaC scanning tool will not only identify why a given template is non-compliant, but also tell you how to fix it (bonus points if it can fix the problem for you!).

IaC scanning with InsightCloudSec

By this point, it should be clear that we here at Rapid7 strongly believe in incorporating security and compliance as early as possible in the development process, but we know this can be a daunting task. That’s why we built powerful capabilities into the InsightCloudSec platform to make integrating IaC scanning into your development workflows as easy and seamless as possible.

With IaC scanning in InsightCloudSec, your teams can identify and evaluate risk before infrastructure is ever built, stopping non-compliant or misconfigured resources from reaching production and improving efficiency by fixing problems at the source once and for all, rather than repeatedly addressing them at runtime. With out-of-the-box support for popular IaC tools like Terraform and CloudFormation, InsightCloudSec gives teams a shared definition of “good” that stays consistent throughout the entire development life cycle.

Shifting security left requires consistency

Consistency is critical when shifting left, because if you’re scanning IaC templates with checks against policies that differ from those being applied in production, there’s a high likelihood that after some — likely short — period of time, those policy sets are going to drift, leading to missed vulnerabilities, misconfigurations, and/or non-compliant workloads. That may not seem like the end of the world, but it creates real problems for communicating issues across teams and increases the risk of inconsistent application of policies. When you lack consistency, it creates confusion among your stakeholders and erodes confidence in the effectiveness of your security program.

To address this, InsightCloudSec applies the same exact set of configuration standards and security policies across your entire CI/CD pipeline and even across your various cloud platforms (if your organization is one of the many that employ a hybrid cloud strategy). That means teams using IaC templates to provision infrastructure resources for their cloud-native applications can be confident they are deploying workloads that are in line with existing compliance and security standards — without having to apply a distinct set of checks, or cross-reference them with those being used in production environments.

Sounds amazing, right?! There’s a whole lot more that InsightCloudSec has to offer cloud security teams that we don’t have time to cover in this post, so follow this link if you’d like to learn more.
