“Only 17% of organizations can clearly identify and inventory a majority (95% or more) of their assets.” - Gartner

Mind the Gap: How Surface Command Tackles Asset Visibility in Attack Surface Management

Imagine the scenario: your organization has been exposed to a new zero-day vulnerability. As the person responsible for Threat & Vulnerability Management (TVM), you ask your IT department for an assessment of the asset inventory in your organization.

You make the same request to your security team. Both teams give you a different number of assets, with a significant disparity: IT reports 10,000 assets, compared to 8,200 from your colleagues in security.

When you look up your Configuration Management Database (CMDB) application, you quickly discover that it has not been updated for months and does not accurately represent your attack surface either.

How do you measure your risk exposure when three sources of information are not in agreement? Your highly-skilled colleagues are now back to using spreadsheets to document your assets—a very manual and time-consuming process that is not a productive use of their time.

Attack Surface Management (ASM)

ASM covers both internal and external assets—the physical and digital assets that an organization needs to have visibility into in order to understand its security posture. By establishing visibility of the attack surface and implementing management processes to prioritize, validate, and mobilize responses, security teams can reduce exposures exploited by malicious threat actors.

“Asset inventory is a common and well-known problem for organizations.”

Manage the Gap in Asset Inventory with Surface Command

We began this blog with a real-life, anonymized example of a customer whose IT and Security teams reported significantly different asset counts. Surface Command addresses this operational challenge. Firstly, Surface Command is platform-agnostic; what’s important to Rapid7 is capturing your actual number of assets using a mixture of external scanning and importing data feeds from over 100 commonly used IT and Security tools (EDR, CNAPP, VM, CMDB, etc.). This provides a true, constantly updated view of all assets across the cloud and on-premises, including cloud containers, servers, workstations, IoT devices, identities, smartphones, and more.

To help demonstrate the value of this complete visibility, we have created a short, 2-minute product tour, which you can view at your convenience. In this initial product tour, we show how to identify coverage gaps in your security posture using Surface Command. Take the example of a zero-day vulnerability discovered for a particular operating system; you need to understand your attack surface immediately.

Surface Command will quickly display assets missing key security controls, such as a deployed endpoint security agent. You can drill down further to focus on assets by operating system or device type. This technology is powered by Rapid7’s Machine Learning (ML) classifiers to ensure coverage and data accuracy.

Watch as we filter down from a large number of total assets, to a smaller, focused number of high-risk assets that can be prioritized for action by your IT and Security Teams, all done with just a few clicks.

This scenario is commonly used by our customers to quickly identify simple security gaps, and with Surface Command, you can easily save this for future use, as well as publish the results to reporting dashboards.

By establishing visibility of the attack surface and implementing management processes to prioritize, validate, and mobilize responses, security teams can reduce their exposure and improve cyber risk management.

After all, you can’t protect what you can’t see.

To learn more, click here.

Sources:

Gartner, Innovation Insight: Attack Surface Management - 9 April 2024 - ID G00809126

Understanding your Attack Surface: Different Approaches to Asset discovery

Over the past two blogs in our series on Attack Surface Management (Help, I can’t see! A Primer for Attack Surface Management Blog Series and The Main Components of an Attack Surface Management (ASM) Strategy), we’ve focused on the drivers and core elements of an Attack Surface Management solution. In this post, we’ll delve into the process of discovering assets. We cannot secure what we cannot see, so getting this piece right is foundational to the success of your ASM program. This blog will explore four different methods of asset discovery, starting with the most basic: deployed software agents.

Software Agents

Deployed agents are how most asset inventory and asset management systems work. A software agent is deployed on a workstation or server and phones home to the management system with details about the asset. The benefit of this approach is a very high-fidelity dataset on that particular asset, including up-to-date information on installed software, location, etc. However, this approach is only as good as the reach of the software agent: from an asset discovery perspective, you cannot discover assets that do not have the agent installed. If we consider the funnel diagram below, this is effectively having visibility from one row in the funnel. In reality, most organizations do not have 100% agent deployment coverage, and many assets cannot run agents at all, so organizations end up with many different tools that provide asset visibility, all with different perspectives.

Also, if the software agent is an IT management agent rather than an endpoint security agent, it will typically lack security control, vulnerability, and exposure context, which means key information needed to best understand the attack surface may be missing.

In sum, software agents should be treated as pieces of the attack surface puzzle.

Data Aggregation & Correlation

The more comprehensive way to discover assets is through the ingestion of asset data across a variety of tools the organization uses. This is the primary way assets are discovered with a CAASM solution. By ingesting data from your IT, business applications, and security tools via API connectors we get the broadest visibility and can see across the data gaps from any individual tools.

A CAASM solution asks each connected tool for the latest list of assets on a recurring basis. Security and identity data related to an asset is then stored and mapped to build relationships in a database (ideally a graph) that is easily discoverable and queryable. A few solutions (though not many) also enable asset history tracking, retaining more than a single record per asset for a configurable length of time so you can trend the data and see how an asset, and the organization, changes over time.

Due to the correlation engine provided by CAASM solutions, the more data you ingest from your tools, the better your attack surface visibility and accuracy. Remember the funnel illustrated at the beginning of this article? Since your tools might not agree on the fundamental aspects like the number of assets, it’s necessary to ingest data across them to get closer to a truer picture of your attack surface.
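
To make the correlation step concrete, here is a minimal Python sketch of merging asset records from two tools on shared identifiers, so the same machine reported by both counts once. The tool names, field names, and matching rules are entirely hypothetical; real correlation engines use many more identifiers and far more careful matching logic.

```python
# Merge per-tool asset records into unique assets, keyed on MAC address
# when available, falling back to lower-cased hostname otherwise.
def correlate(records):
    assets = {}
    for rec in records:
        key = rec.get("mac") or rec.get("hostname", "").lower()
        if not key:
            continue  # unidentifiable record; a real engine would queue it for review
        merged = assets.setdefault(key, {"sources": set()})
        merged["sources"].add(rec["source"])
        merged.update({k: v for k, v in rec.items() if k != "source"})
    return assets

# Hypothetical feeds: EDR and VM disagree on hostname casing for the same box.
edr = [{"source": "edr", "hostname": "WEB-01", "mac": "aa:bb:cc:00:00:01"}]
vm  = [{"source": "vm",  "hostname": "web-01", "mac": "aa:bb:cc:00:00:01"},
       {"source": "vm",  "hostname": "db-02"}]

unique = correlate(edr + vm)
# WEB-01 collapses to one asset seen by both tools; db-02 is seen by VM only,
# so the correlated inventory contains two unique assets, not three records.
```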

Organizations start putting together the pieces by ingesting data from, typically, five primary sources:

  • Directory Services (Active Directory, Azure AD, LDAP, etc.)
  • Endpoint Security (insightIDR, Crowdstrike, SentinelOne, etc.)
  • Vulnerability Management (insightVM, Qualys, Tenable etc.)
  • Identity & Access Management (Okta, Duo, etc.)
  • Cloud Service Provider or Cloud Security Posture Management (AWS, Azure, insightCloudSec, Wiz, etc.)

Full deployments typically have between 10 and 20 data sources, depending on the size of the organization. These will also include integrations into CMDB, IT Asset Management systems, Digital Risk Protection Services (DRPS), and more.

For the external attack surface (EASM), assets are discovered using one of two methods, or a combination of both. The first is, again, data ingestion from sources like Shodan, Bitsight, etc. The second is active internet scanning on a recurring basis to discover the latest public-facing assets and the services running on them. Having covered data ingestion in detail already, we will look at active network scanning shortly, but let’s start with passive network scanning first.

Passive Network Scanning

Not every asset in the organization will be linked to a pre-existing data source. For complete attack surface coverage, you also need to consider methods that go further to address the visibility gaps in your data sources. The first of these is passive network scanning.

In one scenario, attackers could gain access to your internal network through a malicious insider: a disgruntled employee could plug an unapproved workstation or malicious device into an ethernet port, or attackers might use WiFi attacks to gain entry to the network with a static IP address. In both of these cases, the malicious device would effectively be invisible to your teams and tools, with the exception of the perspective of network switches, firewalls, and network traffic analysis.

Support for data sources of passive network traffic can therefore give teams visibility into new assets that come online but are not correlated with any other data source. This can expose rogue devices that are circumventing security policy and protocol. Most CAASM solutions today do not ingest network data such as NetFlow, nor support NTA data ingestion, although some can use data from agents that process ARP or DHCP broadcasts to discover new assets. However, these agents need to be deployed on each specific network segment, otherwise they won't be able to discover unknown assets there. In these cases, active network scanning is a potential alternative to increase visibility of assets that are circumventing normal controls and monitoring.
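
The comparison that passive discovery enables can be sketched very simply: MAC addresses observed in broadcast traffic (e.g. parsed from a packet capture) are checked against assets already known from ingested sources, and anything unmatched is a candidate rogue device. The data shapes below are assumptions for illustration, not any product's schema.

```python
# Flag observed-on-the-wire MAC addresses that match no known asset.
def find_unknown_devices(observed_macs, known_assets):
    known = {a["mac"].lower() for a in known_assets if a.get("mac")}
    return sorted(m.lower() for m in set(observed_macs) if m.lower() not in known)

known_assets = [{"hostname": "laptop-7", "mac": "AA:BB:CC:00:00:07"}]
observed = ["aa:bb:cc:00:00:07", "de:ad:be:ef:00:99"]  # e.g. from ARP/DHCP broadcasts

print(find_unknown_devices(observed, known_assets))  # ['de:ad:be:ef:00:99']
```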

Active Network Scanning

The most difficult-to-discover Shadow IT assets can also be the most vulnerable, because they won't have the necessary security controls enabled. These assets are not discoverable through network data alone, as they provide no telemetry. Even with network data, security teams often miss fingerprinting and fail to identify the services running on these devices. Active scanning offers a way to capture information from these assets that are otherwise missed. Active network scanning is a necessary feature in environments where full visibility is extremely important.

A fully deployed vulnerability scanner is superior to native active network scanning because it uses the same network discovery techniques but also understands vulnerabilities and exposures. Using a CAASM solution to understand which assets and networks are not being continuously assessed for vulnerabilities is a great way to also increase your ability to discover new assets by active network scanning.
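
At its core, active network scanning builds on a simple probe: attempt a connection to each port on a host and record which ones accept. The Python sketch below shows a plain TCP connect check; real scanners layer fingerprinting, rate limiting, and safety controls on top of this, and this is an illustration rather than how any particular product works.

```python
import socket

# Try a TCP connection to each port; a return code of 0 from connect_ex
# means the handshake succeeded, i.e. the port is open and accepting.
def scan_ports(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Needless to say, active scans should only ever be run against hosts and networks you are authorized to test.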

For the final blog in this series, we will look at how we can drive greater insights through the context we can acquire with effective Attack Surface Management (ASM).

The Main Components of an Attack Surface Management (ASM) Strategy

In part one of this blog series, we looked at some of the core challenges that are driving the demand for a new approach to Attack Surface Management. In this second blog, I explore some of the key technology approaches to ASM and some of the core asset types we need to understand. We can break the attack surface down into two key perspectives (or generalized network locations), each of which covers hybrid environments (cloud and on-premises):

  • External (EASM) - Public facing, internet exposed cyber assets
  • Internal - Private network accessible cyber assets

External (EASM)

Today, most available ASM solutions are focused on External Attack Surface Management (EASM), which provides an attacker’s perspective of an organization: an outside-in view. In fact, it’s common for organizations, and some analyst firms, to refer to EASM as ASM. However, while this view is important, it is only a small, partial view of the attack surface in most organizations.

EASM seeks to understand an organization’s external attack surface by collecting telemetry about its internet-exposed, public-facing assets. This telemetry is derived from different data sources such as vulnerability and port scans, system fingerprinting, domain name searches, TLS certificate analysis, and more. It provides valuable insights into the low-hanging fruit that attackers will target. Core EASM capability is the equivalent of pointing a vulnerability scanner at your known external IP address range. However, unless your external environment is most of your business, this visibility alone is not enough and leaves organizations with a limited, partial view of their attack surface.

Internal

The internal attack surface is often the largest portion of an organization’s digital footprint. Attackers frequently gain footholds in organizations through identity, ransomware, and supply-chain attacks, among many other attack vectors. Organizations need visibility into their internal attack surface to gain real insight into their digital estate and to reduce risk by understanding how their most vulnerable and business-critical systems are connected, monitored, and protected.

Today, most organizations that have adopted an ASM approach are manually correlating asset information in spreadsheets from various sources to combine business context with the security controls deployed on those assets so they can answer basic questions about their security tool coverage & deployment gaps, and measure their compliance adherence.

The data sources in these spreadsheets typically include directory services such as Active Directory, combined with outputs from common security controls such as EDR or vulnerability scanning. Not only is this manual process time-consuming, but the information is often out of date by the next morning.

Organizations need a more scalable solution to this problem, which has led to the development of CAASM.

Introducing CAASM, a new approach to attack surface and exposure management

Over the last few years, an approach has emerged to address the attack surface discovery and visibility problem in a scalable, holistic way: Cyber Asset Attack Surface Management (CAASM).

CAASM is the security team’s take on asset management, but it’s much more than that. It addresses the internal visibility problem by aggregating and correlating asset information across an organization’s security and IT tools, providing a clearer, more accurate picture of the attack surface. Foundational to CAASM is a correlation engine and data model that builds relationships across different types of assets, controls, exposures, and more. This technology provides the best representation of an asset, with full context from IT and security tools. It enables IT, SecOps, DevOps, and CloudOps teams to operate with the same information by breaking down tool sprawl and data silos, enabling better visibility, communication, prioritization, and remediation of risk.

CAASM solutions work by ingesting data from IT, business applications, and security tools through simple API integrations that pull in asset data from each respective tool on a continuous basis, identifying unique assets through aggregation, de-duplication, and correlation. This provides the best picture of your digital estate by breaking down the data silos and tool deployment gaps. The more data you ingest from your environment, the more accurate the picture of your attack surface becomes.
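
One common way the "best representation" of a correlated asset is assembled is field-level precedence: each attribute is taken from the source considered most trustworthy for that attribute. The Python sketch below is illustrative only; the source names and the precedence order are assumptions, not any product's actual logic.

```python
# For each field, prefer the first source in the list that reports a value.
FIELD_PRECEDENCE = {
    "os": ["edr", "vm", "cmdb"],     # endpoint agents see the running OS most reliably
    "owner": ["cmdb", "directory"],  # ownership records usually live in the CMDB
}

def best_representation(records_by_source):
    asset = {}
    for field, sources in FIELD_PRECEDENCE.items():
        for source in sources:
            value = records_by_source.get(source, {}).get(field)
            if value:
                asset[field] = value
                break
    return asset

records = {
    "cmdb": {"os": "Windows 10", "owner": "finance-team"},  # stale OS entry
    "edr":  {"os": "Windows 11"},                           # agent sees the truth
}
print(best_representation(records))  # {'os': 'Windows 11', 'owner': 'finance-team'}
```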

These solutions are continuing to evolve today to treat identities as assets, create software inventories, and map SaaS applications as part of the attack surface. When seeking a holistic attack surface solution, you should ensure it includes the following key features for optimal visibility:

  • External Attack Surface Management
  • Internal Attack Surface Management
  • Unified data correlation engine
  • Cloud resource aware
  • Identities
  • Software Inventory

Key Asset Types to Drive Attack Surface Visibility

NIST has a definition of asset that is very broad but will suffice for this article:

“An item of value to stakeholders…the value of an asset is determined by stakeholders in consideration of loss concerns across the entire system life cycle. Such concerns include but are not limited to business or mission concerns.”

Based on this definition, we will further narrow down the scope to focus on types of cyber assets that add the most value in understanding the Attack Surface. Let’s start with the most basic: machines.

Traditional Assets (Machines)

Often referred to simply as “assets,” these primarily include your employee and business application compute devices, such as workstations and servers. Due to the fast-paced evolution of digital infrastructure, this definition is quickly expanding to include infrastructure like virtual machines, containers, and object stores, or new asset categories are being created in Attack Surface Management solutions. The important thing is to make sure you have visibility into the cyber assets in your organization, however they’re defined.

Identities

Identities are the new perimeter, as some say, and are valuable assets to the business because they grant access to the business’s resources. Identity data suffers from the same data silo problems as other assets. Your company email address, for example, is typically used to authenticate to and access many different business services and applications. If we correlate data from sources like Active Directory, Okta, Google Suite, Office 365, and KnowBe4 security training, we can provide security and IAM teams with visibility into not just the identities within the organization, but also key challenges in the identity attack surface, such as identities that have MFA disabled yet hold Administrator access to key services.

A common challenge with identity discovery and attack surface management is that security teams attempt to map identities using threat data alone. There is a significant difference in accuracy between detection rules and the identity source itself. For example, a service account that is actively enabled may be missed by a SIEM/XDR solution due to a lack of recent log activity, and therefore excluded from reports. By inventorying identities as assets, we can gather the status of the service account directly from the data source’s API. Both the identity telemetry from the source (e.g., Okta, AD) and threat data (e.g., SIEM/XDR) can be leveraged to give a more accurate picture of the state of the environment.
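
Once identities are inventoried as assets, the gap described earlier (MFA disabled combined with administrator access) reduces to a simple query over the correlated records. The field names in this Python sketch are hypothetical, not any particular product's schema.

```python
# Flag identities that both lack MFA and hold an administrator role.
def risky_admins(identities):
    return [i["email"] for i in identities
            if not i.get("mfa_enabled") and "admin" in i.get("roles", [])]

identities = [
    {"email": "svc-backup@example.com", "mfa_enabled": False, "roles": ["admin"]},
    {"email": "alice@example.com", "mfa_enabled": True, "roles": ["admin"]},
    {"email": "bob@example.com", "mfa_enabled": False, "roles": ["user"]},
]

print(risky_admins(identities))  # ['svc-backup@example.com']
```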

Software Inventory

With the rise of supply-chain attacks and the increased presence of unapproved or outdated software, visibility into software has become a key part of understanding your attack surface. Inventorying all software installed and running on an asset, combined with security context around that software from vulnerability scanners, NGAV, and threat intelligence, gives teams the best visibility for understanding and measuring the risks posed by unapproved or unauthorized code. A software inventory helps answer questions like:

  • Which of my machines are running software that has a new, high-risk vulnerability?
  • Which machines are running legacy or outdated software?
  • What is the most vulnerable software in my environment that we should prioritize for remediation?
  • Am I over-utilizing an application license?
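
For instance, the first question in the list above reduces to a join between the software inventory and a vulnerability feed. The data shapes in this sketch are hypothetical, for illustration only.

```python
# Given a per-machine package inventory and a set of (name, version) pairs
# flagged as vulnerable, return the machines running an affected package.
def machines_with_vulnerable_software(inventory, vulnerable_packages):
    hits = {}
    for machine, packages in inventory.items():
        affected = [p["name"] for p in packages
                    if (p["name"], p["version"]) in vulnerable_packages]
        if affected:
            hits[machine] = affected
    return hits

inventory = {
    "web-01": [{"name": "openssl", "version": "1.0.2"}],
    "db-02":  [{"name": "openssl", "version": "3.0.13"}],
}
vulnerable = {("openssl", "1.0.2")}  # e.g. flagged by a scanner feed

print(machines_with_vulnerable_software(inventory, vulnerable))  # {'web-01': ['openssl']}
```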

Other types of ‘software adjacent’ assets include SaaS applications and web applications.

Now that we have identified the three major types of business assets to monitor in your attack surface, the next blog will explore how ASM solutions discover the assets in your environment, and what to watch out for to ensure you have the best discovery capabilities so that you’re not missing large portions of your attack surface.


Proactive Visibility Is Foundational to Strong Cybersecurity

Authored by Guest IDC Blogger: Michelle Abraham

Exposures are more than CVEs, so organizations need to move beyond the traditional thinking of vulnerability management to a holistic view. Part of that view must be greater visibility into devices, users, applications, and all the digital infrastructure connected to an organization’s environment. Gaps in that view create risk exposure. Organizations must proactively identify anything that presents a risk to determine whether to act.

Solutions that improve visibility discover assets, aggregate all asset data in one place, and enrich that data to understand the relationships between users, assets, and applications. These cybersecurity asset management systems connect to other security tools in the IT environment to gather their telemetry on what they see and the communications they have. The data from these connections can overlap and be duplicative, so the system needs to deduplicate the data to render it useful for security.

Attack surface management (ASM) adds to the visibility by showing an external view of the digital estate, allowing security teams to see the view attackers have from outside their environment. Attack surfaces have expanded rapidly and often involve a hybrid multicloud environment and SaaS applications, including GenAI. Identifying unknown internet-exposed assets that provide a pathway to critical data is essential to managing risk.

Knowing what constitutes the environment that must be secured should be the foundation upon which the rest is built. Finding part of shadow IT helps with a portion of the problem but does not solve it. Alternatively, investigating assets that are falsely attributed to an organization wastes time. It is common for organizations to find 15%–30% more assets when they adopt security tooling for asset discovery.

Solutions need to bring together many sources of data — both first- and third-party, internal and external views of the environment — for a single source of truth about an organization's digital estate. The assets must include both cloud and on-premises resources to optimize the organization’s security posture for its risk tolerance level. Solutions should also be capable of discovering unknown users and the unsanctioned use of IT resources and applications, which are additional risk exposures. The addition of threat and vulnerability intelligence helps security teams understand the exploitability of each exposure so the most critical issues can be prioritized for remediation.

The flow of information from these tools requires continuous updating because threat actors can seize on any gap, whether recent or present from the beginning. The data shown should include asset configuration and asset criticality in the context of the business, such as whether the asset supports key business applications or has access to sensitive datasets. Knowing who owns an asset is also vital information so that security and IT know who is responsible for fixing a problem when it arises, particularly if ownership resides outside these two areas. Asset ownership will drive accountability for remediation programs and campaigns.

With a bi-directional connection to the configuration management database (CMDB), a solution that combines Cyber Asset Attack Surface Management (CAASM) and ASM further aligns the entire organization with the most updated information. It augments the CMDB to help with asset lifecycle management because end-of-life devices that no longer receive updates pose a risk. Systems should also be able to track and report on additional exposures, such as expiring certificates or unknown certificate issuers.

A map of asset and user relationships helps visualize the paths that attackers can take to traverse the network for lateral movement in the environment to get to the organization’s crown jewels. CAASM and ASM output must be more than just a dump of data from various tools; the data must be easy to query, with actionable insights that help the organization reduce risk. Matching the data from assets provides teams reacting to threats with complete context regarding assets to aid their investigation and remediation efforts. The remediation process is easier when there are recommended actions as well as integrations with ticketing systems or automation platforms that inform asset owners of issues as well as track the status of the patch or mitigation.

Consider CAASM and ASM as foundational elements to a strong, mature security program that is aware of its entire digital estate. This visibility eliminates one of the ways attackers take organizations by surprise, thereby reducing overall risk.

Message from the Sponsor

The dynamic nature of modern IT environments demands a proactive and continuous approach to exposure management. Doing so requires real-time visibility into your entire digital estate and the exposures that leave your organization vulnerable to compromise. By enriching unified internal and external views of your attack surface with real-world threat intelligence and context from your entire tooling ecosystem, teams have the situational awareness needed to prioritize response efforts and accelerate mean time to remediation. Watch this on-demand demo to learn how Rapid7 Exposure Command can help transform your security program and allow you to take command of your attack surface.

Rapid7 Recognized in Forrester’s 2024 Attack Surface Management (ASM) Wave Report

This week, Rapid7 was recognized as a Contender in Forrester’s 2024 Attack Surface Management (ASM) Wave report. We’re proud to have been selected for inclusion in the report, reflecting our continued dedication to enabling customers to monitor 100% of their attack surface in real time and proactively mitigate the exposures that leave their organizations susceptible to compromise.

Since Forrester’s initial assessment earlier this year, we’ve further extended our investments in this space, announcing the acquisition of Noetic Cyber, a market-leading cyber asset attack surface management (CAASM) vendor, and subsequently launching the Command Platform with attack surface management - and our new Surface Command product - as the foundation.

Modern business dynamics and an ever-evolving threat landscape make successful data management a daunting challenge. As a result, a majority of organizations do not have a strong grasp on their true attack surface.

  • Teams have accumulated numerous point solutions to try to keep pace with business growth and adapt to their changing environment.
  • Practitioners are consumed by assuming the role of a system integrator, trying to connect a myriad of different solutions that were never intended to be interoperable.
  • This lack of connectivity makes it impossible to get the context and clarity needed to actually make sense of data, know what to prioritize, and where to focus.

Attackers are able to exploit this data sprawl - lurking in mountains of data and betting on your inability to detect them and identify the insights that matter before it’s too late. We recognize that teams need a new path forward, and we are excited to support our customers through this next era of security with our Command Platform.

Establishing A Strong Foundation to Transform Vulnerability Management into a Proactive, Continuous Exposure Management Process

As cyber threats continue to grow in complexity, the traditional approach to Vulnerability Management (VM) must evolve. Static scanning and isolated patching efforts are no longer sufficient in the face of sophisticated attackers who exploit even the smallest gaps in security. Organizations need to adopt a more dynamic, integrated approach to exposure management - one that is continuous, context-aware, and capable of adapting to the sprawling attack surface and shifting threat landscape.

Rapid7 is uniquely positioned to support your organization’s evolution toward a holistic process designed to continuously assess, prioritize, and remediate threats across your entire attack surface. Surface Command is built to provide the comprehensive visibility and actionable insights necessary for effective threat exposure management. By integrating data from across your entire environment - whether it’s on-premises, in the cloud, or somewhere in between - customers are able to see and understand risks in their full context.

With Rapid7, you’re not just getting another vulnerability or attack surface management tool; you’re gaining a partner that helps you elevate your entire security strategy. Our platform’s ability to aggregate and correlate data from different data sources ensures you have a complete, accurate picture of your threat landscape that you can trust. Moreover, our advanced querying capabilities allow you to quickly identify and focus on the most critical risks, enabling timely and precise remediation efforts.

Surface Command stands out in a few ways:

  • Unified Internal and External Attack Surface Visibility: Monitor your attack surface from the inside out with a dynamic asset and identity inventory alongside continuous external scans that provide an adversary’s perspective.
  • Vendor Agnostic Approach: Aggregate all data from your internal and external environments as well as your entire technology ecosystem into a unified asset model.
  • Powerful Search and Analytics: Slice and dice your data however you see fit, with powerful querying capabilities that help you find the needle in the haystack.
  • Seamless Integration and Remediation Workflows: Quickly get relevant asset insights, risk context and initiate remediation workflows all from one place.

This comprehensive visibility and contextual prioritization empowers your security team to shift from a reactive to a proactive posture, transforming your vulnerability management program into a robust, continuous defense mechanism.

Proactively Mitigate Exposures from Endpoint to Cloud

Exposure Command builds on the complete environment visibility powered by Surface Command - ingesting high-fidelity asset data from proprietary and third-party sources, then automatically aggregating and correlating that data into an up-to-date asset inventory and topology map. Our powerful querying capabilities allow you to easily adjust your scope and drill into the details you need to spot control gaps and non-compliance, and extinguish risk across your hybrid environment.

The platform goes beyond monitoring and asset inventory mapping, enriching telemetry with compliance and risk findings from Rapid7’s entire set of exposure management capabilities. With hybrid vulnerability management, comprehensive cloud security, and web application testing in one complete solution, security teams can shift from reactive to proactive to stay ahead of adversaries.

Exposure Command extends the power of Surface Command with:

  • Pinpoint and Mitigate Vulnerabilities Everywhere: Automatically prioritize vulnerabilities across your hybrid environment based on exploitability and potential impact.
  • Monitor Effective Access and Enforce Least Privilege Access: Analyze all roles and identities across your clouds to help eliminate excessive permissions and enforce LPA at scale.
  • Proactively Mitigate Exposures in Cloud-native Apps: Avoid risk before it reaches production with IaC and web app scanning that gives actionable feedback to devs where they work.
  • Spot Avenues for Attackers to Traverse Your Cloud Network: Visualize interconnected resources and uncover paths for attackers to move laterally across your environment with attack path analysis.

With these powerful capabilities, Exposure Command allows teams to continuously assess their attack surface, validate exposures and confidently take action with remediation guidance that takes into account existing downstream controls and the blast radius of a potential compromise.

Interested in Learning More About Exposure Command?

If you’re interested in diving deeper into how Rapid7 can help transform your security operations, be sure to check out our recent webcast with Jon Schipp, Sr Dir. Product Management, and Thomas Green, Sr Security Solutions Engineer during which they discuss key strategies for leveraging Exposure Command to stay ahead of today’s evolving threats.

Part 1: Overview of the Problem ASM Solves and a High-Level Description of ASM and Its Components

Help, I can’t see! A Primer for Attack Surface Management blog series

Welcome to the first installment of our multipart series, "Help! I Can’t See! A Primer for Attack Surface Management Blog Series." In this series, we will explore the critical challenges and solutions associated with Attack Surface Management (ASM), a vital aspect of modern cybersecurity strategy. This initial blog, titled "Overview of the Problem ASM Solves and a High-Level Description of ASM and Its Components," sets the stage by examining the growing difficulties organizations face in managing their digital environments and how ASM can help address these issues effectively.

The fast-paced evolution of the digital infrastructure driving businesses forward (e.g. workstations, virtual machines, containers, edge) is also making it more difficult for organizations to keep track of and account for the cyber attack surface they’re responsible for protecting. Despite security teams continuing to invest exorbitant amounts of money in tools (VM, EDR, CNAPP, etc.) to both manage and secure their digital environment, the problem isn’t getting any better. In this 3-part blog series we will help demystify the problems of security data silos and tool sprawl so you can answer pertinent questions like:

  • How many assets and identities am I responsible for protecting?
  • How many assets and identities are lacking security controls like endpoint security or MFA?
  • What is my overall security posture?

When we look at the number and types of tools organizations spend money on to manage and secure their digital environment, we typically see things like vulnerability scanners, endpoint security, IdP, patching, IT asset management, Cloud Service Providers, and more. Each of these tools and technologies tends to do a pretty good job at its core function but unintentionally contributes to a fractured ecosystem that provides organizations with contradictory information about their digital environment.


The age old problem: How many assets do I have?

Let’s look at a real-world example of this, where an organization has solutions for Vulnerability Management (VM), Cloud Security Posture Management (CSPM), Endpoint Security (EDR/EPP), Active Directory (Directory Services), and IT Asset Management (ITAM).


None of these tools can agree on the number of assets in the environment. It’s practically impossible to achieve 100% deployment of agent-based tools across your business (some types of assets cannot have agents!). It then becomes a real challenge to see across these tooling visibility gaps. The result is that we cannot answer the basic question: “How many assets am I responsible for protecting?”

This fact is compounded because if we can’t agree on the total number of assets, then we don’t know the number of controls in place, the number of vulnerabilities and exposures that exist, and the number of active threats in our environment. Teams that manage and secure organizations are relying upon incorrect information in an environment where prioritization and decision making needs to be based on high-fidelity information that incorporates IT, security, and business context to lead to the best outcomes.

To drill down on these points, let’s pick a few tools from the infographic for illustrative purposes. Wiz only sees assets in the cloud, Active Directory only sees assets (mostly Windows) tied to the Domain Controller, and traditional vulnerability scanners see across hybrid environments but tend to be deployed mostly on-premises. If you home in on the numbers in the Asset column, you will immediately notice that none of these tools agree on the number of assets in the environment. Lacking visibility and confidence in your attack surface is a big data problem, and deploying the latest shiny security tool is not going to fix it.

Ultimately, we have an industry created data problem that Rapid7 is not immune to. For a number of perfectly good reasons, we have created a fractured technology ecosystem that is preventing security teams from having the best data available to determine their cyber risk and enabling them to prioritize the most effective remediation and response.

We need to see across the gaps that truly matter; for that we need Attack Surface Management.

What is Attack Surface Management?

Attack Surface Management (ASM) is generally part of a wider Exposure Management program and is a different way to think about cyber risk, focusing on the parts of the digital business that are most vulnerable to attack. An attack surface-based approach to your security program needs to consider a number of different elements, including:

  • Discovery and inventory of all cyber assets in the organization, from the endpoint to the cloud
  • Internet-scanning to identify unknown exposures and map them to the existing asset inventory
  • High-fidelity visibility and insights into the IT, security, and business context of those assets
  • Relationship-mapping between the assets and the wider network and business infrastructure

ASM is a continuous process that is constantly assessing the state of the attack surface by uncovering new or updated assets, identifying the use of shadow IT in network or cloud use cases and prioritizing exposures based on their potential risk to the business. These elements of discovery and prioritization are foundational elements of a Continuous Threat Exposure Management (CTEM) initiative, where security teams are taking a more holistic approach to managing all types of exposures in their organizations.

A positive trend that we are currently seeing is that security teams are going back to basics and focusing on cyber asset management to first discover and understand the assets they’re responsible for protecting, along with their business function.

Organizations gain visibility into these assets through a combination of approaches. External scanning identifies internet-facing assets, which are potentially higher risk; this is known as External Attack Surface Management (EASM). A complementary approach to cyber asset discovery, which provides greater insight into the whole cyber estate, uses API-based integrations with existing IT management and security tools to ingest asset data; this is known as Cyber Asset Attack Surface Management (CAASM). Together, they provide organizations with the asset visibility they need to drive security decisions.

Put simply, you cannot secure what you can’t see. Managing the attack surface requires asset discovery and visibility, combined with rich context from all tools in the environment.

Attack Surface Management vs. Asset Inventory

A common point of confusion among customers today is the belief that their current approach to asset inventory already gives them elements of an ASM strategy. This is typically based on an asset inventory system that IT uses for asset lifecycle management. A traditional asset inventory’s view of the environment is almost entirely based on what it can discover on its own, and with an IT focus. These systems are often agent-based, with limited integrations, so they cannot take advantage of an organization’s wide range of tools, which impairs their value.

Many asset inventories today can only discover assets where they have a deployed agent (such as an endpoint agent) or where assets are tied to the domain controller. While these technologies are effective at making policy and configuration changes across their fleet of endpoints, they do not have a data aggregation and correlation engine that sees beyond the specific agent. Additionally, they have limited security insights and context, and can only provide a partial view of the attack surface, since no agent achieves 100% coverage.

Full coverage is simply not the reality in most organizations, and it’s why one should not confuse asset inventories with Attack Surface Management, the latter being a much more effective approach to surfacing the best asset and security telemetry across your ecosystem. An Attack Surface Management solution will ingest data from an IT asset inventory or management tool as one of many data sources to collate.

The next blog in this series will look at the different components of an ASM program, and how they can be leveraged to improve security hygiene and reduce cyber risk.


The Japanese Threat Landscape: A Report on Cyber Threats in the Third Largest Economy on Earth

The Japanese economy is massive, global, and varied. It is also a major target for cyber threat actors. As a hub for automotive, manufacturing, technology, and financial services, Japanese companies and organizations face significant cyber risk. There is nonetheless relatively little English-language coverage of Japan’s cyber threat landscape.  

In a new report released today by Rapid7, Principal Security Analyst Paul Prudhomme analyzes the threat landscape of the third-largest economy in the world and enumerates threats across Japan’s main industries, as well as some of the largest cyber concerns affecting those companies, such as ransomware and cyber espionage.

Perhaps the most important takeaway from the report on Japanese cyber threats is that the biggest risk to Japanese companies may not even be the companies themselves. Overseas subsidiaries and affiliates offer softer targets for threat actors targeting global Japanese brands. In many of the most recent large-scale attacks on Japanese companies, attackers chose to compromise overseas subsidiaries or otherwise affiliated companies in other countries as a way into the networks of Japanese targets.

The report posits two potential explanations for why attackers chose to use the overseas affiliates and subsidiaries of Japanese companies as access vectors. One possible factor is the security culture in those countries and the subsidiaries themselves. Overseas affiliates may have weaker security oversight than their Japanese counterparts. This discrepancy could stem from the acquisition of overseas firms introducing existing security vulnerabilities into the parent company, or from the development of separate hierarchies that are not in lockstep with the security culture of the parent company. Regulatory environments vary, and business and technology habits could be different as well. There are a multitude of ways even the most secure Japanese company could be let down by its overseas affiliates.

Another reason attackers aim to infiltrate Japanese companies through their overseas partners could be language barriers. There are many Japanese speakers in the world, though most are concentrated within Japan itself. Because Japanese is considered a challenging language to master, attackers often seek targets with a lower language threshold to clear; when access to the main target is still available through outside companies, the path of least linguistic resistance can win the day.

Ransomware

Rapid7’s research has found that ransomware is a particular threat for Japanese companies due to the large number of manufacturing and other technical companies based there. The nature of some of the data that many manufacturing organizations possess may make it harder to sell on criminal markets, making ransomware a more lucrative way to extract funds from a breached manufacturer. In fact, ransomware incidents increased in every six-month period from the second half of 2020, when just 21 incidents were reported, to the first half of 2022, when 114 incidents were reported. Manufacturing is the hardest-hit sector, with one-third of ransomware attacks in the first half of 2022 focused on this one industry.

State-sponsored Threats

Japanese companies are also high-value targets for state-sponsored threat actors, with several of its neighbors posing significant threats. In fact, of the four most well-known state sponsors of cyber attacks (Russia, China, Iran, and North Korea), three of them are Japan’s neighbors and thus have reasons to target it.

Chinese cyber-espionage groups pose a significant threat to the IP of Japanese manufacturing and technology companies. As a regional competitor in these spaces, IP is a valuable resource and thus a valuable target. Chinese attackers also seem to be attempting to breach Japanese companies through their overseas affiliates and subsidiaries.

North Korean cyber criminal outfits, in contrast, prefer to steal Japanese cryptocurrency, as it is a funding source that is outside of traditional financial institutions. Cryptocurrency exchanges are not the only targets. In late 2021, a North Korean group impersonated a Japanese venture capital firm to steal cryptocurrency from individuals.

Targeted Industries

Japanese companies are major global players in the automotive, manufacturing, technology, and financial services industries. Those industries are thus among the top targets. As mentioned before, manufacturers, particularly automotive, can be subject to IP theft. Targeted data sets in the financial services industry include customer credentials and payment card details, personally identifiable information, and cryptocurrency. Technology companies are valuable targets in part because compromises of them can enable access to their customers, even including Japanese government and defense organizations.

If you’d like more information about these targeted industries check out the full report or one of our one-page briefs looking at the main points of the automotive, financial services, and technology industries.

Ultimately, Japan has a huge attack surface and is an incredibly important economy on the global stage. Its companies have global reach and are often market leaders outside of Japan. This puts Japanese companies at high risk for attacks. For more detail on what we’ve discussed in this blog (and way more detailed information about the attack surface of Japan) download the report here.

Understanding CAASM

Cyber Asset Attack Surface Management 101

This article was written by Ethan Smart, Co-Founder and Chief Solution Architect, appNovi (a Rapid7 integration partner).

It's essential for security and IT teams to have a comprehensive view and control of their cyber assets. This is why Cyber Asset Attack Surface Management (CAASM) has received so much attention from security practitioners and leaders.

According to Gartner, “CAASM tools use API integrations to connect with existing data sources of the organization. These tools then continuously monitor and analyze detected vulnerabilities to drill down the most critical threats to the business and prioritize necessary remediation and mitigation actions for improved cyber security.”

CAASM provides a unified view of all cyber assets to identify exposed assets and potential security gaps through data integration, conversion, and analytics. It is intended to be the authoritative source of asset information, complete with ownership, network, and business context, for IT and security teams.

Security teams integrate CAASM with existing workflows to automate security control gap analysis, prioritization, and remediation processes. These integration outcomes boost efficiency and break down operational silos between teams and their tools. Common key performance indicators of CAASM are asset visibility, endpoint agent coverage, SLAs, and MTTR.

It’s important to understand assets are more than devices and infrastructure. In a Security Operations Center (SOC), assets include users, applications, and application code. Recognizing the interconnectedness of these assets is key to enhancing the SOC's capabilities. For example, consider a scenario where 1,000 servers have the same vulnerability. Assessing each one individually would be incredibly time-consuming. CAASM enriches cyber asset data to automate the majority of analysis.

For example, when you understand only eight of the 1,000 servers are internet-facing, and of those only two are exposed through the necessary port and protocol for exploitation of the vulnerability, you know which assets have the highest contextual exposure, which are exploitable, and which should be addressed first.
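The triage described above can be sketched in a few lines. This is a minimal, hypothetical illustration; the field names (`internet_facing`, `open_ports`) stand in for whatever enrichment a real CAASM pipeline attaches to each asset record.

```python
# Sketch of CAASM-style contextual exposure triage (hypothetical data shape).
VULN_PORT = 443  # port/protocol assumed necessary to exploit the vulnerability

def triage(assets, vuln_port=VULN_PORT):
    """Narrow a large set of vulnerable servers to those actually exploitable."""
    internet_facing = [a for a in assets if a["internet_facing"]]
    return [a for a in internet_facing if vuln_port in a["open_ports"]]

assets = [
    {"id": "srv-001", "internet_facing": True,  "open_ports": [443]},
    {"id": "srv-002", "internet_facing": True,  "open_ports": [22]},
    {"id": "srv-003", "internet_facing": False, "open_ports": [443]},
]
# Only srv-001 is both internet-facing and exposed on the required port.
print([a["id"] for a in triage(assets)])
```

The point is not the code itself but that the filtering is automatic once the converged asset data carries exposure context.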

In this blog, we’ll cover how security teams can leverage their existing tech stack for Cyber Asset Attack Surface Management.

Understanding the Attack Surface

Comprehensive attack surface management hinges on a comprehensive understanding of everything that is a target for attackers. In a sprawling enterprise environment, there's an abundance of assets distributed across different networks (e.g. cloud, SDN, on-prem), each with its own set of monitoring and alerting tools. When these security tools don’t interoperate or mesh with one another, security teams lack a complete picture of the attack surface. This fragmented understanding results in the continued siloing of teams and tools and inhibits effective data sharing.

One of the oldest adages in cybersecurity is complexity is the enemy of security—and complexity increases when teams recognize assets as more than devices. Assets are more than just computers and servers connecting on the network, as those assets are used to support applications to drive revenue. Applications also use code, which can be used by multiple applications. Users are assets that operate the business using technology. This complex asset tracking and relationship mapping spans network connections, application and code ownership, and the dependencies and indirect dependencies between applications.

CAASM emerged to address this complexity. CAASM is founded through the consolidation of existing data from all the different network and security tools. For example, by integrating Rapid7's portfolio of products with a security data integration and visualization solution like appNovi, organizations can achieve and maintain full visibility across their entire connected network—including on-prem, Software Defined Network (SDN), and hybrid cloud.

Using CAASM, organizations can leverage analytics to refine search results, identify trends, or disseminate specific information to defined groups or individuals. One common use case with appNovi is identifying vulnerable application servers that are contextually exposed to exploitation, identifying owners based on login telemetry, and notifying the server owner and the security team. This integrated approach delivers comprehensive attack surface visibility and mapping to enable organizations to address risks and manage vulnerabilities more efficiently. When analytics are coupled with automation tools, such as orchestrators, the SOC is able to focus more on threat hunting and less on data analysis. Common examples include asset inventory management and security control gap analysis.

Cyber Asset Inventory and Mapping

To manage the attack surface proficiently, it's essential to discover and map an organization's assets accurately and with the greatest level of detail. Organizations that use Rapid7’s Insight Platform already identify network infrastructure to pinpoint active devices, open ports, and running services. When combined with your other tools’ data through the enrichment capabilities of appNovi, Rapid7’s InsightVM integrates with the entire network and security tech stack to reveal overlooked assets, those that were inadvertently deployed without endpoint detection and response (EDR) agents, and those that require a prioritized response.

Telemetry data can also be leveraged from Rapid7’s InsightIDR to enrich asset data to understand network connections, ownership, and user activity. This relationship and connection mapping supports establishing the relationships between assets and their relevance to applications. With an automated and continuously updated asset inventory enriched by telemetry, IT and security teams not only gain visibility but also develop a comprehensive understanding of each asset’s dependencies and business significance.

Risk Assessment and Prioritization Based on Exposure

Vulnerability scanners and agents help you understand which devices and which software are vulnerable. For teams today, understanding the exposure of their vulnerable devices requires sifting through large amounts of network log data. This time-consuming process often inhibits the ability to prioritize devices based on their network contextual exposure. But when telemetry sources are abstracted and converged with cyber asset data, contextual exposure analysis becomes a simple and automated task. That’s why data convergence in appNovi with Rapid7’s platform compiles network, asset, and vulnerability data into a comprehensive and easily accessible format.

This powerful data management capability means teams efficiently and accurately identify the devices that are the most vulnerable and exposed to both external threats and lateral movement from within the network. With this level of enrichment, security teams can quickly identify the handful of assets that require immediate prioritization to support an effective remediation strategy.

Identifying and Managing New Assets

Monitoring the attack surface involves leveraging a diverse set of tools to identify new assets within an organization's digital ecosystem. It is vital to utilize comprehensive asset discovery tools, vulnerability scanners, and other solutions to gain a holistic view of the digital infrastructure.

However, some infrastructure is ephemeral or may be inaccessible to all monitoring tools, in which case telemetry data sources and other SIEM data can be used to identify new assets. This aggregation, enrichment, and analysis can feed into other actions whether it be as simple as email notifications of results or triggering specific automated actions.

Creating Closed-Loop Remediation

When an authoritative source of detailed asset data is established, standard searches can be run to provide consistent results and define specific outcomes. As an example, many organizations want to prioritize appropriate EDR agent and Rapid7 IDR agent installations across their application infrastructure.

To achieve this functionality, security teams define what constitutes appropriate security controls and search for all assets that do not meet the criteria. The results can trigger playbooks or workflows to create automated remediation notifications. In instances where orchestrators can install agents, those assets without agents can be automatically remediated in a self-healing loop.
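The gap-search-then-remediate loop described above can be sketched as follows. This is an illustrative sketch, not any vendor's API; the control names and the `install` callback are hypothetical stand-ins for an orchestrator integration.

```python
# Hypothetical sketch of a control-gap search feeding a remediation loop.
REQUIRED_CONTROLS = {"edr_agent", "vuln_agent"}  # illustrative control names

def find_gaps(inventory):
    """Return assets missing any required control, ready to hand to a playbook."""
    return [a for a in inventory if not REQUIRED_CONTROLS <= set(a["controls"])]

def remediate(gaps, install):
    """Close each gap via the provided installer (e.g. an orchestrator hook)."""
    for asset in gaps:
        for control in REQUIRED_CONTROLS - set(asset["controls"]):
            install(asset["id"], control)
            asset["controls"].append(control)

inventory = [
    {"id": "web-1", "controls": ["edr_agent", "vuln_agent"]},
    {"id": "web-2", "controls": ["edr_agent"]},  # missing vuln_agent
]
remediate(find_gaps(inventory), lambda aid, c: print(f"installing {c} on {aid}"))
assert find_gaps(inventory) == []  # self-healing loop: no gaps remain
```

In a real deployment the `install` callback would trigger a notification or an automated agent install rather than mutate a local list.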

By integrating Rapid7's platform with appNovi, businesses gain actionable insights into the changes that occur across their attack surface with the ability to implement streamlined remediation.

Best Practices for Cyber Asset Attack Surface Management

Maintaining a robust attack surface management initiative is essential—automating as much of it as possible is what will result in efficiencies for the SOC. There are several best practices for organizations that want to undertake the initiative to uplevel security operations with Cyber Asset Attack Surface Management.

Different data, same problem
Rarely is all data in the same format. Even more rarely does all data provide the same match values of assets. For CAASM to be effective, ingestion and data convergence must facilitate data normalization through abstraction. This needs to be done through unique identifiers. Without integrated data feeds that support the wide variety of data structures and vendor nuances, you’ll end up back in an Excel spreadsheet that effectively only saves you a SIEM query.
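A minimal sketch of this normalization idea, assuming a MAC address as the shared unique identifier (real feeds vary by vendor, and the field names here are hypothetical):

```python
# Sketch: normalizing asset records from two tools onto a shared identifier.
def normalize_edr(rec):
    # Hypothetical EDR feed uses "mac_address" and "host".
    return {"key": rec["mac_address"].lower(), "hostname": rec["host"]}

def normalize_vm(rec):
    # Hypothetical VM scanner feed uses "mac" and "ipv4".
    return {"key": rec["mac"].lower(), "ip": rec["ipv4"]}

def converge(*records):
    """Merge normalized records into one profile per unique identifier."""
    merged = {}
    for rec in records:
        merged.setdefault(rec["key"], {}).update(rec)
    return merged

edr = normalize_edr({"mac_address": "AA:BB:CC:00:11:22", "host": "web-1"})
vm = normalize_vm({"mac": "aa:bb:cc:00:11:22", "ipv4": "10.0.0.5"})
profiles = converge(edr, vm)
assert len(profiles) == 1  # one asset profile, not two duplicate records
```

Without a shared identifier and an abstraction layer like this, each tool's records stay siloed and you are back to reconciling them by hand.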

Less is hard
There are many different data points about assets. All the asset attributes must converge into a single asset profile. Without this capability, security teams will be sifting through duplicate records that provide two different perspectives on the same asset, which often leads to partial resolution or inaction. To be effective, the SOC needs a high-fidelity source of data, not several incomplete profiles of the same asset.

Where is it?
Complete asset inventories are helpful for satisfying compliance requirements, but without context, every asset is judged on a single objective data point. Because you already have network data, you should be able to apply that network context and evaluate each asset relative to its environment. An external-facing asset with a medium-risk vulnerability is more important than a high-risk asset buried behind several network security controls. Your tools already monitor and hold network and business context; that telemetry and enrichment need to extend to assets.
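The external-medium-beats-internal-high idea can be made concrete with a toy scoring function. The zone names and weights below are purely illustrative, not a standard scoring model:

```python
# Sketch: weighting raw severity by network exposure (weights are illustrative).
EXPOSURE_WEIGHT = {"internet_facing": 3.0, "dmz": 1.5, "internal": 0.5}

def contextual_risk(asset):
    """Raw severity scaled by where the asset sits in the network."""
    return asset["severity"] * EXPOSURE_WEIGHT[asset["zone"]]

external_medium = {"id": "a", "severity": 5, "zone": "internet_facing"}
internal_high   = {"id": "b", "severity": 9, "zone": "internal"}

# With context applied, the external medium outranks the internal high.
assert contextual_risk(external_medium) > contextual_risk(internal_high)
```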

What is it?
Every enterprise has applications. Few know how many they have deployed in their network. Application data sources can help delineate and track application servers and the direct and indirect dependencies they support. The business importance of an asset helps not only with prioritization; telemetry such as logins can also expedite ownership identification.

Conclusion

By leveraging the power of CAASM, organizations can overcome the complexity of asset tracking and relationship mapping, optimize their security workflows, and effectively manage the evolving threat landscape. The tooling already exists, all that’s required is the integration and data convergence capabilities for you to uplevel the SOC.

Watch appNovi’s video on CAASM capabilities with Rapid7 today to understand this comprehensive and proactive approach to cybersecurity.

OWASP Top 10 API Security Risks: 2023!

The OWASP Top 10 API Security Risks 2023 has arrived! OWASP's API Top 10 is always a highly anticipated release and can be a key component of API security preparedness for the year. As we discussed in API Security Best Practices for a Changing Attack Surface, API usage continues to skyrocket. As a result, API security coverage must be more advanced than ever.

What are the OWASP Top 10 API Security Risks?

The OWASP Top 10 API Security Risks is a list of the highest-priority API-based threats in 2023. Let’s dig a little deeper into each item on the list to outline the type of threats you may encounter and appropriate responses to curtail each threat.

1. Broken object level authorization

Object level authorization is a control method that restricts access to objects to minimize system exposures. All API endpoints that handle objects should perform authorization checks utilizing user group policies.

We recommend using this authorization mechanism in every function that receives client input to access objects from a data store. As an additional means for hardening, it is recommended to use cryptographically secure random GUID values for object reference IDs.
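A minimal, framework-agnostic sketch of such an object-level check, using random GUIDs as reference IDs (`uuid.uuid4` draws from the OS's cryptographic random source). The store and function names are hypothetical:

```python
# Minimal sketch of an object-level authorization check (hypothetical store).
import uuid

DOCUMENTS = {}  # object store keyed by random GUIDs, not sequential IDs

def create_document(owner_id, body):
    doc_id = str(uuid.uuid4())  # cryptographically random reference ID
    DOCUMENTS[doc_id] = {"owner": owner_id, "body": body}
    return doc_id

def get_document(requesting_user, doc_id):
    """Every object access re-checks ownership; deny without leaking details."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requesting_user:
        raise PermissionError("not authorized for this object")
    return doc["body"]

doc_id = create_document("alice", "q3 plan")
assert get_document("alice", doc_id) == "q3 plan"
try:
    get_document("mallory", doc_id)  # horizontal access attempt
except PermissionError:
    pass  # blocked: ownership check failed
```

Random GUIDs also make IDs unguessable, but the authorization check, not the ID format, is the actual control.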

2. Broken authentication

Authentication relates to all endpoints and data flows that handle the identity of users or entities accessing an API. This includes credentials, keys, tokens, and even password reset functionality. Broken authentication can lead to many issues such as credential stuffing, brute force attacks, weak unsigned keys, and expired tokens.

Authentication covers a wide range of functionality and requires strict scrutiny and strong practices. Detailed threat modeling should be performed against all authentication functionality to understand data flows, entities, and risks involved in an API. Multi-factor authentication should be enforced where possible to mitigate the risk of compromised credentials.

To prevent brute force and other automated password attacks, rate limiting should be implemented with a reasonable threshold. Weak and expired credentials should not be accepted; this includes JWTs, passwords, and keys. Integrity checks should be performed against all tokens as well, ensuring signature algorithms and values are valid to prevent tampering attacks.
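As an illustration of the rate-limiting point, here is a sketch of a fixed-window limiter for login attempts. The window and threshold values are arbitrary examples; production systems would use a shared store (e.g. Redis) rather than in-process state:

```python
# Sketch of a fixed-window rate limiter for login attempts (thresholds illustrative).
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5
_attempts = defaultdict(list)  # client -> recent attempt timestamps

def allow_login_attempt(client_ip, now=None):
    """Return False once a client exceeds MAX_ATTEMPTS within the window."""
    now = now if now is not None else time.monotonic()
    recent = [t for t in _attempts[client_ip] if now - t < WINDOW_SECONDS]
    _attempts[client_ip] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False  # throttle: likely credential stuffing or brute force
    recent.append(now)
    return True

assert all(allow_login_attempt("1.2.3.4", now=0) for _ in range(5))
assert not allow_login_attempt("1.2.3.4", now=1)   # sixth attempt blocked
assert allow_login_attempt("1.2.3.4", now=120)     # window expired, allowed again
```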

3. Broken object property level authorization

Related to object level authorization, object property level authorization is another control method to restrict access to specific properties or fields of an object. This category combines aspects of 2019 OWASP API Security’s “excessive data exposure” and “mass assignment”. If an API endpoint is exposing sensitive object properties that should not be read or modified by an unauthorized user it is considered vulnerable.

The overall mitigation strategy is to validate user permissions in all API endpoints that handle object properties. Access to properties and fields should be kept to a bare minimum, granted on an as-needed basis scoped to the functionality of a given endpoint.
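One common way to enforce this is an explicit per-endpoint field allow-list, so an endpoint can only ever serialize the properties it was scoped for. The field names below are hypothetical:

```python
# Sketch: explicit per-endpoint field allow-lists instead of returning whole objects.
PUBLIC_FIELDS = {"id", "display_name"}           # what a public profile endpoint exposes
ADMIN_FIELDS = PUBLIC_FIELDS | {"email", "role"}  # what an admin endpoint exposes

def serialize(user, allowed_fields):
    """Emit only explicitly allowed properties; everything else is dropped."""
    return {k: v for k, v in user.items() if k in allowed_fields}

user = {"id": 7, "display_name": "alice", "email": "a@example.com",
        "role": "admin", "password_hash": "redacted"}

assert serialize(user, PUBLIC_FIELDS) == {"id": 7, "display_name": "alice"}
# Sensitive internals are never serializable, even for the admin view:
assert "password_hash" not in serialize(user, ADMIN_FIELDS)
```

An allow-list also mitigates mass assignment in the other direction: applying the same filter to inbound payloads prevents clients from setting fields like `role`.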

4. Unrestricted resource consumption

API resource consumption pertains to CPU, memory, storage, network, and service-provider usage for an API. Denial-of-service attacks result from overconsumption of these resources, leading to downtime and racked-up service charges.

Setting minimum and maximum limits relative to business functional needs is the overall strategy for mitigating resource consumption risks. API endpoints should limit the rate and maximum number of calls on a per-client basis. For API infrastructure, using containers and serverless code with defined resource limits will mitigate the risk of server resource exhaustion.

Coding practices that limit resource consumption need to be in place, as well. Limit the number of records returned in API responses with careful use of paging, as appropriate. File uploads should also have size limits enforced to prevent overuse of storage. Additionally, regular expressions and other data-processing means must be carefully evaluated for performance in order to avoid high CPU and memory consumption.

5. Broken function level authorization

A lack of authorization checks in the controllers or functions behind API endpoints is covered under broken function level authorization. This vulnerability class allows attackers to access unauthorized functionality, whether by changing an HTTP method from a `GET` to a `PUT` to modify data that is not expected to be modified, or by changing a URL string from `user` to `admin`. Proper authorization checks can be difficult due to controller complexity and the number of user groups and roles.

Comprehensive threat modeling against an API’s architecture and design is paramount in preventing these vulnerabilities. Ensure that API functionality is carefully structured and that the corresponding controllers perform strict authorization checks. For example, all functionality under an `/api/v1/admin` endpoint should be handled by an admin controller class that performs those checks. When in doubt, access should be denied by default and grants given on an as-needed basis.
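The default-deny pattern can be sketched as a decorator that wraps every privileged handler; names below are illustrative, not a specific framework's API:

```python
# Sketch of a default-deny authorization decorator for admin-only handlers.
from functools import wraps

def require_role(role):
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, *args, **kwargs):
            if role not in user.get("roles", ()):  # deny by default
                raise PermissionError(f"requires role: {role}")
            return handler(user, *args, **kwargs)
        return wrapped
    return decorator

@require_role("admin")
def delete_user(current_user, target_id):
    return f"deleted {target_id}"

assert delete_user({"roles": ["admin"]}, 42) == "deleted 42"
try:
    delete_user({"roles": ["user"]}, 42)  # flipping the URL or verb is not enough
except PermissionError:
    pass  # blocked: the function itself enforces the role check
```

Attaching the check to the function rather than the route means a forgotten route mapping fails closed instead of open.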

6. Unrestricted Access to Sensitive Business Flows

Automated threats are becoming increasingly more difficult to combat and must be addressed on a case-by-case basis. An API is vulnerable if sensitive functionality is exposed in such a way that harm could occur if excessive automated use occurs. There may not be a specific implementation bug, but rather an exposure of business flow that can be abused in an automated fashion.

Threat modeling exercises are important as an overall mitigation strategy. Business functionality and all dataflows must be carefully considered, and the excessive automated use threat scenario must be discussed. From an implementation perspective, device fingerprinting, human detection, irregular API flow and sequencing pattern detection, and IP blocking can be implemented on a case-by-case basis.

7. Server side request forgery

Server side request forgery (SSRF) vulnerabilities occur when a client provides a URL or other remote resource reference as input to an API, and the API then issues a crafted outbound request to that URL on the client's behalf. These flaws are common in redirect URL parameters, webhooks, file-fetching functionality, and URL previews.

SSRF can be leveraged by attackers in many ways. Modern usage of cloud providers and containers exposes instance metadata URLs and internal management consoles that can be targeted to leak credentials and abuse privileged functionality. Internal network calls such as backend service-to-service requests, even when protected by service meshes and mTLS, can be exploited for unexpected results. Internal repositories, build tools, and other internal resources can all be targeted with SSRF attacks.

We recommend validating and sanitizing all client-provided data to mitigate SSRF vulnerabilities. Strict allow-listing must be enforced when implementing resource-fetching functionality. Allow lists should be granular, restricting all but specified services, URLs, schemes, ports, and media types. If possible, isolate this functionality within a controlled network environment with careful monitoring to prevent probing of internal resources.
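A granular allow-list check can be sketched as below; the permitted hosts, scheme, and port are hypothetical examples for an image-fetching feature:

```python
from urllib.parse import urlparse

# SSRF mitigation sketch: allow-list every dimension of an outbound fetch.
# The hosts, scheme, and port below are hypothetical.

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}
ALLOWED_PORTS = {443}

def is_safe_fetch_url(url: str) -> bool:
    """Accept only URLs that pass every allow-list dimension."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    port = parsed.port or 443  # default HTTPS port when none is given
    return port in ALLOWED_PORTS
```

Note that a check like this rejects cloud metadata addresses such as `169.254.169.254` simply because they are not on the list; denying by default is what makes the allow-list approach robust.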

8. Security misconfiguration

Misconfigurations in any part of the API stack can result in weakened security. These can stem from incomplete or inconsistent patching, unnecessary features left enabled, or improperly configured permissions. Attackers will enumerate the entire surface area of an API to discover these misconfigurations, which could be exploited to leak data, abuse extra functionality, or find additional vulnerabilities in out-of-date components.

Having a robust, fast, and repeatable hardening process is paramount to mitigating the risk of misconfiguration issues. Security updates must be regularly applied and tracked with a patch management process. Configurations across the entire API stack should be regularly reviewed. Asset Management and Vulnerability Management solutions should be considered to automate this hardening process.
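An automated configuration review step can be as simple as diffing a deployed configuration against a hardening baseline. The baseline keys below are illustrative, not a complete checklist:

```python
# Sketch of an automated configuration review step within a repeatable
# hardening process. Baseline keys and values are hypothetical examples.

HARDENING_BASELINE = {
    "debug": False,             # debug endpoints leak internals
    "tls_min_version": "1.2",   # refuse legacy TLS versions
    "directory_listing": False, # no directory indexes
}

def find_misconfigurations(config: dict) -> list:
    """Return the baseline keys the given config violates or omits."""
    return [
        key for key, expected in HARDENING_BASELINE.items()
        if config.get(key) != expected
    ]
```

Running a check like this in CI makes the review repeatable, so drift from the baseline is caught on every deployment rather than during an annual audit.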

9. Improper inventory management

Complex services with multiple interconnected APIs present a difficult inventory management problem and introduce greater risk exposure. Having multiple versions of APIs across various environments further increases the challenge. Improper inventory management can lead to running unpatched systems and exposing data to attackers. With modern microservices making it easier than ever to deploy many applications, it is important to have strong inventory management practices.

Documentation for all assets, including hosts, applications, environments, and users, should be carefully collected and managed in an asset management solution. All third-party integrations must also be vetted and documented to maintain visibility into any risk exposure. API documentation should be standardized and available to those authorized to use the API. Careful controls over environment access and changes, over what is exposed externally versus internally, and over data protection must be in place to ensure that production data does not leak into other environments.

10. Unsafe consumption of APIs

Data consumed from other APIs must be handled with caution to prevent unexpected behavior. Third-party APIs could be compromised and leveraged to attack the services that consume them. Attacks such as SQL injection, XML External Entity (XXE) injection, and deserialization attacks should be considered when handling data from other APIs.

Careful development practices must be in place to ensure all data is validated and properly sanitized. Evaluate third-party integrations and service providers’ security posture. Ensure all API communications occur over a secure channel such as TLS. Mutual authentication should also be enforced when connections between services are established.
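Validation of upstream data can be sketched as a strict parse step that whitelists and type-checks every field before the record enters your own logic. The partner record shape below is hypothetical:

```python
import json

# Defensive parsing of data returned by a third-party API, assuming a
# hypothetical partner endpoint that returns user records as JSON.

EXPECTED_FIELDS = {"id": int, "email": str}

def parse_partner_user(raw: str) -> dict:
    """Parse and validate an upstream record; reject anything unexpected."""
    record = json.loads(raw)
    if not isinstance(record, dict):
        raise ValueError("expected a JSON object")
    clean = {}
    for field, ftype in EXPECTED_FIELDS.items():
        value = record.get(field)
        if not isinstance(value, ftype):
            raise ValueError(f"missing or malformed field: {field}")
        clean[field] = value
    return clean  # only whitelisted, type-checked fields survive
```

Dropping unrecognized fields rather than passing the raw record along means a compromised upstream cannot smuggle unexpected keys into downstream queries or deserializers.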

What's next?

The OWASP Top 10 API Security Risks template is now ready and available for use within InsightAppSec, mapping each of Rapid7’s API attack modules to their corresponding OWASP categories for ease of reference and enhanced API threat coverage.

Make sure to utilize the new template to ensure best-in-class coverage against API security threats today! And of course, as is always the case, ensure you are following Rapid7's best practices for securing your APIs.