The Importance of Asset Context in Attack Surface Management

This is the last of four blogs (Help, I can’t see! A Primer for Attack Surface Management Blog Series, The Main Components of an Attack Surface Management (ASM) Strategy, and Understanding your Attack Surface: Different Approaches to Asset Discovery) covering the foundational elements of Attack Surface Management (ASM). It focuses on one of the main drivers for ASM, and the reason companies are investing in it: the context it delivers to inform better security decision making.

ASM goes far beyond traditional IT asset visibility by bringing in the relevant security context that helps teams better prioritize and remediate. In general, the more context that you can make sense of, the more equipped your teams will be to make good decisions and drive toward action.

A clear example of this can be seen when investigating a machine under an active threat recorded by your SIEM or XDR solution. With thousands of assets in your environment, the security team may be unclear about this particular machine’s purpose. Context from your ASM solution reveals that the machine has access to several critical business networks and carries a high-risk exposure related to the ongoing threat, so compromise and lateral movement are only a matter of time. This augmented context during an investigation enables you to immediately make this the number one priority for your team.

Another key example involves identities. By inventorying all the identities across your environment, you can easily determine which ones have MFA disabled, and then filter further to those that have administrative access to a business application. To improve this identity context even further, you can pull in additional context from tools like KnowBe4 to understand how likely a user is to click on a phishing email based on their phishing training success rate. Marrying identity data with security controls and business context helps teams prioritize their most at-risk users for remediation.
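As an illustration of this kind of filtering, here is a minimal Python sketch over identity records that have already been ingested and correlated. The field names (mfa_enabled, admin_apps, phish_prone_pct) and the 20% threshold are hypothetical examples, not any particular vendor’s schema.

```python
# Minimal sketch: filtering correlated identity records to find high-risk users.
# Field names and the phishing threshold are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Identity:
    email: str
    mfa_enabled: bool
    admin_apps: List[str] = field(default_factory=list)  # business apps where the user has admin rights
    phish_prone_pct: float = 0.0                          # e.g. failure rate from phishing training

def high_risk_identities(identities, phish_threshold=20.0):
    """Identities with MFA disabled, admin access to at least one app,
    and a phishing-failure rate at or above the threshold."""
    return [
        i for i in identities
        if not i.mfa_enabled and i.admin_apps and i.phish_prone_pct >= phish_threshold
    ]

users = [
    Identity("alice@example.com", mfa_enabled=False, admin_apps=["ERP"], phish_prone_pct=35.0),
    Identity("bob@example.com", mfa_enabled=True, admin_apps=["CRM"], phish_prone_pct=10.0),
]
for u in high_risk_identities(users):
    print(u.email)  # -> alice@example.com
```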

Let’s look further at the key types of asset context that we believe are critical for effective ASM.

Business Context-Aware

The first, and arguably most important, is the asset’s business context. This enables teams to understand the business function and risk, as well as the chain of command for contact or remediation. Visibility into the chain of command gives teams the system owner, the primary user, and the department and leader they fall under.

This business context is often pulled from CMDBs such as ServiceNow, directory services, and HR tools like Workday, as well as from tags ingested from CSP and security tool data sources. To effectively leverage business context, organizations need to develop and maintain an information architecture across the environment. Business context also helps identify which assets are key dependencies for business-critical applications.

Exposures & Security Controls-Aware

Understanding an asset's vulnerabilities and exposures, along with its security controls, mitigations, and business context, is key to giving vulnerability teams the means to make the best prioritization decisions. If a group of 100 machines all contain a Known Exploited Vulnerability (KEV) being used in the wild by a specific piece of malware targeting your industry, your team may need to be up all night trying to remediate or mitigate that critical risk. But what if the majority of those machines also have a security control or configuration in place that effectively causes that malware to fail? Your team can instead focus on remediating the much smaller number within that group that lacks the required controls. Harnessing all the available security context for assets enables teams to prioritize far more effectively.
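A rough sketch of that narrowing-down step is shown below, assuming asset records have already been enriched with KEV and control data; the CVE, hostnames, and control names are made up for illustration.

```python
# Minimal sketch: narrowing a KEV-affected group to machines lacking a mitigating control.
# The control name and asset fields are hypothetical, not a specific product schema.
machines = [
    {"hostname": "web-01", "kev_cves": ["CVE-2024-0001"], "controls": ["edr", "asr_rules"]},
    {"hostname": "web-02", "kev_cves": ["CVE-2024-0001"], "controls": ["edr"]},
    {"hostname": "db-01",  "kev_cves": [],                "controls": ["edr"]},
]

MITIGATING_CONTROL = "asr_rules"  # hypothetical control that breaks this malware's kill chain

needs_urgent_fix = [
    m["hostname"] for m in machines
    if "CVE-2024-0001" in m["kev_cves"] and MITIGATING_CONTROL not in m["controls"]
]
print(needs_urgent_fix)  # -> ['web-02']: patch these first; the rest can wait for the normal cycle
```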

Threat-Aware

Finally, threat context derived from SIEM, Threat Intelligence Platforms (TIP), and endpoint security tools enables security operations teams to gain insight into active threats and investigations when looking at an asset. It also enables teams to threat-hunt across all asset data, understand the blast radius from a compromised machine, and use threat insights to prioritize response. If you can identify all machines that have a specific vulnerability and are also exhibiting TTPs related to it, remediation for those machines can be prioritized.

Data Confidence, Aggregation & Correlation

A key factor in having confidence in security data, and in the context derived from it, is trust in the accuracy and integrity of the data itself. There are a few ways technology can help deliver that confidence. Because ASM is all about having visibility across your data and tooling silos, the final thing to consider is the technology's ability to let you analyze, troubleshoot, and configure data so that it matches your view of the attack surface. We can break this down into three main areas:

Unified Data Ingestion & Correlation

According to research from 451 Group, most security teams rely on between 11 and 30 different security tools to manage and secure their environments. Each of these tools provides only a partial view of the environment, and only from a particular perspective. For example, Active Directory typically only sees Windows machines that are joined to the domain, DHCP only sees networked devices that have broadcast and been given a lease, and CSPM tools only see cloud resources for the Cloud Service Providers they have been configured to monitor.

Due to these visibility gaps, a holistic ASM solution must be able to see across these data silos and tools by ingesting and correlating data from many different sources and deduplicating it, delivering an accurate, continuously updated view of an organization’s asset landscape.
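To make the idea concrete, here is a minimal sketch of correlation and deduplication that merges raw records on a shared identifier (MAC address, falling back to hostname). Real correlation engines use weighted, multi-attribute matching; the records and field names below are illustrative.

```python
# Minimal sketch: correlating raw asset records from several tools into unique assets.
# Three raw records from three tools collapse into one unique asset.
from collections import defaultdict

raw_records = [
    {"source": "active_directory", "hostname": "FIN-LT-042", "mac": "aa:bb:cc:dd:ee:01"},
    {"source": "edr",              "hostname": "fin-lt-042", "mac": "AA:BB:CC:DD:EE:01", "edr_agent": True},
    {"source": "vuln_scanner",     "hostname": "fin-lt-042", "mac": "aa:bb:cc:dd:ee:01", "critical_vulns": 3},
]

def correlation_key(record):
    # Prefer a normalized MAC address; fall back to hostname when the MAC is missing.
    mac = (record.get("mac") or "").lower()
    return mac or record["hostname"].lower()

merged = defaultdict(dict)
for rec in raw_records:
    # Later records enrich the same unique asset rather than creating duplicates.
    merged[correlation_key(rec)].update({k: v for k, v in rec.items() if v is not None})

print(len(raw_records), "raw records ->", len(merged), "unique asset(s)")
```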

Data Transparency

Data transparency is all about giving users the ability to understand where their data has come from, how well it is being ingested, and how it populates the data model. It also enables users to follow and configure correlation logic. You must be able to trust the data of a solution that is intended to become the ‘single source of truth’ for security data in your organization, so we cannot emphasize enough the importance of having the right visibility into how data is used in an ASM solution.

For reference, I’m including several examples of how data transparency is a core capability of Rapid7’s Surface Command.

In the image below, we’re looking at the distribution of raw asset records to uniquely correlated assets in an organization. The system has received over 200,000 raw asset records from many different data sources and, through its asset correlation algorithm, narrows them down to 63,179 unique assets.

[Image: Distribution of raw asset records to uniquely correlated assets in Surface Command]

The next example shows correlation effectiveness and property fulfillment (data fields with actual values) for Azure AD’s Device type. This capability is available on a per-connector basis and can be used to see how well the data source in question correlates with other data sources (i.e., are they seeing the same assets?), and how much of the data is being fulfilled by the API, which can help pinpoint configuration issues that are limiting your view of your attack surface.

[Image: Correlation effectiveness and property fulfillment for the Azure AD Device type]

The final example is a table view of all the data sources coming into the system and key insights from them. This can be used to assess the quality of your data sources and to debug issues such as duplicate records. In that case, correlation rules can be updated to reduce the duplication so users get the best correlation, and thus the most accurate view of their attack surface.

[Image: Table view of all connected data sources and key insights from them]

This transparency into data ingestion and correlation is also critical when working with other stakeholders in the business, ensuring that everyone is in alignment on the most accurate data.

Data Prioritization

The final key aspect to successful ASM is being able to customize data in the way that an organization wants to see it. Teams rely on some tools more than others, and the weighting of those tools should match the overall preferences of the business. If Active Directory is your source of truth for ‘business owner’ and ‘department’ information over ServiceNow CMDB, then the system should be able to re-correlate the data based on the way an organization sees and utilizes the data.

Below, we show an example of how we are able to configure data prioritization in Rapid7’s Surface Command. Weighting the data can be configured on a per-property basis, so any ingestible and correlatable field can be customized to prioritize which tool should be preferred in the event of a data conflict. This enables teams to select and leverage the tools that they trust the most for specific data and use cases, so the attack surface matches the way they see their environment.

[Image: Data prioritization configuration in Surface Command]
[Example: Where ServiceNow takes priority on the Business Owner of an asset, followed by Azure AD.]
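A minimal sketch of how per-property source precedence could work conceptually is shown below. The source names follow the example above, but the precedence structure and resolution function are hypothetical, not Surface Command’s actual configuration model.

```python
# Minimal sketch: resolving conflicting property values with per-property source precedence.
# Precedence lists are illustrative; in practice they are configured per organization.
PROPERTY_PRECEDENCE = {
    "business_owner": ["servicenow_cmdb", "azure_ad"],    # ServiceNow wins, Azure AD is the fallback
    "department":     ["active_directory", "servicenow_cmdb"],
}

def resolve_property(prop, values_by_source):
    """Pick the value from the most-trusted source that actually supplied one."""
    for source in PROPERTY_PRECEDENCE.get(prop, []):
        if values_by_source.get(source):
            return values_by_source[source]
    # Fall back to any available value if no preferred source supplied one.
    return next((v for v in values_by_source.values() if v), None)

owner = resolve_property("business_owner", {
    "azure_ad": "J. Smith",
    "servicenow_cmdb": "Jane Smith (Finance)",
})
print(owner)  # -> 'Jane Smith (Finance)', because ServiceNow takes priority for this property
```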

Conclusion: The Value of Context in Attack Surface Management

Over the past four blogs, I have tried to cover some of the key benefits and use cases for ASM. Much of it comes down to the core value that you can only protect what you know about, but in reality, it’s more complex than that.

The context that ASM solutions provide about both external threats and internal cyber risks helps security teams focus on what is most critical to protecting their organization. With the ever-growing number of vulnerabilities and non-patchable exposures, it just isn’t practical to expect to address everything, so prioritization is key. This is where the real value of ASM lies.

Once we understand our overall security posture, which assets are most critical to the business, and which services are most exposed to attack, we have the context needed to drive an effective cybersecurity program. We can take these insights and make them actionable, working with colleagues in DevOps and IT to harden machines and patch the highest-risk vulnerabilities. If we find the gaps before the attacker does, we should also reduce the downstream burden on our SOC and IR teams.

I hope you found this blog series valuable. I’d encourage you to explore more information on Rapid7’s market-leading attack surface and exposure management solutions at https://www.rapid7.com/products/command/attack-surface-management-asm/.

Understanding your Attack Surface: Different Approaches to Asset Discovery

Over the past two blogs in our series on Attack Surface Management (Help, I can’t see! A Primer for Attack Surface Management Blog Series and The Main Components of an Attack Surface Management (ASM) Strategy), we’ve focused on the drivers and core elements of an Attack Surface Management solution. In this post, we’ll delve into the process of discovering assets. We cannot secure what we cannot see, so getting this piece right is foundational to the success of your ASM program. This blog explores four different methods of asset discovery, starting with the most basic: deployed software agents.

Software Agents

Deployed agents are how most asset inventory and asset management systems work. A software agent is deployed on a workstation or server and phones home to the management system with information about the asset. The benefit of this approach is a very high-fidelity dataset on that particular asset, including up-to-date information on installed software, location, and so on. However, the approach is only as good as the reach of the agent: from an asset discovery perspective, you cannot discover assets that do not have the agent installed. In the funnel diagram below, this is effectively having visibility from a single row of the funnel. In reality, most organizations do not have 100% agent deployment coverage, and many assets cannot run agents at all, so they end up with many different tools that provide asset visibility, each from a different perspective.

[Image: Asset discovery funnel]

Also, if the software agent is an IT management agent rather than an endpoint security agent, it will typically lack security control, vulnerability, and exposure context, meaning key information needed to understand the attack surface may be missing.

In sum, software agents should be treated as just one piece of the attack surface puzzle.

Data Aggregation & Correlation

The more comprehensive way to discover assets is by ingesting asset data from the variety of tools the organization already uses. This is the primary way assets are discovered with a CAASM solution. By ingesting data from your IT, business application, and security tools via API connectors, we get the broadest visibility and can see across the data gaps of any individual tool.

A CAASM solution asks each connected tool for its latest list of assets on a recurring basis. Security and identity data related to an asset is then stored and mapped to build relationships in a database (ideally a graph) that is easily discoverable and queryable. A few solutions also track asset history, enabling trend analysis of how an asset, and the organization, change over time; in that case, more than a single record is stored per asset, retained for a configurable length of time.
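Conceptually, that recurring ingestion with history retention might look something like the minimal sketch below; the Connector interface, the 90-day retention window, and the asset lists are all hypothetical.

```python
# Minimal sketch: recurring connector polling with dated snapshots retained for trending.
# The Connector interface and retention period are hypothetical illustrations.
import datetime
from typing import Dict, List

class Connector:
    """Hypothetical connector that returns the current asset list from one tool's API."""
    def __init__(self, name, fetch):
        self.name, self.fetch = name, fetch

def ingest_snapshot(connectors: List[Connector], history: Dict[str, list], retention_days=90):
    today = datetime.date.today()
    for c in connectors:
        # Append today's snapshot of this tool's asset list.
        history.setdefault(c.name, []).append({"date": today, "assets": c.fetch()})
        # Drop snapshots older than the retention window so storage stays bounded.
        history[c.name] = [
            snap for snap in history[c.name]
            if (today - snap["date"]).days <= retention_days
        ]

history: Dict[str, list] = {}
connectors = [Connector("azure_ad", lambda: ["fin-lt-042", "eng-vm-007"])]
ingest_snapshot(connectors, history)
print(len(history["azure_ad"][-1]["assets"]), "assets in today's Azure AD snapshot")
```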

Thanks to the correlation engine provided by CAASM solutions, the more data you ingest from your tools, the better your attack surface visibility and accuracy. Remember the funnel illustrated at the beginning of this article? Since your tools might not agree on fundamentals like the number of assets, it’s necessary to ingest data across them to get closer to a true picture of your attack surface.

Organizations typically start putting together the pieces by ingesting data from five primary sources:

  • Directory Services (Active Directory, Azure AD, LDAP, etc.)
  • Endpoint Security (insightIDR, CrowdStrike, SentinelOne, etc.)
  • Vulnerability Management (insightVM, Qualys, Tenable, etc.)
  • Identity & Access Management (Okta, Duo, etc.)
  • Cloud Service Provider or Cloud Security Posture Management (AWS, Azure, insightCloudSec, Wiz, etc.)

Full deployments typically have between 10 and 20 data sources, depending on the size of the organization. These also include integrations into CMDBs, IT Asset Management systems, Digital Risk Protection Services (DRPS), and more.

For the external attack surface (EASM), assets are discovered using one of two methods, or a combination of both. The first is again data ingestion, from sources like Shodan, Bitsight, and others. The second is active internet scanning on a recurring basis to discover the latest public-facing assets and the services running on them. We have covered data ingestion in detail already and will look at active network scanning shortly, but let’s start with passive network scanning.

Passive Network Scanning

Not every asset in the organization will be linked to a pre-existing data source. For complete attack surface coverage, you also need to consider methods that go further and address the visibility gaps in your data sources. The first of these is passive network scanning.

In one scenario, attackers could gain access to your internal network through a malicious insider: a disgruntled employee could plug an unapproved workstation or malicious device into an ethernet port, or attackers might use WiFi attacks to gain a foothold on the network with a static IP address. In both cases, the malicious device would be effectively invisible to your teams and tools, except from the perspective of network switches, firewalls, and network traffic analysis.

Support for passive network traffic data sources can therefore give teams visibility into new assets that come online and are not correlated with any other data source, including rogue devices that are circumventing security policy and protocol. Most CAASM solutions today do not ingest network data such as NetFlow or support NTA data ingestion, although some can use data from agents that process ARP or DHCP broadcasts to discover new assets. However, these agents need to be deployed on the specific network segment, otherwise they won't be able to discover unknown assets there. In these cases, active network scanning is a potential alternative to increase visibility of assets that are circumventing normal controls and monitoring.
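For illustration, a passive discovery agent of the kind described above could be sketched with scapy, flagging ARP traffic from MAC addresses that no other data source has reported. The KNOWN_MACS set is a stand-in for your correlated inventory, and sniffing requires appropriate privileges on the segment being monitored.

```python
# Minimal sketch: passive discovery of unknown devices from ARP broadcasts.
# Requires scapy and sniffing privileges; KNOWN_MACS would come from your other data sources.
from scapy.all import sniff, ARP  # pip install scapy

KNOWN_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # MACs already inventoried elsewhere

def on_arp(pkt):
    if pkt.haslayer(ARP):
        mac, ip = pkt[ARP].hwsrc.lower(), pkt[ARP].psrc
        if mac not in KNOWN_MACS:
            print(f"Possible rogue device: {mac} at {ip}")

# Sniff ARP traffic on the local segment for 60 seconds;
# only devices on this segment will ever be seen.
sniff(filter="arp", prn=on_arp, store=False, timeout=60)
```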

Active Network Scanning

The most difficult-to-discover shadow IT assets can also be the most vulnerable, because they won't have the necessary security controls enabled. These assets are not discoverable through network data alone when they provide no telemetry, and even with network data, security teams often miss fingerprinting and fail to identify the services running on these devices. Active scanning offers a way to capture information from assets that are otherwise missed, and it is a necessary feature in environments where full visibility is extremely important.

A fully deployed vulnerability scanner is superior to native active network scanning because it uses the same network discovery techniques but also understands vulnerabilities and exposures. Using a CAASM solution to identify which assets and networks are not being continuously assessed for vulnerabilities is also a great way to increase your ability to discover new assets through active network scanning.

For the final blog in this series, we will look at how we can drive greater insights through the context we can acquire with effective Attack Surface Management (ASM).

The Main Components of an Attack Surface Management (ASM) Strategy

In part one of this blog series, we looked at some of the core challenges driving demand for a new approach to Attack Surface Management. In this second blog, I explore some of the key technology approaches to ASM as well as some of the core asset types we need to understand. We can break the attack surface down into two key perspectives (or generalized network locations), each of which covers hybrid environments (cloud, on-premises):

  • External (EASM) - Public facing, internet exposed cyber assets
  • Internal - Private network accessible cyber assets

External (EASM)

Today, most available ASM solutions focus on External Attack Surface Management (EASM), which provides an attacker’s perspective of an organization: an outside-in view. In fact, it’s common for organizations, and some analyst firms, to refer to EASM simply as ASM. While this view is important, it is only a small, partial view of the attack surface in most organizations.

EASM seeks to understand an organization’s external attack surface by collecting telemetry about its internet-exposed, public-facing assets. This telemetry is derived from data sources such as vulnerability and port scans, system fingerprinting, domain name searches, TLS certificate analysis, and more. It provides valuable insights into the low-hanging fruit that attackers will target. Core EASM capability is the equivalent of pointing a vulnerability scanner at your known external IP address range. However, unless your external environment is most of your business, this visibility alone is not enough and leaves organizations with a limited, partial view of their attack surface.
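As a small taste of the kind of telemetry EASM gathers, here is a standard-library sketch that checks how many days remain on a public host's TLS certificate. The hostname is a placeholder, and a real EASM platform would run many such checks (port scans, fingerprinting, DNS enumeration) continuously.

```python
# Minimal sketch: one small EASM-style check, pulling TLS certificate expiry for a public host.
# Uses only the Python standard library; 'www.example.com' is a placeholder hostname.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname, port=443, timeout=5.0):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter is formatted like 'Jun  1 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).days

print(cert_days_remaining("www.example.com"), "days until certificate expiry")
```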

Internal

The internal attack surface is often the largest portion of an organization’s digital footprint. Attackers frequently gain footholds in organizations through identity, ransomware, and supply-chain attacks, among many other vectors. Organizations need visibility into their internal attack surface to gain real insight into their digital estate and to reduce risk by understanding how their most vulnerable and business-critical systems are connected, monitored, and protected.

Today, most organizations that have adopted an ASM approach are manually correlating asset information from various sources in spreadsheets, combining business context with the security controls deployed on those assets, so they can answer basic questions about security tool coverage and deployment gaps and measure their compliance adherence.

The data sources in these spreadsheets typically include directory services such as Active Directory, combined with outputs from common security controls such as EDR or vulnerability scanning. Not only is this manual process time-consuming, but the information is often out of date by the next morning.

Organizations need a more scalable solution to this problem, which has led to the development of CAASM.

Introducing CAASM, a new approach to attack surface and exposure management

Over the last few years, an approach has emerged to address the attack surface discovery and visibility problem in a scalable, holistic way. It goes by a long acronym: Cyber Asset Attack Surface Management (CAASM).

CAASM is the security team’s take on asset management, but it’s much more than that. It addresses the internal visibility problem by aggregating and correlating asset information across an organization’s security and IT tools, providing a clearer, more accurate picture of the attack surface. Foundational to CAASM is a correlation engine and data model that builds relationships across different types of assets, controls, exposures, and more. This technology provides the best representation of an asset, with full context from IT and security tools. It enables IT, SecOps, DevOps, and CloudOps teams to operate from the same information by breaking down tool sprawl and data silos, enabling better visibility, communication, prioritization, and remediation of risk.

CAASM solutions work by ingesting data from IT, business applications, and security tools through simple API integrations that pull in asset data from each tool on a continuous basis, identifying unique assets through aggregation, deduplication, and correlation. This provides the best picture of your digital estate by breaking down data silos and tool deployment gaps. The more data you ingest from your environment, the more accurate the picture of your attack surface becomes.

These solutions are continuing to evolve today to treat identities as assets, create software inventories, and map SaaS applications as part of the attack surface. When seeking a holistic attack surface solution, you should ensure it includes the following key features for optimal visibility:

  • External Attack Surface Management
  • Internal Attack Surface Management
  • Unified data correlation engine
  • Cloud resource aware
  • Identities
  • Software Inventory

Key Asset Types to Drive Attack Surface Visibility

NIST has a definition of asset that is very broad but will suffice for this article:

“An item of value to stakeholders…the value of an asset is determined by stakeholders in consideration of loss concerns across the entire system life cycle. Such concerns include but are not limited to business or mission concerns.”

Based on this definition, we will further narrow down the scope to focus on types of cyber assets that add the most value in understanding the Attack Surface. Let’s start with the most basic: machines.

Traditional Assets (Machines)

Often referred to simply as "assets," these primarily include your employee and business application compute devices, such as workstations and servers. Due to the fast-paced evolution of digital infrastructure, this definition is quickly expanding to include infrastructure like virtual machines, containers, and object stores, or new asset categories are being created in Attack Surface Management solutions. The important thing is to make sure you have visibility into the cyber assets in your organization, however they’re defined.

Identities

Identities are the new perimeter, as some say, and are valuable assets to the business because they grant access to the business’s resources. Identity data suffers from the same data-silo problems as other assets: your company email address, for example, is typically used to authenticate to many different business services and applications. If we can correlate data from sources like Active Directory, Okta, Google Workspace, Office 365, and KnowBe4 security training, we can give security and IAM teams visibility into not just the identities within the organization, but also key challenges in the identity attack surface, such as identities that have MFA disabled yet hold administrator access to key services.

A common challenge with identity discovery and attack surface management is that security teams attempt to map identities using threat data alone. There is a significant difference in accuracy between detection rules and the identity source itself. For example, a service account that is actively enabled may be missed by a SIEM/XDR solution due to a lack of recent log activity, and therefore excluded from reports. By inventorying identities as assets, we can gather the status of the service account directly from the data source’s API. Both the identity telemetry from the source (e.g., Okta, AD) and the threat data (e.g., SIEM/XDR) can then be leveraged to give a more accurate picture of the state of the environment.

Software Inventory

With the rise of supply-chain attacks and the increased presence of unapproved or outdated software, visibility into software has become a key part of understanding your attack surface. Inventorying all software installed and running on an asset, combined with security context around that software from vulnerability scanners, NGAV, and threat intelligence, gives teams the best visibility into understanding and measuring the risk posed by unapproved or unauthorized code. A software inventory helps answer questions like:

  • Which of my machines are running software that has a new, high-risk vulnerability?
  • Which machines are running legacy or outdated software?
  • What is the most vulnerable software in my environment that we should prioritize for remediation?
  • Am I over-utilizing an application license?
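As a rough sketch of how the first of those questions might be answered from a correlated software inventory, consider the following; the hostnames, package versions, and CVE identifier are made-up examples.

```python
# Minimal sketch: answering "which machines run software with a new high-risk vulnerability?"
# from a correlated software inventory. Package and CVE data below are made-up examples.
software_inventory = {
    "fin-lt-042": [{"name": "openssl", "version": "1.1.1k"}, {"name": "chrome", "version": "124.0"}],
    "eng-vm-007": [{"name": "openssl", "version": "3.0.13"}],
}
high_risk_advisories = [
    {"name": "openssl", "affected_versions": {"1.1.1k"}, "cve": "CVE-EXAMPLE-0001"},
]

# Cross-reference installed packages against the advisory list.
affected = [
    (host, adv["cve"])
    for host, packages in software_inventory.items()
    for pkg in packages
    for adv in high_risk_advisories
    if pkg["name"] == adv["name"] and pkg["version"] in adv["affected_versions"]
]
print(affected)  # -> [('fin-lt-042', 'CVE-EXAMPLE-0001')]
```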

Other types of ‘software adjacent’ assets include SaaS applications and web applications.

Now that we have identified the three major types of business assets to monitor in your attack surface, the next blog will explore how ASM solutions discover the assets in your environment, and what to watch out for to ensure you have the best discovery capabilities so that you’re not missing large portions of your attack surface.


Part 1: Overview of the Problem ASM Solves and a High-Level Description of ASM and Its Components

Help, I can’t see! A Primer for Attack Surface Management blog series

Welcome to the first installment of our multipart series, "Help! I Can’t See! A Primer for Attack Surface Management Blog Series." In this series, we will explore the critical challenges and solutions associated with Attack Surface Management (ASM), a vital aspect of modern cybersecurity strategy. This initial blog, titled "Overview of the Problem ASM Solves and a High-Level Description of ASM and Its Components," sets the stage by examining the growing difficulties organizations face in managing their digital environments and how ASM can help address these issues effectively.

The fast-paced evolution of the digital infrastructure that drives businesses forward (e.g. workstations, virtual machines, containers, edge) is also making it harder for organizations to keep track of and account for the cyber attack surface they’re responsible for protecting. Despite security teams continuing to invest exorbitant amounts of money in tools (VM, EDR, CNAPP, etc.) to both manage their digital environment and secure it, the problem isn’t getting any better. In this blog series, we will help demystify the problems of security data silos and tool sprawl so you can answer pertinent questions like:

  • How many assets and identities am I responsible for protecting?
  • How many assets and identities are lacking security controls like endpoint security or MFA?
  • What is my overall security posture?

When we look at the number and types of tools organizations spend money on to manage and secure their digital environment, we typically see things like vulnerability scanners, endpoint security, IdP, patching, IT asset management, Cloud Service Providers, and more. Each of these tools and technologies tends to do a pretty good job at its core function but unintentionally contributes to a fractured ecosystem that gives organizations contradictory information about their digital environment.


The age-old problem: How many assets do I have?

Let’s look at a real-world example where an organization has solutions for Vulnerability Management (VM), Cloud Security Posture Management (CSPM), Endpoint Security (EDR/EPP), Active Directory (directory services), and IT Asset Management (ITAM).

[Image: Asset counts reported by each tool in the example environment]

None of these tools agree on the number of assets in the environment. It’s practically impossible to achieve 100% deployment of agent-based tools across your business (some types of assets cannot have agents at all!), and it’s a real challenge to see across these tooling visibility gaps. The result is that we cannot answer the basic question of “How many assets am I responsible for protecting?”

The problem is compounded because if we can’t agree on the total number of assets, then we don’t know the number of controls in place, the number of vulnerabilities and exposures that exist, or the number of active threats in our environment. Teams that manage and secure organizations are relying on incorrect information, in an environment where prioritization and decision making need to be based on high-fidelity information that incorporates IT, security, and business context to produce the best outcomes.

To drill down on these points, let’s pick a few tools from the infographic for illustrative purposes. Wiz only sees assets in the cloud, Active Directory only sees assets (mostly Windows) joined to the domain, and traditional vulnerability scanners see across hybrid environments but tend to be deployed mostly on-premises. If you look at the numbers in the Assets column, you will immediately notice that none of these tools agree on the number of assets in the environment. Lacking visibility and confidence in your attack surface is a big data problem, and deploying the latest shiny security tool is not going to fix it.

Ultimately, we have an industry-created data problem, and Rapid7 is not immune to it. For a number of perfectly good reasons, we have built a fractured technology ecosystem that prevents security teams from having the best data available to determine their cyber risk and to prioritize the most effective remediation and response.

We need to see across the gaps that truly matter; for that we need Attack Surface Management.

What is Attack Surface Management?

Attack Surface Management (ASM) is generally part of a wider Exposure Management program and is a different way to think about cyber risk, focusing on the digital parts of the business that are most vulnerable to attack. Taking an attack surface-based approach to your security program needs to consider a number of different elements, including:

  • Discovery and inventory of all cyber assets in the organization, from the endpoint to the cloud
  • Internet-scanning to identify unknown exposures and map them to the existing asset inventory
  • High-fidelity visibility and insights into the IT, security, and business context of those assets
  • Relationship-mapping between the assets and the wider network and business infrastructure

ASM is a continuous process that constantly assesses the state of the attack surface by uncovering new or updated assets, identifying the use of shadow IT in network or cloud use cases, and prioritizing exposures based on their potential risk to the business. Discovery and prioritization are foundational elements of a Continuous Threat Exposure Management (CTEM) initiative, in which security teams take a more holistic approach to managing all types of exposures in their organizations.

A positive trend that we are currently seeing is that security teams are going back to basics and focusing on cyber asset management to first discover and understand the assets they’re responsible for protecting, along with their business function.

They gain visibility into assets through external scanning to identify internet-facing assets, which are potentially higher risk; this is known as External Attack Surface Management (EASM). A complementary approach to cyber asset discovery, which provides greater insight into the whole cyber estate, uses API-based integrations into existing IT management and security tools to ingest asset data; this is known as Cyber Asset Attack Surface Management (CAASM). Together, they provide organizations with the asset visibility they need to drive security decisions.

Put simply, you cannot secure what you can’t see. Managing the attack surface requires asset discovery and visibility, combined with rich context from all tools in the environment.

Attack Surface Management vs. Asset Inventory

There is a common confusion among customers today that they already have elements of an ASM strategy in their current approach to asset inventory. This is typically an asset inventory system that IT uses for asset lifecycle management. A traditional asset inventory’s view of the environment is almost entirely based on what it can discover on its own, and with an IT focus. These tools are often agent-based, with limited integrations, so they cannot take advantage of an organization’s wide range of tools, which impairs their value.

Many asset inventories today can only discover assets where they have a deployed agent, such as an endpoint agent, or that are tied to the domain controller. While these technologies are effective at making policy and configuration changes on their fleet of endpoints, they do not have a data aggregation and correlation engine that sees beyond the specific agent. They also have limited security insights and context, and, since no agent achieves 100% coverage in practice, they can only ever provide a partial view of the attack surface.

This is why one should not confuse asset inventories with Attack Surface Management; the latter is a much more effective approach to surfacing the best asset and security telemetry across your ecosystem. An Attack Surface Management solution will ingest data from an IT asset inventory or management tool as one of many data sources to collate.

The next blog in this series will look at the different components of an ASM program, and how they can be leveraged to improve security hygiene and reduce cyber risk.
