Pity the poor CISO at any enterprise you care to name.

Related: The rise of ‘XDR’

As their organizations migrate deeper into an intensively interconnected digital ecosystem, CISOs must deal with cyber attacks raining down on all fronts. Many are working with siloed security products from another era that serve as mere speed bumps. Meanwhile, security teams are stretched thin and on a fast track to burnout.

Help is on the way. At RSA Conference 2022, which opened today in San Francisco, new security frameworks and advanced, cloud-centric security technologies will be in the spotlight. The overarching theme is to help CISOs gain a clear view of all cyber assets, triage exposures wisely and become proficient at swiftly mitigating inevitable breaches.

Easier said than done, of course. I had the chance to discuss this with Lori Smith, director of product marketing at Trend Micro. With $1.7 billion in annual revenue and 7,000 employees, Trend Micro is a prominent leader in the unfolding shift towards a more holistic approach to enterprise security, one that’s a much better fit for the digital age. For a full drill down on our discussion, please give the accompanying podcast a listen. Here are key takeaways.

Beyond silos

It was only a few short years ago that BYOD and Shadow IT exposures were the hot topics at RSA. Employees using their personally owned smartphones to upload cool new apps presented a nightmare for security teams.

Fast forward to today. Enterprises are driving towards a dramatically scaled-up and increasingly interconnected digital ecosystem. The attack surface of company networks has expanded exponentially, and fresh security gaps are popping up everywhere.

What’s more, the rapid rise of a remote workforce, in the wake of Covid 19, has only served to accelerate cloud migration, as well as scale up the attendant network exposures. Unmanaged smartphones and laptops, misconfigured Software as a Service (SaaS) apps and unsecured Internet access present more of an enterprise risk than ever.

“The increased number of these cyber assets means that there’s more cyber assets that can potentially be vulnerable,” Smith says. “This opens up an even bigger and more profitable attack surface that cybercriminals are only too eager to target and exploit.”

Smith

In this hyperkinetic environment, a harried CISO needs to be able to visualize risk from a high level – as if it were moving in slow motion – and then make smart, strategic decisions. No single security solution does this; there is no silver bullet. And the usual collection of security tools – firewall, endpoint detection, intrusion detection, SIEM, etc. – typically arranged as siloed layers to protect on-premises networks, falls short as well, Smith says.

See, assess, mitigate

In life, solving any complex challenge often comes down to going back to basics, and network security is no exception. Enterprises can head down several viable paths to get there. Trend Micro is in the camp advocating that a more holistic security posture can be attained by mastering three fundamental capabilities.

The first is the ability to see everything. Enterprises need to gain a crystal-clear view of every component of on-premises, private cloud and public cloud IT infrastructure, Smith says. This is not a snapshot; it’s more of a process of continuously discovering evolving tools, services and behaviors, she says.

Observes Smith: “This is about gaining visibility into all cyber assets, internal and external, and answering questions like, ‘What is my attack surface?’ and ‘How well can I see all the assets in my environment?’ ‘How many assets do I have?’ ‘What types?’ ‘What kinds of profiles do my assets have and how is that changing over time?’”

Discovering and continuously monitoring all cyber assets enables the second essential capability: conducting strategic risk assessments to gain important insight into the status of an organization’s cyber risks and security posture. Need a roadmap? CISOs need only follow the principles honed over the past 200 years by the property and casualty insurance industry.

It comes down to taking an informed approach to triaging cyber exposures, Smith says. Organizations need better insight in order to prioritize the actions that will reduce their risk the most. That insight also helps identify the security controls appropriate for each cyber asset; for example, strong authentication and least-privilege access are essential for sensitive assets but may be unnecessary for benign ones.
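
To make the idea concrete, here is a minimal, hypothetical sketch of risk-based triage in Python. The asset names, likelihood figures and impact weights are invented for illustration and are not drawn from Trend Micro’s methodology.

```python
# Toy risk triage: score = likelihood x impact, fix the riskiest exposures first.
# All asset names and numbers below are hypothetical.
exposures = [
    {"asset": "payroll-db",     "likelihood": 0.7, "impact": 9},
    {"asset": "marketing-site", "likelihood": 0.9, "impact": 2},
    {"asset": "vpn-gateway",    "likelihood": 0.5, "impact": 8},
]

for e in sorted(exposures, key=lambda e: e["likelihood"] * e["impact"], reverse=True):
    print(f"{e['asset']}: risk score {e['likelihood'] * e['impact']:.1f}")
```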

The third capability has to do with mitigating risks. Data analytics and automation can very effectively be applied to dialing in the optimum mix of security and agility, at scale. “This is about applying the right controls,” Smith says. “Whether that’s automated remediation action using security playbooks or prioritizing and proactively implementing recommended actions to lower risk.”

Towards holistic security

It’s remarkable – and telling – that Trend Micro got its start in 1988 as the supplier of a siloed security product: antivirus software. The company has evolved in step with changing network architectures and a threat landscape in which threat actors always seem to operate several steps ahead of security teams.

Trend Micro One, its unified security platform with XDR capabilities, represents the latest iteration of its product strategy. Consolidating native Trend Micro tools and services with partner solution integrations can help enterprises put aside the siloed defense mentality and achieve comprehensive security.

“For effective security, you must have protection, detection, and response in place,” Smith says. “And you must have that continuous attack surface discovery and risk assessment so that you are prioritizing your actions and optimizing your security controls appropriately . . . I think that’s why we’re seeing security platforms, in general, gaining traction; because today’s environment requires that holistic approach.”

The rise of security platforms optimized for modern networks is an encouraging development. It’s early; there’s more to come. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

The zero trust approach to enterprise security is well on its way to mainstream adoption. This is a very good thing.

Related: Covid 19 ruses used in email attacks

At RSA Conference 2022, which takes place next week in San Francisco, advanced technologies to help companies implement zero trust principles will be in the spotlight. Lots of innovation has come down the pike with respect to imbuing zero trust into two pillars of security operations: connectivity and authentication.

However, there’s a third pillar of zero trust that hasn’t gotten quite as much attention: directly defending data itself, whether at the coding level or in business files circulating in a highly interconnected digital ecosystem. I had a chance to discuss the latter with Ravi Srinivasan, CEO of Tel Aviv-based Votiro, a company launched in 2010.

Votiro has established itself as a leading supplier of advanced technology to cleanse weaponized files. It started with cleansing attachments and weblinks sent via email and has expanded to sanitizing files flowing into data lakes and circulating in file shares. For a full drill down on our discussion, please give the accompanying podcast a listen. Here are key takeaways.

Digital fuel

Votiro’s new cloud services fit into a pillar of zero trust that is now getting more attention: directly protecting digital content in and of itself. Zero trust, put simply, means eliminating implicit trust. Much has been done with connectivity and authentication; by contrast, comparatively little attention has been paid to applying zero trust directly to data and databases, Srinivasan observes. But that needs to change, he says. Here’s his argument:

Companies are competing to deliver innovative digital services faster and more flexibly than ever. Digital content creation is flourishing with intellectual property, financial records, marketing plans and legal documents circulating within a deeply interconnected digital ecosystem.

Digital content has become the liquid fuel of digital commerce—and much of it now flows into and out of massive data lakes supplied by Amazon Web Services, Microsoft Azure and Google Cloud. This transition happened rapidly, with scant attention paid to applying zero trust principles to digital content.

However, a surge of high-profile ransomware attacks and supply chain breaches has made company leaders very nervous. “I speak to a lot of security leaders around the world, and one of their biggest fears is the rapid rise of implementing data lakes and the fear that the data lake will turn into a data swamp,” Srinivasan says.

Votiro’s technology provides a means to sanitize weaponized files at all of the points where threat actors are now trying to insert them. It does this by permitting only known good file elements into a network while extracting unknown and untrusted elements for analysis. Votiro refined this approach cleansing weaponized attachments and web links sent via email, then extended it to files flowing into data lakes and circulating in file shares.
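
For flavor, here is a minimal sketch of the general content disarm and reconstruction (CDR) idea in Python – not Votiro’s actual engine or API. A .docx file is a zip archive, so a sanitizer can rebuild it while setting aside active content, such as embedded VBA macros, for analysis.

```python
# Minimal CDR-style sketch (illustration only, not Votiro's implementation).
# A .docx is a zip archive; rebuild it, quarantining suspect embedded parts.
import zipfile

SUSPECT_MARKERS = ("vbaProject.bin", "oleObject")  # macro/embedded-object parts

def sanitize_docx(src_path: str, dst_path: str) -> list[str]:
    """Copy src to dst, dropping suspect parts; return what was quarantined."""
    quarantined = []
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if any(marker in item.filename for marker in SUSPECT_MARKERS):
                quarantined.append(item.filename)  # set aside for analysis
                continue
            dst.writestr(item, src.read(item.filename))
    return quarantined

# Usage: removed = sanitize_docx("invoice.docx", "invoice.clean.docx")
```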

Exploiting fresh gaps

As agile, cloud-centric business communications have taken center stage, cyber criminals quite naturally have turned their full attention to inserting weaponized files wherever it’s easiest for them to do so, Srinivasan observes. As always, the criminals follow the data, he says.

Srinivasan

“The trend that we’re seeing is that more than 30 percent of the content flowing into data lakes is from untrusted sources,” he says. “It’s documents, PDFs, CSV files, Excel files, images, lots of unstructured data; we track 150 different file types . . . we’re seeing evasive objects embedded in those files designed to propagate downstream within the enterprise.”

This is the dark side of digital transformation. Traditionally, business applications tapped into databases kept on servers in a temperature-controlled clean room — at company headquarters. These legacy databases were siloed and well-protected; there was one door in and one door out.

Data – i.e. code and content – today flies around intricately connected virtual servers running in private clouds and public clouds. As part of this very complex, highly distributed architecture, unstructured data flows from myriad sources into and back out of partner networks, cloud file shares and data lakes. This in-flow and out-flow happens via custom-coded APIs configured by who knows whom.

Votiro’s cleansing scans work via an API that attaches to each channel of content flowing into a data lake. This cleansing process is shedding light on the fresh security gaps cyber criminals have discovered – and have begun exploiting, Srinivasan says.
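
As a rough illustration of the pattern – not Votiro’s real integration API – here is how a sanitizing hook might attach to one ingest channel of an AWS-hosted data lake. The bucket, the event wiring and the sanitize_docx() helper (from the earlier sketch) are all assumptions.

```python
# Hypothetical ingest hook: an AWS Lambda fired on each object created in a
# data-lake bucket, which sanitizes the file before it settles in the lake.
import boto3
from sanitizer import sanitize_docx  # hypothetical module from earlier sketch

s3 = boto3.client("s3")

def handle_s3_put(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.download_file(bucket, key, "/tmp/incoming.docx")
        removed = sanitize_docx("/tmp/incoming.docx", "/tmp/clean.docx")
        s3.upload_file("/tmp/clean.docx", bucket, key)  # replace with clean copy
        print(f"{key}: quarantined parts {removed}")
```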

Evolving attacks

He told me about a recent example: an attacker managed to insert attack code into a zip file contained in a password-protected email message sent from an attorney to a banking client – one the banker was expecting to receive.

At a fundamental level, this attacker was able to exploit gaps in the convoluted matrix of interconnected resources the bank and law firm now rely on to conduct a routine online transaction. “Bad actors are constantly evolving their techniques to compromise the organization’s business services,” Srinivasan says.

Closing these fresh gaps requires applying zero trust principles to the connectivity layer, the authentication layer – and the content layer, he says. “What we’re doing is to deliver security as a service that works with the existing security investments companies have made,” Srinivasan says. “We integrate with existing edge security and data protection capabilities as that final step of delivering safe content to users and applications at all times.”

It’s encouraging that zero trust is gaining material traction at multiple layers. There’s a lot more ground to cover. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

It’s not difficult to visualize how companies interconnecting to cloud resources at a breakneck pace contribute to the outward expansion of their networks’ attack surface.

Related: Why ‘SBOM’ is gaining traction

If that wasn’t bad enough, the attack surface companies must defend is expanding inwardly, as well – as software tampering at a deep level escalates.

The SolarWinds breach and the disclosure of the massive Log4J vulnerability have put company decision makers on high alert with respect to this freshly minted exposure. Findings released this week by ReversingLabs show 87 percent of security and technology professionals view software tampering as a new breach vector of concern, yet only 37 percent say they have a way to detect it across their software supply chain.

I had a chance to discuss software tampering with Tomislav Pericin, co-founder and chief software architect of ReversingLabs, a Cambridge, MA-based vendor that helps companies granularly analyze their software code. For a full drill down on our discussion please give the accompanying podcast a listen. Here are the big takeaways:

Obfuscated tampering

Much of the discussion at RSA Conference 2022, which convenes next week in San Francisco, will boil down to slowing attack surface expansion. This now includes paying much closer attention to the elite threat actors who are moving inwardly to carve out fresh vectors taking them deep inside software coding.

The perpetrators of the SolarWinds breach, for instance, tampered with a build system of the widely used Orion network management tool. They then were able to trick some 18,000 companies into deploying an authentically signed Orion update carrying a heavily obfuscated backdoor.

Log4J, aka Log4Shell, refers to a gaping vulnerability in an open-source logging library that’s deeply embedded within servers and applications all across the public Internet. Its function is to record events in a log for a system administrator to review and act upon. Left unpatched, Log4Shell presents a ripe opportunity for a bad actor to carry out remote code execution attacks, Pericin told me.

This type of attack takes advantage of the highly dynamic, ephemeral way software interconnects to make modern digital services possible.

Pericin

“As we go about defining layers on top of layers of application code, understanding all the interdependencies becomes very complex,” Pericin told me. “You really need to go deep into all of these layers to be able to understand if there’s any hidden behaviors or unaccounted for code that introduce risk in any of the layers.”
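
Log4Shell makes the point concrete. The probe pattern behind it (CVE-2021-44228) is well documented, and a first-pass check of application logs can be sketched in a few lines of Python. This is a simplified illustration only; production scanners also catch nested and obfuscated lookups.

```python
# Simplified detector for the well-known Log4Shell probe pattern in log files.
# First-pass sketch only; attackers also nest and obfuscate these lookups.
import re

JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious_lines(log_path: str):
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if JNDI_PATTERN.search(line):
                yield lineno, line.rstrip()

for lineno, line in suspicious_lines("app.log"):  # path is an example
    print(f"possible Log4Shell probe at line {lineno}: {line}")
```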

‘Dependency confusion’

Dependency confusion can arise anytime a developer reaches out to a package repository. Modern software is built on pillars of open-source components, and package repositories offer easy access to the wealth of pre-built code that makes development faster. However, not all of that code is safe to use. Capitalizing on dependency confusion, threat actors seek ways to insert malicious elements, and they take intricate steps to obfuscate their code tampering. Most often their objective is to install a back door through which they can come and go – and take full control of the underlying system anytime they please, Pericin says.
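
One defensive habit that follows from this, sketched below under assumed package names, is checking whether your internal package names are claimable or already claimed on a public index; if a public package shadows an internal one at a higher version, a naive resolver may fetch the attacker’s copy. The PyPI JSON endpoint used here is real; the package names are invented.

```python
# Check whether internal package names also resolve on public PyPI, a common
# dependency-confusion precursor audit. Internal names below are hypothetical.
import urllib.error
import urllib.request

INTERNAL_PACKAGES = ["acme-billing", "acme-auth-utils"]  # assumed names

def exists_on_public_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"  # real PyPI metadata endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        return err.code != 404  # 404 means the name is unclaimed publicly

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_pypi(pkg):
        print(f"WARNING: '{pkg}' also exists on public PyPI -- collision risk")
```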

Last year, white hat researcher Alex Birsan shed a bright light on just how big an opportunity this presents to malicious hackers. Birsan demonstrated how dependency confusion attacks could be leveraged to tamper with code deep inside system software at Apple, PayPal, Tesla, Netflix, Uber, Shopify and Yelp.

Then in late April, ReversingLabs and other vendors shared stunning evidence of such attacks moving beyond the theoretical and into live service. A red team of security researchers dissected a dependency confusion campaign aimed at taking control of the networks of leading media, logistics and industrial firms in Germany.

The basic definition of software tampering, Pericin notes, is to insert unverified code into the authorized code base. In the current operating environment, there’s limitless opportunity to tamper with code, because such a high premium is put on agility.

“There are many places in the software supply chain where you can add unverified code, and the attackers are actually doing that,” Pericin says. “And that’s also why it can be so hard to detect.”

Implementing SBOM

Even as their organizations push more operations out to the Internet edge, senior executives are starting to realize that their internal attack surface is riddled with security holes, as well. Some 98 percent of the respondents to the ReversingLabs poll acknowledged that software supply chain risks are rising, due to their intensive use of third-party and open source code. However, only 51 percent believed they could prevent their software from being tampered with.

For its part, ReversingLabs supplies an advanced code scanning and analysis service, called Software Assurance, that can help companies verify that their applications haven’t been tampered with. Software developers at large shops are getting into the habit of using this tool to deeply scan software packages as a final quality check, just before deployment, Pericin told me.

Some companies are going so far as to use this tool to selectively scan mission-critical software arriving from smaller houses and independent developers for behavioral oddities, as well, he says.

Having the ability to granularly scan code also plays well with the drive to mainstream SBOM, which stands for Software Bill of Materials.

SBOM is an industry effort to standardize the documentation of a complete list of authorized components in a software application.
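
To give a sense of what that documentation looks like, here is a minimal fragment in the style of CycloneDX, one of the common SBOM formats (SPDX is another), emitted from Python; the single component listed is just an example.

```python
# Emit a minimal CycloneDX-style SBOM fragment (illustrative, one component).
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [{
        "type": "library",
        "name": "log4j-core",
        "version": "2.17.1",
        "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
    }],
}
print(json.dumps(sbom, indent=2))
```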

President Biden’s cybersecurity executive order, issued in May 2021, includes a detailed SBOM requirement for all software delivered to the federal government.

And now advanced scanning tools, like those supplied by ReversingLabs, are ready for prime time – to help companies detect and deter software tampering, as well as implement SBOM as a standard practice.

“One of the outcomes of doing this analysis is you gain the ability to correctly identify what’s present in the software package, which is the software bill of materials,” Pericin observes.

In today’s environment, organizations need to figure out how to secure their external edge; that’s for certain. But it’s equally important to account for their internal edge, to stop software tampering in its tracks. It’s encouraging that the technology to do that is available. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)


Third-Party Risk Management (TPRM) has been around since the mid-1990s – and has become something of an auditing nightmare.

Related: A call to share risk assessments

Big banks and insurance companies instilled the practice of requesting their third-party vendors to fill out increasingly bloated questionnaires, called bespoke assessments, which they then used as their sole basis for assessing third-party risk.

TPRM will be in the spotlight at the RSA Conference 2022 next week in San Francisco. This is because third-party risk has become a huge problem for enterprises in the digital age. More so than ever, enterprises need to move beyond check-the-box risk assessments; there’s a clear and present need to proactively mitigate third-party risks.

The good news is that TPRM solution providers are innovating to meet this need, as will be showcased at RSA. One leading provider is Denver, Colo.-based CyberGRX. I had the chance to sit down with their CISO, Dave Stapleton, to learn more about the latest advancements in TPRM security solutions. For a full drill down of our discussion, please give the accompanying podcast a listen. Here are key takeaways:

Smoothing audits

CyberGRX launched in 2016 precisely because bespoke assessments had become untenable. Questionnaires weren’t standardized, filling them out and collecting them had become a huge burden, and any truly useful analytics just never happened.

“Sometimes you’d get a 500-question questionnaire and that would be one out of 5,000 you’d get over the course of a year,” Stapleton says, referring to a scenario that a large payroll processing company had to deal with.

CyberGRX created an online exchange to serve as a clearinghouse where assessments could be more efficiently – and usefully – administered. Digital transformation had taken hold, so their timing was pitch perfect.

“Usage of third-party vendors has escalated exponentially in the past 10 years, and businesses also rely on them for more sensitive and critical activities,” Stapleton noted.

Moving the questionnaires to an exchange model meant introducing a standardized crowdsourcing approach to compiling and making available what was previously bespoke assessment data. This also made remediation – i.e. getting third-party vendors to mitigate potential risks and maintain compliance with audit benchmarks – much smoother.

Stapleton

This alone was a huge improvement. “The exchange model has been quite revolutionary,” Stapleton says. “We were able to reduce the level of effort for both third parties and their customers. Third parties get fewer requests so they can focus more time and energy on security; customers have one place they can go to get the data they need.”

Cyber risks profiling

CyberGRX’s global cyber risk Exchange caught on quickly. But the company founders never intended to stop at simply cleaning up bespoke assessments. The exchange has proven to be a perfect mechanism for fleshing out much richer cyber risk profiles of third-party vendors. It does this by ingesting and correlating data from a wide array of security-related datasets.

This folds in fresh intelligence that goes far beyond the ground covered in traditional bespoke assessments, which are merely the starting point. Questionnaire answers get cross referenced against cybersecurity best practice protocols put out by the National Institute of Standards and Technology, namely NIST 800-53 and NIST 800-171.
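
A toy version of that cross-referencing step might look like the following; the questionnaire items and yes/no answers are invented, while IA-2 and SC-28 are real NIST 800-53 control identifiers.

```python
# Toy cross-reference of questionnaire answers against NIST 800-53 controls.
# Questions and answers are hypothetical; the control IDs are real.
CONTROL_MAP = {
    "Do you enforce MFA for remote access?": "IA-2 (Identification and Authentication)",
    "Do you encrypt data at rest?": "SC-28 (Protection of Information at Rest)",
}

answers = {  # a vendor's (fabricated) responses
    "Do you enforce MFA for remote access?": "no",
    "Do you encrypt data at rest?": "yes",
}

for question, control in CONTROL_MAP.items():
    if answers.get(question) != "yes":
        print(f"Gap against {control}: {question}")
```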

CyberGRX is also able to leverage real-time threat intelligence feeds by partnering with leading threat intelligence vendors. These vendors integrate their abilities to monitor malware circulation and cyber-attack activity in real time within the Exchange platform, including staying alert for any signs of third-party vendor cyber assets turning up in murky parts of the Dark Web.

Another function of the Exchange is to analyze a third-party vendor’s “firmographics” – publicly known details such as geographic location, industry type, target markets, business performance and organizational structure. So contextual industry background and fresh threat landscape intel gets continually infused into traditional audit findings. Stapleton characterizes this as “cyber risk intelligence” profiling.

“The idea behind it is that this is a process of collecting the right data, creating your own quality data and performing very complex analysis in order to produce actionable results,” he says.

Cyber hygiene boost

This enrichment of the check-the-box approach to third-party risk assessments is paying off on a number of levels, he says. Material productivity gains derive from risk managers on both sides spending much less time mucking with bespoke audits. “Our methodology provides security and risk professionals with next-level insights that empower them to quickly make decisions in regards to risk management, spending less time on mitigating risks and more time focusing on other important initiatives,” Stapleton says.

More nuanced benefits accrue, as well. For instance, as more substantive vetting of third-party vendors gains traction, the overall level of supply chain cyber hygiene gets boosted. Third parties quickly discover that checking boxes isn’t going to be enough; first-party enterprises gain clarity, in a very practical sense, on security practices they need to prioritize.

Observes Stapleton: “It’s a combination of capabilities that produces something that is truly actionable, specifically for the purposes of improving third party risk management outcomes.”

The ceiling for strengthening security postures – of third parties and first parties alike – is high. For instance, Stapleton described for me how CyberGRX can now correlate firmographics to threat intel feeds and audit data to provide innovative new services that were unheard of just a couple of years ago.

For one, the exchange can now reliably predict how a vendor will respond to a risk assessment without having them input any information. Thus, an enterprise can weigh whether to accept a given supplier — without necessarily administering a full-blown assessment audit.

For another, the exchange is continually improving its capacity to granularly gauge a third-party vendor’s exposure to a high-profile vulnerability or even a certain type of exploit known to be circulating in the wild.

“We can map to something like the MITRE ATT&CK framework and perform an analysis that tells you which of your third parties are most likely to be vulnerable to something like Log4J,” Stapleton says.

What’s more, advanced third-party risk mitigation can also help offset the cybersecurity skills shortage. “We’re putting our security professionals back to work instead of filling out spreadsheets,” Stapleton asserts, “and we’re giving enterprises information they can use to start working with their third parties today to improve security of the supply chain.”

This is one part of igniting a virtuous cycle. New cloud-centric security frameworks, like Zero Trust Network Access (ZTNA) and Secure Access Service Edge (SASE), and new security tools – to advance detection and response, as well as properly configure all cyber assets – must take hold as well. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)


Companies have come to depend on Software as a Service (SaaS) like never before.

Related: Managed security services catch on

From Office 365 to Zoom to Salesforce.com, cloud-hosted software applications have come to make up the nerve center of daily business activity. Companies now reach for SaaS apps for clerical chores, conferencing, customer relationship management, human resources, salesforce automation, supply chain management, web content creation and much more, even security.

This development has intensified the pressure on companies to fully engage in the “shared responsibility” model of cybersecurity, a topic that will be in the limelight at RSA Conference 2022 next week in San Francisco.

I visited with Maor Bin, co-founder and CEO of Tel Aviv-based Adaptive Shield, a pioneer in a new security discipline referred to as SaaS Security Posture Management (SSPM). SSPM is part of an emerging class of security tools being ramped up to help companies dial in SaaS security settings, something they should have started doing long ago.

This fix is just getting under way. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Shrugging off security

A sharp line got drawn in the sand, some years ago, when Amazon Web Services (AWS) took the lead in championing the shared responsibility security model.

To accelerate cloud migration, AWS, Microsoft Azure and Google Cloud guaranteed that the hosted IT infrastructure they sought to rent to enterprises would be security-hardened – at least on their end. For subscribers, the tech giants issued a sprawling set of security settings for their customers’ security teams to monkey with. It was left up to each company to dial in just the right amount of security-vs-convenience.

SaaS vendors, of course, readily adopted the shared responsibility model pushed out by the IT infrastructure giants. Why wouldn’t they? Thus, the burden was laid squarely on company security teams to harden cloud-connections on their end.

Bin

What happened next was predictable. Caught up in chasing the productivity benefits of cloud computing, many companies looked past doing any security due diligence, Bin says.

Security teams ultimately were caught flat-footed, he says. Security analysts had gotten accustomed to locking down servers and applications that were on premises and within their arms’ reach. But they couldn’t piece together the puzzle of how to systematically configure myriad overlapping security settings scattered across dozens of SaaS applications.

The National Institute of Standards and Technology recognized this huge security gap for what it was, and issued NIST 800-53 and NIST 800-171 – detailed criteria for securely configuring cloud connections. But many companies simply shrugged off the NIST protocols.

“It turned out to be very hard for security teams to get control of SaaS applications,” Bin observes. “First of all, there was a lack of any knowledge base inside companies and oftentimes the owner of the given SaaS app wasn’t very cooperative.”

SaaS due diligence

Threat actors, of course, didn’t miss their opportunity. Wave after wave of successful exploits took full advantage of the misconfigurations spinning out of cloud migration. Fraudulent cash transfers, massive ransomware payouts, infrastructure and supply chain disruptions all climbed to new heights. And malicious hackers attained deep, unauthorized access left and right. Every CISO should, by now, cringe at the thought of his or her organization becoming the next Capital One or SolarWinds or Colonial Pipeline.

At RSA Conference 2022, which opens next week in San Francisco, the buzz will be around the good guys finally getting their act together and pushing back. For instance, an entire cottage industry of cybersecurity vendors has ramped up specifically to help companies improve their cloud “security posture management.”

This includes advanced cloud access security broker (CASB) and cyber asset attack surface management (CAASM) tools. SSPM solutions, like Adaptive Shield’s, are among the newest and most innovative tools. Other categories getting showcased at RSAC 2022 include cloud security posture management (CSPM) and application security posture management (ASPM) technologies.

For its part, Adaptive Shield supplies a solution designed to provide full visibility and control of every granular security configuration in some 70 SaaS applications now used widely by enterprises. This can range from dozens to hundreds of security toggles, per application, controlling things like privileged access, multi-factor authentication, phishing protection, digital key management, auditing and much more.
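
In spirit, an SSPM check boils down to comparing each app’s live settings against a hardened baseline and flagging drift. Here is a minimal sketch of that idea; the app names, setting keys and baseline values are invented for illustration and do not reflect Adaptive Shield’s product or API.

```python
# Minimal SSPM-style drift check (illustration only; all names hypothetical).
BASELINE = {
    "salesforce": {"mfa_required": True, "session_timeout_minutes": 30},
    "zoom":       {"waiting_room": True, "recording_encryption": True},
}

def audit(app: str, observed: dict) -> list[str]:
    """Compare one app's observed settings to the baseline; report drift."""
    findings = []
    for setting, expected in BASELINE.get(app, {}).items():
        actual = observed.get(setting)
        if actual != expected:
            findings.append(f"{app}.{setting}: expected {expected}, got {actual}")
    return findings

# Example against a fabricated settings export:
print(audit("salesforce", {"mfa_required": False, "session_timeout_minutes": 30}))
# -> ['salesforce.mfa_required: expected True, got False']
```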

Tools at hand

Security teams now have the means to methodically filter through and make strategic adjustments of each and every SaaS security parameter. Misconfigurations – i.e. settings that don’t meet NIST best practices – can be addressed immediately, or a service ticket can be created and sent on its way.

“I like to call this SaaS security hygiene,” Bin says. “It’s a way to align your users, your devices and your third-party applications with different activities and different privileges. Misconfigurations is huge part of it, but it’s just one of the moving parts of securing your SaaS.”

Doing this level of SaaS security due diligence on a consistent basis is clearly worth doing and needs to become standard practice. It will steadily improve an organization’s cloud security policies over time; and it should also promote security awareness and reinforce security best practices far beyond the security team, namely among the users of the apps.

Company by company this will slow the expansion of the attack surface, perhaps even start to help shrink the attack surface over time. Things are moving in a good direction. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Vulnerability management, or VM, has long been an essential, if decidedly mundane, component of network security.

Related: Log4J’s long-run risks

That’s changing — dramatically. Advanced VM tools and practices are rapidly emerging to help companies mitigate a sprawling array of security flaws spinning out of digital transformation.

I visited with Scott Kuffer, co-founder and chief operating officer of Sarasota, FL-based Nucleus Security, which is in the thick of this development. Nucleus launched in 2018 and has grown to over 50 employees. It supplies a unified vulnerability and risk management solution that automates vulnerability management processes and workflows.

We discussed why VM has become acutely problematic yet remains something that’s vital for companies to figure out how to do well, now more so than ever. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Multiplying exposures

Scan and patch. Historically, this has been what VM was all about. Microsoft’s Patch Tuesday speaks to the never-ending flow of freshly discovered software flaws, and to the software giant’s best efforts to ease the burden of mitigating them.

Scan-and-patch systems from Tenable, Qualys, Rapid7, Checkmarx and others came along and became indispensable; enterprises could use these tools to keep software bugs and security holes patched at a tolerable level – not just flaws in Microsoft code, of course, but in software from all of their suppliers, internal and external.

However, that scan-and-patch equilibrium is no more. Digital transformation has spawned a cascade of nuanced, abstract vulnerabilities – and they’re everywhere. This results from companies chasing after agile software development and cloud-centric innovations, above all else. In aggressively leveraging digital services to achieve productivity gains, they’ve also exponentially multiplied security gaps across a steadily expanding network attack surface.

“There are configuration weaknesses in cloud resources, vulnerabilities that need patching, like Log4J, and vulnerabilities in the business logic of old code,” Kuffer observes. “There are many different types of these vulnerabilities, and it’s a matter of figuring out who owns them and how to fix them quickly, and, also, which ones to fix.”

Current threat landscape

So, what exactly constitutes a vulnerability these days? As Kuffer alluded to not long into our discussion, it goes far beyond the bug fixes and security patches that Microsoft, Oracle, Adobe and every supplier of business software distributes periodically.

Kuffer

A vulnerability, simply put, is a coding weakness by which software can be manipulated in a way that was never intended. Today, these exposures lurk not just in legacy enterprise apps – the ones that need continual patching – they’re turning up even more so in the cloud-hosted storage buckets, virtual servers and Software-as-a-Service (SaaS) subscriptions that have become the heart of IT operations.

It all starts with DevOps, under which agile software is being pushed out based on the principle of continuous integration and continuous delivery (CI/CD). Much heralded, CI/CD is a set of principles said to result in the delivery of new software frequently and reliably.

Truthfully, CI/CD is nothing more than an updated version of rushing shrink-wrapped boxes of new apps to store shelves. Remember when early adopters were giddy to receive the bug-riddled version 1.0 of a cool new app, anticipating that major bugs would get fixed in 1.1, 1.2 and so on?

Under CI/CD, developers collaborate remotely to press new code into live service as fast as possible and count on making iterative fixes on the fly.

This fail-fast imperative often leverages cloud storage and virtual servers; code development revolves around interconnecting modular microservices and software containers scattered all across public and private clouds.

To malicious hackers this translates into a candy store of fresh vulnerabilities. In many ways it’s easier than ever for threat actors to get deep access, steal data, spread ransomware, disrupt infrastructure and attain long run unauthorized access.

Unified solution

All of that said, it’s not so much the agile software trend, in and of itself, that’s to blame. Security gaps generally — and vulnerabilities specifically — have surpassed the tolerable level in large part because companies have not paid nearly enough attention to configuring their public cloud and hybrid cloud IT systems. In short, software interconnections are skewed toward agility.

Fine-tuning is in order, and there’s really no mystery about how to go about dialing in the necessary measure of security. Robust data security frameworks have been painstakingly assembled and vetted by the National Institute of Standards and Technology (NIST). However, adhering to NIST 800-53 and NIST 800-171 is voluntary and, for whatever reasons, far too many enterprises have yet to fully embrace robust data security best practices.

To illustrate, Kuffer pointed me to the all-too-common scenario where a company goes live with an AWS root account that uses a default password to access all of its EC2 virtual servers and S3 storage buckets. “That misconfiguration is a vulnerability because anybody who finds that password and then logs in to your AWS account has full admin control over your entire cloud infrastructure,” he says.
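
Checks for this class of misconfiguration are scriptable. As one hedged example – unrelated to Nucleus Security’s actual product – the snippet below uses the standard boto3 SDK to flag S3 buckets whose ACLs grant access to all users; real VM platforms run hundreds of such checks across accounts.

```python
# Flag S3 buckets whose ACL grants access to everyone. One example check;
# requires AWS credentials with s3:GetBucketAcl permission.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == ALL_USERS_URI:
            print(f"public bucket: {bucket['Name']} ({grant['Permission']})")
```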

This kind of thing can be rectified by adopting risk-assessment principles alongside CI/CD. And the good news, Kuffer says, is that the cybersecurity industry is driving towards helping companies get better at systematically identifying, analyzing and controlling vulnerabilities.

Risk-tolerance security

VM done from a risk-assessment lens boils down to enterprises making a concerted effort to discover and thoughtfully inventory all of the coding flaws and misconfigurations inhabiting their increasingly cloud-centric networks, and then doing triage based on risk-tolerance principles.

This absolutely can be done, ironically, because cybersecurity vendors themselves are innovating off the strengths of cloud resources and agile software. At RSA Conference 2022 opening next week in San Francisco, there will be considerable buzz around new tools and frameworks that empower companies to discover and inventory software bugs and misconfigurations, cost-effectively and at scale.

This includes a host of leading-edge technologies supporting emerging frameworks such as cyber asset attack surface management (CAASM), cloud security posture management (CSPM), application security posture management (ASPM) and even software-as-a-service security posture management (SSPM).

Specialized analytics platforms – like those from Nucleus Security and other suppliers of advanced VM technologies – fit in by enabling companies to ingest security posture snapshots from all quarters, Kuffer says. Advanced VM systems are designed to efficiently implement and enforce wise policy, without unduly disrupting agility, he says.

Clearly, the frameworks and technology are ready for prime time. If the continuing ransomware scourge and widening supply chain hacks tell us anything, it’s that it’s high time for companies to dial back on agility and dial in more security. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)