The Amazon Web Services (AWS) Shared Responsibility Model has come a long way, indeed.

Related: ‘Shared Responsibility’ best practices

In 2013, Amazon planted a stake in the ground when it divided cloud security obligations between AWS and its patrons, guaranteeing the integrity of its infrastructure, but placing a huge burden on customers to secure things on their end.

For years, misconceptions abounded – especially among small and mid-sized organizations, but also among more than a few marquee enterprises. It was all too easy to assume that moving to AWS equated with outsourcing all security responsibilities.

Not so, of course. High-profile breaches, often stemming from misconfigured services like S3 buckets or exposed APIs, inevitably followed. The 2019 Capital One debacle comes to mind.

Emerging ecosystem

Fast forward to today, and the notion of shared responsibility, when it comes to securing AWS, appears to be steadily gaining meaningful traction. Several drivers have come into play.

For its part, Amazon has introduced and promoted a range of tools like AWS Config, GuardDuty, and Security Hub to simplify compliance and improve visibility into cloud environments.

What’s more, third-party cybersecurity vendors have been innovating like crazy to address the obvious gaps. A plethora of advanced tools and services are readily available today; they’re designed to automate best practices and reduce the complexity of managing cloud security tasks.

Meanwhile, standards bodies and regulators have kept up the pressure on companies to do the right thing when it comes to cloud security. Frameworks like SOC 2, SOX, and GDPR have forced organizations to take a more proactive approach, accounting for sensitive data increasingly stored and accessed via the cloud.

Last Watchdog engaged Aiman Parvaiz, Director of DevSecOps at Nimbus Stack, a DevOps consultancy specializing in AWS security, about the steadily growing momentum of companies living up to their part of Amazon’s shared responsibility requirement. Here’s the gist of our exchange about all of this, edited for clarity and length.

LW: Grasping, much less embracing, ‘Shared Responsibility’ hasn’t been easy for many companies. So what’s changed over the past few years?

Parvaiz: It’s a combination of factors, really. Companies have learned through experience—especially high-profile breaches—that AWS, while robust, isn’t an out-of-the-box security provider. AWS has also made significant strides in raising awareness about this model, and the proliferation of third-party tools has reinforced this understanding by providing solutions that help businesses actively manage their security posture.

LW: What should companies come to understand about AWS security tools?

Parvaiz: The key takeaway is that securing their environment is ultimately the company’s responsibility. AWS does provide a rich set of security-focused tools to help with this. WAF and Shield help safeguard public endpoints, while SSM Patch Manager ensures your operating systems remain secure and up to date. Tools like Amazon GuardDuty continuously scan for malicious activity and notify you of anomalies in real time.
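
For instance, GuardDuty findings can be pulled programmatically and filtered by severity. The sketch below is illustrative only: the `build_severity_criteria` helper is our own invention (not an AWS API), and the `list_high_severity_findings` function assumes boto3 is installed and AWS credentials are configured.

```python
def build_severity_criteria(min_severity: float) -> dict:
    """Build a GuardDuty FindingCriteria dict that keeps findings at or
    above the given severity (GuardDuty severities range roughly 0.1-8.9)."""
    return {"Criterion": {"severity": {"GreaterThanOrEqual": int(min_severity)}}}


def list_high_severity_findings(detector_id: str, min_severity: float = 7.0):
    """Return finding IDs for a detector, filtered by severity.
    Hypothetical usage; requires boto3 and valid AWS credentials."""
    import boto3  # deferred so the helper above works offline
    client = boto3.client("guardduty")
    resp = client.list_findings(
        DetectorId=detector_id,
        FindingCriteria=build_severity_criteria(min_severity),
    )
    return resp["FindingIds"]


# The filter itself is plain data and can be inspected without AWS access:
print(build_severity_criteria(7.0))
```

The point of the sketch is that GuardDuty surfaces the anomalies, but acting on them—polling, filtering, routing to a ticketing system—remains the customer’s job, which is exactly the shared-responsibility line Parvaiz describes.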

LW: Can you frame the state of third-party support?

Parvaiz: The ecosystem of third-party support has grown tremendously in recent years. AWS has built a robust network of partners and vendors, enabling businesses to leverage specialized solutions tailored to their unique needs.

The key to unlocking the full value of third-party tools lies in seamless integration with your existing workflows and infrastructure. When third-party solutions are deeply integrated into your setup—feeding into your monitoring systems, alerting pipelines, and operational processes—they enhance visibility and control, making them actionable and impactful.

LW: What does Nimbus Stack bring to the table?

Parvaiz: At our core, we are a team of seasoned system and cloud engineers dedicated to helping businesses using AWS fortify their security posture.

We excel at identifying potential threats and mitigating them before they materialize. This expertise is particularly valuable in achieving compliance with standards like SOC 2, FedRAMP, or SOX. Our proactive approach allows us to anticipate auditor focus areas and address compliance hotspots during workload design.

LW: What should companies understand – and anticipate – when it comes to compliance pressures?

Parvaiz: Looking ahead, compliance will shift from being a competitive advantage to a baseline expectation. Integrating security practices and compliance requirements directly into infrastructure management and the software development lifecycle will become essential. Beyond checking boxes for audits, these measures demonstrate a commitment to protecting customer interests, making compliance a critical factor for businesses aiming to grow and remain credible in the market.

LW: Anything else?

Parvaiz: It’s understandable that competing priorities like product development or time-to-market can delay investments in security. That said, strengthening security isn’t a one-time task or a siloed effort—it needs to be embedded across operations and championed by management to be truly effective. Today, robust security isn’t a ‘nice-to-have,’ it’s a ‘must-have’ – and the real question is how quickly you can get there.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

The post Shared Intel Q&A: A thriving ecosystem now supports AWS ‘shared responsibility’ security model first appeared on The Last Watchdog.

Foreign adversaries proactively interfering in U.S. presidential elections is nothing new.

Related: Targeting falsehoods at US minorities, US veterans

It’s well-documented how Russian intelligence operatives meddled in the 2016 U.S. presidential election. Ever since, technologists and regulators have been monitoring and developing measures to address election meddling by foreign adversaries, which now happens routinely.

They’re at it again. Russian actors “manufactured and amplified” a recent viral video that falsely showed a person tearing up ballots in Pennsylvania, the FBI and two other federal agencies recently disclosed. The FBI and officials from the Office of the Director of National Intelligence and the Cybersecurity and Infrastructure Security Agency said the U.S. intelligence community made the assessment based on available information and past activities from other Russian influence actors, including videos and disinformation efforts.

Now comes fresh evidence highlighting the nuances of social-media fueled disinformation of this moment, leading up to the imminent 2024 U.S. presidential election.

Analysts at Los Angeles-based Resecurity have been monitoring a rising wave of troll factories, fake accounts and strategic disinformation clearly aimed at swaying public opinion. This time around the overall thrust is not so much to champion Donald Trump or smear Kamala Harris, as it is to generally and deeply erode trust in time-honored democratic elections, says Shawn Loveland, Resecurity’s Chief Operating Officer (COO).

Towards this end, faked social media accounts impersonating both Trump and Harris, as well as prominent U.S. institutions, have been springing up and spilling forth outrageous falsehoods, especially via the Telegram anonymous messaging platform.

Telegram, it turns out, is a social media venue favored by disinformation spreaders. This popular cloud-based messaging app is known for its security features, flexibility and use across global audiences. Telegram’s minimal moderation makes it a haven for privacy-conscious users but also a perfect tool for spreading lies and conspiracy theories.

Last Watchdog engaged Loveland to drill down on what Resecurity’s analysts have been closely tracking. He recounted their observations about how now, more so than ever, social media apps have come to serve as “echo chambers.” This refers to how easily patrons become isolated within bubbles of half-truths and conspiracy theories that reinforce their biases.

Foreign adversaries are well aware of how echo chambers can be leveraged to manipulate large groups. They’ve seized upon this phenomenon to strategically sway public sentiment in support of their geopolitical gains. Disinformation spread through social media has been part and parcel of election interference all around the globe, not just in the U.S., for quite some time now.

Election interference has become impactful enough, Loveland told me, to warrant stricter regulatory guard rails and wider use of advanced detection and deterrence technologies. Greater public awareness would help, of course. Here’s the gist of our exchange about all of this, edited for clarity and length.

LW: Can you frame how the social media ‘echo chamber’ phenomenon evolved?

Loveland: With the decline of traditional media consumption, many voters turn to social media for news and election updates. This shift drives more people to create accounts, particularly as they seek to engage with political content and discussions relevant to the elections.

Foreign adversaries exploit this aspect, running influence campaigns to manipulate public opinion. To do that, they leverage accounts with monikers reflecting election sentiments and the names of political opponents to mislead voters. Such activity has been identified not only in social media networks with headquarters in the US, but also in foreign jurisdictions and alternative digital media channels.

These actors may operate in less moderated environments, leveraging foreign social media platforms and resources that are also read by domestic audiences, with content that can be easily redistributed via mobile and email.

LW: Can you characterize why this is intensifying?

Loveland: Social media can create echo chambers where users are exposed primarily to information that reinforces their existing beliefs. This phenomenon can polarize public opinion, as individuals become less likely to encounter opposing viewpoints.

Such environments can intensify partisan divides and influence voter behavior by solidifying and reinforcing biases. For example, we identified several associated groups promoting the “echo” narrative – regardless of the group’s main profile. In one case, a group that purported to support the Democratic Party contained content of an opposite and discrediting nature.

LW: Can you drill down a bit on recent iterations?

Loveland: We’ve identified several clusters of accounts with patterns of a ‘troll factory’ that promotes negative content against the U.S. and EU leadership via VK, Russia’s version of Facebook. These posts are written in various languages including French, Finnish, German, Dutch, and Italian. The content is mixed with geopolitical narratives of an antisemitic nature, which should violate the network’s existing Terms and Conditions.

The accounts remain active and constantly release updates, which may highlight the organized effort to produce such content and make it available online. In September the U.S. Department of Justice seized 32 domains tied to a Russian influence campaign. This was part of a $10 million scheme to create and distribute content to U.S. audiences with hidden Russian government messaging.

LW: Quite a high degree of coordination on the part of the adversaries.

Loveland: These operations are usually well-coordinated, with teams assigned to different tasks such as content creation, social media engagement, and monitoring public reactions. This strategic approach allows them to adapt quickly to changing circumstances and public sentiment. The content is often designed to evoke anger or fear, which can lead to increased sharing and engagement.

Troll factories often create numerous fake social media profiles to amplify their messages and engage with real users. This helps them appear more credible and increases their reach. Workers in these factories produce a variety of content crafted to provoke reactions, spread false narratives, or sow discord among different groups. They typically focus on specific demographics or political groups to maximize their impact. They may even use data analytics to identify vulnerable populations and tailor their messages accordingly.

LW: How difficult has it become to identify and deter these highly coordinated campaigns?

Loveland: Unfortunately, it is not always so obvious. Troll factories tend to push similar messages across multiple accounts. If you notice a coordinated effort to spread the same narrative or hashtags, it may indicate a troll operation. Accounts with a high number of followers but few follow-backs can indicate a bot or troll account, as they often seek to amplify their reach without engaging genuinely.

If the content shared by an account is mostly reposted or lacks originality, it may be part of a troll factory’s strategy to disseminate information without creating authentic engagement. Trolls often target divisive issues to provoke reactions. If an account consistently posts about hot-button topics without a nuanced perspective, it could be a sign of trolling activity.

There are various tools and algorithms designed to detect bot-like behavior and troll accounts. These can analyze patterns in posting frequency, engagement rates, and content similarity to identify potential trolls.
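
As a toy illustration of that idea – our own scoring heuristic, not Resecurity’s or any vendor’s algorithm – the weak signals described above (posting frequency, repost-heavy content, a skewed follower ratio) can be combined into a crude suspicion score:

```python
def troll_score(posts_per_day: float,
                repost_fraction: float,
                followers: int,
                following: int) -> float:
    """Combine three weak signals into a 0..3 suspicion score.
    Thresholds are illustrative, not empirically derived."""
    score = 0.0
    if posts_per_day > 50:            # unusually high posting frequency
        score += 1.0
    if repost_fraction > 0.9:         # almost no original content
        score += 1.0
    if following > 0 and followers / following > 20:
        score += 1.0                  # amplification-seeking follower pattern
    return score


# A hypothetical account: 120 posts/day, 95% reposts, 10,000 followers / 50 follows
print(troll_score(120, 0.95, 10_000, 50))  # -> 3.0
```

Real detection systems layer many more features (content similarity across accounts, timing correlation, network graphs) and weight them with trained models, but the intuition is the same: no single signal is conclusive, while several together are telling.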

LW: Technologically speaking, is it possible to detect and shut down these accounts in an effective way?

Loveland: With GenAI, the creation of troll factories became much more advanced. Unfortunately, adversaries continue to evolve their tools, tactics and procedures (TTPs) – using mobile residential proxies, content generation algorithms, deepfakes to impersonate real personas, and, in the case of hostile states, even financing media distribution operations in the United States.

LW: Strategically, why are foreign adversaries trying so hard to sow doubt about democratic elections?

Loveland: One of the foreign adversaries’ critical goals is to sow social polarization and distrust in electoral integrity. This is a crucial component of these campaigns. Often, these campaigns both promote and disparage each candidate, as they do not intend to favor one over the other. They plan to sow distrust in the election process and encourage animosity among the constituents of the losing candidate against the winning candidate and their supporters.

LW: No one can put the genie back in the bottle. What should we expect to come next, with respect to deepfakes and AI-driven misinformation, over the next two to five years?

Loveland: Foreign adversaries understand that the immediate goals in election interference cannot be easily achieved, as the U.S. Intelligence Community is working hard to counter this threat proactively. That’s why one of the main long-term goals for foreign adversaries is to create polarization in society and distrust in the electoral system in general, which may impact future generations of voters.

LW: Anything else you’d like to add?

Loveland: Our research highlights the difference between the right of any US person to express their own opinion, including satire on political topics, which the U.S. First Amendment protects, and the malicious activity of foreign actors funded by foreign governments to plant discrediting content and leverage manipulated media to undermine elections and disenfranchise voters.

For example, we’ve identified content cast as political satire that is also antisemitic and in support of geopolitical narratives beneficial to foreign states to discredit US foreign policy and elections. All postings were made by bots, not real people. The proliferation of deepfakes and similar content planted by foreign actors poses challenges to the functioning of democracies. Such communications can deprive the public of the accurate information it needs to make informed decisions in elections.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

The post Shared Intel Q&A: Foreign adversaries now using ‘troll factories’ to destroy trust in U.S. elections first appeared on The Last Watchdog.

The art of detecting subtle anomalies, predicting emergent vulnerabilities and remediating novel cyber-attacks is becoming more refined, day by day.

Related: GenAI’s impact on elections

It turns out that the vast datasets churned out by cybersecurity toolsets happen to be tailor-made for ingestion by Generative AI (GenAI) engines and Large Language Models (LLMs). Leading cybersecurity vendors have recognized this development, and they are innovating clever ways to bring GenAI and LLMs to bear.

A prime example comes from Resecurity, a Los Angeles-based cybersecurity vendor that has been helping organizations identify, analyze, and respond to cyber threats since its launch in 2016. Resecurity most recently unveiled Context AI, a new service that enriches threat intelligence, enhances analyst workflows and speeds up decision-making across security operations.

Last Watchdog engaged Shawn Loveland, Chief Operating Officer at Resecurity, to discuss where things stand with respect to GenAI and LLMs making an impact in cybersecurity. Here’s that exchange, edited for clarity and length.

LW: We’re at a very early phase of GenAI and LLM getting integrated into cybersecurity; what’s taking shape?

Loveland: The technology itself is still evolving, and while it shows great potential, it has yet to fully mature in terms of reliability, scalability and security. Additionally, the cybersecurity community needs a more comprehensive understanding and trust regarding how these AI tools can be effectively and safely deployed in real-world environments.

Integrating GenAI and LLMs into cybersecurity frameworks requires overcoming complex challenges, such as ensuring the models can handle the nuances of cyber threats, addressing data privacy concerns, adapting to the dynamic nature of the threat landscape, and dealing with inaccuracies and incomplete data sets that may lead to misleading outputs.

LW: How much potential do GenAI and LLMs have to be a difference maker in cybersecurity?

Loveland: They can potentially revolutionize cybersecurity. Their advanced capabilities in processing vast amounts of data, identifying patterns, and automating responses to threats make them game changers. These AI models can analyze and understand complex data from various sources much faster and more accurately than traditional methods, enabling them to detect anomalies, predict potential threats, and respond to real-time incidents.

This significantly enhances the speed and efficiency of cybersecurity defenses, spanning individual companies and locations. Additionally, GenAI can assist in developing more sophisticated threat simulations and improving incident response strategies by learning from past incidents and continuously adapting to new threat landscapes. As these models evolve, they promise to reduce human error in security operations and provide a more proactive approach to cybersecurity.

LW: Tell us a bit about Resecurity’s implementation.

Loveland: We’ve integrated GenAI and LLM into our services platform. These technologies enable our platform to process and analyze large amounts of structured and unstructured data, empowering our advanced threat intelligence and cybersecurity solutions. Using AI-driven analytics, we’ve automated many routine security tasks and enhanced our threat detection accuracy.

This integration empowers more proactive defense mechanisms, such as real-time monitoring and detecting sophisticated cyber threats that may bypass traditional security measures. Additionally, we have recently introduced Context AI, which allows analysts to interact with our data through an LLM interface to gain further insights into threats targeting their company.

LW: How did the idea for Context AI come about?

Loveland: Traditional security measures continually fail to identify and respond to new, novel, and sophisticated cyber threats, a problem compounded by incomplete dark web data sets that lead to incomplete and inaccurate AI output.

With Context AI, we created a platform that automatically gathers, analyzes, and correlates vast amounts of data from multiple sources, including the deep and dark web, to provide real-time and predictive insights. This enables security teams to make more informed decisions, anticipate potential threats, and proactively defend against them. The goal was to move beyond reactive security measures and empower organizations with the intelligence needed to stay ahead of emerging threats.

LW: Can you share any anecdotes that validate your approach?

Loveland: One organization in the financial sector used Context AI to identify and prevent a sophisticated phishing campaign that targeted their employees. By leveraging the platform’s real-time threat intelligence and contextual analysis, they were able to thwart the attack before it compromised any sensitive data.

Another benefit accrued by a healthcare provider was the early detection of potential insider threats, which allowed them to address vulnerabilities and prevent data breaches that could have jeopardized patient privacy.

LW: How do you expect the adoption curve of Context AI to play out, moving forward?

Loveland: As Context AI gains traction, future benefits will include more robust threat prediction capabilities, integration with broader security ecosystems, and the ability to provide tailored industry-specific intelligence. As more organizations experience these advantages and share their success stories, the adoption rate of Context AI will likely accelerate, leading to widespread recognition of its value in cybersecurity.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

The post NEWS ANALYSIS Q&A: The early going of Generative AI and LLMs impacting cybersecurity first appeared on The Last Watchdog.

The tectonic shift of network security is gaining momentum, yet this transformation continues to lag far behind the accelerating pace of change in the operating environment.

Related: The advance of LLMs

For at least the past decade, the cybersecurity industry has been bending away from rules-based defenses designed to defend on-premises data centers and leaning more into tightly integrated and highly adaptable cyber defenses directed at the cloud edge.

I first tapped Gunter Ollmann’s insights about botnets and evolving malware some 20 years ago, when he was VP of Research at Damballa and I was covering Microsoft for USA TODAY. Today, Ollmann is the CTO of IOActive, a Seattle-based cybersecurity firm specializing in full-stack vulnerability assessments, penetration testing and security consulting. We recently reconnected. Here’s what we discussed, edited for clarity and length.

LW: In what ways are rules-driven cybersecurity solutions being supplanted by context-based solutions?

Ollmann: I wouldn’t describe rules-based solutions as being supplanted by context-based systems. It’s the dimensionality of the rules and the number of parameters consumed by the rules that have expanded to such an extent that a broad enough contextual understanding is achieved. Perhaps the biggest change lies in the way the rules are generated and maintained, where once a pool of highly skilled and experienced cybersecurity analysts iterated and codified actions as lovingly-maintained rules, today big data systems power machine learning systems to train complex classifiers and models. These complex models now adapt to the environments they’re deployed in without requiring a pool of analyst talent to tweak and tune.
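
To make that contrast concrete, here is a deliberately simplified sketch – our own illustration, not any vendor’s code. A hand-authored rule sits alongside a threshold derived from baseline data, standing in for the analyst-written rules and the trained models Ollmann describes:

```python
from statistics import mean, stdev


def analyst_rule(failed_logins: int) -> bool:
    """A codified, hand-maintained rule: alert on more than 10 failed logins.
    Someone has to pick, document, and periodically re-tune that number."""
    return failed_logins > 10


def learn_threshold(history: list[int]) -> float:
    """Derive an alert threshold from observed baseline behavior
    (mean + 3 standard deviations) - no analyst tuning required,
    and it adapts whenever the baseline data changes."""
    return mean(history) + 3 * stdev(history)


# A hypothetical week of failed-login counts for one environment:
baseline = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
threshold = learn_threshold(baseline)

print(analyst_rule(12), 40 > threshold)  # prints: True True
```

Production systems replace the mean-plus-sigma threshold with trained classifiers over many more parameters, which is precisely the expanded “dimensionality” Ollmann points to; the structural difference – hand-maintained constant versus data-derived boundary – is the same.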

LW: In what noteworthy ways have legacy technologies evolved?

Ollmann: Cybersecurity technologies are continuously evolving; they must because both the threat and the business requirements are continuously changing. It’s been that way since the first person suggested using a password along with a login ID.

That said, to date the two biggest changes and influences upon legacy technologies have been public cloud and AI. Public cloud not only shifted the perimeter of internet business, but it also enabled a shift to SaaS delivery models – forcing traditional legacy protection technologies to transform. This fundamentally changed the way organizations shared and consumed cyber protection and detection information. It took quite some effort to shift from every on-premise log action and rule being private and confidential, to trusting cloud solution providers with that same data, pooled across multiple customers, and reaping the benefits of collective intelligence.

That cloud transformation and pooling of threat and response data was fundamental to the second transformation: deploying and applying AI-based cybersecurity technologies that range from training and reinforcement learning of detection models to incident response playbook production and auto-response. While the core “legacy” security building blocks have remained the same, the firewalls have grown smarter, the SIEMs detect and classify kill chains faster and blocking responses have become more trusted.

LW: Which legacy solutions are threatened with extinction?

Ollmann: Solutions that focus on enterprise-level on-premises and air-gapped protection are on borrowed time. Some people will argue that there will always be a need for such solutions, but their efficacy against today’s threats is constantly diminishing. There’s a real reason why on-premises anti-spam gateways protecting on-premises mail services are failing, and part of that is because some classes of threats are exponentially easier to detect and mitigate through massive cloud scale and collective intelligence.

Additionally, the majority of today’s solutions that require a customer’s pool of in-house analysts and security experts to update and maintain a custom-tuned or unique set of detection rules, data connectors, response playbooks, blocking filters, etc., are also on borrowed time. The last generation of machine learning system automation and the first generation of LLM-based analyst augmentation have proven they can replace the tier-one and tier-two human analysts traditionally tasked with building and maintaining those customized rules. There’s a sizable ecosystem of tooling and providers that specialize in custom rule creation and maintenance. They’re equally in trouble if they don’t adapt and evolve.

LW: What does the integration of iterated legacy tools into edge-focused newer technologies look like?

Ollmann: To understand the next generation of security technologies and what that means for the iterated evolution of legacy tools, it’s important to step back. Too often, as security professionals, we’re day-to-day involved in watching our feet on the dance floor and keeping in time with the music. When we take a step back, we get to see the bigger movements and relationships between dances.

We have an ecosystem of niche tools and specialized solutions for elements and processes within a chained pipeline of protection and response. Enterprise buyers select and integrate these components to achieve the same lofty goals as everyone else. For the last decade, we’ve seen a significant uptick in the growth of managed security service providers that effectively offer an obscured, off-the-shelf integrated protection and/or response pipeline that focuses on delivering the buyer’s security objectives rather than the stack of technologies’ security.

In parallel, over the last half-decade, we’ve observed the rapid development and advancement of cross-cloud and hybrid-cloud security posture management and response solution providers. Vendors such as Wiz, Palo Alto Networks and CrowdStrike have acquired or rebuilt from the ground up much of the legacy tooling and capabilities and brought them together as unified edge protection and security management platforms. Behind the scenes, they’ve invested hugely in intelligent automation and AI systems to overcome and do away with the stack of interdependent legacy technologies (from a customer’s perspective).

LW: Looking just ahead, which new security platforms or architectures do you expect to emerge as cornerstones?

Ollmann: I think the managed security services industry that’s been leveraging inexpensive human analysts will lose to the new cloud and edge security posture management and response solution providers unless they transform and completely embrace AI. They’re at a disadvantage because they’re not software developers. They’re not AI engineers. But they are sitting on a lot of very valuable customer data and already have the integrations and relationships to drive transformational impact for their customers.

Collective intelligence and the knowledge derived from streaming vast data are a cornerstone of protection, compliance, and threat response. The efficacy of AI, LLMs, machine learning models, and their future iterations depends on this data. It’s true: data is the new gold rush.

The cornerstone around the corner (as it were) that will likely bring the next business transformation will be ubiquitous confidential cloud computing. The legacy on-premises and air-gapped business requirements disappear once confidential compute is economical, prevalent, and performant. At that point, the “edge” consolidates to the cloud-edge, and new protections over data and regulatory concerns are overcome.

LW: Where is this all taking us over the next two to five years?

Ollmann: The global shortage of cybersecurity talent continues to hold back the industry. Just as cybersecurity requirements have become mainstream, the explosion of corporate demand for trained security professionals, and the difficulty of attaining the experience required to protect and operate advanced cyber defense technologies, have arguably left businesses feeling less secure.

The rapid advances in applied AI to security and the growth of AI-first security companies gives us great hope in overcoming the skills gap situation.

Over the next few years, I think AI-based automation of response and augmentation of human analysts will largely overcome the bottleneck of the historic cybersecurity talent shortage.

While some experts presume that AI will help elevate a new generation of cybersecurity graduates to quickly become tier-three expertise proficient, I don’t think that’s where the primary changes and benefits will come. Just as generative AI has enabled almost anyone to near instantly create their own Shakespearean-esque sonnets or Picasso-ify their dream illustrations, I expect security AI advancements to apply to, and be adopted by, other non-cyber professionals already within the business.

It’s exponentially easier and more beneficial to elevate someone with multiple years of institutional experience and business process knowledge and augment them with advanced security capabilities than to take a cybersecurity graduate and teach them the ins and outs of the business and personalities in play.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


The post NEWS ANALYSIS Q&A: Striving for contextual understanding as digital transformation plays out first appeared on The Last Watchdog.

Identity and Access Management (IAM) is at a crossroads.

Related: Can IAM be a growth engine?

A new Forrester Trends Report dissects ten IAM trends now in play, notably how AI is  influencing IAM technologies to meet evolving identity threats.

IAM is a concept that arose in the 1970s, when usernames and passwords were first set up to control access to mainframe computers.

By the 1990s, single sign-on (SSO) solutions had caught on, and with the explosion of web apps that followed came more sophisticated IAM solutions. Federated identity management emerged, allowing users to use the same identity across different domains and organizations, and standards like SAML (Security Assertion Markup Language) were developed to support this.

The emergence of cloud computing further pushed the need for robust IAM systems. Identity as a Service (IDaaS) began to gain traction, offering IAM capabilities through cloud providers.

Last Watchdog engaged Forrester Principal Analyst Geoff Cairns, the report’s lead author, in a discussion about the next phase of IAM’s evolution. Here’s that exchange, edited for clarity and length.

LW: In the grand scheme, how urgent has it become for companies to focus on identity threats?

Cairns: The urgency for companies to focus on identity threats has significantly increased over the past few years due to several factors. First, the rapid advancement of technology has created a more complex and interconnected digital landscape, making it easier for attackers to exploit vulnerabilities. Second, the growing adoption of cloud and SaaS services, as well as remote work arrangements and the extended workforce, has expanded the identity threat surface. Third, high-profile data breaches, such as the recent Change Healthcare cyberattack, have underscored the importance of effective identity security controls in protecting sensitive information.

LW: What’s the vital lesson stemming from IAM-related breaches like those seen with MGM and Okta?

Cairns

Cairns: One of the most vital lessons for CISOs and IAM leaders to take away from the MGM and Okta breaches is that your IAM vendors’ servicing and operations is intrinsic to your own organization’s security posture and, ultimately, end-customer trust.  The ongoing consolidation of IAM vendors and technology stacks will lead to greater concentration of supplier risk, as well. We expect IAM platform vendors will face increased scrutiny from their prospects and customers as it relates to underlying platform security and incident response practices.

LW: Can you share an anecdote that illustrates exactly how generative AI is being used to improve threat detection and remediation in IAM systems?

Cairns: Given the ability to input natural language queries (e.g., “show me the last 5 privileged account access attempts”), IAM administrators are conducting conversational interrogations of the IAM system to more swiftly identify and isolate identity threats. With IAM administrators also able to use AI to generate immediate, actionable steps for remediation, incident response time is significantly reduced. In the future, we expect to see genAI advances that will proactively generate and optimize IAM policies to pre-empt future threats.
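The conversational query Cairns describes ultimately has to be translated into a structured search over identity logs. A minimal sketch of what such a back end might do, using a hypothetical event schema (no real IAM vendor’s API is assumed):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class AccessEvent:
    user: str
    privileged: bool     # was a privileged account involved?
    timestamp: datetime
    success: bool

def last_privileged_attempts(events: List[AccessEvent], n: int = 5) -> List[AccessEvent]:
    """Return the n most recent privileged access attempts, newest first.

    Stands in for the structured query a genAI layer might emit from the
    natural-language prompt "show me the last 5 privileged account
    access attempts".
    """
    priv = [e for e in events if e.privileged]
    priv.sort(key=lambda e: e.timestamp, reverse=True)
    return priv[:n]
```

In a real deployment the model would emit a query in the SIEM’s or IAM platform’s own search language; the point is that the heavy lifting behind the conversational interface remains ordinary filtering and sorting.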

LW: What should CISOs clearly understand about integrations between IAM and non-IAM cybersecurity vendors?

Cairns: CISOs should understand that to effectively respond to identity-centric threats, integration is necessary between IAM and non-IAM cybersecurity tool sets. Support for these integrations is quickly maturing.  Across your existing security vendor portfolios, review roadmaps and integration points for identity threat detection, signal sharing, and response automation. Most importantly, leverage the opportunity to drive tighter operational process alignment and a stronger working relationship between IAM and SecOps teams.

LW: Are legacy IAM solutions obsolete? Will they evolve, or be replaced?

Cairns: Even as environments get more complex and attacks get more sophisticated, companies should remain rooted in solid IAM fundamentals and core principles – strong authentication, least privilege access, robust monitoring – applying a defense in depth approach.  However, organizations must systematically evolve and upgrade their underlying IAM technology platforms to match their IT environment and the current threat landscape.  In some cases, like phishing-resistant passwordless MFA, it capitalizes on technical advances (biometrics, compute power) layered on top of well-established practices (multifactor authentication).  In other instances, it may require re-engineering of processes and systems to adopt a different technology or approach, such as verifiable credentials or zero standing privileges.  To be effective, IAM implementations must be dynamic and constantly evolving.

LW: Anything else?

Cairns: While staying updated on IAM technology trends is certainly important, perhaps the most critical thing that CISOs and IAM leaders can do is to nurture and maintain the right culture. Many security leaders that Forrester has spoken with stress the importance of establishing cross-functional relationships and collaboration to ensure a business-led approach to IAM. Prioritizing user-centric design thinking and a growth mindset are paramount for building a high-performing IAM team and applying the right set of IAM technologies to both protect and enable the business.


The post SHARED INTEL Q&A: Forrester report shows Identity and Access Management (IAM) in flux first appeared on The Last Watchdog.

It’s easy to compile a checklist on why the announced merger of LogRhythm and Exabeam could potentially make strategic sense.

Related: Cisco pays $28 billion for Splunk

LogRhythm is a long-established SIEM provider, and Exabeam has been making hay since its 2013 launch, advancing its UEBA capabilities. Combining these strengths falls in line with the drive to make cloud-centric, hyper-interconnected company networks more resilient.

Forrester Principal Analyst Allie Mellen observes: “The combined organization is likely to push hard in the midmarket, where LogRhythm’s existing suite has had success and the Exabeam user experience makes it a more natural fit.”

Despite the promising synergies, Mellen cautioned that the merger alone would not resolve all challenges. “Both of these companies have faced challenges in recent years that are not solved by a merger,” she said. “These include difficulty keeping pace with market innovation and with the transition to the cloud.”

Last Watchdog engaged Mellen in a drill-down on other ramifications. Here’s that exchange, edited for clarity and length.

LW: How difficult is it going to be for LogRhythm and Exabeam to align their differing market focuses; what potential conflicts are they going to have to resolve?

Mellen

Mellen: The companies have dramatically different company cultures and processes, as LogRhythm is a veteran security company founded in 2003 with a focus on a suite-style offering, while Exabeam is, by comparison, a younger company founded in 2013 with a focus on modular, stand-alone products.

In addition, both companies have faced challenges in recent years that are not solved by a merger: difficulty keeping pace with market innovation and with the transition to the cloud. LogRhythm has traditionally focused on the midmarket, while Exabeam aggressively pursued large enterprise deals, highlighting a difference in target market that must be bridged.

LW: How do you see them competing against the hyperscalers, i.e. Microsoft, AWS and Google, who are muscling into this space?

Mellen: Since 2018 we have talked about how the Tech Titans are changing the cybersecurity market. The past few years have demonstrated the accuracy of that prediction, with Microsoft, AWS, and GCP having an outsize impact on the security market.

This acquisition is, in part, to help both companies continue to be competitive in this market against the likes of the Tech Titans. However, while the hyperscalers are investing heavily in security, the combined entity will be playing catch-up trying to integrate two very different products and companies into one.

LW:  What specific areas of innovation should the merged entity prioritize to stay competitive?

Mellen: LogRhythm and Exabeam are likely to experience a period of innovation stagnation as they work to combine. The most important first step for them: getting the combined entity and products aligned. Once they have addressed that, the innovation they push forward should be focused on serving the mid market. That’s where they can have the most impact with the combined offering. As always, ease of use, log pipeline management, and quality of analytics are some of the biggest challenges for SIEM vendors and should be the combined entity’s focus.

LW: In what ways could the combined concerns better serve mid-market enterprises, perhaps even SMBs, as well?

Mellen: The combined entity should target the mid market and SMBs. LogRhythm has focused there, and though Exabeam previously targeted large enterprise, its user interface and ease of use make it a good fit to bring down market.

LW: Anything else?

Mellen: Between this merger, Cisco’s acquisition of Splunk, and IBM selling QRadar assets to Palo Alto Networks, the SIEM market is undergoing a series of high-profile changes. Much of this is driven by pressure from the Tech Titans, XDR providers, and the realities of a hybrid, multi-cloud world. Expect more consolidation in the SIEM market in the next year.


The post News analysis Q&A: Shake up of the SIEM, UEBA markets continues as LogRhythm-Exabeam merge first appeared on The Last Watchdog.

CISOs can sometimes be their own worst enemy, especially when it comes to communicating with the board of directors.

Related: The ‘cyber’ case for D&O insurance

Vanessa Pegueros knows this all too well. She serves on the board of several technology companies and also happens to be steeped in cyber risk governance.

I recently attended an IoActive-sponsored event in Seattle at which Pegueros gave a presentation titled: “Merging Cybersecurity, the Board & Executive Team”

Pegueros shed light on the land mines that enshroud cybersecurity presentations made at the board level. She noted that most board members are non-technical, especially when it comes to the intricate nuances of cybersecurity, and that their decision-making is primarily driven by concerns about revenue and costs.

Thus, presenting a sky-is-falling scenario to justify a fatter security budget, “does not resonate at the board level,” she said in her talk. “Board members must be very optimistic; they have to believe in the vision for the company. And to some extent, they don’t always deal with the reality of what the situation really is.

“So when a CISO or anybody comes into a board room and says, ‘if we don’t do this, this is going to happen,’ it makes them all feel anxious and they start to close down their thought processes around it.”

This suggests that CISOs must take a strategic approach, Pegueros observed, which includes building relationships up the chain of command and mastering the art of framing messages to fit the audience.

Last Watchdog engaged Pegueros after her presentation to drill down on some of the notions she highlighted in her talk. Here’s that exchange, edited for clarity and length.

LW: Why do so many CISOs still not get it that FUD and doom-and-gloom don’t work?

Pegueros: I think this is the case where CISOs understand the true gravity and risk of the situation and feel a sense of urgency to drive action by senior management and the board. When that action does not materialize as they think it should, they start to use worst-case scenarios to drive action.

Pegueros

In the end, the CISOs are just trying to do the right thing and resolve the issues threatening the organization. What they fail to realize is that the Board does not truly understand the risk of the situation and since nothing has happened up until that point, why would it happen now?

LW: What are fundamental steps CISOs can take to start to think and act strategically and communicate more effectively?

Pegueros: First, they need to understand the business, including financials, customer concerns, product deficiencies and any macro-level issues, and how they are impacting the business. Next, they need to understand the priorities of the business and frame all the security priorities in the context of the business priorities.

If the CISO wants to drive better compliance, then they talk about how compliance is key to enabling sales and how the customers are demanding compliance to do business with the company.  If they want better patching, then the CISOs should talk about how patched systems will improve availability of the product and therefore service to the customers.

If they want improved visibility around security logs, they can talk about the benefits of better visibility for overall troubleshooting and improved efficiencies in operations. Boards won’t argue with more revenue, better availability (which drives revenue) or greater efficiencies (which save money).

LW: Is compliance an ace in the hole, in a sense, for CISOs? How do the SEC’s stricter rules come into play, for instance?

Pegueros: Compliance is not going to fix all the security risks. Many companies that are compliant with various regulations or frameworks have had breaches. I believe compliance sets a minimum bar, and a CISO must leverage compliance initiatives to drive overall better security, but it is not sufficient in and of itself.

Compliance brings visibility to a topic.  For example, with the SEC Cybersecurity Rules, Boards are now much more aware of the importance of cyber and are having more robust conversations relative to cybersecurity.

LW: Is it overly optimistic to suggest that companies will soon start viewing security as a business enabler instead of a cost center?

Pegueros: Sound cybersecurity practices and risk management are a differentiator for many non-regulated companies and are table stakes for highly regulated organizations. Enterprise customers are demanding and driving the conversation around cybersecurity.

They are demanding to understand how their vendors could potentially impact their customers and their reputation.  The evolving and interrelated ecosystem that most companies exist in has the entrance fee of sound cybersecurity practices.  In time, organizations who do not pay this entrance fee will be kicked out.

LW: Massively interconnected, highly interoperable digital systems of the near future hold great promise. Don’t we have to solve security to get there?

Pegueros: Understanding digital connectedness, the benefits and risks of that relationship, and how it enables strategic objectives is key for the board to understand. Security is just one risk element of this reality.

Boards need to dig in and understand all the key connection points and how they could enable or potentially hinder growth for the organization.  We have a long way to go relative to boards because technology is disrupting the established norms and modes of operations relative to governance.  Boards must evolve or their organizations will fail.


The technology and best practices for treating cybersecurity as a business enabler, instead of an onerous cost-center, have long been readily available.

Related: Data privacy vs data security

However, this remains a novel concept at most companies. Now comes a Forrester Research report that vividly highlights why attaining and sustaining a robust cybersecurity posture translates into a competitive edge.

The report, titled “Embed Cybersecurity And Privacy Everywhere To Secure Your Brand And Business,” argues for a paradigm shift. It’s logical that robust cybersecurity and privacy practices need to become intrinsic in order to tap the full potential of massively interconnected, highly interoperable digital systems.

Forrester’s report lays out a roadmap for CIOs, CISOs and privacy directors to drive this transformation – by weaving informed privacy and security practices into every facet of their business; this runs the gamut from physical and information assets to customer experiences and investment strategies.

Last Watchdog engaged Forrester analyst Heidi Shey, the report’s lead author, in a discussion about how this could play out well, and contribute to an overall greater good. Here’s that exchange, edited for clarity and length.

LW: This isn’t an easy shift. Can you frame the barriers and obstacles companies can expect to encounter?

Shey: A common barrier is framing and articulating the value and purpose of the cybersecurity and privacy program. Traditionally it’s been about focusing inward on securing systems and data at the lowest possible cost, driven by compliance requirements.

Compliance matters and is important, but with this shift, we have to recognize that it is a floor not a ceiling when it comes to your approach. Building your program and embedding these capabilities with a customer focus in mind is the difference. You are trying to align business and IT strategies – and brand value – to drive customer value here. This is a key factor for building trust in your organization.

LW: How can companies effectively measure the success of cybersecurity and privacy integration into their operations?

Shey

Shey: This is something that calls for a maturity assessment. By understanding the key competencies required for this type of shift, organizations can better gauge their current maturity and identify capabilities they need to shore up to further improve. These key capabilities fall under the four competencies of oversight, process risk management, technology risk management, and human risk management.

For example, process risk management capabilities include how well the organization implements security and privacy in its customer-facing products and services as well as its own internal processes. It also covers the extension of security and privacy requirements to third-party partners and the ability to respond quickly and effectively to external questions from stakeholders such as customers, auditors, and regulators.

Within a maturity assessment like this, you can start to home in on areas of improvement. If you’re doing a particular activity in an ad-hoc way today, establishing a repeatable process for it helps you push to the next level of maturity.
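A maturity assessment along the lines Shey describes can be recorded as simply as a per-competency score. A hypothetical sketch, using the four competencies named above and an invented 1-5 ad-hoc-to-optimized scale:

```python
# The four competencies from Forrester's framing, per the discussion above.
COMPETENCIES = ["oversight", "process_risk", "technology_risk", "human_risk"]

def weakest_areas(scores: dict, floor: int = 3) -> list:
    """Return the competencies scoring below `floor` on a 1-5 maturity
    scale -- the areas to shore up first, e.g. by turning an ad-hoc
    activity into a repeatable process. Missing scores default to 1."""
    return [c for c in COMPETENCIES if scores.get(c, 1) < floor]
```

The scale and threshold are illustrative only; the point is that a simple, repeatable scoring pass makes the “where to improve next” conversation concrete.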

LW: Cultural change is acutely difficult.  What should CIOs and CISOs expect going in; what basic rethinking do they need to do?

Shey: Re-examine their own relationship first, specifically the trust and empathy between CIO and CISO. You need to be partners in driving this. If the CIO and CISO are operating in silos, and do not have shared vision, goals, and values here, it will make broader organizational cultural change difficult.

LW: Some progressive companies are moving down this path, correct? What have we learned from them; what does the payoff look like?

Shey: Yes, and this goes back to a point I made earlier about a key outcome of building customer trust in your organization. Trusted organizations reap rewards. Our research and data on consumer trust have proven this. Customers that trust your firm are more likely to purchase again, share personal data, and engage in other revenue-generating behaviors.

There is also a benefit of stronger business partnerships. We operate in a world today where your business is the risk and how you adapt is the opportunity. Companies view it as a risk to do business with your firm, whether they’re purchasing products and services or sharing data with you. Your ability to comply with partners’ or B2B customers’ security requirements will be critical.

LW: What approach should  mid-sized and smaller organizations take? What are some basic first steps?

Shey: Resist the urge to go buy technology as the first step. Emphasize strategy and oversight of your cybersecurity and privacy program, because you can’t embed the foundation for what you have not built yet. Align with a control framework as a starting point.

This will be your common frame of reference for connecting policies, controls, regulations, customer expectations, and business requirements. Recognize that as you mature your program, a Zero Trust approach will help you take your efforts beyond compliance.

Conduct a holistic assessment of technology and information risks to determine what matters most to the business, and identify the appropriate practices and controls to address those risks.

Set clear goals, such as a roadmap of core competencies to build and milestones. Identify clear lines of accountability to help make it transparent as to who is responsible for what, making it clear how each person on the team contributes to the program’s success.


A new tier of overlapping, interoperable, highly automated security platforms must, over the next decade, replace the legacy, on-premise systems that enterprises spent multiple kings’ fortunes building up over the past 25 years.

Related: How ‘XDR’ defeats silos

Now along comes a new book, Evading EDR: The Definitive Guide for Defeating Endpoint Detection Systems, by a red team expert, Matt Hand, that drills down on a premier legacy security system in the midst of this transition: endpoint detection and response, EDR.

Emerging from traditional antivirus and endpoint protection platforms, EDR rose to the fore in the mid-2010s to improve upon the continuous monitoring of servers, desktops, laptops and mobile devices and put security teams in a better position to mitigate advanced threats, such as APTs and zero-day vulnerabilities.

Today, EDR is relied upon to detect and respond to phishing, account takeovers, BEC attacks, business logic hacks, ransomware campaigns and DDoS bombardments across an organization’s environment. It’s a key tool that security teams rely upon to read the tea leaves and carry out triage, that is, make sense of the oceans of telemetry ingested by SIEMs and thus get to a position where they can more wisely fine-tune their organization’s automated vs manual responses.

Last Watchdog visited with Hand to get his perspective of what it’s like in the trenches, deep inside the world of managing EDRs, on the front lines of non-stop cyber attacks and reactive defensive tactics. He says he wrote Evading EDR to help experienced and up-and-coming security analysts grasp every nuance of how EDR systems work, from a vendor-agnostic perspective, and thus get the most from them. His guidance also happens to shed some revealing light about the ground floor of the cyber arms race while illustrating why network security needs to be overhauled.

LW: From a macro level, do security teams truly understand their EDRs? How much are they getting out of them at this moment; how much potential would you say is actually being tapped vs. left on the table?

Hand:   I don’t think that a majority of teams who rely on EDR truly understand their inner workings or are getting the most out of them. EDRs have historically been considered a “black box” – something that activity goes into, and alerts come out of. Most teams that I’ve encountered trust that their EDR works perfectly out of the box and unfortunately that’s just not the case.

Every EDR needs to be tuned to the specific environment in which it is deployed. Some vendors have a period during customer onboarding wherein the EDR observes what is typical in the environment and creates a baseline, but this shouldn’t be the end of tuning. The next step should be building custom detections tailored to the organization. Unfortunately, most SOCs are still understaffed so detection engineering often goes on the back burner in favor of managing the alert queue.
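Hand’s point about baselining an environment and then layering custom detections on top can be illustrated with a toy allow-list model. The schema and logic here are illustrative only, not how any particular EDR is implemented:

```python
def build_baseline(observed: list) -> set:
    """Record process names seen during the onboarding/baseline window,
    when the EDR learns what is typical in this environment."""
    return set(observed)

def flag_new_processes(baseline: set, events: list) -> list:
    """Return process names never seen in the baseline -- candidates for
    analyst review and, eventually, for an environment-specific custom
    detection rather than an out-of-the-box signature."""
    return [p for p in events if p not in baseline]
```

For example, with a baseline of `["svchost.exe", "chrome.exe"]`, a later event stream containing `mimikatz.exe` would surface only the unfamiliar binary; real tuning obviously covers far more than process names.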

LW: Your chapter teasers suggest there remains a ton of viable attack paths in the nooks and crannies of Windows systems; is this where attackers are making hay with Living off the Land (LotL) tactics? Can you please frame what this looks like.

Hand:   In any significantly complex system, there will inevitably be edge and corner cases that we just can’t account for. Windows is a very complex operating system and there are a ton of native capabilities that attackers can leverage. This can include using traditional living-off-the-land binaries or something as niche as a Win32 API function that allows for arbitrary code to be executed.

Finding and closing all of these attack vectors is an immense, if not entirely unfeasible, task. This fact highlights the importance of growing beyond solely using brittle, signature-based detections and investing in robust detections that capture the common denominator between many techniques and operations that an attacker can employ. This is only a band-aid, though, and we should be looking to Microsoft and other OS developers to invest more in secure-by-design principles.

LW: Your book is targeted to a precious commodity: experienced cybersecurity professionals. Aren’t reactive systems that require specialized human expertise, like EDR, on their way out?

Hand:   I don’t believe so. I think the biggest problem is in reactivity and how it forces us to use our more experienced engineers. Let’s say that there is some cool new post-exploitation technique circulating. Should I pull my most experienced engineers away from building proactive defenses to test, validate, and remediate any issues or should I rely more on my vendor(s) to ensure we’re covered? If a vendor can identify and shore up a deficiency in their product, it would benefit all customers and not just those with the technical expertise to throw at the problem.

Looking beyond this, if we accept the fact that we have a staffing shortage and truly senior engineers are rare, we have two options – forge more engineers or use ours more effectively. Right now, the impact an engineer has is typically limited to their own organization. For instance, if an engineer writes a detection to catch that cool new post-exploitation technique, the outside world will likely never know.

What if instead of keeping the output of the hard work that goes into extending the usefulness of an EDR (research, writing detections, tuning, etc.), we shared that information openly with others in the industry so that everyone can benefit from it? If a surgeon finds a cool new method to perform an operation that has better patient outcomes, do they squirrel it away at their hospital or do they publish it to a journal and teach others?

 LW: Where do you see EDR fitting in 10 years from now? Does it have a place in the leading-edge security platforms and frameworks that are shifting more to a focus on proactive resiliency at the cloud edge, instead of reactive systems on endpoints?

Hand:   Yes, 100%. At the end of the day, an endpoint is any system that runs code, whether those be workstations, servers, mobile devices, cloud systems, ICS, or any other type of system. The nature of endpoints has and will continue to change, but there will always be endpoints that need defending. Perimeter defense has also been around for ages, but now the nature of the perimeter is changing.

Hand

Trying to decide which is more important isn’t the conversation we should be having. Rather, we should accept that proactive hardening and increasing the resiliency of Internet-facing systems, which would fall into a “prevention” category, is equally as important as ensuring that we can catch an adversary that slips through the cracks. Realistically, if a motivated and well-resourced attacker wants to get into your environment, they will.

It’s just a matter of time. If we accept that fact, we should spend our limited time and resources making it reasonably difficult to breach the perimeter (MFA, asset management, inbound mail filtering, training) while also preparing for the inevitability of a breach by implementing robust detective controls that can catch an adversary as early in their attack chain as possible to reduce the impact of the breach and allow responders to more confidently evict them.


Cisco’s recent move to acquire SIEM stalwart Splunk for a cool $28 billion aligns with the rising urgency among companies in all sectors to better protect data — even as cyber threats intensify and disruptive advancements in AI add a wild card to this challenge.

Related: Will Cisco flub Splunk?

Cisco CEO Chuck Robbins hopes to boost the resiliency of the network switching giant’s growing portfolio of security services. Of course, it certainly doesn’t hurt that Cisco now gets revenue from Splunk customers like Coca-Cola, Intel, and Porsche.

Last Watchdog engaged Gurucul CEO Saryu K. Nayyar in a discussion about the wider implications of this deal. Gurucul is known for its innovations in User and Entity Behavior Analytics (UEBA) as well as its advanced SIEM solutions. Here’s the exchange, edited for clarity and length:

LW: What are tech giants like Microsoft, Google and now Cisco doing in the SIEM space?

Nayyar: Microsoft, Google, and Cisco are not security-first companies, but they recognize that SIEM is at the heart of security operations, so it’s not surprising they want to get in. It seems their strategy is to leverage their existing customer base and products to get traction in this space. 

LW: Why are suppliers of  legacy firewall, vulnerability management and EDR  solutions also now integrating SIEM capabilities?

Nayyar: Many security vendors want a piece of the SIEM market, even if their technology isn’t necessarily purpose-built. These vendors aren’t so much ‘doing SIEM’; rather, they’re positioning a set of point products to solve pieces of the puzzle, not the whole puzzle. The importance of SIEM continues to rise along with the constant velocity and veracity of threats, so this trend of jumping on the SIEM bandwagon will likely continue.

LW: For some historical context, could you summarize how we went from SIM to SIEM and how Gurucul came to pioneer UEBA?

Nayyar: The transition from SIM to SIEM was born out of necessity. Security teams needed greater visibility across their operating environment. Combining a security information tool with a security event tool made it easier to correlate alerts generated by security products, like firewalls and IDS, normalize them, and then analyze them to identify potential risks.

SIEMs of today, like Gurucul’s, have evolved leaps and bounds over legacy SIEMs with the addition of purpose-built machine learning and analytics models,  along with the ability to scale.

Gurucul pioneered UEBA technology a decade ago – in fact our company was built around this capability. UEBA focuses on behavioral patterns for users and entities to identify anomalies and activity outside of the norm. We use machine learning models on open choice big data lakes to detect unknown threats early in the attack chain.

Instead of being stuck in reactive mode, security analysts could proactively determine if an attack was underway. This significantly improved their ability to accurately identify a potential threat early in the kill chain before damage happens.
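The core UEBA idea Nayyar describes — baseline a user’s normal behavior, then flag sharp deviations — can be illustrated with a minimal statistical sketch. The z-score threshold and the toy download figures below are assumptions for illustration; Gurucul’s actual models are far more sophisticated.

```python
import statistics

def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it sits more than z_threshold standard
    deviations above the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return (today - mean) / stdev > z_threshold

# A user who normally moves ~100 MB/day suddenly pulls 5 GB
baseline = [90.0, 110.0, 95.0, 105.0, 100.0]
flagged = is_anomalous(baseline, 5000.0)  # True: an outlier worth investigating
```

Even this crude baseline shows why behavior-centric detection catches what signature matching misses: nothing about a 5 GB transfer is malicious per se; it is only anomalous relative to that user’s history.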

LW: Then along came SOAR and next-gen SIEM, correct? What was behind the emergence of these advances?

Nayyar: SOAR gave analysts a playbook for responding to an attack campaign so they didn’t have to reinvent the wheel each time. Many attacks, while varied in how they are executed, have a known set of characteristics. The MITRE ATT&CK framework is an example of how various attack techniques, even if unique, can still be mapped to known tactics, techniques and procedures. SOAR uses the output of detection engines and investigations and recommends workflows or playbooks to build a response plan, saving time and effort.
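That mapping from detections to playbooks can be sketched simply. The technique IDs below are real ATT&CK identifiers (T1566 Phishing, T1110 Brute Force), but the playbook steps and the lookup structure are invented for illustration, not any vendor’s actual runbooks.

```python
PLAYBOOKS = {
    "T1566": ["quarantine email", "reset user credentials", "notify user"],
    "T1110": ["lock account", "force MFA re-enrollment", "review auth logs"],
}

DEFAULT_PLAYBOOK = ["open ticket", "escalate to analyst"]

def recommend(detections: list) -> list:
    """Collect response steps for each detection's ATT&CK technique,
    falling back to generic triage for unmapped techniques."""
    plan = []
    for det in detections:
        steps = PLAYBOOKS.get(det["technique"], DEFAULT_PLAYBOOK)
        plan.extend(s for s in steps if s not in plan)  # de-duplicate steps
    return plan

# A campaign that combines phishing with credential brute-forcing
plan = recommend([{"technique": "T1566"}, {"technique": "T1110"}])
```

The time savings come from the lookup itself: because the campaign decomposes into known techniques, the response plan is assembled rather than authored from scratch.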

Next-gen SIEM came about to address the shortcomings of legacy SIEMs when it comes to things like ineffective data ingestion, a flood of unprioritized alerts from security control products, and weak threat detection. Early SIEMs were log management and compliance tools; they were never built to address real-time threat detection and response.

Essentially, next-gen SIEM combines the capabilities of UEBA, SOAR and XDR so security teams can proactively – and accurately – assess threats and respond quickly. Another characteristic of a next-gen SIEM is its ability to ingest and interpret any data from any source and easily scale.

LW: To what extent is Cisco’s acquisition of Splunk just a microcosm of a wider shift of network security that’s taking place? Can you frame how legacy security tools (NGFW, WAF, web gateways, SIEM, SOAR, UEBA, XDR, VM, IAM, etc.) appear to be converging, in some sense, with brand-new cloud-centric solutions (API Security, RBVM, EASM, CAASM, CNAPP, CSPM, DevSecOps, ISAT, BAS, etc.)?

Nayyar: While there will always be point products to solve specific problems, the best solution for customers is a platform that combines the best-of-breed technologies into a single framework.

Related: Reviving observability.

As the SIEM has long been central to gathering data and information across the entire infrastructure, it’s naturally evolving into an observability platform where the data can be used for various use cases beyond just security, such as application and cloud performance monitoring and management. There is greater awareness that IT functions can work together to improve the gathering of data, analytics, and prioritization of security-related events to improve the organization’s resiliency.

LW: How should a company leader at a mid-market enterprise think about all this? What’s the most important thing to keep in mind?

Nayyar: Mid-market enterprises need the ability to reduce manual tasks and detect and respond faster. They are resource-restrained and don’t typically have specialized analyst roles. They need a SIEM that can automate their workflow and provide prioritized, risk-driven context that enables them to respond to threats in real time.

LW: What do you expect network security to look like five years from now?

Nayyar: Traditional network security is becoming less relevant as edge computing and zero trust networks evolve. The incorporation of edge networking, cloud migration, and identity and access data is changing how we look at security and its interaction with IT.

However, companies making investments in their security stack will likely continue to use a layered approach rather than deprecating older controls. For example, anti-virus will continue to be supported on endpoints even though its efficacy has dramatically declined. This also means that automating and simplifying management of these layers is important.

LW: Anything else?

Nayyar: When we look at the SIEM market, legacy log-based architectures that were built for centralized deployments have failed to provide the needed visibility and detection of threats in the cloud. And cloud-vendor approaches, like those from GCP and Azure, or cloud-only SIEMs, have failed to recognize that most organizations are hybrid and will continue to be hybrid for many years.

As data becomes more de-centralized and spread across multiple clouds and geographies, it becomes significantly harder to analyze and identify attack campaigns. All the while, attackers are becoming more sophisticated.

The only way to make sense of all the data is through sophisticated analysis leveraging data lakes, machine learning and AI. These capabilities exist today; security operations teams don’t have to be saddled with tools that have failed to keep up with the threat environment.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.