Most organizations today struggle with the most basic of things. They don’t know how many endpoints are on their network. They don’t know whether antivirus is installed. When a simple alert comes in, they don’t know its cause or where to begin investigating it.

The vast majority of these struggles with the very basics come down to talent availability. A company of 500 employees, for example, cannot afford to put 10 people on one particular security product. But AI agents for cybersecurity can act like virtual employees who augment humans.

Before we dive further into this bold claim, it’s important to understand that AI agents are different from the GenAI chatbots, such as ChatGPT, that we’ve been hearing about for a while.

The whole large language model (LLM) phenomenon started with ChatGPT, so when people talk about AI agents or think about using LLMs, they invariably picture the ChatGPT use case. But ChatGPT is a very specific use case: a person talking to what is essentially a chatbot. The promise of AI agents is software that does things for you automatically – software powered by LLMs that is always trying to figure out what needs to be done and then does it, even without being told. That is very different from early chatbot use cases, where users take the initiative and ask the questions.

Let me explain how AI agents work. Suppose a Security Operations Center (SOC) analyst receives a Splunk alert about an employee logging in from a location they have never used before. If the analyst searches Google for guidance on the alert, Google will offer general information and suggestions. But to triage the issue properly, the analyst would want all the locations from which that employee has logged in in the past, perhaps by writing a query that pulls information from Active Directory or Okta. Once they correlate this data, they may decide that more information is needed. AI agents do something very similar, drawing on a whole variety of security knowledge inputs. They can reason that a given kind of alert requires certain information, figure out how to get it – perhaps by running a few queries against various security systems – and correlate everything into a report. This is just one example; in reality there are thousands of alert types and hundreds of security tools. While AI agents cannot do everything today, there are simple tasks they can do reliably to reduce the SOC team’s workload.
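To make that flow concrete, here is a minimal sketch of the triage loop in Python. The lookup helpers (fetch_okta_logins, fetch_ad_logins) are invented stubs standing in for whatever your identity providers actually expose; the point is the gather-compare-escalate pattern, not any particular API.

```python
def fetch_okta_logins(user):
    # Hypothetical stub: in practice, query the Okta System Log API.
    return [{"location": "Austin, US"}, {"location": "Austin, US"}]

def fetch_ad_logins(user):
    # Hypothetical stub: in practice, query Active Directory sign-in logs.
    return [{"location": "Chicago, US"}]

def triage_new_location_alert(alert):
    """Enrich a 'login from new location' alert and suggest a verdict."""
    user = alert["user"]

    # Gather the user's historical login locations from multiple systems.
    history = fetch_okta_logins(user) + fetch_ad_logins(user)
    known_locations = {event["location"] for event in history}

    # Compare the alerting location against the baseline.
    if alert["location"] in known_locations:
        return {"verdict": "likely benign", "reason": "location seen before"}

    # Escalate with the context a human analyst would want next.
    return {
        "verdict": "needs review",
        "reason": f"{alert['location']} never seen for {user}",
        "known_locations": sorted(known_locations),
    }

print(triage_new_location_alert({"user": "jdoe", "location": "Lagos, NG"}))
```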

In fact, AI agents are often more effective than the humans who bottleneck these processes. For example, if there’s an alert about a particular IP address, information about that address is needed, and humans must pull different kinds of information from internal and external sources. This takes time and effort, and it has to be done continuously. Worse, much of the collected data doesn’t really help: a SOC analyst wants to see only the relevant information, not spend time separating what’s important from what isn’t. This is one very simple use case where AI agents can deliver automatic enrichment with the right information based on context – what you are doing and what the alert is about.
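Here is a sketch of what that context-aware enrichment might look like. The data sources (lookup_internal_asset, lookup_threat_intel) are invented stubs; the key idea is the final step, which merges everything but surfaces only the fields relevant to this alert type.

```python
def lookup_internal_asset(ip):
    # Stub for an internal asset-inventory/CMDB lookup.
    return {"owner": "web-team", "criticality": "high"}

def lookup_threat_intel(ip):
    # Stub for an external reputation feed; real feeds return far more noise.
    return {"reputation": "suspicious", "last_seen_campaign": "phishing",
            "asn": "AS64500", "whois_raw": "...hundreds of lines..."}

# Fields an analyst actually needs for this alert type.
RELEVANT_FIELDS = {"owner", "criticality", "reputation", "last_seen_campaign"}

def enrich_ip(ip):
    merged = {**lookup_internal_asset(ip), **lookup_threat_intel(ip)}
    # Surface only the relevant context, not the raw data dump.
    return {k: v for k, v in merged.items() if k in RELEVANT_FIELDS}

print(enrich_ip("203.0.113.7"))
```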

Organizations, however, need to understand the security of the AI agents and GenAI they are considering. AI agents can cause damage in a thousand ways; they are like a DevOps engineer producing 100 lines of code every hour with no review process and no staging environment to test the code before it is deployed to production. Hallucinations are a frequently encountered problem with AI, and they can be difficult to detect because they are subtle and hidden. For example, one common AI agent use case is extracting indicators of compromise (IOCs) from unstructured data. Because of the way LLMs are trained, they respond very confidently and will give an answer even when the information does not exist. The right approach is to take any answer from an LLM with a grain of salt – not as gospel, but as a candidate toward resolution – and then run your own deterministic logic to determine whether that answer is correct. It is very important for organizations to look for solutions that can verify whether their LLM outputs are correct.
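As a minimal sketch of that deterministic check, the snippet below treats LLM-extracted IOCs as candidates and keeps only those that are both syntactically valid and literally present in the source text, which catches one common class of hallucination.

```python
import ipaddress
import re

SHA256_RE = re.compile(r"^[A-Fa-f0-9]{64}$")

def verify_iocs(candidates, source_text):
    """Keep only candidate IOCs that parse correctly AND appear verbatim
    in the document the LLM was asked to read."""
    verified = []
    for ioc in candidates:
        if ioc not in source_text:
            continue  # hallucinated: the value never appeared in the input
        try:
            ipaddress.ip_address(ioc)      # valid IPv4/IPv6 address?
            verified.append(("ip", ioc))
            continue
        except ValueError:
            pass
        if SHA256_RE.match(ioc):           # valid SHA-256 hash?
            verified.append(("sha256", ioc))
    return verified

report = "Observed beaconing to 198.51.100.23 from the finance subnet."
print(verify_iocs(["198.51.100.23", "10.0.0.99"], report))
# [('ip', '198.51.100.23')] -- the second candidate is rejected
```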

Regarding AI agents and cybersecurity, there are two axes of development today. First, we have a long way to go in making AI agents more powerful and useful. There is no reason that in a couple of years you cannot triage 100 percent of your alerts with AI agents. No law of physics is getting in the way; it is only a matter of engineering. It will require lots of development, of course, but it can be done. To become more effective, AI agents need more reasoning and more domain knowledge gained over time. The second axis of development is making AI agents more reliable. Today AI agents can extract IOCs from some cyber threat intelligence (CTI) sources, but using them as-is proves ineffective because sometimes they work and sometimes they don’t. Reliability, trust, and control are orthogonal to the inherent power of LLMs. As an analogy, consider that not all employees are equally competent. Some are very competent and powerful, while others are just starting their careers. But even your most competent employees can be knowledgeable yet unreliable – reliability and trust are orthogonal to competence. And the same is true of AI agents.

And how do we deal with unreliable people? We don’t throw them away; we put guardrails around them. If someone is very erratic but their work, when they do it, is very high quality, you don’t put them on critical projects – you give them lots of buffer. On the other hand, if someone is highly reliable but their work is just average or always needs review, you plan accordingly. LLMs are the same way, and the good thing is that it’s all software: one AI agent can do the work, another can verify it, and if the output isn’t good you can throw it away. Organizations should have frameworks to evaluate the outputs of LLMs and make sure they are used where they are useful, and not where they can do damage.
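In software terms, that guardrail can be as simple as a generate-verify loop: one agent drafts, a second check (another model, or plain deterministic rules as here) accepts or rejects, and anything that fails falls back to a human. The functions below are illustrative stubs, not any real product’s API.

```python
def generate_summary(alert):
    # Stub for the "worker" LLM call that drafts an incident summary.
    return f"Probable credential stuffing against {alert['asset']}."

def verify_summary(summary, alert):
    # Stub for the "checker": here, a trivial deterministic rule that the
    # summary must at least reference the affected asset.
    return alert["asset"] in summary

def guarded_summary(alert, max_attempts=3):
    for _ in range(max_attempts):
        draft = generate_summary(alert)
        if verify_summary(draft, alert):
            return draft
    return None  # discard the work and hand off to a human analyst

print(guarded_summary({"asset": "vpn-gw-01"}))
```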

However, the democratization of AI tools can lower the entry barrier for attackers, potentially leading to a surge in sophisticated attacks. This scenario underscores the urgency for defenders to automate their defenses aggressively to keep pace and stay ahead of evolving threats.

We have yet to see whether AI agents will finally allow defenders to move ahead of the attackers, because adversaries are not sitting idle. They are automating attacks using AI today, and it will get much worse. Fundamentally, we should accelerate AI use for defense even faster than we are now. The question is: if AI continues to become very powerful, who wins? I can see a deterministic path for defenders to win, because if intelligence is available on both sides, defenders will always have more context. For example, if you are trying to break into my network of 100 endpoints and you don’t know which endpoint is vulnerable, you will have to find out by brute force. But as a defender, I have that context about my own network. So, all things being equal, I will always be one step ahead.

However, this future is contingent on continuous innovation, collaboration, and a strategic approach to integrating AI into security frameworks. Now is the time for organizations to get their strategies in order, and for defenders to work together. There is not a moment to lose, because AI will create a tsunami of automated attacks, and if you are spending $100 responding to an attack that costs your attacker a penny, you will go bankrupt. As an industry we must automate our defenses, and AI agents provide a great start.


Steve Jobs famously said, “Let’s go invent tomorrow instead of worrying about what happened yesterday.” If the pace of change is any indicator, the tech industry took that sentiment and ran with it. 

We’re at the halfway point of the 2020s decade, one punctuated by massive amounts of change. The Covid-19 pandemic ushered in an evolution in how we work and play, paving the way for innovation at breakneck speeds. In just five years, we’ve seen significant shifts in certain areas of security, in particular. Understanding these trends is key as we move forward into 2025. 

1. SIEM is out; security data fabrics are in. The majority of Fortune 500 enterprise organizations we work with have told us they’ve definitively decided to move away from their SIEM. Almost all are moving to a security data fabric and data lake for the myriad cost, efficiency and analytical benefits. Some aren’t sure exactly when, how or if they’ll buy one or build one, but they’ve made the choice to begin this effort. 

So, what will happen in 2025? More companies will have a security data lake strategy. Some will build their own, while others will purchase proven off-the-shelf data lake solutions. All of them will have to mature their outlook on security data management and analysis. This is a daunting task for many. One of the things I hear most frequently from companies is that they don’t think they’re “mature” enough to begin making better use of their security data. Unfortunately, this is a problem of “paralysis by analysis.” In 2025, we’ll see more companies seek to help themselves by enlisting the expertise of others who have successfully moved to a security data fabric and lake model. Practitioners who’ve done so successfully will have a bigger voice on the vendor stage, especially since a security data fabric and data lake often comprises multiple home-grown or off-the-shelf solutions. 

2. VPN is out; zero trust is in. It’s hard to find anyone these days who still uses a VPN and isn’t thinking about how to get away from it. With mandates like Executive Order 14028 and other paradigms set for zero trust models, almost everyone seems to realize that the cornerstone of a solid zero trust solution is connectivity that facilitates user access to applications regardless of the network they are on. There are a handful of leading providers of zero trust services, some being pure-play public cybersecurity companies with mature and reliable solutions. The next frontier of zero trust is bringing into the fold non-standard assets like those for OT/IoT, as well as enabling connections between devices to be performed in alignment with zero trust protocols. 

In 2025, we’ll see more companies look to upgrade their solutions to best-of-breed zero trust connectivity models, as well as look to harness data from those solutions for security analysis. Products that leverage exchange platforms to make user-to-application connections may have a wealth of logs that can be combined with other security signals to produce insights on risks and threats. 

3. SaaS is not out, but on-premises purchasing models are back. Between 2016 and 2022, SaaS was all the rage. However, for security, privacy and cost reasons, the largest of enterprises are retrenching on SaaS and looking to keep or repurchase on-premises solutions. Many large companies find on-premises more beneficial from an accounting perspective and, long-term, on-premises solutions often are indeed more affordable. Control of where their data resides is becoming paramount in our ever-changing regulatory environment, especially in the face of new regulations that may place more accountability on CISOs and companies for security or privacy violations. Those who held out on migrating to SaaS are relishing the fact that they triumphed in their assumption that on-premises was the better model for them, after all. 

In 2025, we’ll see companies that are exchanging SIEMs for data lakes and retiring older network detection and response (NDR) solutions for new-age methods of network monitoring place at least the data storage components on-premises, or at minimum under their control, where possible. This will give them better confidence in their ability to protect the security and privacy of data that transits these solutions for analytical purposes. 

The tech industry is inventing tomorrow so quickly that today is struggling to keep pace – particularly when it comes to keeping digital assets safe. New regulations and new technologies, with both positive and negative implications, always require new strategies and tactics for organizations to thrive. Some of the tools and technologies that have been foundational to enterprise cybersecurity programs for years—SIEM, VPN, SaaS and others—are caught in the midst of a security data (r)evolution that is necessary in the new age of AI-fueled threats and global uncertainty. 

Take stock of your current security posture and determine whether security data fabrics, zero trust and on-prem solutions should be part of your security organization’s evolutionary process.

The views and opinions expressed by the individuals herein are their own and do not reflect any official policy or position of Comcast. These views and opinions are provided for illustrative purposes only and Comcast makes no warranties, whether express, implied, or statutory, regarding or relating to the accuracy of such statements.


Amid geopolitical tensions, supply-chain uncertainties, and fast-moving regulatory changes, organizations are accelerating their risk-management programs, especially to mitigate the risks inherent in business relationships with other organizations.

With so many challenges and headwinds to face, risk managers are increasingly pressed to use every tool in their toolkits to stay ahead of security threats while remaining within the bounds of the law.

Among their most valuable tools is the Standard Information Gathering (SIG) Questionnaire, a widely used assessment that helps organizations evaluate the security, privacy, and compliance risks of their third-party service providers and vendors. The SIG questionnaire, which Shared Assessments developed, standardizes the process of gathering mission-critical information about vendors and their security protocols, sparing organizations the effort of creating custom questionnaires for each assessment.

Many business leaders have become adept at using the SIG Questionnaire, but this year, it has been updated in ways that every organization should know.

The updates found in SIG 2025 reflect a shift toward stricter regulatory compliance and third-party risk governance. 

Organizations that adapt to these changes early will become more resilient, secure, and compliant in an increasingly complex vendor landscape.

The Role of SIG in Third-Party Risk Management

Tailor-making risk profiles for every service provider and vendor on the roster would consume more time and resources than most organizations have. This is why the SIG Questionnaire was developed. Its advantages include:

  • Standardization via a consistent framework for evaluating vendors, making risk assessments comprehensive and comparable.
  • Better efficiency by reducing the workload for both organizations and vendors by eliminating redundancies and streamlining the risk assessment process.
  • Comprehensive analysis, addressing cybersecurity, data privacy, operational resilience, regulatory compliance, and business continuity.
  • Alignment with standards and regulations including ISO 27001, NIST, GDPR, HIPAA, and SOC 2, which simplifies complex compliance requirements.

Before onboarding a new vendor, organizations send the SIG questionnaire to them to get a sense of their security posture. Vendors and service providers also enjoy the benefits of standardization, as they can complete the questionnaire once and share it with multiple clients, saving time and effort.

Risk-management teams then analyze the responses to find gaps and determine whether additional controls or audits will be needed before onboarding the provider.

While the system works well, it also changes over time. This year will bring important updates to the SIG Questionnaire and understanding these is crucial in making third-party risk-management programs as effective as they can be.

Understanding the Changes

The 2025 SIG update includes new questions, expanded content mappings, and enhanced regulatory alignment. While no new risk domains have been added, there are other significant changes, including:

  • Five new questions on response requirements and outsourced incident reporting.
  • Four new questions assessing contingency planning, data governance, and resilience strategies.
  • Three new questions that address evolving threats.

Users can also expect improved functionality and expanded compliance mapping. The latter deserves a closer look.

Mapping Compliance

The 2025 SIG directly maps to 31 reference documents, including new standards and regulations. This streamlines regulatory compliance and saves time.  

SIG 2025 incorporates three key regulatory frameworks—and new controls for risk teams—to align with global cybersecurity and risk management trends:

  • E.U. Digital Operational Resilience Act (DORA), which strengthens the financial sector’s ability to withstand cyber threats and operational disruptions. SIG 2025 includes control J.11, which evaluates whether an organization has outsourced its incident reporting responsibilities, aligning with DORA Article 18.
  • E.U. Network and Information Security Directive 2 (NIS2), which mandates stricter security measures for supply chain security, requiring organizations to assess third-party risk exposure. SIG 2025 controls C.11 and C.12 were added to address Article 29, emphasizing information-sharing about cyber threats, vulnerabilities, and security incidents.
  • NIST Cybersecurity Framework (CSF) 2.0, which strengthens governance functions and aligns cybersecurity practices with enterprise risk management. SIG 2025 now incorporates NIST CSF principles to improve third-party cybersecurity governance and risk visibility.

As organizations surely realize, the updates to the SIG Questionnaire are substantial. So, how should risk managers best prepare for them?

Ready for the Future

To effectively integrate the important updates to SIG—which will save organizations time and reduce the risk of falling out of compliance—risk teams should get familiar with the new functionality and explore the enhanced features of the SIG Manager to streamline the assessment process. They should also update assessment templates to incorporate the latest regulatory mappings and use custom scoping to ensure assessments are comprehensive and compliant.

Risk teams should also attend webinars and other training sessions offered by Shared Assessments to stay current on the latest changes and best practices.

By proactively adapting to these enhancements, risk teams will strengthen their third-party risk management programs and maintain compliance with evolving standards.

The gradual evolution of SIG is a reflection of the world that businesses find themselves in today. Geopolitics continues to affect commerce and supply chains. Regulations safeguarding privacy and security continue to proliferate.

At the same time, organizations find they need to do business with an ever-growing roster of vendors and service providers, all of whom bring their own unique risks to the table.

Broader vendor risk management covering multiple risk domains is crucial as security and business continuity challenges continue to multiply. Risk teams need every possible tool at their disposal – and the updated SIG Questionnaire is among the most valuable.


A recent survey revealed that nearly three-quarters of business leaders plan to implement generative AI within the next 12 months. However, almost 80 percent were not confident in their ability to regulate access and governance of these AI applications, citing concerns around data quality, data security, and data governance.  Unlike traditional systems that rely on fixed data sets and a standard query-response model, generative AI enables direct, natural language engagement, causing a shift in how users interact with technology and how data is accessed and processed.

This new data usage model marks a significant departure from previous applications, which tightly controlled and curated the use of structured and unstructured data. As such, our approach to data governance must evolve to prioritize data protection measures that ensure the confidentiality, integrity, and availability of information—principles that have long been foundational in data security—regardless of where that data resides. As we navigate this new landscape, it’s essential to rethink our strategies and frameworks to address the challenges posed by generative AI.

New Strategies for Data Governance

Data governance is essential because it dictates how data is accessed and used in AI applications and involves safeguarding the confidentiality, integrity, and availability of data, no matter where it resides. According to ePlus’ survey, business leaders are most concerned about data quality (61%), security (54.5%), and governance (52%), with data often siloed across various legacy systems. That’s why a robust protection program should prioritize data classification, identification, encryption, tokenization, real-time monitoring, and the management of mission-critical data sets. AI initiatives must break down these silos and modernize legacy data platforms to ensure proper data flow and integration.

It’s also essential to maintain visibility and control over data flows, access, and associated risks throughout the data lifecycle. This requires a clear understanding of where data is located, who has access to it, and ensuring compliance with relevant regulations.

Building a Strong Security Culture

Driving a strong culture of security within organizations is vital to a successful and holistic AI integration plan. While technology serves as the enforcement and execution point of a robust security program, comprehensive training for all employees—ranging from IT professionals and application developers to end-users—is equally crucial. Those engaging with generative AI agents and applications need to be well-informed about acceptable use and data protection practices to strengthen the organization’s overall security posture. 

Security professionals must prioritize compliance and effectiveness to drive successful AI initiatives. It is key to align data governance programs with regulatory standards and assess their effectiveness concerning the data used by AI applications to achieve positive outcomes. Most importantly, aligning data strategy with business objectives allows organizations to maximize their AI investments, leading to cost savings, improved resource efficiency, and better experiences for employees, customers, partners, and stakeholders.

Developing a Comprehensive Data Management Strategy

Successful AI implementation requires a comprehensive data management strategy. This includes modernizing data platforms to accommodate scalable processing and performance requirements, and transitioning from isolated data repositories to a unified data platform so that security and data policies can be enforced effectively. Conducting data strategy assessments and reviewing data governance controls helps organizations understand their current data landscape and align data management practices with their AI goals.

Finally, integrating services across AI applications involves bringing together the right teams to build, support, and secure AI infrastructure. Managing this infrastructure and providing feedback loops for continuous improvement ensures optimized security controls, financial management, and a strong governance program. 

Organizations that prioritize a holistic, data-led AI adoption strategy will move seamlessly from AI curious to AI ready, and ultimately to AI mature, positioning them to succeed in today’s hyper-competitive AI landscape.


Agentic AI is becoming a hot topic in the security community. This emerging technology has already taken other industries by storm, such as customer service, healthcare, and financial services. Many security teams are intrigued by the concept of AI-powered agents that can learn, adapt, make decisions, and take action. Agentic AI can be an absolute game changer for lean, resource-strapped security teams and mid-market organizations to combat the onslaught of never-ending cyberattacks.

Defining Agentic AI

Don’t confuse the “agent” in Agentic AI with legacy endpoint agents, which are software components installed on connected devices to collect telemetry, enforce security policies, and enable remote administration. Agentic AI is not the same. Instead of being a passive collector of data or an execution mechanism for predefined rules, Agentic AI has the ability to adapt and make decisions in real time.

Self-guided decision-making is what sets Agentic AI apart. Unlike traditional IT agents that must wait for commands to take the next step, Agentic AI, in the context of a security environment, autonomously detects, investigates, and mitigates threats without human intervention. It also has context-aware adaptability: agents don’t just follow narrow scripts or pre-programmed logic. Instead, they learn from their environments, attack patterns, and past responses, constantly refining their actions through feedback loops that drive continuous improvement. And while traditional automation handles repetitive tasks, Agentic AI can chain multiple security actions together, thinking strategically about the broader security picture and reaching goals faster than manual procedures allow.

In short, Agentic AI functions like a security analyst, only faster and without burnout.

Building a Better SOC

With Agentic AI, transforming a Security Operations Center (SOC) into a more autonomous model is more achievable. Transitioning to an autonomous SOC model has many benefits for an organization’s overall security posture. An Autonomous SOC utilizes Agentic AI, generative AI, machine learning, and workflow automation to carry out security operations tasks with minimal human involvement.

Here are four ways Agentic AI helps lean security teams create a supercharged SOC that can defend against threats:

1. Automated Threat Detection and Response: Unlike SIEMs and other automated security systems that rely on rule-based detection, Agentic AI ingests alerts from a wide variety of sources across the network, including cloud, network, endpoint, and identity systems. AI-powered agents can automatically analyze the data from all of these ingestion points, identify abnormal behavior patterns, and surface potential threats quickly via machine learning. And Agentic AI doesn’t just detect—it acts, correlating related events pulled from these various sources with the rich context that human analysts need to neutralize and contain threats.

2. Automated Decision-Making: Instead of expecting security analysts to manually triage alerts, Agentic AI can prioritize incidents. It can also investigate anomalies and escalate threats intelligently for the analyst, lightening the workload and allowing them to work on more critical threats. Think of it as having a virtual Tier 1 security analyst who handles the heavy lifting. For lean security teams, this is paramount.

3. Dynamic Playbooks: Agentic AI dynamically executes multi-step response actions, such as blocking malicious traffic, isolating compromised endpoints, and initiating forensic data collection, based on real-time risk assessment. There is no waiting for analysts to hit “approve” on every alert. (A minimal sketch of this pattern follows this list.)

4. Feedback Loops and Continuous Learning: Unlike static security tools, Agentic AI is designed to improve over time, learning from attack attempts, remediation steps, and analyst feedback to fine-tune detection and response mechanisms.
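As referenced in point 3 above, here is a minimal sketch of a dynamic playbook: the chain of response actions is assembled at run time from a risk score rather than from a fixed script. The action functions are hypothetical stand-ins for SOAR or EDR integrations.

```python
def block_traffic(ip):
    # Stand-in for a firewall or secure web gateway API call.
    print(f"blocking traffic from {ip}")

def isolate_endpoint(host):
    # Stand-in for an EDR quarantine action.
    print(f"isolating endpoint {host}")

def collect_forensics(host):
    # Stand-in for triggering a forensic triage package.
    print(f"collecting forensic data from {host}")

def run_playbook(incident):
    risk = incident["risk_score"]  # produced upstream by the detection layer
    steps = [lambda: block_traffic(incident["src_ip"])]
    if risk >= 0.5:
        steps.append(lambda: collect_forensics(incident["host"]))
    if risk >= 0.7:
        steps.append(lambda: isolate_endpoint(incident["host"]))
    for step in steps:
        step()  # no analyst "approve" click required for each action

run_playbook({"src_ip": "198.51.100.9", "host": "wks-042", "risk_score": 0.8})
```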

Leveling the Playing Field

SentinelOne introduced a maturity model for the Autonomous SOC toward the end of 2024. This programmatic concept, powered and influenced largely by AI, assists organizations in achieving the scalability and autonomy of their security operations.

However, many midmarket companies may find the pursuit of an Autonomous SOC program to be unattainable. While this model is a valuable resource, it is more easily achievable for larger, enterprise-sized organizations. These organizations typically have the benefit of larger budgets, more resources, and 24/7 security staff. Midmarket companies often lack the funding, infrastructure, and personnel of their enterprise-sized counterparts.

This is why Agentic AI changes the game for smaller, strapped security teams seeking more automation in their security operations. Agentic AI helps bridge a necessary gap in detection and response by automating manual efforts, acting as a helpful companion to the human security analysts worried about burning out.

For midmarket enterprises with smaller security teams, Agentic AI is the ingredient that powers an automated SOC that runs itself, saving them the overhead of hiring dozens of analysts.

Here are the key benefits of Agentic AI for lean security teams:

● Faster Detection and Response: AI-powered agents can significantly reduce the time it takes to identify, detect, and respond to attacks in real time by replacing manual correlation with automated triage, saving small teams thousands of hours a year.

● Less Burnout for Security Analysts: Small security teams get overwhelmed with security alerts, spending hours sifting through false positives, which leads to burnout. Agentic AI can eliminate much of this unnecessary alert noise, helping teams focus on what matters most without burning through their bandwidth.

● Extracting More Value From Existing Tools: Most Agentic AI offerings support open integration and interoperability with your security stack, adding tremendous firepower and, ultimately, ROI to your existing technology investments.

● Levels the Playing Field Against Cybercriminals: Mid-market organizations no longer have to play catch-up with their enterprise peers, as Agentic AI unlocks enterprise-grade security capabilities at scale without the hefty price tag.

Autonomy is the Goal

As cyber threats become more sophisticated, mid-market enterprises can’t afford to rely on traditional security models that require massive headcounts and budgets. They need to work smarter and faster. AI enables them to do just that.

With Agentic AI, the dream of an Autonomous SOC is now a reality for organizations of all sizes. Lean security teams can do more with less, stay ahead of threats, and defend with confidence.

For mid-market security leaders, the future isn’t just automation—it’s autonomy. Agentic AI is here to make it happen.

About the author

Subo Guha is Senior Vice President of Product Management at Stellar Cyber, where he spearheads the development of their award-winning, AI-driven Open XDR solutions. With more than 25 years of experience, Subo has held senior leadership roles at industry-leading companies like SolarWinds, Dell, N-able, and CA Technologies.


AI is rapidly transforming digital payments, revolutionizing money movement, and enhancing fraud detection. However, cybercriminals are using the same technology to launch deepfake scams, synthetic identities, and adaptive fraud techniques that evade traditional defenses. To outpace these evolving threats, financial institutions and the overall payments ecosystem must move beyond reactive security and adopt AI-driven strategies that anticipate and prevent fraud.

A recent McKinsey Global Survey on AI found that while 53% of organizations acknowledge cybersecurity as a major AI-related risk, only 38% are actively working to mitigate it. This gap in preparedness highlights the urgency for financial institutions to shift from traditional security approaches to AI-powered defenses. The following five approaches provide a roadmap for securing digital payments against the evolving landscape of cyber threats.

1. Transitioning to Predictive Security Measures

AI detects fraud by analyzing historical and real-time transaction data to establish normal user behavior. Any deviation, such as an unusual spending location or a transaction spike, triggers an alert. Cybercriminals now use machine learning to mimic legitimate transaction patterns, making fraudulent activity harder to detect; AI counters this by continuously learning and adapting, flagging even the most subtle irregularities that indicate fraudulent behavior. Unlike static rule-based models, AI-based detection evolves in real time, reducing false positives and incorporating new fraud tactics into its defenses. By processing millions of transactions simultaneously, AI-driven fraud detection enables financial institutions to intervene before fraud escalates, creating a dynamic, self-improving security layer that surpasses traditional methods.
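As a toy illustration of the baseline idea, assuming nothing about any vendor’s models: score each new transaction against the user’s own history and flag sharp deviations. Production systems use far richer features (location, device, merchant, velocity), but the deviation-from-baseline logic is the same.

```python
import statistics

def is_anomalous(amount, history, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from this
    user's established baseline (a simple z-score test)."""
    if len(history) < 10:
        return False  # not enough data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return abs(amount - mean) / stdev > threshold

history = [42.0, 38.5, 51.0, 47.2, 40.0, 44.9, 39.3, 49.1, 43.8, 45.5]
print(is_anomalous(46.0, history))   # False: within the normal range
print(is_anomalous(900.0, history))  # True: sharp deviation triggers review
```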

2. Developing Adaptive Threat Protection Protocols

Financial institutions face a constant challenge: cyber threats are growing more sophisticated, and static security frameworks can’t keep up. Attackers refine their tactics, from automated card testing to targeted phishing scams, exploiting even the smallest security gaps. Organizations therefore need security measures that evolve in real time rather than react after the damage is done.

AI-driven threat intelligence helps detect and stop fraud before it escalates. But AI alone isn’t a silver bullet. The most effective security strategies blend AI’s speed with human expertise, ensuring anomalies are detected and understood in context. While AI processes vast data streams instantly, human oversight provides judgment for high-stakes decisions.

Resilience requires preemptive defense testing. Cyberattack simulations and stress tests uncover vulnerabilities before exploitation. In a rapidly changing digital landscape, adaptation isn’t just an advantage; it determines who stays secure and who falls behind. 

3. Strengthening Compliance with Evolving Regulations

Regulatory landscapes in digital payments are constantly evolving, with diverse data protection laws, AML directives, and cybersecurity mandates across jurisdictions. Adherence is complex, and non-compliance risks hefty fines and reputational damage.

AI streamlines compliance by automating regulatory monitoring, detecting violations in real time, and simplifying reporting. Predictive analytics help institutions anticipate regulatory shifts, ensuring proactive adaptation. A Deloitte report found that 83% of financial institutions are exploring GenAI for fraud detection and compliance, highlighting its role in enhancing regulatory adherence and mitigating financial crime.

AI also strengthens fraud detection by identifying money laundering and suspicious transaction patterns, enabling financial institutions to navigate compliance challenges while maintaining strong security.

4. Enhancing Identity Verification with AI-Powered Biometrics

The rise of synthetic identities and deepfake scams has made traditional identity verification methods ineffective. Passwords and one-time passcodes (OTP), once the standard for authentication, are now easily bypassed by AI-driven attacks. To combat this, financial institutions must adopt advanced verification techniques beyond static credentials.

AI-powered biometric and behavioral authentication offers a more secure alternative by analyzing unique user traits that are difficult to forge. These systems assess factors such as typing patterns, navigation habits, and facial recognition data to verify identities with high accuracy. By continuously learning from user behavior, AI can detect even the faintest fraud indicators, making it significantly harder for imposters to impersonate legitimate users.
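To make “typing patterns” concrete, here is a toy sketch: compare a session’s inter-keystroke timing vector against an enrolled profile using cosine similarity, and step up authentication when the match is weak. Real behavioral-biometrics systems use learned models over many more signals; the numbers and threshold below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Enrolled profile: average inter-keystroke intervals (ms) for a known user.
enrolled = [112, 95, 140, 88, 130, 105]
session  = [110, 98, 135, 90, 128, 108]  # intervals observed this session

score = cosine_similarity(enrolled, session)
# Invented threshold: weak matches trigger step-up authentication.
print("match" if score > 0.99 else "step-up authentication required")
```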

However, as institutions deploy these technologies, they must strike a balance between security and user privacy. AI-driven identity verification must comply with strict data protection regulations and ensure transparency in collecting and using biometric data. Trust is essential in financial transactions, and institutions must prioritize clear communication and robust encryption measures to maintain user confidence while enhancing security.

5. Driving Cross-Industry Collaboration for Unified Security Standards

No single entity can combat AI-driven fraud alone. Cybercriminals exploit the gaps between financial institutions, regulators, and technology providers, making cross-industry collaboration essential.

Successful partnerships between financial institutions and AI security firms have shown that shared threat intelligence accelerates fraud detection and enables unified countermeasures. Regulators also play a key role in working with payment providers to establish security standards and close exploitable loopholes.

Initiatives such as fraud intelligence-sharing networks and public-private collaborations have already proven effective in strengthening digital payment security. The more stakeholders work together, the more resilient the ecosystem becomes. Cross-industry collaboration is not just beneficial—it is critical to ensuring the long-term security of digital payments.

Beyond cyber threats – AI, foresight, and collaboration in securing digital payments

The future of digital payments security will not be defined by the sophistication of cyber threats, but by how well institutions anticipate and counter them. AI has already changed the game, both for attackers and defenders. Financial institutions that harness AI’s predictive capabilities, build adaptive security frameworks, and integrate biometric authentication will gain an edge. But technology alone isn’t enough. To secure digital payments, we need clear foresight, strong regulations, and teamwork across the industry. The leaders who will shape the future are those ready to innovate and adapt.


AI-driven cyberattacks targeted more than 87% of organizations in 2024, according to a study conducted by SoSafe, a German cybersecurity platform that helps enhance employee awareness of cybersecurity threats.

The SoSafe 2025 Cybercrime Trends report highlights that 91% of security professionals anticipate a significant increase in AI-powered cyberattacks over the next three years, beginning in July 2025.

For those unfamiliar, artificial intelligence can obscure cyberattacks and make them nearly impossible for law enforcement agencies to track. While many organizations across the U.S. and other parts of the world are aware of this emerging threat, most are unprepared due to a lack of specialized talent and insufficient budgets to proactively defend against such advanced attacks.

Interestingly, these AI-driven attacks are often propagated through channels such as email, SMS, and social media, making them harder for experts to prevent or mitigate effectively.

In a related development, China has urged the Canadian government to stop accusing it of conducting AI-based attacks and spreading disinformation among the Canadian population. China argues that these accusations, which it says lack concrete evidence, damage bilateral relations and could create unnecessary chaos in the international political sphere.

Misinformation, or “fake news,” often leads to political unrest and can significantly impact the economic stability of nations or even entire regions. As such, China has emphasized that its focus is on promoting global AI governance and opposes the use of AI technology for launching cyberattacks.


Threat hunters have shed light on a "sophisticated and evolving malware toolkit" called Ragnar Loader that's used by various cybercrime and ransomware groups like Ragnar Locker (aka Monstrous Mantis), FIN7, FIN8, and Ruthless Mantis (ex-REvil). "Ragnar Loader plays a key role in keeping access to compromised systems, helping attackers stay in networks for long-term operations," Swiss cybersecurity company PRODAFT said.