Business executives never think they’ll be victims of a cyberattack until it happens to them—and by that point, it’s already too late. Over the course of a few weeks, I had seen three companies fall victim to cybercrimes executed through social engineering—and I was forced to confront the gravity of an impending crisis facing CEOs. One thought consumed me: if a large-scale company could be breached, what did it mean for my own private equity firm, which transfers millions of dollars to investors and tenants every month?

After days of deliberation, I developed a potential solution. With over 25 years of experience in IT and cybersecurity, multiple patents to my name, and a track record of building and selling a Managed Service Provider (MSP) that reached $65 million in annual revenue, I understood the evolving nature of cyberthreats. The key, I realized, was fostering shared awareness across all corporate communications, implementing a system that visually signals threats to end-users to help prevent deepfake-driven social engineering attacks.

I embarked on a journey to draft the patents, develop the software, and build the company. What I wasn’t prepared for was the sheer volume of attacks occurring every day across Corporate America.

In the past few months, I’ve spoken with hundreds of major companies. CTOs and CISOs have quietly disclosed their breaches to me. The patterns are both clear and alarming: social engineering is the predominant attack vector, and AI has transformed these attacks from obvious scams to near-perfect impersonations.

A few years ago, a Dubai company director was duped by a cloned voice into initiating $35 million in bank transfers. Another company last year acknowledged that a series of AI-generated video calls, mimicking their CFO, nearly resulted in $25 million in fraudulent transfers. These are not isolated incidents. They represent a fundamental shift in the cybersecurity landscape that most organizations—and certainly most individuals—have yet to comprehend.

Traditional cybersecurity has focused on protecting systems: firewalls, intrusion detection, and endpoint protection. These tactics remain necessary but are increasingly insufficient. The most sophisticated attackers don’t bother trying to break through your technical defenses. Why would they when they can simply call your finance department, sound exactly like your CEO, and request an urgent wire transfer?

The rise of generative AI has exponentially increased both the scale and sophistication of these attacks. Previously, social engineering required skilled human operators who could stage a convincing performance on calls or craft persuasive emails. This limited the number of high-quality attacks possible. Now, AI can generate thousands of personalized, contextually aware communications—emails, voice calls, even video—that appear completely legitimate.

This transformation has happened with breathtaking speed. A Midwest company shared that their phishing simulation tests from just 18 months ago now seem laughably obvious compared to the real attacks they’re seeing today. The awkward phrasing and grammatical errors that once served as red flags have disappeared, replaced by perfectly crafted messages that reflect the exact communication style of the impersonated executive.

What makes this crisis particularly insidious is its invisibility. Unlike a ransomware attack that announces itself with encrypted files and demand notes, successful social engineering often leaves no obvious trace until the money is gone. And companies, fearing reputational damage, rarely disclose these incidents publicly unless legally required—embarrassed to admit that they are quite literally being “robbed blind.”

The financial implications are staggering. The FBI’s Internet Crime Complaint Center reported that Business Email Compromise (BEC) attacks—just one type of social engineering—resulted in billions of dollars in reported losses. But industry experts I’ve spoken with believe the true cost is far higher, potentially 5 to 10 times greater when factoring in unreported incidents. The scale of this threat is not just alarming; it’s a wake-up call for businesses to rethink their cybersecurity defenses.

So, what can be done? Technical solutions are part of the answer. The system we’ve been developing uses AI to detect AI, analyzing communication patterns across channels to identify anomalies and provide real-time warning indicators.

Regulators also have a role to play. Compliance auditors and cyber insurance providers can guide companies to employ technology that provides shared awareness and non-repudiation aggregators. Also, current disclosure requirements often fail to capture the true nature and extent of social engineering attacks. More granular reporting mandates would help illuminate the scale of the problem and drive appropriate responses.

As AI continues to advance, the line between authentic and synthetic communications will only blur further. The attackers have weaponized trust itself, exploiting our fundamental human tendency to believe what we see and hear from seemingly familiar sources.

This crisis is real, growing, and largely invisible to the public. It’s time we recognized that in the new cybersecurity landscape, the weakest link isn’t your firewall—it’s human psychology. And strengthening that link will require tools, training, and vigilance beyond anything we’ve previously deployed.

 

 

The post They’re Not Hacking Your Systems, They’re Hacking Your People: The AI-Powered Crisis We’re Ignoring appeared first on Cybersecurity Insiders.

AI applications are embedded in our phones and becoming a vital part of life. To accelerate mainstream adoption, technology companies are inundating us with TV commercials to show the magic of AI. “Summarize a research report.” “Make this email sound professional.” 

Many people don’t realize that as they watch these commercials and experiment with the technology, most of these capabilities are based on language, particularly large language models (LLMs). On the consumer side, breakthroughs in natural language processing and improved search engines are great. Andrej Karpathy, OpenAI co-founder, referred to this when he said, “The hottest new programming language is English.” But this is not necessarily where the real power of AI is for enterprises.

Although nearly half (49%) of CEOs use AI for content generation, communication, and information synthesis, implementation more broadly across enterprises is flat or cooling. Enthusiasm for AI to enhance productivity, reduce downtime, and increase ROI is there, but the full potential is untapped due to cost and security concerns.

Initial AI applications have relied heavily on machine learning (ML), a subset of AI that has evolved into transformer architecture or look-ahead architecture. ML models basically predict what the next word, the next sentence, the next paragraph will be, and so on. However, training a model costs millions of dollars before it adds value and must be done responsibly. Using flawed or biased data can lead to inaccurate results. You must also lasso the data and the systems it connects to so that sensitive data isn’t exposed. 

This is where the newest innovation in AI, distinct from ML, is coming into play to enable additional enterprise use cases. With the right boundaries, new AI can provide game-changing value, including assistance in building cyber resilience.  

Delivering Cyber Resilience Insights

According to Gartner’s latest Hype Cycle for I&O Automation, by 2026, 50% of enterprises will use AI functions to automate Day 2 network operations, compared with fewer than 10% in 2023. 

The new generation of AI  will help us get there. 

AI is now moving from training to inference, helping you quickly make sense of or create a plan from the information you have. This is made possible based on improvements to how AI understands massive amounts of semi-structured data. New AI can figure out the signal from the noise, a critical step in framing the cyber resilience problem. 

The power of AI as a programming language combined with its ability to ingest semi-structured data opens up a new world of network operations use cases. AI becomes an intelligent helpline, using the criteria you feed it to provide guidance to troubleshoot, remediate, or resolve a network security or availability problem. You get a resolution in hours or days – not the weeks or months it would have taken to do it manually.  

Enabling Better Network Automation

In the same study, Gartner also finds that by 2026, 30% of enterprises will automate more than half of their network activities – tripling their automation efforts from mid-2023. 

AI is not the same as automation; instead, it enhances automation by significantly speeding up iteration, learning, and problem-solving processes. New AI allows you to understand the entire scope of a problem before you automate and then automate strategically. Instead of learning on the job – when you have a cyber resilience challenge, and the clock is ticking – you improve your chances of getting it right the first time. As the effectiveness of network automation increases, so too will its adoption.

Let’s look at the challenge of vulnerability management as an example.

Imagine you are a managed service provider (MSP). A flaw has been discovered in an open-source library that’s typically included in most of the popular switches made by multiple vendors. You, your customers, the vendors, and the bad guys all hear about this vulnerability at roughly the same time. Your job is to figure out how to remediate faster than the bad guys who will accelerate attacks because they know the door will close. 

Today, you have to manually figure out what to do across a complex and distributed network environment consisting of different customers, switches, and versions of switches that may or may not be running a version of the library with this vulnerability. 

You write one automation script after another to remediate each scenario. But you don’t see the commonalities until you’re well into the project. Eventually, you realize you could have written a handful of scripts to cover most of your customers, but by then, it’s too late. 

New AI allows you to streamline the project by formulating an AI-based lookup. You can pull in customer configuration information automatically and then use AI to categorize customers based on those criteria to see how cyber resilient they are. AI can also provide recommendations on how many unique automation scripts you will need to write so you can focus your resources and build resilience faster.
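To make the categorization step concrete, here is a minimal Python sketch of the idea. It is illustrative only: the inventory records, field names, and vulnerable-version list are made up, and plain grouping logic stands in for the AI-based lookup that would normalize raw configuration data in practice.

```python
from collections import defaultdict

# Hypothetical inventory pulled automatically from each customer's switch configs.
# In practice, this is where an AI-based lookup would normalize raw config text.
inventory = [
    {"customer": "acme",    "vendor": "VendorA", "os": "12.4", "libssl": "1.1.1k"},
    {"customer": "acme",    "vendor": "VendorB", "os": "9.2",  "libssl": "3.0.2"},
    {"customer": "globex",  "vendor": "VendorA", "os": "12.4", "libssl": "1.1.1k"},
    {"customer": "initech", "vendor": "VendorA", "os": "15.1", "libssl": "3.0.7"},
]

VULNERABLE = {"1.1.1k", "3.0.2"}  # versions of the flawed open-source library

def is_vulnerable(device: dict) -> bool:
    return device["libssl"] in VULNERABLE

# Group vulnerable devices by the attributes that determine the remediation steps.
groups = defaultdict(list)
for device in inventory:
    if is_vulnerable(device):
        key = (device["vendor"], device["os"], device["libssl"])
        groups[key].append(device["customer"])

print(f"{len(groups)} unique remediation scripts cover "
      f"{sum(len(v) for v in groups.values())} vulnerable devices:")
for (vendor, os_ver, lib), customers in groups.items():
    print(f"  {vendor} {os_ver} / libssl {lib}: {sorted(set(customers))}")
```

Seeing the groups up front is what lets you write a handful of scripts instead of one per customer.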

The Magic of AI: Enabling Cyber Resilience

AI is never certain, but it can give you high-probability guidance, and that’s what business leaders look for to help them manage their enterprises strategically. 

You can get to cyber resilience faster when AI can provide insights that help you slash the amount of prep work and time spent writing automations to solve network security and availability problems. For business leaders, that’s more than magic. That’s a compelling use case for AI.

The post AI and Automation: Key Pillars for Building Cyber Resilience appeared first on Cybersecurity Insiders.

In November 2024, U.S. authorities charged multiple individuals for conducting cyberattacks on telecom and financial firms. They allegedly used phishing to steal credentials, breach networks, and exfiltrate data, leading to major security and financial losses.

This incident highlights the escalating sophistication of cyber threats and the critical need for advanced defense mechanisms. Traditional security measures are inadequate, requiring organizations to adopt AI-driven cybersecurity strategies. Those who don’t get on board will be left behind due to the fast growth in both technology and threats. 

AI’s ability to process vast data in real-time helps counter evolving threats. By identifying anomalies and potential vulnerabilities proactively, AI empowers organizations to neutralize risks before they escalate into significant breaches.

Modern Cybersecurity Challenges

The challenge isn’t just the growing number of threats; it’s that these threats are becoming smarter and more difficult to detect. Cybercriminals are also adopting AI at an accelerated rate to refine their tactics, from making phishing emails more convincing to automating credential theft. Even more concerning, they’re using deepfake technology to perpetrate fraud, blurring the lines between real and manipulated data.

To counter these evolving threats, AI-driven cybersecurity is emerging as the next line of defense. Unlike traditional rule-based systems, these AI-powered solutions use machine learning to sift through massive data sets, identifying patterns and behaviors that humans might miss. What this means for businesses is faster, more accurate threat detection and a reduction in the noise from false positives that often swamp security teams.

More than just a tool for detection, AI is helping organizations stay one step ahead of the attackers. It’s automating routine tasks, allowing security professionals to focus their efforts on addressing real threats. With AI systems continuously learning and evolving, they adapt to new threats, making them increasingly reliable as organizations contend with the growing volume and complexity of cyberattacks. 

AI and Digital Transformation

The real value of AI in cybersecurity lies in its ability to reduce false positives. Traditional security systems often generate alerts for non-issues, creating noise that detracts from the real threats. With AI, organizations can filter out these distractions and focus only on genuine risks. AI’s ability to automate routine security tasks—like patch management and vulnerability scanning—frees up valuable human resources. Security teams can then focus on more strategic activities, like threat mitigation and risk analysis, which drive greater value.

The Playbook 

To harness the power of AI in cybersecurity, business leaders should consider the following strategic steps:

  1. Deploy Intelligent Threat Detection – Invest in AI-driven security platforms that provide real-time monitoring, anomaly detection, and automated response capabilities.
  2. Build a Strong AI Governance Framework – Develop clear policies for AI adoption to ensure responsible use, data protection, and compliance with evolving regulations.
  3. Upgrade Threat Intelligence Capabilities – Leverage AI to analyze vast amounts of threat intelligence data, identifying emerging risks before they escalate.
  4. Seamlessly Integrate AI into Security Operations – Ensure AI solutions work within existing cybersecurity architectures for a unified, resilient defense system.
  5. Stay Ahead with Continuous Training – Regularly update AI models, train security teams on AI-driven insights, and conduct red-team exercises to test AI effectiveness.
  6. Be Proactive with AI-Enhanced Incident Response – Implement AI-powered detection, investigation, and mitigation protocols to reduce attack impact and response times.

As cybercriminals refine their AI-driven attacks, businesses must adopt AI-powered defenses to stay ahead. Investing in AI tools within digital transformation efforts strengthens cybersecurity while preserving operational agility. AI is no longer optional; it is essential. By leveraging AI for predictive threat detection and mitigation, companies protect their digital assets and ensure long-term resilience in an evolving threat landscape.

 

The post AI’s Edge in Cybersecurity: How It’s Detecting Threats Before They Happen appeared first on Cybersecurity Insiders.

Most organizations today struggle with the most basic of things. They don’t know how many endpoints they have on their network. They don’t know if they have antivirus installed. If they get a simple alert, they don’t know its cause or where to begin investigating it.

The vast majority of companies’ struggles with the very basics are due to limited talent availability. For example, a company of 500 employees cannot afford to put 10 people on one particular security product. But AI agents for cybersecurity can act like virtual employees who augment humans.

Before we dive further into this bold claim, it’s important to understand that AI agents are different from the GenAI and ChatGPT tools we’ve been hearing about for a while.

The whole large language model (LLM) phenomenon started with ChatGPT. When people talk about AI agents, or when they think about using LLMs, they invariably think about the ChatGPT use case. ChatGPT is a very specific use case where someone is talking to what is essentially a chatbot. The promise of AI agents is software that automatically does things for you – software that is powered by LLMs and is always trying to figure out what needs to be done. Even without being told to do something, it does it. That is very different from early chatbot use cases, where users take the initiative and ask the questions.

Let me explain how AI agents work. As an example, a Security Operations Center (SOC) analyst receives a Splunk alert about an employee logging in from a new location where they have never been. If the analyst asks Google about the alert and the unfamiliar login location, Google will offer some information and suggestions that can serve as a guideline. But to properly triage the issue, the analyst would want all the locations from which that employee has logged in in the past, which may mean writing a query that pulls information from Active Directory or Okta. Once they correlate this data, they may decide that still more information is needed.

AI agents do something very similar, drawing on a whole variety of security knowledge inputs. They can reason that this kind of alert requires certain information, figure out how to get it, run a few queries against various security systems, and correlate everything into a report. This is just one example; in reality there are thousands of different types of alerts and hundreds of different security tools. While AI agents cannot do everything today, the idea is that there are simple tasks they can do reliably to reduce the workload of the SOC team.
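As a rough illustration of that triage flow, here is a minimal Python sketch. The `fetch_recent_logins` and `ask_llm` helpers are hypothetical stand-ins for real Okta/Active Directory queries and a model API call; the point is the enrich-first, then-reason loop, not a production agent.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    source_ip: str
    geo: str  # location resolved from the source IP

def fetch_recent_logins(user: str) -> list[dict]:
    """Hypothetical stand-in for an Okta / Active Directory sign-in query."""
    return [
        {"geo": "Chicago, US", "ip": "203.0.113.10"},
        {"geo": "Chicago, US", "ip": "203.0.113.11"},
    ]

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real agent would send this prompt to a model API."""
    return "ESCALATE: login geography has no precedent for this user."

def triage(alert: Alert) -> str:
    # Step 1: enrichment - gather the context a human analyst would pull manually.
    history = fetch_recent_logins(alert.user)
    known_geos = {login["geo"] for login in history}

    # Step 2: cheap deterministic check before involving the model at all.
    if alert.geo in known_geos:
        return "CLOSE: user has logged in from this location before."

    # Step 3: let the LLM reason over the correlated evidence and draft a verdict.
    prompt = (
        f"Alert: {alert.user} logged in from {alert.geo} ({alert.source_ip}).\n"
        f"Previously seen locations: {sorted(known_geos)}.\n"
        "Recommend CLOSE or ESCALATE with a one-line justification."
    )
    return ask_llm(prompt)

print(triage(Alert(user="jdoe", source_ip="198.51.100.7", geo="Lagos, NG")))
```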

In fact, AI agents are often more effective than humans, who can become the bottleneck in some processes. For example, if there’s an alert about a particular IP address, then information about that IP address is needed. Humans have to pull different kinds of information from internal and external sources; this takes time and effort, and they need to do it continuously. And collecting all that data doesn’t really help, because a SOC analyst wants to look only at the relevant information, not spend time determining what’s important and what’s not. This is one very simple use case where AI agents can deliver automatic enrichment with the right information based on the context, on what you are doing, and on the alert.

Organizations, however, need to understand the security of the AI agents and GenAI they are considering. AI agents can cause damage in a thousand ways; they are like DevOps engineers writing 100 lines of code every hour with no review process and no trial environment to test the code before it is deployed in production. A very frequently encountered problem with AI is hallucinations, and these can be difficult to detect because they are subtle and hidden. For example, one common AI agent use case is extracting indicators of compromise (IOCs) from unstructured data. Because of the way LLMs are trained, they respond very confidently, and even if the information does not exist they will give an answer. So the right approach is to take any answer from an LLM with a grain of salt and use it not as gospel but as a candidate toward resolution. You can then run your own deterministic logic to figure out whether that answer is correct or not. It is very important for organizations to look for solutions that can verify whether their LLM outputs are correct.
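Here is a minimal sketch of that “candidate, then verify” approach, using only the Python standard library. The sample LLM output is invented; the deterministic check simply keeps whatever parses as a real IP address and discards the rest.

```python
import ipaddress
import re

# Suppose this is raw text an LLM returned when asked to extract IOCs from a
# threat report. It includes a hallucinated, malformed address.
llm_output = """
Indicators found:
- 185.220.101.4
- 999.12.1.300        <- not a valid IPv4 address
- evil-domain[.]com
- 2001:db8::dead:beef
"""

IP_CANDIDATE = re.compile(r"\b[0-9a-fA-F:.\[\]]{7,}\b")

def verify_ip_candidates(text: str) -> list[str]:
    """Treat LLM output as candidates only; keep what parses as a real IP."""
    verified = []
    for token in IP_CANDIDATE.findall(text):
        cleaned = token.replace("[.]", ".")  # undo common defanging
        try:
            verified.append(str(ipaddress.ip_address(cleaned)))
        except ValueError:
            continue  # hallucinated or malformed indicator: discard
    return verified

print(verify_ip_candidates(llm_output))
# ['185.220.101.4', '2001:db8::dead:beef']
```

The same pattern applies to domains, hashes, or any other indicator type: the model proposes, and deterministic logic disposes.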

Regarding AI agents and cybersecurity, there are two axes of development today. First, we have a long way to go in making AI agents much more powerful and useful. There is no reason that in a couple of years you cannot triage 100 percent of your alerts with AI agents. There is no law of physics getting in the way; it’s only a matter of engineering. Of course, it will require lots of development, but it can be done. To be more effective, AI agents need more reasoning and more domain knowledge gained over time. The second axis of development is making AI agents more reliable. Today AI agents can extract IOCs from some cyber threat intelligence (CTI) sources, but using them as is proves ineffective because sometimes they will work and sometimes they won’t. Reliability, trust, and control are orthogonal to the inherent power of LLMs. As an analogy, consider that not all employees are equally competent. Some are very competent and powerful, while others are just starting their careers. But even with your most competent employees, you can’t always trust them. Some of them can be knowledgeable but unreliable – reliability and trust are orthogonal to competence. And that is the same with AI agents.

And how do we deal with unreliable people? We don’t throw them away, we put guard rails around them. If someone is very erratic, but when they do their work it’s very high quality, you don’t put them on critical projects. You give them lots of buffers. On the other hand, if someone is highly reliable but their work is just average or always needs review, you need to plan accordingly. LLMs are the same way, and the good thing is that it’s all software. So you can take its work and another AI agent can verify its work, and if it’s not good then you can throw it away. Organizations should have frameworks to evaluate the outputs of LLMs and make sure that they are used when useful, and you don’t use them where they can do damage.

However, the democratization of AI tools can lower the entry barrier for attackers, potentially leading to a surge in sophisticated attacks. This scenario underscores the urgency for defenders to automate their defenses aggressively to keep pace and stay ahead of evolving threats.

We have yet to see if AI agents will finally allow defenders to move ahead of the attackers, because adversaries are not sitting idle. They are automating attacks using AI today, and it will get much worse. Fundamentally, we should accelerate AI use for defense even faster than we are now. The question is, if AI continues to become very powerful, then who wins? I can see a deterministic path for defenders to win, because if intelligence is available on both sides, defenders will always have more context. For example, if you are trying to break into my network and there are 100 endpoints and you don’t know which endpoint is vulnerable, you will have to find out by doing a brute force attack. But as a defender I have that context into my network. So, all things being equal, I will always be one step ahead.

However, this future is contingent on continuous innovation, collaboration, and a strategic approach to integrating AI into security frameworks. Now is the time for organizations to get their strategies in line and defenders should work together and collaborate. There is not a moment to lose because AI will create a tsunami of automated attacks, and as a human if you are spending $100 responding to an attack that costs your attacker a penny, you will go bankrupt. As an industry we must automate our defenses, and AI agents provide a great start.

The post How AI Agents Keep Defenders Ahead of Attackers appeared first on Cybersecurity Insiders.

A recent survey revealed that nearly three-quarters of business leaders plan to implement generative AI within the next 12 months. However, almost 80 percent were not confident in their ability to regulate access and governance of these AI applications, citing concerns around data quality, data security, and data governance.  Unlike traditional systems that rely on fixed data sets and a standard query-response model, generative AI enables direct, natural language engagement, causing a shift in how users interact with technology and how data is accessed and processed.

This new data usage model marks a significant departure from previous applications, which tightly controlled and curated the use of structured and unstructured data. As such, our approach to data governance must evolve to prioritize data protection measures that ensure the confidentiality, integrity, and availability of information—principles that have long been foundational in data security—regardless of where that data resides. As we navigate this new landscape, it’s essential to rethink our strategies and frameworks to address the challenges posed by generative AI.

New Strategies for Data Governance

Data governance is essential because it dictates how data is accessed and used in AI applications and involves safeguarding the confidentiality, integrity, and availability of data, no matter where it resides. According to ePlus’ survey, business leaders are most concerned about data quality (61%), security (54.5%), and governance (52%), with data often siloed across various legacy systems. That’s why a robust protection program should prioritize data classification, identification, encryption, tokenization, real-time monitoring, and the management of mission-critical data sets. AI initiatives must break down these silos and modernize legacy data platforms to ensure proper data flow and integration.

It’s also essential to maintain visibility and control over data flows, access, and associated risks throughout the data lifecycle. This requires a clear understanding of where data is located, who has access to it, and ensuring compliance with relevant regulations.

Building a Strong Security Culture

Driving a strong culture of security within organizations is vital to a successful and holistic AI integration plan. While technology serves as the enforcement and execution point of a robust security program, comprehensive training for all employees—ranging from IT professionals and application developers to end-users—is equally crucial. Those engaging with generative AI agents and applications need to be well-informed about acceptable use and data protection practices to strengthen the organization’s overall security posture. 

Security professionals must prioritize compliance and effectiveness to drive successful AI initiatives. It is key to align data governance programs with regulatory standards and assess their effectiveness concerning the data used by AI applications to achieve positive outcomes. Most importantly, aligning data strategy with business objectives allows organizations to maximize their AI investments, leading to cost savings, improved resource efficiency, and better experiences for employees, customers, partners, and stakeholders.

Developing a Comprehensive Data Management Strategy

Successful AI implementation requires a comprehensive data management strategy. This includes modernizing data platforms to accommodate scalable processing and performance requirements, and transitioning from isolated data repositories to a unified data platform so that security and data policies can be enforced effectively. Conducting data strategy assessments and reviewing data governance controls helps organizations understand their current data landscape and align data management practices with their AI goals.

Finally, integrating services across AI applications involves bringing together the right teams to build, support, and secure AI infrastructure. Managing this infrastructure and providing feedback loops for continuous improvement ensures optimized security controls, financial management, and a strong governance program. 

Organizations that prioritize a holistic, data-led AI adoption strategy will seamlessly move from AI curious to AI ready, and ultimately to AI mature, putting them in an environment to succeed in today’s hyper-competitive AI landscape.

 

 

The post The Governance Model Required for Success in the Era of AI appeared first on Cybersecurity Insiders.

In episode 40 of the AI Fix, Graham meets a shape-shifting GOAT, a robot dog gets wet, Mark likes Claude 3.7 Sonnet, OpenAI releases its dullest model yet, Grok 3 needs to go home and have a lie down, and everyone loses their minds over two AI agents booking a hotel room using 90s-era modem dial-up sounds. Graham tells the incredible story of a woman whose life was saved after ChatGPT told her to go to the emergency room, stat! And Mark explains how just a little negativity made GPT-4o bad to the bone. All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.

In today’s digital age, cybersecurity is more critical than ever before. With the increasing sophistication of cyberattacks and the expanding volume of data that organizations must protect, the integration of Artificial Intelligence (AI) in cybersecurity has emerged as a powerful tool to combat these threats. However, like any technology, AI in cybersecurity comes with both advantages and challenges. This article will explore the pros and cons of using AI in the field of cybersecurity.

Pros of Using AI in Cybersecurity

1. Enhanced Threat Detection and Prevention – One of the most significant advantages of AI in cybersecurity is its ability to detect and prevent threats in real time. Traditional cybersecurity tools often rely on predefined signatures or rules to identify threats, which can be bypassed by new, sophisticated attack methods. AI, on the other hand, can use machine learning (ML) algorithms to analyze vast amounts of data and identify anomalous patterns indicative of cyber threats, such as malware, phishing attempts, or zero-day attacks (a short illustrative sketch follows this list). This allows organizations to detect threats that may otherwise go unnoticed and respond swiftly before they cause significant harm.

2. Automated Incident Response – AI can automate many aspects of incident response, reducing the time it takes to detect, analyze, and mitigate cyberattacks. AI-powered security systems can automatically isolate affected systems, block malicious traffic, and implement countermeasures without human intervention. This can dramatically reduce response times and minimize the damage caused by cyberattacks. In high-pressure situations, AI can act as a force multiplier, allowing security teams to focus on more complex tasks while automated systems handle the basics.

3. Improved Accuracy and Efficiency – Unlike human analysts, AI systems do not suffer from fatigue or bias. They can process enormous amounts of data quickly and accurately, identifying threats that might be overlooked by human eyes. By utilizing AI, organizations can significantly reduce the number of false positives, which are common in traditional cybersecurity systems, and ensure that resources are focused on legitimate threats. This efficiency leads to cost savings and a more robust cybersecurity posture.

4. Predictive Capabilities – AI’s ability to analyze historical data and recognize emerging trends allows it to predict potential threats before they materialize. By examining past cyberattacks and understanding how threats evolve over time, AI can provide valuable insights into where and how future attacks may occur. This predictive capability enables organizations to strengthen their defenses proactively, rather than reactively, and helps them stay ahead of cybercriminals.

5. Scalability – As the amount of data generated by organizations continues to grow exponentially, AI’s scalability becomes increasingly valuable. AI systems can adapt to handle larger volumes of data, more complex networks, and a growing number of endpoints. Unlike traditional systems that require constant manual updates and human intervention, AI can autonomously adjust its models and adapt to changing network environments, making it a highly scalable solution for cybersecurity.
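As a toy illustration of the anomaly-based detection referenced in item 1 above, the sketch below trains scikit-learn’s IsolationForest on made-up login telemetry and flags an outlier. Real deployments use far richer features, tuning, and validation; treat this as a sketch of the concept, not a detection system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up login telemetry: [hour_of_day, MB_downloaded, failed_logins_last_hour]
normal = np.column_stack([
    np.random.default_rng(0).normal(10, 2, 500),   # business-hours logins
    np.random.default_rng(1).normal(40, 10, 500),  # typical download volume
    np.random.default_rng(2).poisson(0.2, 500),    # occasional failed login
])
suspicious = np.array([[3, 900, 12]])  # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# predict() returns +1 for inliers and -1 for anomalies.
print(model.predict(suspicious))   # expected: [-1] -> flag for review
print(model.predict(normal[:3]))   # mostly [1 1 1]
```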

Cons of Using AI in Cybersecurity

1. High Implementation Costs – While AI offers numerous benefits, implementing AI-based cybersecurity solutions can be expensive. The development, integration, and ongoing maintenance of AI-powered systems require significant financial investment. Organizations must not only purchase the necessary hardware and software but also invest in the expertise required to configure and manage these systems effectively. Smaller organizations with limited budgets may find it difficult to justify the high costs of adopting AI for cybersecurity.

2. Risk of Adversarial AI – As AI systems become more integrated into cybersecurity, cybercriminals are also using AI to launch more sophisticated attacks. Hackers can develop adversarial AI, which is designed to bypass or deceive security systems powered by machine learning algorithms. For example, AI can be used to create fake data that tricks a security system into classifying malicious activity as benign, allowing cybercriminals to evade detection. This cat-and-mouse dynamic between security AI and cybercriminals introduces a new layer of complexity to the cybersecurity landscape.

3. Dependence on Data Quality – AI systems are only as good as the data they are trained on. If the data used to train AI algorithms is biased, incomplete, or of poor quality, the effectiveness of the system can be severely compromised. In cybersecurity, where the stakes are high, relying on faulty or incomplete data can lead to missed threats, false alarms, or improper responses to attacks. Organizations must ensure that the data feeding their AI systems is accurate, comprehensive, and representative of the latest threat landscape.

4. Complexity and Lack of Transparency – AI systems, particularly those based on deep learning and other advanced techniques, can often operate as “black boxes,” meaning their decision-making processes are not easily understood by human operators. This lack of transparency can be a significant drawback in cybersecurity, where understanding why a particular threat was detected or why a response was triggered is essential for improving and fine-tuning the system. Additionally, if an AI system makes an incorrect decision, it can be difficult to troubleshoot and correct the issue without a clear understanding of how the AI reached its conclusion.

5. Ethical and Privacy Concerns – The deployment of AI in cybersecurity can raise ethical and privacy concerns, particularly when it comes to data collection and surveillance. AI-driven systems often require access to vast amounts of sensitive information to function effectively, which could include personal data, employee activities, or customer information. The use of AI in this context could potentially violate privacy rights or lead to unwanted surveillance. Moreover, the increasing reliance on AI could give organizations unprecedented power over personal data, raising concerns about potential misuse or abuse.

Conclusion

AI has the potential to revolutionize cybersecurity by providing faster, more accurate threat detection, automated responses, and predictive capabilities. However, its adoption comes with challenges, including high implementation costs, the risk of adversarial AI, data quality concerns, and ethical issues related to privacy. As AI technology continues to evolve, organizations must carefully weigh the benefits and drawbacks before integrating AI into their cybersecurity strategies. With proper implementation and oversight, AI can significantly enhance an organization’s ability to defend against the ever-evolving landscape of cyber threats.

The post Pros and Cons of Using AI in Cybersecurity appeared first on Cybersecurity Insiders.

Kyocera CISO Andrew Smith explains how he’s responded to the cyber risks associated with AI and how businesses can start implementing it.

Ever since AI’s meteoric rise to prominence following the release of ChatGPT in November 2022, the technology has been at the centre of international debate. For every application in healthcare, education, and workplace efficiency, reports of abuse by cybercriminals for phishing campaigns, automating attacks, and ransomware have made mainstream news.

Regardless of whether individuals and businesses like it, AI isn’t going anywhere. That’s why, in my view, it’s time to start getting real about the use cases for the technology, even if it might lead to potential cyber risks. Companies that refuse to adapt are risking being left behind in the same manner that stubborn businesses were when they refused to adjust during the early days of the Dot-com boom.

When it comes to early adoption, everyone wants to be Apple; nobody wants to be Pan-Am. So, how do businesses adapt to the new world of AI and tackle the associated risks?

Step 1: Understand the legal boundaries of AI and identify if it’s right for your business

Despite the risks, the mass commercialization of AI is a positive development as it means legal conditions are in place to help govern its use. AI has been around for a lot longer than ChatGPT; it’s just that we’re only now starting to set guidelines on how to implement and use it.

Regulations are constantly changing given the rapid evolution of AI, so it’s essential that businesses are aware of the rules which apply to their sector. Consultation with legal professionals is as crucial as any step of the process; you don’t want to commit a large amount of capital towards a project which falls foul of the law.

Once you’ve got the all-clear to proceed – hopefully with some additional understanding of the legal parameters – it’s down to you to identify if and where AI can add value to your business and how it could affect your approach to cybersecurity. Are there thousands of hours being spent on mundane tasks? Could a chatbot speed up the customer service process? How will you keep sensitive data safe after the introduction of AI software?

What’s important is that businesses have taken the time to identify where AI could add value and not just include it in digital transformation plans because they think it’s the right thing to do. Fail to prepare, prepare to fail – and avoid embarking on vanity projects that could do more harm than good.

Step 2: Decide on your AI transformation partner

This doesn’t mean you start using ChatGPT to run your business!

Assuming you don’t already have the talent in-house, there are hundreds, if not thousands, of AI transformation businesses for you to partner with on your journey.

I won’t labour over this step as every business will have its procurement processes. Still, my best advice is to look at the case studies of an AI transformation company’s existing work and even reach out to their existing clients to find out if their new AI tools have been helpful. Crucially, make a note of any security issues encountered in AI projects and bear this knowledge in mind. Like anything, a third-party endorsement for impactful work goes a long way.

That said, with the rapid growth in AI, case studies are sometimes not freely available, and businesses shouldn’t discount skilled firms out of hand. Instead, if a company has the credentials, insight, and technology, give it the opportunity to demonstrate its capabilities and how they can support your journey.

Step 3: Ensure cyber-hygiene and cyber-education are communicated across the business

Unfortunately, most cyber-attacks are caused or enabled by insiders, usually employees. In the vast majority of cases, it’s not malicious; it’s just a member of your team who doesn’t understand the implications of cyber risks and doesn’t take all the necessary precautions.

Therefore, your best opportunity to nullify those risks is by thoroughly and consistently educating your employees. This should apply just as much to new AI tools as to anything else at the business.

It seems obvious to most by now, but ChatGPT is free because we are the product. Every time you input data into the model, it learns from your input, and there’s a distinct possibility that your data will be regurgitated at some stage to someone else. That’s why staff must be careful about entering sensitive information, even if an AI tool claims to keep data secure.

Not inputting sensitive company data into Large Language Models (LLMs) might be an easy and obvious starting point, but there’s plenty more that companies should be teaching their employees about cyber-hygiene, not just its relevance to AI. Key topics can include:

  • Best practices in handling sensitive company data
  • The right way to communicate and flag potential breaches
  • Implementing an incident/rapid response plan
  • Regularly backing up data and ensuring it is secure
  • Secure by design – “Doing the thinking up front”

I believe education and training remain the best tools for tackling cybercrime; failing that, you should ensure you have a solid plan so that criminals can’t hold you to ransom should the worst happen.

Step 4: Implementation and regular review

If you successfully completed steps 1-3, you should have a powerful new AI tool to improve your business.

Once your staff have been trained on the security risks and on how to use it, AI shouldn’t be treated as a ‘set and forget’ tool – any business using it should constantly review its effectiveness and make the necessary tweaks to ensure it provides maximum value, the same way we do with our staff. It’s not just for efficiency either: there’s a good chance that regular reviews will expose potential vulnerabilities, and it’s far better for you to catch them before a potential cyber-criminal does.

If you skip one of the above steps, you risk encountering significant security issues and ultimately wasting capital on a failed or troublesome project. Follow each step correctly, however, and AI will become a powerful tool to help you stay ahead of the curve.

The post How Kyocera’s CISO tackles the threat of cyber risk during AI adoption appeared first on Cybersecurity Insiders.

Weak passwords, as various studies have shown, can be cracked in a second, but now AI can crack even stronger ones in the same amount of time. Language models can and will be used to brute force passwords and organize dictionary attacks more often, cybersecurity experts predict.

“AI is a breakthrough technology that is beginning to permeate all aspects of life and business, including cybersec. We should be mindful that in 2025, the time it takes to guess, social engineer, or brute force passwords is going to drop dramatically due to AI tools in the hands of cybercriminals”, says Ignas Valancius, Head of Engineering at NordPass, a leading password manager.

According to the Top 200 Most Common Passwords research, simple passwords like “123456” or “qwerty” can be cracked in under a second. The more complex the password, the longer it takes, but with the increasing computing power and AI advances, hackers will be able to try many more combinations in less time. So even more complex passwords will be cracked faster. 

AI is learning 

“I’m not saying that super long, random 18-character passwords are at immediate risk. But shorter ones – they could be in danger. With the arrival of DeepSeek, language models are being commoditized. Recently, researchers at Stanford and the University of Washington trained the “reasoning” model using less than $50 in cloud computing credits. With things so cheap, more threat actors will choose the easy way – buy some datasets on the dark web, ask an AI to make dictionary or brute force attacks on all the accounts, and go watch a movie. No need to organize months-long phishing campaigns,” says Valancius.

A dictionary attack is a systematic method of guessing a password by trying many common words and their simple variations. Attackers use extensive lists of the most commonly used passwords, popular pet names, fictional characters, or literally just words from a dictionary – hence the name of the attack. They also change some letters to numbers or special characters, like “p@ssw0rd”.

Poor security habits

The latest Top 200 Most Common Passwords research shows that despite the efforts of many organizations, there hasn’t been much improvement in people’s password habits. During a six-year study by NordPass, the password “123456” topped the charts as the most common password 5 out of 6 times. “password” held this not-so-noble title just once.

“And let’s not forget that the more people use AI, the more it learns about them. This is to say that many people already share sensitive data with ‘free’ AI tools to get things done, but here’s the catch – nothing’s really free. That data gets used for training, tracking, and, even worse, creating detailed profiles for more targeted attacks. So, as we move forward, it’s crucial to keep our passwords long and strong, and tread carefully as we interact with AI tools,” Valancius added.

How to create long and strong passwords

  • When creating or updating passwords, make sure they are at least 8 characters long and contain some uppercase and lowercase letters, symbols, and numbers. Keep in mind that this is the bare minimum for your password. The longer it is, the better. Just be sure not to use your name or other personal information, like your date of birth, because that is exactly the type of correlation an AI or a hacker would be looking for. Anniversaries, names of family members, and pet names should be avoided as well.
  • Since long random passwords are very hard to remember, creating a passphrase might be a good workaround. For example, the well-known phrase from Star Wars, “May the Force be with you,” could make a pretty good passphrase: “M@Y7heF0rc3BwithY0(_)” (a generator sketch follows this list).
  • Use different passwords for different accounts and never reuse them. If it gets overwhelming, consider using a password manager. It can help you create strong passwords and synchronize them across devices. That way, you’ll only need to remember one master password. 
  • Another option is switching to passkeys. They combine biometric verification with cryptographic keys, offering a safer and more convenient alternative to passwords. In other words, passkeys let you get rid of passwords entirely and use your face or a fingerprint to log in. 
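For readers who want to script this, here is a minimal sketch using Python’s standard `secrets` module. The word list is a tiny hypothetical stand-in for a proper diceware list, and a password manager’s built-in generator is generally the better option.

```python
import secrets
import string

def random_password(length: int = 18) -> str:
    """Random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words: int = 5) -> str:
    """Passphrase built from randomly chosen words, easier to remember."""
    # Tiny illustrative word list; use a real diceware list (7,776 words) in practice.
    wordlist = ["falcon", "marble", "orbit", "copper", "thistle", "lagoon",
                "ember", "quartz", "ravine", "sable", "tundra", "willow"]
    return "-".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())    # e.g. 'q+R7t!b@X2m)fZ8_pE'
print(random_passphrase())  # e.g. 'orbit-sable-ember-willow-copper'
```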

ABOUT NORDPASS

NordPass is a password manager for both business and consumer clients. It’s powered by the latest technology for the utmost security. Developed with affordability, simplicity, and ease of use in mind, NordPass allows users to securely access their passwords on desktop, mobile, and browsers. All passwords are encrypted on the device, so only the user can access them. NordPass was created by the experts behind NordVPN – the advanced security and privacy app trusted by more than 14 million customers worldwide. For more information: nordpass.com.

The post AI is coming for your passwords – better make them strong appeared first on Cybersecurity Insiders.

In episode 39 of the AI Fix, our hosts watch a drone and a robot dog shoot fireworks at each other, xAI launches Grok 3, Mark explains that AIs can design genomes now, a robot starts a punch up, Zuck becomes a mind reader, an AI cracks a ten-year science question in two days, and an anatomically accurate synthetic human recreates a terrifying scene from The Long Good Friday. Graham learns that it always pays to be polite before running over 15 people with a train, and Mark discovers why AIs value some lives more than others, particularly their own. All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.