Business executives never think they’ll be victims of a cyberattack until it happens to them, and by that point, it’s already too late. Over the course of a few weeks, I watched three companies fall victim to cybercrimes executed through social engineering, and I was forced to confront the gravity of an impending crisis bearing down on CEOs. One thought consumed me: if a large-scale company could be breached, what did it mean for my own private equity firm, which transfers millions of dollars to investors and tenants every month?
After days of deliberation, I developed a potential solution. With over 25 years of experience in IT and cybersecurity, multiple patents to my name, and a track record of building and selling a Managed Service Provider (MSP) that reached $65 million in annual revenue, I understood the evolving nature of cyberthreats. The key, I realized, was fostering shared awareness across all corporate communications: a system that visually signals threats to end users and helps prevent deepfake-driven social engineering attacks.
I embarked on a journey to draft the patents, develop the software, and build the company. What I wasn’t prepared for was the sheer volume of attacks occurring every day across Corporate America.
In the past few months, I’ve spoken with hundreds of major companies. CTOs and CISOs have quietly disclosed their breaches to me. The patterns are both clear and alarming: social engineering is the predominant attack vector, and AI has transformed these attacks from obvious scams to near-perfect impersonations.
A few years ago, a Dubai company director was duped by a cloned voice into initiating $35 million in bank transfers. Last year, another company acknowledged that a series of AI-generated video calls mimicking its CFO nearly resulted in $25 million in fraudulent transfers. These are not isolated incidents. They represent a fundamental shift in the cybersecurity landscape that most organizations, and certainly most individuals, have yet to comprehend.
Traditional cybersecurity has focused on protecting systems: firewalls, intrusion detection, and endpoint protection. These tactics remain necessary but are increasingly insufficient. The most sophisticated attackers don’t bother trying to break through your technical defenses. Why would they when they can simply call your finance department, sound exactly like your CEO, and request an urgent wire transfer?
The rise of generative AI has exponentially increased both the scale and sophistication of these attacks. Previously, social engineering required skilled human operators who could stage a convincing performance on calls or craft persuasive emails. This limited the number of high-quality attacks possible. Now, AI can generate thousands of personalized, contextually aware communications—emails, voice calls, even video—that appear completely legitimate.
This transformation has happened with breathtaking speed. A Midwest company shared that their phishing simulation tests from just 18 months ago now seem laughably obvious compared to the real attacks they’re seeing today. The awkward phrasing and grammatical errors that once served as red flags have disappeared, replaced by perfectly crafted messages that reflect the exact communication style of the impersonated executive.
What makes this crisis particularly insidious is its invisibility. Unlike a ransomware attack that announces itself with encrypted files and demand notes, successful social engineering often leaves no obvious trace until the money is gone. And companies, fearing reputational damage, rarely disclose these incidents publicly unless legally required—embarrassed to admit that they are quite literally being “robbed blind.”
The financial implications are staggering. The FBI’s Internet Crime Complaint Center has reported that Business Email Compromise (BEC) attacks, just one type of social engineering, have resulted in billions of dollars in reported losses. But industry experts I’ve spoken with believe the true cost is far higher, potentially 5 to 10 times greater once unreported incidents are factored in. The scale of this threat is not just alarming; it is a wake-up call for businesses to rethink their cybersecurity defenses.
So, what can be done? Technical solutions are part of the answer. The system we’ve been developing uses AI to detect AI, analyzing communication patterns across channels to identify anomalies and provide real-time warning indicators.
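As a rough illustration of the anomaly-flagging idea, the sketch below scores an incoming message against a sender’s historical baseline and raises a warning when it deviates sharply. The features, weights, and names are hypothetical assumptions for illustration only, not a description of the actual system.

```python
# Minimal sketch: flag a message that deviates from a sender's historical baseline.
# All features, weights, and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Message:
    sender: str
    word_count: int
    urgency_terms: int      # e.g. "immediately", "wire today"
    new_recipient: bool     # payment destination never seen before
    off_hours: bool         # sent outside the sender's usual hours

def baseline(history: list[Message]) -> dict:
    """Summarize a sender's normal behavior from past messages."""
    counts = [m.word_count for m in history]
    return {
        "wc_mean": mean(counts),
        "wc_std": pstdev(counts) or 1.0,
        "urgency_rate": mean(m.urgency_terms for m in history),
    }

def risk_score(msg: Message, base: dict) -> float:
    """Combine simple deviations into a rough 0..1 warning score."""
    score = 0.0
    score += min(abs(msg.word_count - base["wc_mean"]) / (3 * base["wc_std"]), 1.0) * 0.2
    score += min(max(msg.urgency_terms - base["urgency_rate"], 0) / 3, 1.0) * 0.4
    score += 0.25 if msg.new_recipient else 0.0
    score += 0.15 if msg.off_hours else 0.0
    return min(score, 1.0)

# Usage: warn the end user when a "CEO" request breaks the sender's normal pattern.
history = [Message("ceo", 120, 0, False, False), Message("ceo", 95, 1, False, False)]
suspect = Message("ceo", 40, 3, True, True)
if risk_score(suspect, baseline(history)) > 0.6:
    print("WARNING: this request does not match the sender's normal pattern.")
```

A production system would obviously draw on far richer signals than these few fields, but the principle is the same: the warning indicator surfaces the deviation to the human before the money moves.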
Regulators also have a role to play. Compliance auditors and cyber insurance providers can guide companies toward technology that provides shared awareness and non-repudiation across corporate communications (a short sketch of the non-repudiation idea follows below). In addition, current disclosure requirements often fail to capture the true nature and extent of social engineering attacks. More granular reporting mandates would help illuminate the scale of the problem and drive appropriate responses.
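To make “non-repudiation” concrete, here is a minimal sketch assuming Ed25519 signatures from the open-source cryptography package; the keys, message, and workflow are illustrative assumptions, not a compliance requirement or a specific vendor product.

```python
# Minimal sketch of message-level non-repudiation using Ed25519 signatures
# (pip install cryptography). Illustrative only; a real deployment would bind
# keys to verified identities and archive signatures for auditors.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The executive's device holds the private key; recipients hold the public key.
ceo_key = Ed25519PrivateKey.generate()
ceo_public = ceo_key.public_key()

request = b"Wire $250,000 to vendor account 4471 by Friday."
signature = ceo_key.sign(request)

# Any recipient, or an auditor later, can verify the request came from the key holder.
try:
    ceo_public.verify(signature, request)
    print("Signature valid: the sender cannot later deny issuing this request.")
except InvalidSignature:
    print("WARNING: signature check failed; treat this request as untrusted.")
```

An unsigned or badly signed “urgent wire request” simply never earns the trusted indicator, no matter how convincing the voice or video accompanying it sounds.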
As AI continues to advance, the line between authentic and synthetic communications will only blur further. The attackers have weaponized trust itself, exploiting our fundamental human tendency to believe what we see and hear from seemingly familiar sources.
This crisis is real, growing, and largely invisible to the public. It’s time we recognized that in the new cybersecurity landscape, the weakest link isn’t your firewall—it’s human psychology. And strengthening that link will require tools, training, and vigilance beyond anything we’ve previously deployed.