Valentine’s Day 2025 is just around the corner, and many men are likely busy picking out thoughtful gifts to impress their loved ones—some of which could cost a small fortune. For those in long-term relationships or happily married, the day holds a lot of significance, and all is well.

However, for those who’ve recently found love through dating or social media platforms, there are a few things you should keep in mind.

In today’s digital age, it’s not uncommon to form a relationship with someone online whom you never actually meet in person. Many people stay connected through messages and video calls, but that’s where potential problems can arise. Some charming individuals may use sweet words to manipulate their online partners into lending money to settle supposed debts or, worse, investing in fraudulent schemes such as cryptocurrency scams.

Unfortunately, many of these so-called “partners” are nothing more than scammers, preying on vulnerable people and trying to exploit them emotionally or financially.

A report from Comparitech revealed that in 2024, nearly 59,000 Americans fell victim to such romance scams. The numbers are expected to rise by 10% in 2025, potentially leading to losses exceeding $830 million. The report highlighted that residents of Arizona were the most targeted, with California coming in second at over 7,000 reported cases.

Further compounding this issue, the Internet Crime Complaint Center reported that cryptocurrency investors lost a staggering $215.8 million to romance scammers who convinced them to participate in elaborate fake investment schemes.

Adding another layer of danger are new types of scams fueled by generative AI technology. Fraudsters can now easily create fake profiles of potential partners and manipulate their victims into asking friends and family for money via social media networks.

So, if your online love interest suddenly starts asking for financial assistance or encourages you to invest in risky schemes, be cautious. It’s likely a scam designed to exploit you. Protect yourself by severing all contact or, if in doubt, verifying their claims thoroughly before taking any action.

Stay safe and smart—trust your instincts.

The post Celebrate Valentine’s Day 2025 by steering clear of romance scams appeared first on Cybersecurity Insiders.

I am always interested in new phishing tricks, and watching them spread across the ecosystem.

A few days ago I started getting phishing SMS messages with a new twist. They were standard messages about delayed packages or somesuch, with the goal of getting me to click on a link and enter some personal information into a website. But because they came from unknown phone numbers, the links did not work. So—this is the new bit—the messages said something like: “Please reply Y, then exit the text message, reopen the text message activation link, or copy the link to Safari browser to open it.”

I saw it once, and now I am seeing it again and again. Everyone has now adopted this new trick.

One article claims that this trick has been popular since last summer. I don’t know; I would have expected to have seen it before last weekend.

As companies work to reap the benefits of artificial intelligence (AI), they also must beware of its nefarious potential. Amid an AI-driven uptick in social engineering attacks, deepfakes have emerged as a new and convincing threat vector. Earlier this year, an international company lost $25 million after a finance employee fell for a deepfake video call impersonating the company’s CFO. While such a story may sound like an anomaly, the reality is that generative AI is creating more data than ever before—data that bad actors can use to make attacks more convincing. The technology has also multiplied such attacks, scaling from single attempts to tens of thousands, each tailored to the target in question.

As deepfakes and other AI-generated social engineering attacks continue to become more common and convincing, companies must evolve beyond traditional threat intelligence. To remain secure, they must leverage AI themselves, embrace segmentation, and educate their employees on an ongoing basis.

Fighting fire with fire

Deepfakes are an extremely sophisticated way for bad actors to get through the door. Instead of receiving an oddly worded email from an alleged Nigerian prince, AI can help bad actors send highly personalized and convincing emails that mask the usual red flags. Once they have access to the network, they can start collecting, exporting, and sharing data that can be used to build a convincing attack on their target. Thus, companies need tools that can identify a normal baseline for every user’s schedule and behavior. Then, AI can be leveraged to quickly identify and remediate anomalies that arise, like someone logging in at weird hours or stockpiling large amounts of information. By employing AI to detect suspicious activity, companies can sift through tremendous amounts of noise to uncover red flags.
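As a toy illustration of the baselining idea (not any particular product's method, and far simpler than what real user-behavior analytics does), one could flag logins that deviate sharply from a user's historical login hours:

```python
import statistics

# Minimal sketch: flag a login hour that deviates sharply from a user's
# baseline. Real tools model many signals (location, device, data volume),
# not just the hour of day.
def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Return True if new_hour is more than `threshold` standard
    deviations away from the user's historical mean login hour."""
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0  # avoid divide-by-zero
    return abs(new_hour - mean) / stdev > threshold

# A user who normally logs in between 8am and 10am...
baseline = [8, 9, 9, 10, 8, 9, 10, 9]
print(is_anomalous(baseline, 9))   # a typical mid-morning login
print(is_anomalous(baseline, 3))   # a 3am login stands out
```

The point is not the statistics but the workflow: establish what "normal" looks like per user, then surface the deviations for a human or an automated playbook to examine.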

Embrace segmentation 

The impact of deepfakes and other social engineering attacks can be minimized by dramatically shrinking the attack surface through segmentation. Government agencies with extremely sensitive data have always had several rings of protection: unclassified, classified, and top-secret networks. This is a mindset all companies must embrace. Having everything on a single network is extremely risky, even if that network uses zero-trust principles. 

In fact, the recent CrowdStrike outage completely debilitated airlines because they have everything on a single network, which creates a single point of failure. In addition to separating crown-jewel data from less critical data, it can also be useful to rely on different applications, such as using Microsoft Teams for standard messaging and a dedicated chat capability for more sensitive conversations. Segmenting networks, communication styles, and data enclaves ensures that, even if a bad actor gets through the door using a deepfake, they won’t have complete access to sensitive information.

Educate employees

In an ideal situation, segmentation and anomaly detection aren’t required because bad actors never get in at all, which is why educating employees on the rise of deepfakes may be the most effective way to ensure company-wide security. Zero-trust is a mindset—not just a technology or a protocol—and teaching employees to be extremely diligent can go a long way. If there’s even a small chance that a request is nefarious, employees should be encouraged to verify it outside of the channel the request came in on. That may mean picking up the phone and simply calling the individual in question. Additionally, teaching about the capabilities that exist and reminding employees to think before they click are simple but effective ways to prevent deepfakes. 

Altogether, the technology available to bad actors is going to continue to evolve, but companies can keep up with the pace of change by deploying AI themselves, embracing segmentation, and educating their employees about the threats that exist. Without these steps, organizations will remain vulnerable to deepfakes and other social engineering attacks, which leaves their data and reputations at risk.

 

The post How to Stay Ahead of Deepfakes and Other Social Engineering Attacks appeared first on Cybersecurity Insiders.

Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article:

These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, when a number of popular open source Python tools were maliciously duplicated with added malware. Now, though, there are also attacks involving “coding tests” that only exist to get the end user to install hidden malware on their system (cleverly hidden with Base64 encoding) that allows remote execution once present. The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS.
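The obfuscation pattern described above is easy to sketch. This is a harmless toy, not the Lazarus payload: it shows how a Base64 blob embedded in a "coding test" can be decoded and executed at runtime, so a reviewer skimming the file sees only encoded noise rather than the code that will actually run:

```python
import base64

# Illustrative only: the encoded string below is a harmless assignment,
# not real malware. An attacker would ship only the encoded blob, which
# might fetch and run a remote-access payload once decoded.
hidden = base64.b64encode(b"result = 2 + 2")

code = base64.b64decode(hidden).decode()
namespace = {}
exec(code, namespace)  # the decoded source runs with full interpreter access
print(namespace["result"])  # prints 4
```

Because `exec` gives the decoded source the same power as any other Python code, a one-line blob buried in a plausible-looking test harness is enough to establish remote execution, which is why unsolicited "coding tests" deserve real scrutiny.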

Interesting social-engineering attack vector:

McAfee released a report on a new LUA malware loader distributed through what appeared to be a legitimate Microsoft GitHub repository for the “C++ Library Manager for Windows, Linux, and MacOS,” known as vcpkg.

The attacker is exploiting a property of GitHub: comments to a particular repo can contain files, and those files will be associated with the project in the URL.

What this means is that someone can upload malware and “attach” it to a legitimate and trusted project.

As the file’s URL contains the name of the repository the comment was created in, and as almost every software company uses GitHub, this flaw can allow threat actors to develop extraordinarily crafty and trustworthy lures.

For example, a threat actor could upload a malware executable in NVIDIA’s driver installer repo that pretends to be a new driver fixing issues in a popular game. Or a threat actor could upload a file in a comment to the Google Chromium source code and pretend it’s a new test version of the web browser.

These URLs would also appear to belong to the company’s repositories, making them far more trustworthy.
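The URL shape the report describes can be reconstructed with hypothetical values. The owner and repository below are a real project, but the attachment ID and filename are invented for illustration; note how the trusted `owner/repo` pair appears in the path even though the file was merely attached to a comment, never committed to the repository:

```python
# Hypothetical reconstruction of a GitHub comment-attachment URL.
# Nothing in the path reveals that the file came from a comment rather
# than from the project's maintainers.
owner, repo = "microsoft", "vcpkg"
attachment_id = "12345678"   # invented for illustration
filename = "update.exe"      # invented for illustration
url = f"https://github.com/{owner}/{repo}/files/{attachment_id}/{filename}"
print(url)
```

A victim inspecting that link sees only a well-known organization and project name, which is exactly what makes the lure convincing.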

After the XZ Utils discovery, people have been examining other open-source projects. Surprising no one, the incident is not unique:

The OpenJS Foundation Cross Project Council received a suspicious series of emails with similar messages, bearing different names and overlapping GitHub-associated emails. These emails implored OpenJS to take action to update one of its popular JavaScript projects to “address any critical vulnerabilities,” yet cited no specifics. The email author(s) wanted OpenJS to designate them as a new maintainer of the project despite having little prior involvement. This approach bears strong resemblance to the manner in which “Jia Tan” positioned themselves in the XZ/liblzma backdoor.

[…]

The OpenJS team also recognized a similar suspicious pattern in two other popular JavaScript projects not hosted by its Foundation, and immediately flagged the potential security concerns to respective OpenJS leaders, and the Cybersecurity and Infrastructure Security Agency (CISA) within the United States Department of Homeland Security (DHS).

The article includes a list of suspicious patterns, and another list of security best practices.

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.
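To make the "ingredients list" idea concrete, here is a minimal, hypothetical SBOM fragment in the CycloneDX style. The day a compromise like XZ Utils is announced, an operator could search such files across their fleet for the affected library and version:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "xz-utils",
      "version": "5.6.1",
      "purl": "pkg:deb/debian/xz-utils@5.6.1"
    }
  ]
}
```

The value is not the format itself but the inventory: without something like it, answering "are we running the backdoored version?" means auditing every system by hand.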

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF) addresses this problem; the big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.