The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprintf function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of SolarWinds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

A first-person account from someone who fell for a scam that started with a fake Amazon service rep, ended with a fake CIA agent, and cost the victim $50,000 in cash. And this is not a naive or stupid person.

The details are fascinating. And if you think it couldn’t happen to you, think again. Given the right set of circumstances, it can.

It happened to Cory Doctorow.

EDITED TO ADD (2/23): More scams, these involving timeshares.

In a recent cybersecurity incident, hackers managed to pilfer millions of dollars from the US Department of Health and Human Services through a sophisticated spoofing attack. The cybercriminals assumed the identities of legitimate fund recipients, skillfully engaging with health department staff via email to fraudulently obtain funds.

This well-executed cyber-attack resulted in the unauthorized withdrawal of approximately $7.5 million from the agency’s funds, presenting a significant challenge for security experts attempting to recover the stolen assets.

The Inspector General’s office has taken up the investigation following a formal request from the Health and Human Services department. The focal point of the breach was the ‘Payment Management System,’ a platform also utilized by federal agencies for fund transfers involving entities such as the Pentagon, Treasury Department, White House Administration, NASA, and Small Business Administration.

Given the interconnected nature of the breached platform, there is a considerable risk that hackers could employ similar email spoofing tactics to target other organizations within the network, seeking illicit financial gains. In response, the health department has enlisted the expertise of forensic professionals to mitigate potential risks and is collaborating with law enforcement agencies in an effort to recover the embezzled funds.

To enhance cybersecurity measures, fostering a culture of awareness among employees and online users is crucial. Vigilance against potential threats, the implementation of encryption protocols, and thorough verification of recipient identities can contribute significantly to preventing such attacks. Additional safeguards include the adoption of two-factor authentication (2FA) for heightened account security, utilizing antivirus and firewall protection, maintaining robust passwords, and ensuring that software is regularly updated with the latest security patches. These proactive measures collectively serve as a defense against future cyber threats.
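As one concrete illustration of the "verification of recipient identities" recommended above, a payment system could compare each transfer request against what is already on file before releasing funds. This is only a sketch: the registry, field names, and function are hypothetical, not part of any real payment platform.

```python
# Hypothetical sketch: guard a fund-transfer request against the kind of
# email-spoofing attack described above. All names here are illustrative.

KNOWN_RECIPIENTS = {
    # recipient id -> (registered email domain, registered account number)
    "grantee-001": ("university.edu", "111122223333"),
}

def transfer_request_is_suspicious(recipient_id: str,
                                   sender_email: str,
                                   account_number: str) -> bool:
    """Flag a request if the sender's domain or the destination account
    does not match what is already on file for that recipient."""
    record = KNOWN_RECIPIENTS.get(recipient_id)
    if record is None:
        return True  # unknown recipient: always escalate
    registered_domain, registered_account = record
    sender_domain = sender_email.rsplit("@", 1)[-1].lower()
    if sender_domain != registered_domain:
        return True  # look-alike or spoofed sender domain
    if account_number != registered_account:
        return True  # banking details changed: verify out of band
    return False
```

A flagged request would then go to a human for out-of-band confirmation (for example, a phone call to a number already on file) rather than being paid automatically.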

The post Hackers steal $7.5 million funds from US Health Department via email spoofing cyber attack appeared first on Cybersecurity Insiders.

As the festive season is just a couple of days ahead, the joy of giving and receiving is accompanied by an unfortunate increase in scams targeting unsuspecting holiday shoppers. Scammers are adept at exploiting the spirit of generosity and the rush to find the perfect gift. In this article, we’ll shed light on some of the most targeted items during this Christmas season to help you stay vigilant and protect yourself from falling victim to holiday scams.

1. Popular Electronics and Gadgets: High-demand electronic devices and gadgets, such as smartphones, gaming consoles, and smartwatches, are prime targets for scammers. Be cautious when purchasing from unfamiliar online retailers offering deals that seem too good to be true. Stick to reputable stores to ensure the authenticity of the products.

2. Gift Cards: Gift cards are a convenient and versatile present, making them a favored target for scammers. Avoid purchasing gift cards from unofficial sources or online marketplaces. Scammers often tamper with cards on display racks, so choose cards from the back of the stack and verify that the activation code hasn’t been compromised.

3. Limited Edition or Hard-to-Find Items: Scammers often exploit the scarcity of certain items by creating fake listings or auctioning off non-existent limited edition or hard-to-find products. Exercise caution when dealing with online sellers offering rare items and verify their credibility before making a purchase.

4. Online Shopping Scams: With the rise of online shopping, scammers have adapted their tactics to include fake websites and counterfeit products. Double-check the legitimacy of online stores by looking for customer reviews, verifying secure payment options, and confirming the website’s authenticity before entering personal information.

5. Fake Charities and Donation Requests: ‘Tis the season for giving, but scammers capitalize on the goodwill of individuals by posing as fake charities or sending fraudulent donation requests. Verify the legitimacy of charitable organizations before making contributions, and be cautious of unsolicited emails or messages requesting financial support.

6. Travel Deals and Accommodations: As people plan holiday getaways, scammers target those searching for travel deals and accommodations. Exercise caution when booking through unfamiliar websites, and use reputable travel agencies to minimize the risk of falling victim to travel-related scams.

7. Pets and Pet Supplies: The demand for pets and pet-related products increases during the holiday season. Scammers may exploit this by advertising non-existent pets for sale or offering fake pet supplies. Adopt pets from reputable shelters or purchase from trusted pet stores to avoid falling prey to scams.

Conclusion

This Christmas season, while embracing the spirit of giving, it’s essential to remain vigilant against scams that seek to exploit the holiday rush. By staying informed, exercising caution, and verifying the legitimacy of sellers and charitable organizations, you can protect yourself and ensure a joyful and scam-free holiday season. Remember, if a deal seems too good to be true, it probably is.

The post Most scammed items for this Christmas season appeared first on Cybersecurity Insiders.

By Greg Woolf, CEO of FiVerity

The marriage of fraud and artificial intelligence (AI) is lethal. Right now, fraudsters are upping their games, leveraging new and innovative tools such as ChatGPT and Generative AI to wreak havoc on the financial world. Their goal? To create deep-fake personas that look so authentic that financial institutions are granting them loans, allowing them to open accounts, and approving transactions; the list goes on.

Adding insult to injury, most don’t realize the damage inflicted upon them until it’s too late. This is the new reality financial institutions face today thanks to AI, which not only allows criminals to create deep-fake or synthetic personas but makes the process easier than ever.

This is troubling on many levels.

First, as I mentioned above, these fraudulent identities are virtually indistinguishable from authentic ones, and discerning the difference is a challenge even for trained professionals. Here’s why—deep-fake IDs include a long credit and payment history, exactly the information an institution would see with all their legitimate customers. Exacerbating the issue is that fraudsters are turning to algorithms to quickly create multiple deep-fake personas, which they can refine continually using AI to avoid detection.

Add it all up, and it’s no surprise that fraudsters are achieving significant success and becoming more and more aggressive—according to TransUnion’s 2023 State of Omnichannel Fraud Report, digital fraud attempts increased 80% from 2019 to 2022, and 122% for digital transactions originating in the U.S. during that time.

You don’t need to be an expert to realize that the success of fraudsters spells trouble for financial institutions.

  • First and foremost are the financial losses that stem from defaulted loans, charge-offs, and more.
  • Next comes damaged reputations, which can tarnish a business where trust is one of THE key attributes that customers value most—how can a consumer be expected to choose a financial institution making front-page news because it was defrauded by deep-fake personas?
  • And don’t forget compliance. Financial institutions are required to verify the identity of their customers to prevent fraud, money laundering, and other financial crimes. Any failure to meet these mandates can come with a hefty fine and penalty.

Going From Bad…to Worse

If you think the above scenario sounds ominous, I have bad news. It’s only going to get worse. That’s because technology never sits still. It’s always advancing and growing in sophistication, and incidents of digital catfishing and identity fraud will reach new levels as fraudsters leverage these advancements. This will manifest itself in different ways. One will be the use of deep-fake biometric data. This includes facial recognition or voice prints. The result would be a deep fake persona that is convincing on multiple levels, on paper and in person. Just imagine the challenges businesses will face trying to distinguish the fraudulent from the legitimate.

Criminals will also leverage AI to automate synthetic identity creation. The result would be hundreds to thousands of deep-fake personas created and used simultaneously. This scale would be unlike anything we have seen before.

Fighting back 

Fighting back starts with collaboration. Financial institutions must be committed to sharing information on known fraudsters and intelligence on suspicious transactions. By pooling the resources and expertise of all these institutions, they can identify emerging patterns and trends and detect digital catfishing and identity fraud in ways that aren’t possible with information silos.

Working together, they can also devise best practices. These should cover everything from how best to share data and intelligence to how to act before an incident causes significant financial losses, and how to prevent these incidents from happening in the first place.

For anyone wondering what will support this collaborative model, your best bet is a centralized platform that enables the safe, secure, and real-time sharing of fraud data. The platform should leverage AI and machine learning algorithms, and here’s why. AI and ML make it possible for businesses to analyze huge libraries of data to detect patterns and anomalies that may indicate fraudulent activity. Some key use cases that can help spot fraud include:

  • Dynamic Profiling: Implement a system that dynamically profiles user activity and attributes such as name, email address, zip, and state. This means not merely looking for hard matches but understanding the normal behavioral patterns of users to spot anomalies.
  • Multi-Attribute Analysis: Why look at a single attribute when you can examine multiple attributes and the interrelationship between each? For example, a change in email address alone might not raise a flag. Many of us use more than one email address. But when that switch coincides with a change in state, further investigation may be necessary.
  • Machine Learning Adaptability: Leverage adaptive machine learning algorithms to gain insights from the constantly shifting tactics. As you gain new levels of knowledge, take what’s been learned and update detection protocols.
  • Time-based Monitoring: Implement time-based flags that trigger alerts when sudden changes in key attributes are made in a short timeframe. This helps to enable fast action while freeing teams from spending countless hours sifting through data to identify fraudulent activity.
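A minimal sketch of how the multi-attribute and time-based checks above might combine. The field names, the one-day window, and the two-attribute threshold are assumptions chosen for illustration, not a real fraud platform's API:

```python
# Flag accounts where several monitored attributes change within a short
# window: one change alone is normal (people do switch email addresses),
# but several together merit review.
from datetime import datetime, timedelta

def flag_profile_changes(events: list[dict],
                         window: timedelta = timedelta(days=1)) -> bool:
    """Each event is {"field": name, "when": datetime}. Return True when
    two or more key attributes change inside the sliding window."""
    monitored = ("email", "state", "zip", "name")
    changes = sorted((e for e in events if e["field"] in monitored),
                     key=lambda e: e["when"])
    for i, first in enumerate(changes):
        fields = {first["field"]}
        for later in changes[i + 1:]:
            if later["when"] - first["when"] > window:
                break  # outside the window; slide to the next start event
            fields.add(later["field"])
            if len(fields) >= 2:
                return True
    return False
```

In a real system the Boolean flag would feed a risk score alongside the ML signals described above, rather than blocking the account outright.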

All of these capabilities are hugely valuable, but I would be remiss if I didn’t spotlight your biggest resource in this fight: your fraud analysts. The intuition of these experts is invaluable. We encourage businesses to continue plugging into their knowledge and experience to conduct periodic manual reviews, especially in cases that the system flags as borderline.

At the end of the day, financial services businesses face a highly sophisticated threat that is escalating in frequency. This is not a battle that can be won in isolation. It requires action that is equal parts collaboration and commitment to tapping into the latest innovations. By gaining a better understanding of fraudsters, institutions can identify patterns and fraudulent accounts, take preemptive action, and collaborate on methods to stay ahead of the ever-evolving threat landscape of digital fraud.

The post The Evolution of Financial Fraud appeared first on Cybersecurity Insiders.

From time to time, we encounter social media posts that tempt us to click on a link promising heavily discounted goods or a chance to win a lottery. Some individuals avoid such links, believing themselves to be too savvy to fall for online schemes. However, many still become victims of these social media scams, which continue to claim more victims annually than conventional crimes like robbery, burglary, homicide, and knife crime.

Interestingly, a majority of these scams are propagated through well-known social media platforms such as WhatsApp, Instagram, and Facebook, all owned by Meta. Even TikTok, another online service, finds itself embroiled in controversy.

Despite the growing prevalence of these criminal activities on their platforms, these companies have not taken adequate measures to counteract them. This negligence on their part has inadvertently transformed their platforms into havens for major online criminals, who now disseminate malware-laden content through various posts.

An exception to this trend is X, formerly known as Twitter, which has taken a proactive stance against such criminal activity on its platform. Leveraging Artificial Intelligence technology, X effectively manages spam and thwarts fake scams masquerading as endorsements from celebrities. This strategic approach has proven successful, evident in the swift elimination of these scams, particularly after Elon Musk assumed control of the platform, preventing substantial harm.

By contrast, Meta has not adopted similar strategies, resulting in an estimated 1.1 million UK citizens falling victim to these scams in 2022 and 2023.

In a survey conducted by Lloyd’s in early 2023, losses exceeding £300 million were attributed to scams originating from Facebook and TikTok. This sum could escalate even further if the true extent of unreported crimes were revealed.

It is essential to bear in mind that once ensnared in these scams, recovering lost funds is arduous and often futile. Only in exceptional cases can law enforcement help; otherwise, the drained finances remain irretrievable.

So, can the blame for these crimes be squarely placed on the social media platforms themselves?

In truth, this attribution is complicated. All these companies absolve themselves of responsibility through terms and conditions outlined in the fine print, which typically goes unnoticed during the initial signup process.

What then, is the solution?

Remaining vigilant is crucial. Exercise caution when encountering dubious posts, and avoid clicking on links from unfamiliar sources, particularly from profiles using images of attractive individuals. Similarly, a persistent pop-up plaguing a post, or a request for bank account details to receive a payment transfer, is a clear indicator of a potential scam and should be avoided.

The post How social media scams are draining bank accounts of victims appeared first on Cybersecurity Insiders.

World of Warcraft players wrote about a fictional game element, “Glorbo,” on a subreddit for the game, trying to entice an AI bot to write an article about it. It worked:

And it…worked. Zleague auto-published a post titled “World of Warcraft Players Excited For Glorbo’s Introduction.”

[…]

That is…all essentially nonsense. The article was left online for a while but has finally been taken down (here’s a mirror, it’s hilarious). All the authors listed as having bylines on the site are fake. It appears this entire thing is run with close to zero oversight.

Expect lots more of this sort of thing in the future. Also, expect the AI bots to get better at detecting this sort of thing. It’s going to be an arms race.

By Dimitri Shelest, Founder and CEO of OneRep

Companies go to great lengths to protect their top executives. Keeping them safe, healthy and happy so they can perform their duties without unnecessary distractions is critical for the productivity of the company. At one time, executive protection meant providing bodyguards and secure transit, and fortifying executive offices against external threats. As more executives work from home, efforts have extended to bolstering home defense systems.

Still, there’s a missing element. In today’s digital world, it’s also necessary to protect executives online. That should include protecting their personal data.

Executives have access to some of the company’s most sensitive information, and they’re increasingly being targeted by hackers looking to steal company secrets or to perpetrate cybercrimes.

Personal data provides fuel for these crimes. Digital data warehouses store all kinds of details about all of us. It used to be just addresses, phone numbers, aliases, and relatives. Now, it’s far more detailed information such as political affiliation, names of neighbors, resting heart rate, and even Amazon wishlists.

All this data is collected legally by companies. Every time you interact with a computer–be that via a smart device, a bar code at checkout, or a website–data about you is being collected. In the U.S. there is essentially no limit to the amount of data companies can collect, and few limits on how they can use it.

Cyber Attacks Against Executives: Phishing, Whaling, and More

Most data can be sold to anyone who will pay for it–including bad actors. They can use it to personalize their workplace phishing attacks and business email compromise schemes to make them more effective. Executives are particularly at risk for “whaling” attacks, where a criminal impersonates an executive via email or another means of communication and asks the target for money and/or information.

A successful whaling attack can be quite lucrative, since executives have a lot of credibility and power. In one such attack, a Mattel finance executive sent $3 million to a fraudster impersonating the company’s CEO. With the possibility of such large payouts, criminals will go to considerable effort to use personal details that make their requests compelling and believable.
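The display-name trick behind many whaling emails, a message that claims an executive's name but comes from an outside address, can be caught with a simple directory check. Everything below (the directory, names, and addresses) is an invented example, not a production mail filter:

```python
# Flag inbound mail whose display name matches a known executive but whose
# sending address is not the one on file for that executive.
EXEC_DIRECTORY = {
    # normalized display name -> legitimate address
    "jane doe": "jane.doe@example.com",
}

def looks_like_whaling(display_name: str, from_address: str) -> bool:
    """True if the message claims an executive's name from an address
    that is not on file for that executive."""
    normalized = " ".join(display_name.lower().split())
    legit = EXEC_DIRECTORY.get(normalized)
    return legit is not None and from_address.lower() != legit
```

Real mail gateways layer checks like this with SPF/DKIM/DMARC results; the point here is only that the impersonation pattern itself is mechanically detectable.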

Executives also face risks from social media, where they are more visible and accessible than ever before. This can be great for brand-building and engagement. Unfortunately, it also puts them at risk of harassment or worse from a variety of bad actors, both online and in real life.

This can come from dedicated customers or fans who are unsatisfied with a product or service. For example, in 2022, Strauss Zelnick, the CEO of Nasdaq-listed video game developer Take-Two Interactive, was forced to lock his Twitter account after being bombarded by a wave of harassment from customers dissatisfied with the latest Grand Theft Auto game.

It can also come as a result of taking a stand–or not taking a stand–on social issues. Gone are the days when staying neutral was the preferred corporate strategy. According to research from Accenture, customers are increasingly aligning their spending with their values. They demand to know where companies stand on issues that matter to them. Executives are expected to “walk the walk” and stand for the company’s values. But one false move can place them in the crosshairs of cancel culture and harassers can quickly descend.

This kind of harassment, while still very upsetting for the individuals involved, can at least be somewhat anticipated and crisis communications strategies can be at the ready. But threats to executives can also arise unexpectedly when a company is caught in the cross currents of the news cycle.

For example, after the contentious 2020 election, figures ranging from the head of strategy and security at Dominion Voting systems to the CEO of social media app Parler were forced to go into hiding with their families after receiving death threats when their personal information as well as that of their family members was leaked by hackers.

These scenarios don’t even include the possibility of threatening behavior from a disgruntled or terminated employee. In a turbulent economic environment like the one we are navigating now, this issue may come into the foreground as executives grapple with layoffs and cost-cutting measures.

This doesn’t just happen to executives at big companies or celebrity CEOs. Anyone who is involved in making decisions that can impact other people’s lives, contradict their political views or offend their values can become a target.

The effects are devastating. Researchers are just beginning to understand the impact of online harassment, but it appears to be very similar to other types of trauma. Victims might have difficulty concentrating and making decisions. They might experience increased levels of anxiety and even paranoia. They might come to fear opening messages or looking at their devices. Many individuals have even had to change jobs or alter their daily routines because of cyberstalking and harassment.

How to Protect Executive Data Privacy

Clearly, none of this is optimal for executive productivity. And it doesn’t only affect their own well-being: it can deplete morale across the company and ultimately hurt the bottom line.

The good news is that there are steps that companies can take to protect their executives, their families and their organizations. It starts with educating them about the threats, and the fact that they are possible targets. Like the general public, executives can avoid oversharing personal information on social media.

They can protect their web browsing by using browser extensions to block trackers. They can maintain strong passwords, use a separate email address for sensitive activities, and be on high alert for any suspicious sounding communications.

They can also remove their data from people search sites that publish it. There are currently over 190 of these sites. Data from my company, OneRep, shows that the average person has data records on 46 of them.

People search sites are legally required to remove your information on request, but they aren’t legally required to make it easy for you to submit that request. Few people, least of all executives, have the time to approach 46 sites and request their data be removed. Even if they could, it’s a Sisyphean task. Our data shows that much of this information resurfaces within four months–when the sites receive their next data dump from data brokers.

Fortunately, there are technology companies that can comb all the people search sites, locate your records, and automate the removal process. They also provide continued monitoring and removal of your data should it reappear.

The proliferation and widespread availability of personal data is dangerous for public-facing executives, their families and their companies. Companies understandably prioritize protecting the physical safety of top executives, but in today’s polarized, always-on world, keeping executives safe online is also imperative. It’s a small investment that pays dividends in peace of mind.

Author Bio:

Dimitri Shelest is a tech entrepreneur and the CEO at OneRep, a privacy protection company that removes public records from the Internet. Dimitri is an avid proponent of privacy regulation framework and likes to explore cybersecurity and privacy issues as a writer and reader on various platforms.

The post One Overlooked Element of Executive Safety: Data Privacy appeared first on Cybersecurity Insiders.

Social engineering is a term used to describe the manipulation of people into revealing sensitive information or performing actions that they otherwise wouldn’t. It is an ever-increasing threat to cybersecurity, as it can be used to gain unauthorized access to systems, steal sensitive data, or carry out fraudulent activities.

Social engineering is an age-old tactic that is often used in phishing attacks. These attacks are typically carried out through email or messaging services, with the attacker pretending to be a trusted source, such as a bank or an employer. The attacker will then try to convince the victim to click on a malicious link or provide sensitive information, such as login credentials or credit card details.

Another common social engineering tactic is known as “pretexting”. This involves an attacker creating a fictitious scenario, such as a problem with an account, in order to trick the victim into providing sensitive information. Pretexting attacks can also take place through social media, with attackers posing as a friend or contact in order to gain trust and access to sensitive information.

Social engineering can also be used in physical attacks, where attackers gain access to restricted areas or information by posing as a legitimate employee or contractor. This can involve tactics such as impersonation, tailgating, or dumpster diving.

The threat of social engineering is significant, as it is often easier to exploit human vulnerabilities than it is to breach security systems. Cybersecurity professionals must be aware of the tactics used in social engineering attacks and work to educate employees and implement security protocols to protect against them.

One effective way to combat social engineering is through employee education and training. Employees must be trained to recognize and report suspicious emails, messages, and phone calls. They should also be aware of the importance of protecting sensitive information, such as login credentials and financial data.

Another key defense against social engineering is the implementation of multi-factor authentication (MFA) systems. MFA requires users to provide multiple forms of authentication, such as a password and a fingerprint or face scan, before gaining access to a system or account. This can greatly reduce the risk of unauthorized access to sensitive data.

In conclusion, social engineering is a significant threat to cybersecurity, precisely because it is often easier to exploit human vulnerabilities than to breach technical defenses. By taking a multi-faceted approach that combines employee education, MFA, and other security measures, organizations can greatly reduce their risk of falling victim to social engineering attacks.

The post How social engineering is related to Cybersecurity appeared first on Cybersecurity Insiders.