An investigation dating back almost ten years has seen the extradition this week to the United States of a man suspected of being the head of one of the world's most prolific Russian-speaking cybercriminal gangs. The UK's National Crime Agency (NCA) says it has been investigating a cybercriminal using the online handle "J P Morgan" since 2015, alongside parallel investigations run by the United States FBI and Secret Service. Read more in my article on the Tripwire State of Security blog.

On July 29, 2024, a cyber attack targeting Locata, a housing software provider managing multiple housing portals, triggered a widespread phishing scam affecting several boroughs in Greater Manchester. The incident exposed residents to risks of personal data theft through fraudulent email links.

The issue initially surfaced in one borough where residents received deceptive emails urging them to activate their tenancy options by providing personal information. This phishing attempt led to concerns about data security and prompted immediate action.

In response, the local council suspended certain housing services as a precautionary measure and enlisted IT experts to address the situation and mitigate further risks. The Information Commissioner’s Office has been informed about the breach, and affected individuals are being advised to follow the National Cyber Security Centre’s (NCSC) guidelines.

For those who clicked on the phishing link, the following steps are recommended:

Monitor Financial Accounts: Regularly check credit accounts for any unusual transactions and report suspicious activity to Action Fraud.

Update Passwords: Change passwords for email and banking accounts immediately. Use unique passwords for different online accounts to enhance security.

Utilize Credit Score Services: Consider using free credit score checking services to monitor for potential fraud.

Individuals who have applied to join a housing register are advised to contact the relevant authorities to check the status of their application and understand how the cyber attack may have affected it.

By taking these precautionary measures, residents can better protect their personal information and minimize the potential effects of the phishing scam.

The post Cyber Attack Sparks Phishing Scam Across Greater Manchester appeared first on Cybersecurity Insiders.

The 2024 U.S. elections are set for November 5th, and Microsoft, the American technology giant, has issued a warning about potential interference from state-funded actors. The company’s alert comes in response to increased online activity observed over recent weeks.

According to Microsoft, certain Iranian groups have established fake websites over the past few months to spread misinformation. These groups are attempting to gauge American voters’ sentiments by offering incentives such as freebies in exchange for personal information.

To enhance their reach, these actors are employing email phishing attacks that impersonate activists or federal entities. Their strategy involves baiting victims with deceptive URLs, which, when clicked, may prompt them to provide sensitive information in exchange for prizes like the latest Apple iPhone 15 or gift coupons for online retailers like Amazon.

In some cases, clicking these malicious links can lead to the installation of harmful software on the victim’s device. This malware can activate upon reboot, potentially locking the device or erasing its data.

Iran has denied any involvement in election interference, asserting that it has no interest in influencing foreign elections and seeks only to have international trade sanctions lifted.

As attention turns to the upcoming debate between Kamala Harris and Donald Trump, there are concerns that certain parties may attempt to disrupt or delay the event through digital means.

In response, White House cybersecurity teams and the Election Commission are actively working to prevent and address potential threats to ensure the integrity of the election process.

For context, the 2016 U.S. election was scrutinized for potential foreign interference, prompting then-President Barack Obama to order an intelligence community review in late 2016; a declassified assessment was published in January 2017, shortly before Donald Trump assumed the presidency.

The post Microsoft issues alert against email phishing attack to influence US 2024 Elections appeared first on Cybersecurity Insiders.

According to research by Palo Alto Networks’ threat intelligence arm Unit 42, a threat actor known as APT28 (also tracked as BlueDelta or Fancy Bear), believed to be linked to Russian intelligence, has been luring diplomats with a fake car-sales phishing link that leads to a repository hosting a Windows backdoor called HeadLace.

Security analysts note that this campaign by APT28 (also tracked as Sofacy or Pawn Storm) echoes a similar car-for-sale lure previously used by APT29, which was last observed deploying it in May 2023.

Currently, this hacking team is targeting only Windows machines; if the target is not running that operating system, they are diverted to a decoy HTML page that offers no services of any kind.

When a Windows operating system is detected, the site offers a ZIP archive for download, typically posing as pictures of an Audi Q7 or Land Rover SUV, but in fact delivering the malware.

The shift in tactics, where threat actors use car sales phishing links to target diplomats, represents a noteworthy evolution in the approach of cyber-criminal groups like APT28.

Here’s a detailed breakdown of the situation and its implications:

Overview of the New Tactic

Target Audience: The current focus on diplomats suggests a strategic shift to gather sensitive intelligence. Diplomats often handle confidential information, making them valuable targets for espionage.

Modus Operandi:

Phishing Link: The initial bait is a car sales link, which appears legitimate and might appeal to professionals interested in high-end vehicles.
        
Fake Repository: The link leads to a repository hosting a Windows backdoor malware known as HeadLace. This backdoor allows for persistent access and control over the infected system.

Detection and Payload Delivery: If the target’s operating system is identified as Windows, the malware is delivered as a ZIP archive disguised as an image of a luxury car (a defensive sketch for screening such archives follows this list).

Non-Windows systems are redirected to a fake HTML page that offers no real content or service.
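For defenders, the delivery mechanism described above suggests a simple screening opportunity: an archive that claims to contain photos of a car should never contain executables or scripts. The following Python sketch is illustrative only (it is not Unit 42 or vendor tooling, and the extension list is an assumption); it flags ZIP members that carry executable extensions or double extensions such as .jpg.exe.

```python
# Illustrative defensive sketch: flag ZIP archives that pose as car photos
# but actually ship executable or script payloads.
import sys
import zipfile

# Extensions that should never appear in an archive claiming to contain photos
# (assumed list for this sketch; extend to suit your environment).
EXECUTABLE_EXTENSIONS = {".exe", ".bat", ".cmd", ".lnk", ".js", ".vbs", ".scr", ".ps1", ".dll"}

def suspicious_members(archive_path: str) -> list[str]:
    """Return archive members that look executable or use double extensions."""
    flagged = []
    with zipfile.ZipFile(archive_path) as archive:
        for name in archive.namelist():
            filename = name.lower().rsplit("/", 1)[-1]
            parts = filename.split(".")          # "audi_q7.jpg.exe" -> ["audi_q7", "jpg", "exe"]
            if len(parts) >= 3:
                flagged.append(name)             # double extension
            elif "." + parts[-1] in EXECUTABLE_EXTENSIONS:
                flagged.append(name)             # outright executable or script
    return flagged

if __name__ == "__main__":
    for path in sys.argv[1:]:
        hits = suspicious_members(path)
        verdict = "SUSPICIOUS" if hits else "no obvious red flags"
        print(f"{path}: {verdict} {hits if hits else ''}")
```

In practice a heuristic like this would sit alongside sandbox detonation and signature checks rather than replace them.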

Evolution of Cyber Threats

Traditional Methods: Historically, phishing attacks have relied on more sensational bait, such as job offers or explicit content, to lure victims. These methods exploit curiosity or desire, often leading to the download of ransomware or other malware.

New Tactics:

The use of car sales phishing links is a more sophisticated and less sensational approach. It may leverage the professional nature of the targets and their potential interest in high-value items, making the bait appear more legitimate and less suspicious.

This evolution reflects a more nuanced understanding of the target audience’s behavior and interests.

Implications for Security

Targeted Attacks: The move to targeting diplomats with specific types of lures indicates a shift towards more targeted and strategic attacks. This could be part of a broader strategy to gain access to high-value information and conduct espionage.

Detection and Defense: Security measures need to evolve to address these sophisticated phishing tactics. Organizations should:
           
Educate Users: Train individuals to recognize phishing attempts, even when they come in the form of seemingly legitimate offers or interests.
            
Enhance Email Filtering: Implement advanced email filtering solutions that can detect and block phishing attempts (a minimal filtering heuristic is sketched after this list).
            
Use Endpoint Protection: Ensure that endpoint protection solutions are up-to-date and capable of detecting and neutralizing sophisticated malware.
    
Monitoring and Response:
        
Continuous monitoring for unusual network activity or signs of compromise is crucial.
        
Establish incident response plans to quickly address any detected breaches and minimize damage.
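To make the email-filtering recommendation above concrete, here is a minimal sketch of the kind of heuristics a gateway might apply. The brand-to-domain map and keyword list are illustrative assumptions, and a real filter would also weigh authentication results (SPF, DKIM, DMARC), sender reputation and attachment sandboxing.

```python
# A minimal phishing-triage heuristic, illustrative only.
import re
from email import message_from_string
from email.utils import parseaddr

URL_RE = re.compile(r"https?://([\w.-]+)", re.IGNORECASE)
URGENCY_WORDS = ("verify", "urgent", "suspended", "activate", "prize", "gift")
# Brands attackers commonly impersonate, mapped to their legitimate domains (assumed).
TRUSTED_DOMAINS = {"crowdstrike": "crowdstrike.com", "microsoft": "microsoft.com"}

def phishing_score(raw_message: str) -> int:
    """Return a rough suspicion score; higher means more suspicious."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    sender_domain = address.split("@")[-1].lower()
    body = "" if msg.is_multipart() else str(msg.get_payload())
    score = 0

    # 1. Display name claims a brand the sending domain does not belong to.
    for brand, legit in TRUSTED_DOMAINS.items():
        if brand in display_name.lower() and not sender_domain.endswith(legit):
            score += 1

    # 2. Body links point at domains unrelated to the sender.
    for link_domain in URL_RE.findall(body):
        if sender_domain and not link_domain.lower().endswith(sender_domain):
            score += 1
            break

    # 3. Urgency or reward language typical of phishing lures.
    if any(word in body.lower() for word in URGENCY_WORDS):
        score += 1
    return score

# Example policy: messages scoring 2 or more could be quarantined for review.
```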

Conclusion

The shift to car sales phishing links by APT28 highlights the increasing sophistication of cyber threat actors. By using seemingly benign or attractive offers to lure high-value targets, they are able to evade traditional security measures and achieve their espionage goals. Organizations and individuals must stay vigilant and adapt their security practices to address these evolving threats effectively.

The post Threat Actor offers Car Selling Phishing lure appeared first on Cybersecurity Insiders.

A couple of weeks ago, an IT outage hit Windows 10 and 11 machines shortly after CrowdStrike released a Falcon sensor software update. Far from being routine, the defective update crashed an estimated 8.5 million PCs and servers globally.

The disruption, initially caused by the faulty update, has since been exploited by hackers, who are capitalizing on the confusion to launch phishing attacks.

The Indian Computer Emergency Response Team (CERT-In) has issued an alert warning that users of CrowdStrike’s threat-monitoring software worldwide are being targeted in a phishing scam. Thousands in India, and potentially millions elsewhere, are at risk.

CERT-In’s advisory, released last Saturday, cautions Windows 10 and 11 users to be vigilant against phishing attempts. Hackers are posing as CrowdStrike support staff through phone calls, emails, or SMS messages. Their goal is to infiltrate networks, gather intelligence, or deploy malware, exacerbating the IT crisis that began with the Microsoft outage on July 19, 2024.

CrowdStrike is grappling with a loss of trust, customer migration, and other business challenges following the incident. If customers fall victim to these phishing attacks, it could further damage the company’s reputation and financial stability, potentially leading to significant losses and a severe impact on this year’s profits.

To protect against these threats, it’s crucial to verify the identity of anyone claiming to be IT support before taking any action. Additionally, raising awareness among employees about these phishing schemes is essential to mitigate potential damage.
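One lightweight way to operationalize "verify before acting" is to check every contact point an unsolicited "support" caller supplies (email address, callback URL, download link) against a pre-approved vendor list before following any instruction. The helper below is a hypothetical sketch for such a runbook; the approved domain list is an assumption and would need to be maintained by the organization.

```python
# Hypothetical help-desk runbook helper: only act on contact points whose
# domains are on the organization's pre-approved vendor list.
from urllib.parse import urlparse

APPROVED_VENDOR_DOMAINS = {"crowdstrike.com", "microsoft.com"}   # assumed list

def is_approved(contact_point: str) -> bool:
    """Accept an email address or URL and verify its domain is pre-approved."""
    if "@" in contact_point and "://" not in contact_point:
        domain = contact_point.rsplit("@", 1)[-1].lower()
    else:
        domain = (urlparse(contact_point).hostname or "").lower()
    return any(domain == d or domain.endswith("." + d) for d in APPROVED_VENDOR_DOMAINS)

if __name__ == "__main__":
    for contact in ("support@crowdstrike.com",
                    "https://crowdstrike-update.example.com/fix.zip"):
        verdict = "approved" if is_approved(contact) else "DO NOT TRUST; verify via a known channel"
        print(contact, "->", verdict)
```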

It’s worth noting that CERT-In’s warning coincides with media speculation that the hacking group USDoD leaked data from CrowdStrike’s servers earlier this year.

In response, John Cable, Microsoft’s VP of Program Management, has stressed the importance of end-to-end resilience. Microsoft plans to restrict kernel access for security software by focusing on alternatives like Azure Attestation Service and VBS Enclave—measures similar to those Apple implemented for macOS in 2020. Additionally, Microsoft has hired over 5,000 support engineers to help affected organizations recover from the outage, aiming to enhance its service levels by 100% by the first week of August 2024.

The post Microsoft CrowdStrike Software Update leading to Phishing Attacks appeared first on Cybersecurity Insiders.

Social media fuels conspiracies galore after Donald Trump is shot at a rally, cryptocurrency websites are hijacked after a screw-up at Squarespace, and our guest takes a close look at bottoms on Instagram. All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by Zoë Rose.

Generative AI has the potential to make social engineering attacks much more sophisticated and personalised. The technology can rapidly mine sites for information on a company, its people, their responsibilities and their specific habits to create multi-level campaigns. Through the automated gathering of information, it can acquire photos, videos and audio recordings which can then be used to craft emails (phishing), voice attacks (vishing) and deep fake videos and images for spear phishing attacks against individuals in positions of power, for instance.

We’re already seeing evidence of such attacks in action. Back in February the Hong Kong police revealed that a finance worker at Arup, a UK engineering firm, was duped into transferring $25m after attending a video call in which every attendee, including the CFO, was a deep fake. Similar attacks have been carried out over WhatsApp: LastPass was targeted in April by calls, texts and voicemails impersonating the company’s CEO, while a senior exec at advertising firm WPP was invited to a video call in which a clone of the CEO, crafted from YouTube footage and voice cloning technology, asked them to set up a new business.

Deep fakes go wide

These are no longer isolated incidents either, with Arup’s CIO, Rob Greig, warning in his statement that the number and sophistication of deepfake scams have been rising sharply in recent months. It’s a view substantiated by The State of Information Security 2024 report from ISMS.Online, which reveals that 32% of UK businesses experienced deep fake cyber security incidents over the last 12 months, with Business Email Compromise (BEC) the most common attack type. Indeed, reports suggest there was a 780% rise in deep fake attacks across Europe between 2022 and 2023.

GenAI is a gamechanger for crafting deep fakes because the AI refines its own output, delivering hyper-realistic content. Physical mannerisms, movements, intonations of voice and other subtleties are processed via an AI encoding algorithm or Generative Adversarial Network (GAN) to clone individuals. These GANs have significantly lowered the barrier to entry, so that creating deepfakes today requires far less skill and fewer resources, according to the Department of Homeland Security.
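To illustrate the adversarial training idea behind GANs (though emphatically not an actual deepfake pipeline), the toy sketch below pits a small PyTorch generator against a discriminator over a simple one-dimensional distribution. The architecture and hyperparameters are arbitrary assumptions, chosen only to show the shape of the training loop.

```python
# Toy GAN: a generator learns to mimic a 1-D Gaussian while a discriminator
# tries to tell real samples from generated ones. Real deepfake systems train
# far larger networks on face/voice data, but the loop has the same structure.
import torch
from torch import nn

REAL_MEAN, REAL_STD = 4.0, 1.25            # the "real" distribution to imitate
NOISE_DIM, BATCH, STEPS = 8, 128, 2000     # arbitrary toy settings

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(STEPS):
    # Train the discriminator on a batch of real and generated samples.
    real = torch.randn(BATCH, 1) * REAL_STD + REAL_MEAN
    fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (bce(discriminator(real), torch.ones(BATCH, 1)) +
              bce(discriminator(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        with torch.no_grad():
            sample = generator(torch.randn(1000, NOISE_DIM))
        print(f"step {step}: generated mean={sample.mean().item():.2f} "
              f"std={sample.std().item():.2f}")
```

The adversarial loop is what makes detection hard: any tell the discriminator learns to spot is precisely what the generator is trained to remove.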

Defending against such attacks can prove challenging because users are much more susceptible to phishing that emulates another person. There are giveaways, however: deep fake technology typically struggles to accurately capture the inside of the mouth, resulting in blurring, and there may be less movement, such as blinking, or more screen flashes than you’d expect. Generally speaking, it’s currently easiest to fake audio, followed by photos, while video is the most challenging.

Why we can’t fight fire with fire

Standalone and open source tools are now available that scan video, audio and text for possible manipulation and return a reliability score as a percentage, but success rates are mixed. It’s difficult to verify their accuracy because few are transparent about how they arrive at the score, the dataset used or when they were last updated. They also vary in approach, from models trained on GANs to classifiers that detect whether a piece of content was produced with a specific tool, although even content deemed authentically created in a given piece of software can subsequently be manipulated. Many video apps, messaging and collaborative platforms already apply AI filters of their own, making detection even more problematic.

Given the current technological vacuum, the main form of mitigation today is employee security awareness, with 47% saying they are placing greater emphasis on training in the ISMS.Online survey. However, the survey notes that even well-trained employees can struggle to identify deep fakes and this is being compounded by a lack of policy enforcement; the survey found 34% were not using adequate security on their BYOD devices and 30% were not securing sensitive information. Zero trust initiatives may well help here in limiting access to such sensitive information but few organisations have mature deployments. 

Deloitte makes a number of recommendations on how to mitigate the threat of deep fake attacks in its report The battle against digital manipulation. In addition to training and access controls, it advocates the implementation of a layer of verification in business processes and the clarification of verification protocols when it comes to sanctioning payments. This could be in the form of multiple layers to approve transactions, for example, from code words to token-based systems or live detection verification such as taking a “selfie” or video recording, which is already in use in the banking sector for user verification.
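As a rough illustration of what such layered verification could look like in code, the sketch below releases a transfer only after two distinct approvers sign off and the requester supplies a time-based one-time code delivered out-of-band. The names, shared secret and thresholds are assumptions for the example rather than a production design.

```python
# Minimal sketch of layered payment verification: two distinct approvals plus
# a time-based one-time code (shared out-of-band) before funds are released.
import hashlib
import hmac
import time

SHARED_SECRET = b"rotate-me-out-of-band"   # distributed via a separate, verified channel
CODE_WINDOW_SECONDS = 120

def one_time_code(secret: bytes) -> str:
    """Derive a 6-digit code from the secret and the current time window."""
    window = int(time.time() // CODE_WINDOW_SECONDS)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
    return str(int(digest, 16) % 1_000_000).zfill(6)

class PaymentRequest:
    def __init__(self, amount: float, beneficiary: str):
        self.amount = amount
        self.beneficiary = beneficiary
        self.approvers: set[str] = set()

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    def release(self, supplied_code: str) -> bool:
        """Release funds only with two distinct approvals and a valid current code."""
        enough_approvals = len(self.approvers) >= 2
        code_ok = hmac.compare_digest(supplied_code, one_time_code(SHARED_SECRET))
        return enough_approvals and code_ok

# Example: a deepfaked "CFO" on a video call cannot release funds alone.
request = PaymentRequest(25_000_000, "Unknown Offshore Ltd")
request.approve("finance.officer")
print(request.release(one_time_code(SHARED_SECRET)))   # False: only one approver
request.approve("head.of.treasury")
print(request.release("000000"))                       # False: wrong code
print(request.release(one_time_code(SHARED_SECRET)))   # True: both layers satisfied
```

The point is not the specific mechanism but that no single person, voice or video call can authorise the payment on its own.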

Policy and process

But overarching all of this, we need to see a comprehensive security policy covering people, process and technology from an AI perspective. This should address AI attack detection and response, for example, so that there are channels in place for reporting a suspected GenAI attack or flagging a payment that has already been made. There are already a number of standards that can help with the governance of AI, such as ISO/IEC 42001:2023 and the NIST AI Risk Management Framework.

Defending against deep fakes will therefore require a three-pronged approach that combines awareness training with security controls, including access and user verification, as well as frameworks to govern how GenAI is used within the business and how incidents are remediated and responded to. Ironically, it’s a problem that is likely to be addressed best by people and process rather than technology.

Looking to the future, some are suggesting that deep fakes could see senior execs decide to adopt a lower profile online in a bid to limit the capture of their likeness. Yet conversely there are some, such as the CEO of Zoom, who believe we will instead go to the opposite extreme and embrace the technology to create digital clones of ourselves that will then attend meetings on our behalf. These will learn from individual recordings to reproduce our mannerisms, be briefed by us, and report back with a call summary and actions. If that approach is widely adopted then detection technologies will prove to be something of a non-starter, making verification processes and an effective AI policy the primary methods of defence.


The post Cyborg Social Engineering: Defending against personalised attacks appeared first on Cybersecurity Insiders.