[By: Matt Lindley, COO and CISO at NINJIO]

Although the cyberthreat landscape is constantly shifting, several major cybercriminal tactics have stood the test of time. Phishing is one of them. Despite being among the best-known cyberthreats, the damage inflicted by phishing attacks keeps rising. This is because phishing exploits ingrained psychological vulnerabilities that are difficult for victims to overcome, and it has proven uniquely capable of adapting over time. 

 

Another reason for the devastating effectiveness of phishing is the fact that employees have different susceptibilities that can be leveraged by cybercriminals in many ways. This means there’s no one-size-fits-all solution to phishing – companies must be capable of offering personalized phish training that accounts for different personality traits, levels of knowledge, and learning styles. This is particularly important as cybercriminals increasingly use AI to launch highly targeted phishing attacks at scale. 

 

By personalizing cybersecurity awareness training, companies ensure that educational content is highly relevant to each individual, which improves engagement and information retention. Personalized phish training also generates invaluable data about security gaps, holds employees and security leaders accountable, and helps companies keep pace with new threats. These are just a few of the reasons why CISOs and their companies will prioritize personalized phish training in 2024. 

 

Meeting the individual needs of learners

 

Relevance is a core component of cybersecurity awareness training (CSAT) – training must cover real-world cyberattacks and provide actionable information to employees. At a time when human beings are involved in nearly three-quarters of successful breaches, it’s vital to capture and hold employees’ attention with hyper-relevant training content. There’s one especially high-resolution form of relevance that CISOs and other security leaders must focus on: individual employee traits. 

 

Employees should never be treated as if they’re interchangeable with one another. They have different skills, personalities, and learning styles, which means phish training must be designed to maximize the value of the educational experience on the basis of these variables. When phishing training is capable of identifying employees’ strengths and weaknesses, engaging them on a personal level, and tracking individual progress, the collective security of the entire organization will improve dramatically.

 

Employees have many psychological vulnerabilities – like fear, obedience, greed, opportunity, sociableness, urgency, and curiosity – and these vulnerabilities vary from person to person. If one employee has a propensity to click on malicious content sent by an authority figure (obedience and fear) while another is more inclined to fall for fake investment schemes (greed and opportunity), training content should be customized based on this information. Effective phish training should build adaptive behavioral profiles which account for different psychological risk factors, levels of knowledge and performance, and attack vectors. 
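As an illustration only (the employee ID, risk factors, scores, and module names below are all hypothetical), an adaptive behavioral profile of the kind described above could be sketched as a structure that maps psychological risk factors to tailored training modules:

```python
from dataclasses import dataclass, field

# Hypothetical mapping from psychological risk factors (drawn from the
# examples in the text) to tailored training modules.
RISK_MODULES = {
    "obedience": "Spotting authority-impersonation emails",
    "fear": "Handling urgent 'account suspended' lures",
    "greed": "Recognizing fake investment schemes",
    "curiosity": "Avoiding bait attachments and links",
}

@dataclass
class BehavioralProfile:
    employee_id: str
    # Scores from 0.0 (resilient) to 1.0 (highly susceptible) per factor,
    # updated as simulated-phishing results come in.
    risk_scores: dict = field(default_factory=dict)

    def recommended_modules(self, threshold=0.5):
        """Return training modules for factors at or above the threshold."""
        return [RISK_MODULES[f] for f, s in sorted(self.risk_scores.items())
                if s >= threshold and f in RISK_MODULES]

profile = BehavioralProfile("emp-042",
                            {"obedience": 0.8, "greed": 0.2, "curiosity": 0.6})
print(profile.recommended_modules())
# ['Avoiding bait attachments and links', 'Spotting authority-impersonation emails']
```

In a real program, the scores would be recalculated after each simulated-phishing campaign, so the recommended content shifts as an employee's susceptibilities change.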

 

When companies create training programs around individual behavioral profiles, they won’t just address specific vulnerabilities – they will also keep employees engaged and improve retention of the most critical concepts. By personalizing phish training, security leaders will provide the information that is most relevant to individual employees while preserving the flexibility to change course as circumstances demand. 

 

Personalized training and the evolution of phishing

 

The average cost of a phishing breach hit $4.76 million in 2023, and phishing is the most common initial attack vector (along with stolen or compromised credentials, which are often obtained through phishing). This means phishing is by far the tactic of choice for cybercriminals when they want to gain access to secure accounts and networks – a long-term trend that’s likely to pick up momentum. 

 

One reason phishing attacks will become increasingly common and destructive is the growing role of AI in these attacks. Generative AI tools like large language models (LLMs) and deepfakes give cybercriminals the ability to launch highly sophisticated and targeted phishing attacks on a vast scale. The key to guarding against these attacks is training employees to identify malicious content that is becoming far more difficult to distinguish from legitimate content. This process begins with personalized phish training that teaches employees how cybercriminals can hack their minds and use their psychological weaknesses against them. 

 

Unlike traditional phishing schemes which rely on a high volume of messages to hook a handful of victims, AI allows hackers to collect large quantities of data on potential targets and create focused messages that exploit their unique psychological weaknesses. AI also drastically improves the quality of the messages themselves, fixing the spelling errors, strange syntax, and other mistakes that were once red flags (GPT-4 supports 26 languages, which gives many more hackers the ability to launch phishing attacks internationally). 

 

Phishing has been among the most significant cyberthreats for years, but companies still aren’t able to stop employees from clicking on dangerous content. With the advent of AI-enabled phishing, this problem is about to get a whole lot worse – which is yet another reason why personalized phish training is a must-have. 

 

Simulated phishing generates crucial data and engagement

 

According to Gartner, global end-user spending on security and risk management is projected to reach $215 billion this year – up 14.3 percent from 2023. This means CISOs must be capable of making a strong case to their boards for the cost-effectiveness of any cybersecurity initiative, and personalized phish training meets this standard in several ways. 

 

An essential element of personalized phish training is the consistent evaluation of employees to pinpoint their susceptibilities, reinforce what they’re learning, and assess the organization’s overall security posture. Simulated phishing confronts employees with tests that mirror the latest social engineering tactics, which gives companies an accurate idea of how they would behave in real-world scenarios. This allows CISOs and other security leaders to identify the most at-risk employees, as well as the exact psychological and behavioral traits that make them vulnerable to attack. The company can then use this data to measure performance over time, engage with employees about their progress or areas for improvement, and close security gaps. 
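The measurement loop described above can be sketched in a few lines; a minimal example (employee names and campaign results are invented for illustration) aggregates simulated-phishing results into per-employee click rates and flags the most at-risk people:

```python
from collections import defaultdict

# Hypothetical simulation results: (employee, campaign, clicked_link)
results = [
    ("alice", "invoice-lure", False),
    ("alice", "ceo-fraud", True),
    ("bob",   "invoice-lure", True),
    ("bob",   "ceo-fraud", True),
    ("carol", "invoice-lure", False),
    ("carol", "ceo-fraud", False),
]

def click_rates(results):
    """Fraction of simulated phish each employee clicked."""
    clicks, totals = defaultdict(int), defaultdict(int)
    for employee, _campaign, clicked in results:
        totals[employee] += 1
        clicks[employee] += clicked  # True counts as 1, False as 0
    return {e: clicks[e] / totals[e] for e in totals}

def at_risk(results, threshold=0.5):
    """Employees whose click rate meets or exceeds the threshold."""
    return sorted(e for e, r in click_rates(results).items() if r >= threshold)

print(click_rates(results))  # {'alice': 0.5, 'bob': 1.0, 'carol': 0.0}
print(at_risk(results))      # ['alice', 'bob']
```

Tracking these rates across campaigns is what lets security leaders measure performance over time and target follow-up training where it is needed.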

 

There are three central pillars of successful awareness training: relevance, engagement, and accountability. Because personalized phish content is tailored to each employee’s behavioral profile and learning style, it’s far more relevant than any one-size-fits-all solution and it provides much more actionable data. Individual attention will also keep employees engaged – especially at a time when large-scale skills disruption is imminent and employees are demanding professional development opportunities. Cybersecurity awareness is one of the most important skills employees can cultivate, which is why CISOs should present personalized phish training as a chance to prepare for the workplace and economy of the future. 

 

Simulated phishing helps CISOs demonstrate the value of personalized training programs in a rigorous and consistent way. By aggregating individual employee performance, security leaders will have a clear view of the company’s overall level of security. This allows them to proactively improve their cybersecurity posture by addressing vulnerabilities as they arise, implementing constructive and engaging educational interventions, and empowering each employee to defend the company from phishing attacks. 

The post How personalized phish training can thwart evolving cyberattacks appeared first on Cybersecurity Insiders.

Byline: Michael Gorelik, CTO of Morphisec

 

Microsoft’s decision to end support for Windows Server 2012 and 2012 R2 should surprise no one. But the end of support for these decade-old operating systems is still catching many off guard.

 

Early last year, support ended for Windows 7, Windows 8, 8.1, and Windows Server 2008 R2. At the time, Microsoft’s recommendation for anyone still using these OSs was to upgrade to the most recent versions of its desktop and server operating systems.

 

Unfortunately, many organizations are likely unable to implement this advice, since legacy systems and applications often cannot be readily migrated due to business implications. In fact, market share data shows that around 5% of all organizations continue to run server operating systems long after their official end-of-support date.

 

Today, hundreds of thousands of servers continue to run with unsupported operating systems, hosting outdated and unpatched applications. In many cases, these systems support mission and business-critical processes which cannot be interrupted.

 

Microsoft’s core recommendation to any organization relying on these systems is identical to its advice for the previous OSs: migrate (to Azure) or upgrade to the latest operating system.

 

For anyone managing an inherited legacy system tied into dozens of business-critical dependencies and custom applications, this advice will probably not be that useful.

 

This challenge is compounded by the fact that the expertise required to properly and securely configure legacy operating systems is fading as professionals are trained on and become familiar with modern versions; Windows Server 2012 R2 was released over a decade ago.

 

Moving some of a legacy server’s functions into Azure, per Microsoft’s recommendation, is also a significant challenge. Aside from the security configuration risks and operational change that comes with cloud migration, the common causes of failure excluded from the Azure Service Level agreement may not give enough uptime assurance to make cloud migration feasible without increased costs. 

 

The Security Problem This Creates Is Huge

CISA ranks relying on “unsupported (or end-of-life) software” as the number one bad security practice an organization can engage in. This is not without reason.

 

Every legacy server inside your organization results in a stockpile of exploitable vulnerabilities, often right at the core of your business processes.

 

Windows Server 2012 / 2012 R2 had more than 400 exploitable vulnerabilities as of March 2024, and more are likely to be discovered in the future. Research from the RAND Corporation shows that the average zero-day has a lifespan of around seven years and often much longer, putting organizations at risk of ever-increasing volumes of vulnerabilities with the potential for long-term exploitability.

 

Over the latter part of 2023, Morphisec saw over 40 distinct attack patterns targeting legacy operating systems. A growing number of these involve threat actors trying to deploy Cobalt Strike beacons as an attack stage; these beacons are commonly used as part of ransomware deployment.

 

In this example, Cobalt Strike allows threat actors to stealthily establish persistence by maliciously exploiting run-time memory on endpoints such as servers. Legacy servers are more easily penetrated due to OS vulnerabilities and are then leveraged for lateral movement and other attack phases. Because servers running legacy Windows operating systems, like 2012 R2, keep presenting new memory vulnerabilities and lack security controls against memory compromise, they are a perfect target for this kind of attack.

What To Do Instead

 

If you are still relying on Windows Server 2012 / 2012 R2 or other Legacy operating systems and cannot upgrade or migrate, you have two options.

 

Either pay Microsoft for an Extended Security Updates (ESU) package (which lasts up to three years) or find a way to install a security solution that works with your Windows Server 2012 legacy servers.

 

Neither of these seems like a good option on the face of things.

 

Opt for extended support, and you will eventually find yourself navigating a repeating renewal cycle. This will also not solve the inherent challenge of legacy operating systems running outdated and vulnerable applications.

 

Trying to make a modern EDR or EPP work with your legacy servers is also challenging. Firstly, these solutions need system resources (CPU/RAM) for optimal operation that are often unavailable on older systems. Modern endpoint protection solutions also rely on architectural visibility components that are either unavailable or only partially present on legacy systems, including the Antimalware Scan Interface (AMSI) and Event Tracing for Windows (ETW).

 

However, there is hope. Applying best practices is fundamental, including these essential steps that can help secure legacy systems:

 

  1. Apply security patches where possible: Legacy systems are often vulnerable to cyber threats due to outdated software and business applications. IT professionals should strive to apply security patches whenever they are available. If the manufacturer no longer provides updates, they may need to implement compensatory controls, such as patchless protection, to mitigate potential vulnerabilities.

  2. Implement strong access controls: Limiting access to legacy systems can significantly reduce the risk of unauthorized access and data breaches. IT professionals should enforce strict access controls, including the use of strong authentication methods such as multi-factor authentication (MFA) and role-based access control (RBAC). Additionally, regular monitoring and auditing of user access can help identify any suspicious activities and potential security breaches.

  3. Check network segmentation and firewalls: Legacy systems should be isolated from other parts of the network through network segmentation. By implementing firewalls and other network security measures, IT professionals can control and monitor the traffic to and from these systems, reducing the risk of unauthorized access and limiting the potential impact of a security breach on the overall network.

  4. Apply compensatory controls and preventative technologies: Technologies like Automated Moving Target Defense (AMTD) can prevent unauthorized code from executing on legacy servers without relying on missing architectural visibility components, and with negligible performance impact. This can serve as a compensatory control and patchless protection against vulnerabilities. Organizations can extend the secure lifespan of Windows legacy servers by using solutions like AMTD that protect legacy Windows operating systems deterministically.
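A prerequisite for all of these steps is knowing which servers are past end of support in the first place. As a minimal sketch (the server inventory below is hypothetical; the dates reflect Microsoft's published end of extended support for each product), an inventory check might look like this:

```python
from datetime import date

# End of extended support dates published by Microsoft.
END_OF_SUPPORT = {
    "Windows Server 2008 R2": date(2020, 1, 14),
    "Windows Server 2012":    date(2023, 10, 10),
    "Windows Server 2012 R2": date(2023, 10, 10),
    "Windows Server 2016":    date(2027, 1, 12),
}

def unsupported(inventory, today=None):
    """Return hostnames running an OS past its end-of-support date.

    Unknown OS names are conservatively treated as supported here;
    a real audit would flag them for manual review instead.
    """
    today = today or date.today()
    return [host for host, os_name in inventory
            if END_OF_SUPPORT.get(os_name, date.max) < today]

# Hypothetical server inventory: (hostname, operating system)
inventory = [
    ("erp-db01",   "Windows Server 2012 R2"),
    ("file-srv",   "Windows Server 2016"),
    ("legacy-app", "Windows Server 2008 R2"),
]
print(unsupported(inventory, today=date(2024, 3, 1)))
# ['erp-db01', 'legacy-app']
```

The flagged hosts are the ones that need the compensating controls above until they can be upgraded or retired.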

Securing legacy systems is an ongoing challenge that can seem futile. However, ensuring up-to-date best practices and proactive defenses are in place can mitigate the impact of legacy, unsupported systems.

 

About Michael Gorelik

 

Morphisec CTO Michael Gorelik leads the malware research operation and sets technology strategy. He has extensive experience in the software industry and leading diverse cybersecurity software development projects. Prior to Morphisec, Michael was VP of R&D at MotionLogic GmbH, and previously served in senior leadership positions at Deutsche Telekom Labs. Michael has extensive experience as a red teamer, reverse engineer, and contributor to the MITRE CVE database. He has worked extensively with the FBI and US Department of Homeland Security on countering global cybercrime. Michael is a noted speaker, having presented at multiple industry conferences, such as SANS, BSides, and RSA. Michael holds Bsc and Msc degrees from the Computer Science department at Ben-Gurion University, focusing on synchronization in different OS architectures. He also jointly holds seven patents in the IT space.

The post Windows Server 2012 / 2012 R2 End of Life – Here’s How to Secure your Legacy Servers appeared first on Cybersecurity Insiders.

By Jamal Elmellas, Chief Operating Officer, Focus-on-Security

Generative AI is expected to impact 60% of jobs in advanced economies like the UK, according to the International Monetary Fund (IMF): half of those jobs will gain from enhanced productivity, while for the other half AI will take over tasks previously performed by humans, lowering labour demand, wages and hiring. It’s also proving to be a catalyst for transformation, with the 2023 Global Trends in AI report finding that 69% of organisations have at least one AI project underway. However, it’s also hugely disruptive and likely to cause changes to the business on a human level too.

Just about everybody with a background in IT has experimented with one of the large language models (LLMs) such as ChatGPT, Google PaLM and Gemini, or Meta’s LLaMA. However, only 28% use one in a work capacity today, finds the Generative AI Snapshot series from Salesforce, although a further 32% plan to do so in the near future. There’s a great deal of excitement over the capabilities of the technology when it comes to utilising data to augment communication in IT, sales and marketing roles, but how might the technology impact cybersecurity?

Where will AI help?

According to the AI Cyber 2024: Is the Cybersecurity Profession Ready? study by ISC2, AI is most likely to take over the analysis of user behaviour patterns (81%), the automation of repetitive tasks (75%), the monitoring of network traffic and malware (71%), the prediction of areas of weakness (62%) and the detection and blocking of threats (62%). It’s therefore going to be applied to the most time-consuming and mundane elements, and while it may annex these particular tasks, this promises to free up skilled personnel to use their human intuition on more demanding and rewarding activities. 

In fact, while 56% believe AI will make parts of their jobs obsolete, this is not seen as a negative, with 82% believing it will improve job efficiency. This, in turn, could help to alleviate the workforce gap, which the same industry body estimates currently stands at almost 4 million. That deficit in the workforce is placing cybersecurity professionals under tremendous strain, decreasing their ability to perform critical, careful risk assessment and remain agile. The ISC2 Cybersecurity Workforce Study 2023 found that, due to skills shortages, 50% of respondents complained of not having enough time to conduct proper risk assessments and carry out risk management, 45% said this led to oversights in process and procedure, 38% to misconfigured systems, and 38% to tardy patching of critical systems.

However, while GenAI has the power to alleviate these stresses and strains, the Snapshot found 73% of the general workforce believe GenAI will also introduce new security risks in-house, from threats to data integrity, to a lack of employee skills in this area, to the inability of GenAI to integrate with the existing tech stack, and the lack of AI data strategies. This demonstrates there is a clear need for better governance and guardrails, with the ISC2 survey also unearthing concerns over the lack of regulation, its ethical application, privacy and the risk of data poisoning. 

Only just over a quarter (27%) of those in the ISC2 AI survey said their organisation had a formal policy in place to govern AI use and only 15% a policy to cover securing and deploying the technology. This potentially represents an interesting opportunity for the sector as security teams could take the lead in deployments. We’ve already seen a host of regulatory guidelines issued that could help assist in this respect, such as ISO/IEC 22989:2022, ISO/IEC 23053:2022, ISO/IEC 23984:2023, and ISO/IEC 42001:2023 as well as NIST’s AI Risk Management Framework. 

It’s also worth mentioning here that AI is likely to see an escalation in the sophistication, veracity and volume of attacks. Over half (54%) of those questioned for the ISC2 AI report said they’d seen an increase in cyber attacks over the past six months and 13% said they were able to detect these were AI-generated, indicating that worst fears are being realised. Given the continual arms race between attacker and defender, this lends some urgency to the proceedings. 

With regards to timescales, the ISC2 AI study found 88% believe AI will significantly impact their job in the next two years. Yet, as of today, 41% said they have minimal or no expertise in securing AI and machine learning technology which could spell a steep learning curve. 

To help move adoption forward, security teams therefore need to conduct a skills gap analysis and focus on upskilling in the area of AI and machine learning technologies. Once equipped with this understanding, cybersecurity professionals can provide the security piece in working parties charged with implementing the technologies, helping to guard the organisation against threats and update acceptable use policies on ethical use.

As to whether AI will augment or annex job roles, the IMF claims that it’s only in the most extreme cases that AI is expected to see jobs disappear. What is certain is that it will see the emergence of new ways of working, threats and opportunities, making it imperative that we get to grips with the technology today. Ignoring it, which 17% admitted to doing in the ISC2 AI report, banning it (12%) or not knowing what the organisation is doing (10%) is not an option. AI is here to stay, making this an adapt or die moment for the business.

 

The post Will AI augment or annex cybersecurity jobs? appeared first on Cybersecurity Insiders.

Organizations in different industries rely on cloud backups to secure critical business data. In recent years, backup to the cloud has evolved into an easy, flexible and effective technology. The two most common cloud backup strategies are multi-cloud backup and hybrid cloud backup. 

However, backups on their own are not enough for data safety. In this post, we describe the two strategies and then provide recommendations that help you build a secure cloud backup system. 

What is Multi-Cloud Backup? 

The multi-cloud backup strategy refers to sending backup data to multiple public clouds of various cloud vendors. This allows organizations to ensure data redundancy and availability while also avoiding a single point of failure in case the original data is lost. Additionally, you can benefit from the cost flexibility of multi-cloud backups, as most vendors offer a pay-as-you-go model.

Most cloud vendors offer proprietary features that enable you to conveniently copy and store data in the cloud. However, organizations need swift recovery in addition to backup functionality, which not all cloud vendors can deliver. Organizing an effective and convenient multi-cloud backup system requires expertise, time and effort from your IT team. 

Lastly, storing all your backups in the cloud can result in slower recovery compared to local backups. Also, cloud backups remain available only when your network connection is up and running.

What is Hybrid Cloud Backup?

A hybrid cloud backup approach utilizes both on-premise and cloud storage as backup repositories. Compared to the multi-cloud strategy, hybrid cloud storage provides additional flexibility and recoverability in exchange for increased complexity and cost. 

To implement hybrid cloud backup, you first need to organize on-premise storage with appropriate high-performance hardware, cooling and maintenance. Secondly, you have to pay for the cloud storage volume required to fit your data backups. And finally, you need to pick a reliable and functional hybrid cloud backup solution to enable and maintain data protection workflows. 

Choosing a suitable data protection solution can be challenging as there are multiple nuances and specifics to consider. You might want to learn more about setting up hybrid cloud backup before you start implementing your system.

Best Practices for Secure Cloud Backup

Creating and maintaining backups does not make your organization’s data secure by default. This additional data copy can be a target for ransomware or malware. Moreover, cybercriminals prioritize data backups when planning their attacks.

Backups require thorough protection to ensure data recoverability. Below you can find several recommendations to enhance backup security. The security best practices mentioned below can help you protect backups with both multi-cloud and hybrid cloud strategies.  

Know your data

First, know and prioritize the data you want to back up. When you are aware of the volume and type of data you need to protect, you can build up your workflows according to recovery point and recovery time objectives. Additionally, consider prioritizing your data so you back up critical assets first. 
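A sketch of this prioritization step (the asset names, criticality tiers, and RPO values are invented for illustration) could order backup jobs so the most critical, tightest-RPO assets run first:

```python
# Hypothetical asset catalog: name, criticality tier (1 = highest), RPO in hours.
assets = [
    {"name": "file-archive",  "criticality": 3, "rpo_hours": 24},
    {"name": "erp-database",  "criticality": 1, "rpo_hours": 1},
    {"name": "intranet-wiki", "criticality": 2, "rpo_hours": 12},
]

def backup_order(assets):
    """Schedule the most critical, tightest-RPO assets first."""
    ranked = sorted(assets, key=lambda a: (a["criticality"], a["rpo_hours"]))
    return [a["name"] for a in ranked]

print(backup_order(assets))
# ['erp-database', 'intranet-wiki', 'file-archive']
```

Ordering by a (criticality, RPO) tuple keeps the policy explicit: if two assets share a tier, the one that tolerates less data loss is backed up first.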

Define your cloud backup strategy

Should you implement a multi-cloud or a hybrid cloud backup strategy? Your choice defines the system’s capabilities, costs and management specifics. In addition, the chosen strategy sets the qualification requirements for the IT team. 

A multi-cloud strategy can be more cost-efficient to start with. On the other hand, hybrid cloud data protection adds more reliability for a higher price and increased infrastructure complexity. Consider your data volumes and security priorities along with hardware and network performance to make the choice that suits your organization the most.

Pick suitable cloud provider(s)

The data backup and recovery capabilities of the available cloud providers can vary. Some can have the latest integration features that simplify building multi-cloud systems. Others can provide advanced data protection and recovery functions that add resilience and improve RTOs. The solution is to check the offers available on the market and pick the cloud provider (or providers) suitable for your strategy, infrastructure, expectations and budget.  

Encrypt backup data

In 2024, storing unencrypted data means exposing your digital assets to malicious actors. You can organize an encrypted cloud backup storage in Amazon, Google Cloud, or Microsoft Azure, among other providers. 

However, you might also want to encrypt backup data “in flight” (during transfer). Additionally, your local backups should also be encrypted in case a hybrid cloud backup is your choice. Consider using a specialized data protection solution to enable backup encryption in all your data transfers and repositories.  

Implement anti-ransomware capabilities

Ransomware is an ongoing and evolving threat worldwide, and backups are priority targets for hackers. The most advanced cloud providers, such as Amazon or Microsoft, enable you to set immutability periods for repositories. Immutability protects the data in a repository from alteration or deletion, thus preventing ransomware encryption. 

Modern data backup and recovery solutions such as NAKIVO Backup & Replication can enable you to set immutability in on-premise and cloud repositories. Even in the worst-case scenario, when ransomware successfully infiltrates backup repositories, immutable backups remain usable for recovery. Integrating immutability along with threat monitoring and regular antivirus solutions into backup protection workflows can help you ensure regulatory compliance and avoid paying ransoms.

Optimize resource consumption

Modern data protection solutions can provide shorter backup windows and cut cloud storage costs with deduplication and compression features. Additionally, compressed and deduplicated backups can offload your networks when running backup workflows. This is especially beneficial when transferring large volumes of data to public cloud repositories.   

Implement thorough backup testing

The worst time to discover that your backups are unrecoverable is when the original data is already lost or unavailable. Consider implementing regular backup testing as a part of your data protection strategy. You can conduct a test upon completion of every backup workflow and perform a global recovery review at specific times.

Modern hybrid and multi-cloud backup solutions enable recovery testing on demand and by schedule. Additionally, you can run test workflows without impacting production environments. 
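A basic recoverability check along these lines can be sketched with standard-library tools only: record a checksum when the backup is made, then compare it against a test restore (the file names here are placeholders, and a real solution would also verify application-level consistency):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256sum(path):
    """Hash the file in chunks so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(recorded_checksum, restored_path):
    """Compare a restored file against the checksum recorded at backup time."""
    return sha256sum(restored_path) == recorded_checksum

# Demonstration with temporary files standing in for a backup and its restore.
with tempfile.TemporaryDirectory() as tmp:
    backup = Path(tmp) / "backup.dat"
    backup.write_bytes(b"critical business data")
    manifest = sha256sum(backup)               # recorded when the backup runs
    restored = Path(tmp) / "restored.dat"
    restored.write_bytes(backup.read_bytes())  # simulate a test restore
    print(verify_restore(manifest, restored))  # True
```

A mismatch signals corruption in storage or transfer, which is exactly the failure you want to discover during a scheduled test rather than during an actual disaster.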

Restrict access to backups

Role-based access control (RBAC) and multi-factor authentication are efficient security practices that significantly improve the resilience of accounts and infrastructures. Consider using these common approaches to enhance the protection of your backup repositories and restrict access to data protection workflows.
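At its core, RBAC for backup workflows is a deny-by-default permission lookup; a minimal sketch (the role and action names are hypothetical) might look like this:

```python
# Hypothetical role-to-permission mapping for backup operations.
ROLE_PERMISSIONS = {
    "backup-operator": {"run_backup", "run_test_restore"},
    "backup-admin":    {"run_backup", "run_test_restore",
                        "delete_backup", "change_retention"},
    "auditor":         {"view_reports"},
}

def is_allowed(role, action):
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("backup-operator", "delete_backup"))  # False
print(is_allowed("backup-admin", "delete_backup"))     # True
```

Keeping destructive actions like `delete_backup` and `change_retention` in a separate, rarely-granted role limits the damage a single compromised account can do to the backup repository.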

Prepare a DR plan

A DR (disaster recovery) plan includes IT-related steps that your staff undertakes when a global incident happens. You might want to organize a disaster recovery team and share responsibilities with qualified workers to increase IT recovery efficiency. 

Last but not least, modern infrastructures with hybrid or multi-cloud backups require custom data recovery sequences, which may be complicated to compose. However, you can meet shorter recovery time objectives after planning and automating such workflows beforehand for different disaster cases. 

Conclusion

Multi-cloud and hybrid-cloud backup strategies enable higher data protection reliability and flexibility compared to other backup methods. Although the strategies are different from each other, you can use similar security practices to enhance data security and recoverability. Consider using a third-party backup solution with data encryption, ransomware protection, resource optimization, backup testing, access restriction and disaster recovery planning capabilities to implement multi-level backup and recovery workflows.

The post Multi-Cloud and Hybrid Cloud Backup: Best Practices to Reliably Secure Your Data appeared first on Cybersecurity Insiders.

Overview

Traditionally, Virtual Private Networks (VPNs) have facilitated basic remote access. The rapid growth in the distributed workforce and increasing adoption of cloud technologies are challenging the basic connectivity that VPN offers. As the threat landscape rapidly evolves, VPNs cannot provide the secure, segmented access organizations need. Instead, VPNs often provide full access to the corporate network, increasing the chances of cyberattacks once bad actors gain access through login credentials. In addition, VPNs connect multiple sites, allow access to third parties, support unmanaged devices, and enable IoT device connectivity. However, these varied use cases stretch VPNs beyond their initial purpose and design, often creating security gaps in the face of an increasingly complex and changing threat landscape.

This comprehensive report, based on a survey of 382 IT professionals and cybersecurity experts, explores these multifaceted security and user experience challenges. The 2023 VPN Risk Report reveals the complexity of today’s VPN management, user experience issues, vulnerabilities to diverse cyberattacks, and their potential to impair organizations’ broader security posture. The report also outlines more robust security models, with zero trust emerging as a viable option to secure and accelerate digital transformation.

Key findings from the survey include:

VPN Vulnerabilities and Cybersecurity Impacts: Despite their critical role, VPNs pose security risks, with 88% of organizations expressing a slight to extreme concern that VPNs may jeopardize their environment’s security. Furthermore, 45% of organizations confirmed experiencing at least one attack that exploited VPN vulnerabilities in the last 12 months – one in three became the victim of VPN-related ransomware attacks. The increasing threat of cyberattackers exploiting VPN vulnerabilities underscores the urgent need to address the security of current VPN architectures.

VPN Use and User Experience: VPNs have a broad spectrum of use, with 84% of respondents identifying remote employee access as their primary application. However, respondents reported a less than optimal experience, with a majority (72%) dissatisfied with their VPN experience, highlighting the need for more user-friendly and reliable remote access solutions in the digital workplace.

Primary Attack Vectors: One in two organizations faced VPN-related attacks in the last year. VPN attack vectors need special attention because of the critical role VPNs play in business operations and communication. Additionally, third-party users such as contractors and vendors can serve as potential backdoors for malicious access to networks, further complicating the job of network security teams. In the survey, 9 of 10 respondents expressed concern about third parties serving as potential backdoors into their networks through VPN access.

Embracing Zero Trust: The transition to a zero trust model is high on the agenda for a majority of organizations. About 9 of 10 respondents identified adopting zero trust as a focus area, and more than a quarter (27%) are already implementing it. 37% of respondents are planning to replace their VPN with Zero Trust Network Access (ZTNA) solutions.

We are grateful to Zscaler for their contribution to this VPN risk survey. Their expertise in zero trust and secure access solutions has significantly enriched our findings.

We are confident that the insights from this report will be an essential resource for IT and cybersecurity professionals on your journey toward zero trust security.

End Users Struggle with VPNs

Among the VPN problems encountered, slow connection speed when accessing applications via VPN is the most prevalent, reported by 25% of respondents. Other notable issues include connection drops while using the VPN (21%) and an inconsistent user experience across different devices/platforms (16%).

Given these findings, it is evident that improving remote access user experience should be a priority for many organizations. A smooth and reliable access experience not only helps productivity but can also enhance security by encouraging compliance with security policies.

Improvements can range from optimizing network performance to reduce slow connection speeds and connection drops, to simplifying the VPN authentication process and ensuring a consistent user experience across different platforms. It is also critical to have robust support mechanisms in place to help users troubleshoot and resolve any difficulties they may encounter while using the VPN.

Primary VPN Use Case: Remote Access for Employees

VPNs have a long history in connecting remote employees to the organization’s network and facilitating a variety of use cases, such as remote work and third-party connections.

The primary purpose of VPNs in most organizations (84%) is to enable access for remote employees. This is a reflection of the remote work trend that has significantly increased in recent years. It’s interesting, however, that only 11% use VPNs to manage access for unmanaged devices, pointing to an area of vulnerability that organizations may not be fully addressing.

High VPN Dependency

A significant number of end users (70%) use a VPN daily or almost daily, showing a high dependency on VPNs for routine business operations. Combined with those using VPNs 4-5 times a week, 77% of all respondents use a VPN for their work nearly every day. Interestingly, none of the respondents reported using a VPN less often than once a month, confirming the widespread adoption of the technology.

Given this high frequency of usage, it’s vital to ensure the consistent availability and robust security of remote access/VPN services.

User Experience Issues

The performance and user experience of VPN services significantly impact organizations’ productivity and overall operational efficiency. A VPN that is slow or frequently disconnects can disrupt business operations and frustrate users. Looking at the survey results, the most significant issue encountered with VPN services is poor user experience, with 32% of respondents citing slow connections and frequent disconnections.

Given these results, organizations should prioritize enhancing the user experience of their remote access services, which could involve increasing server capacity or choosing secure access solutions known for their speed and stability. Interestingly, organizations ranked security as a relatively low concern despite several cyberattacks on VPNs in recent years.

User Dissatisfaction with VPN

Assessing user satisfaction with VPN experience is critical, as dissatisfaction not only impacts productivity but can lead to non-compliance with security policies, which in turn could introduce security vulnerabilities.

A significant majority of users (72%) are dissatisfied with their VPN experience, highlighting the need for more user-friendly and reliable remote access solutions in the digital workplace.

VPN Management Challenges

The survey reveals that the biggest headache in managing VPN infrastructure, as indicated by 22% of the respondents, is balancing VPN performance with user experience.

Troubleshooting VPN connectivity and stability issues is also a significant concern, impacting nearly 20% of respondents, closely followed by the effort required to keep up with frequent software patches and updates at 18%. Interestingly, only 9% of respondents cite increasing VPN infrastructure costs as their biggest headache.

VPN Security Concerns

The level of security a remote access solution provides is vital in protecting organizations’ sensitive data and systems. Faced with increasingly advanced cyberthreats, VPNs can either fortify or compromise an organization’s security posture, depending on their design and how well they are managed.

Reviewing the survey results, the vast majority of respondents (88%) are concerned that their VPN may jeopardize their environment’s security. Particularly noteworthy is that a combined 22% of respondents report being “very” or “extremely” concerned, indicating a significant level of anxiety around VPNs as potential security weak points.

Third-Party Security Concerns

Granting third parties access through a VPN is a necessary business practice, but it also raises serious security concerns. Given that third-party entities may not adhere to the same stringent cybersecurity standards, they can potentially provide a backdoor for cyberattackers to breach an organization’s network.

In the survey, a vast majority of the respondents (90%) expressed concern about third parties serving as potential backdoors into their networks through VPN access. A combined total of 35% were “very” or “extremely” concerned, suggesting that third-party VPN access is a significant source of anxiety.

Organizations should enforce rigorous security measures when granting VPN access to third parties. This could involve regularly reviewing and updating access permissions, enforcing strong password policies, and monitoring network activity for anomalies. In addition, organizations should ensure that third parties comply with their cybersecurity policies and consider using advanced technologies such as zero trust architectures, which only grant access on a need-to-know basis.
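
To make the first of these measures concrete, here is a minimal, hypothetical Python sketch of a periodic third-party access review. All account names, fields, and thresholds are illustrative; a real implementation would pull these records from an identity provider or the VPN concentrator’s audit log.

```python
from datetime import datetime

# Hypothetical third-party VPN accounts; in practice these records would
# come from an identity provider or the VPN gateway's audit log.
third_party_accounts = [
    {"vendor": "contractor-a", "last_login": datetime(2023, 1, 5), "scope": "all-networks"},
    {"vendor": "vendor-b", "last_login": datetime(2023, 6, 20), "scope": "billing-subnet"},
]

def flag_risky_accounts(accounts, now, max_idle_days=90):
    """Flag accounts that are stale or over-scoped and should be reviewed."""
    flagged = []
    for acct in accounts:
        idle_days = (now - acct["last_login"]).days
        # Stale credentials and network-wide scope are both review triggers.
        if idle_days > max_idle_days or acct["scope"] == "all-networks":
            flagged.append(acct["vendor"])
    return flagged

print(flag_risky_accounts(third_party_accounts, datetime(2023, 7, 1)))  # ['contractor-a']
```

Even a simple scheduled check like this surfaces dormant or over-privileged third-party accounts before they become the backdoors respondents worry about.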

Phishing Attacks Make Up Half of Cyberattacks

VPNs have a long history of vulnerabilities and require IT teams to constantly patch their VPN servers. This can potentially expose an organization to a variety of cyberattacks as threat actors continue to become more sophisticated and creative in their techniques.

Survey respondents see phishing attacks (49%) and ransomware attacks (40%) as the most likely types of attacks to exploit their organization’s VPN vulnerabilities. These attacks often involve deceiving users into revealing sensitive information or deploying malicious software that locks down systems until a ransom is paid.

1 in 2 Organizations Have Experienced VPN-Related Attacks

The security of a VPN server is crucial for maintaining the integrity and confidentiality of the data it handles. As organizations increasingly depend on VPNs for remote work, any vulnerabilities can become attractive targets for cyberattackers.

According to the survey, a sizable portion of organizations (45%) have experienced one or more attacks in the past 12 months that exploited software vulnerabilities in their VPN servers, highlighting the urgent need for more secure remote access solutions.

Zero Trust Strategy is a Big Priority

The adoption of zero trust, which is a security model that follows the maxim ‘never trust, always verify,’ is a priority for 9 of 10 organizations.

To fully leverage a zero trust architecture, organizations should prioritize key elements such as strong multi-factor authentication methods, continuous verification of traffic, network segmentation, least-privileged access, and continuous monitoring to strengthen their security posture.
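
The ‘never trust, always verify’ maxim can be illustrated with a deliberately simplified Python sketch of a per-request access decision. The roles, applications, and checks below are hypothetical stand-ins for a real policy engine, which would also evaluate device posture and session risk continuously.

```python
# Minimal illustration of a zero trust access decision: every request is
# evaluated against identity, device posture, and least-privileged policy,
# regardless of network location. All names here are illustrative.

POLICY = {
    # user role -> applications that role may reach (least privilege)
    "engineer": {"git", "ci"},
    "finance": {"erp"},
}

def authorize(role, app, mfa_passed, device_compliant):
    """'Never trust, always verify': every condition must hold on every request."""
    return (
        mfa_passed
        and device_compliant
        and app in POLICY.get(role, set())
    )

print(authorize("engineer", "git", mfa_passed=True, device_compliant=True))  # True
print(authorize("engineer", "erp", mfa_passed=True, device_compliant=True))  # False: least privilege
print(authorize("finance", "erp", mfa_passed=False, device_compliant=True))  # False: MFA required
```

The key design point is that access is granted per application, not per network, so a compromised credential no longer exposes the entire corporate network the way a flat VPN connection can.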

Implementing Zero Trust is the Primary Focus

92% of organizations are either already implementing (27%), planning to implement (42%), or considering a zero trust strategy, demonstrating an understanding of its importance and showing that zero trust is moving from buzzword to reality for most organizations.

Those yet to define a timeline for implementation should consider accelerating their plans to remain competitive and secure. Those with no plans, or who are unsure, risk falling behind in a rapidly evolving cybersecurity threat landscape.

VPN Transition Plans

The transition from VPN to Zero Trust Network Access (ZTNA) solutions marks a significant shift in modern cybersecurity strategies, given the heightened focus on least-privileged access and microsegmentation inherent in ZTNA. Four out of 10 organizations are transitioning to ZTNA, demonstrating an active response to evolving security requirements.

For organizations planning or considering a switch, it’s crucial to evaluate and choose ZTNA solutions that meet their specific security requirements and business needs. Those who currently have no plans to adopt ZTNA should, at the very least, investigate the potential benefits of these solutions in enhancing their cybersecurity posture. For companies unable to switch completely, hybrid models can be a beneficial compromise, providing the advantages of ZTNA while leveraging existing VPN infrastructure.

Best Practices for Your Journey to Zero Trust

We recommend the following best practices for successfully navigating the journey from traditional VPN infrastructure to a modern zero trust architecture.

Methodology & Demographics

This report is based on the results of a comprehensive online survey of 382 IT and cybersecurity professionals, conducted in June 2023 to identify the latest enterprise adoption trends, challenges, gaps, and solution preferences related to VPN risk. The respondents range from technical executives to IT security practitioners, representing a balanced cross-section of organizations of varying sizes across multiple industries.

The post VPN Risk Report appeared first on Cybersecurity Insiders.

Cloud computing is fundamentally delivering on its promised business outcomes, including flexible capacity and scalability, increased agility, improved availability, and accelerated deployment and provisioning.

However, security concerns remain a critical barrier to faster cloud adoption, showing little signs of improvement in the perception of cloud security professionals. Cloud adoption is further inhibited by a number of related challenges that prevent the faster and broader embracement of cloud services, including the continued lack of cloud security talent, proliferating compliance requirements, and a significant lack of visibility and control, especially in hybrid and multi-cloud environments.

Cybersecurity Insiders and Fortinet conducted a comprehensive survey of 752 cybersecurity professionals on the state of cloud security earlier in 2023 and the resulting Cloud Security Report revealed a variety of key challenges and priorities, including:

 

  • Cloud security continues to be a significant issue, with 95% of surveyed organizations concerned about their security posture in public cloud environments. Misconfiguration remains the biggest cloud security risk, according to 59% of cybersecurity professionals. This is closely followed by exfiltration of sensitive data and insecure interfaces/APIs (tied at 51%), and unauthorized access (49%).

 

  • Despite economic headwinds, cloud security budgets are increasing for the majority of organizations (60%) by an average of 33%.

 

  • 44% of organizations are looking for ways to achieve better visibility and control in securing hybrid and multi-cloud networks, with 90% looking for a single cloud security platform to protect data consistently and comprehensively across their cloud footprint.

WORKLOADS IN THE CLOUD

Despite a leveling out of cloud adoption year-over-year, the pace of moving workloads to the cloud remains strong. Today, 39% of respondents have more than half of their workloads in the cloud, while 58% plan to reach this level in the next 12–18 months.

MULTI-CLOUD ADOPTION

As workloads move to the cloud, organizations are selecting the cloud platform that’s the best fit for each project. This is driving multi-cloud proliferation with nearly seven out of 10 companies in our survey using two or more cloud providers (69%).

PREFERRED CLOUD PROVIDERS

Which cloud providers are organizations prioritizing? The big-name providers, such as Microsoft Azure (72%) and Amazon Web Services (67%), continue to dominate the market. However, planned future cloud adoption is highest for Google Cloud Platform (+21%) and Oracle Cloud (+20%).

CLOUD SERVICES PRIORITIES

It seems the cloud is not just about compute and storage. Interestingly, security services are the top workload deployed in the cloud (56%), just ahead of compute (54%), storage (52%), and even applications (51%).

BARRIERS TO CLOUD ADOPTION

 

While the cloud offers important advantages, significant barriers to cloud adoption still exist. The biggest challenges organizations are facing are not primarily about technology, but people and processes. Most critical is the perennial lack of qualified staff (37%), which continues to be the largest impediment to faster adoption, despite dropping slightly from last year. This is closely followed by legal and regulatory compliance (30%), data security issues (29%), and integration with existing IT environments (27%).

 

KEY CLOUD BENEFITS

After years of cloud adoption, do organizations believe cloud computing is delivering on its promise? The answer is yes. Cloud users affirm that the cloud is delivering key business benefits, including flexibility and scale (53%), agility (45%), business continuity (44%), and accelerated deployment and provisioning (41%). This is the first year that accelerated deployment has made it into the top four, leapfrogging both performance and the move to variable OpEx.

 

CLOUD BUSINESS OUTCOMES

The benefits of the cloud are driving key business outcomes, and this year responsiveness to customer needs (52%) overtook accelerated time to market (48%) as the top outcome. Organizations that are smart about integrating cybersecurity into their move to the cloud are also seeing its value to the business in lower risk and improved security (42%) and in cost reductions (41%).

 

CLOUD SECURITY CONCERNS

Despite increasing cloud adoption, cloud security concerns show no signs of improving. Virtually all surveyed organizations are moderately to extremely concerned about their security posture in public cloud environments (95%). The number of organizations that are extremely concerned about public cloud security even increased this year, to 35% from 32%.

RISK OF A BREACH

Concerns about public cloud security, combined with a lack of resources and expertise, are driving the perception that the risk of a security breach in the public cloud is higher than in traditional on-premises environments (43%). Only 27% of security professionals perceive the risk to be lower in a public cloud environment.

OPERATIONAL SECURITY HEADACHES

Cybersecurity professionals face numerous challenges when it comes to protecting cloud workloads. The people factor again sits at the top of the list: lack of qualified security staff (43%), closely followed by compliance (37%). Multi-cloud proliferation is almost certainly the reason behind the headache of delivering consistent security policies (32%).

MULTI-CLOUD SECURITY CHALLENGES

Multi-cloud environments increase the complexity and challenges of securing cloud workloads. The people factor and the expertise that multi-cloud environments demand are clearly highlighted by the fact that three of the four top challenges relate to having the right skills, along with an in-depth understanding of each cloud platform.

CLOUD SECURITY THREATS

Which cloud security threats keep cybersecurity professionals up at night? The same top four as last year, with misconfiguration continuing to hold the top spot, according to 59% of cybersecurity professionals. This is closely followed by the exfiltration of sensitive data and insecure interfaces/APIs (tied at 51%), and unauthorized access (49%).

CLOUD SECURITY PRIORITIES

Security professionals are using their cloud budgets wisely to address the threats and concerns that pose the biggest risk to the business. It may be no surprise that preventing misconfiguration is the number one priority (51%), but securing applications that have already moved to the cloud is a close second (48%).

KEY DRIVERS FOR CLOUD-BASED SECURITY

The cloud allows organizations to get the same advantages for their security services as they have for their applications and workloads. This includes better scalability (56%), faster time to deployment (48%), reduced effort around patches and upgrades of software (43%), and cost savings (40%).

SINGLE CLOUD SECURITY PLATFORM

In light of the challenges regarding security visibility and lack of cyber talent, it comes as no surprise that the vast majority of respondents (90%) consider it moderately to extremely helpful to have a single cloud security platform and dashboard to protect data consistently and comprehensively across their cloud footprint.

METHODOLOGY & DEMOGRAPHICS

The 2023 Cloud Security Report is based on a comprehensive global survey of 752 cybersecurity professionals conducted in February 2023, to uncover how cloud user organizations are adopting the cloud, how they see cloud security evolving, and what best practices IT cybersecurity leaders are prioritizing in their move to the cloud. The respondents range from technical executives to IT security practitioners, representing a balanced cross-section of organizations of varying sizes across multiple industries.

The post The State of Cloud Security appeared first on Cybersecurity Insiders.

Artificial Intelligence (AI) has emerged as a game-changer, revolutionizing industries and transforming the way we live and work. However, as AI continues to advance, it brings with it a new set of cybersecurity risks and challenges. In this blog, we will delve into the potential risks associated with AI and the importance of implementing robust cybersecurity measures to safeguard against these threats.

 

AI’s Vulnerabilities:

AI systems are not immune to vulnerabilities and can be exploited by cybercriminals. One major concern is adversarial attacks, where malicious actors manipulate AI models by injecting subtle modifications into input data, causing the system to make incorrect or biased decisions. These attacks can have significant consequences in various domains, such as autonomous vehicles, medical diagnosis, or financial systems.

 

Data Poisoning and Manipulation:

AI models heavily rely on vast amounts of data for training and decision-making. However, if the training data is compromised or poisoned, it can lead to biased outcomes or erroneous predictions. Cyber attackers can intentionally manipulate training data to trick AI systems into making incorrect decisions, potentially resulting in serious consequences. Protecting the integrity and quality of training data is crucial to prevent these types of attacks.

 

Model Theft and Replication:

AI models are valuable assets, representing significant investments in time, resources, and expertise. Sophisticated attackers may attempt to steal or replicate AI models to gain a competitive advantage or exploit their capabilities for malicious purposes. Safeguarding the intellectual property and proprietary algorithms behind AI models is vital to prevent unauthorized access and misuse.

 

Privacy and Ethical Concerns:

AI systems often process vast amounts of personal and sensitive data, raising concerns about privacy and ethical implications. Inadequate security measures or vulnerabilities in AI systems can result in data breaches, leading to the exposure of personal information and potential privacy violations. Ensuring robust data protection mechanisms, such as encryption and access controls, is essential to maintain user trust and comply with privacy regulations.

 

Lack of Explainability and Accountability:

AI models, particularly those based on deep learning techniques, can be opaque and difficult to interpret. This lack of explainability poses challenges when it comes to understanding the reasoning behind AI-driven decisions. In critical sectors like healthcare or finance, the inability to explain AI’s decision-making process may lead to distrust and hinder accountability. Balancing transparency and performance in AI models is crucial to ensure responsible and accountable AI applications.

 

Mitigating AI Cybersecurity Risks:

To mitigate the cybersecurity risks associated with AI, organizations must adopt proactive measures:

 

Robust Security Infrastructure: Implement comprehensive security measures to protect AI systems, including secure development practices, regular vulnerability assessments, and robust access controls.

 

Adversarial Training: Train AI models to recognize and withstand adversarial attacks by exposing them to carefully crafted malicious inputs during the training phase.
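
As a toy illustration of the idea, the following Python sketch trains a logistic classifier on synthetic data while regenerating FGSM-style adversarial perturbations each epoch. The data, model, and hyperparameters are invented for the example, not a production recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic two-class data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train(X, y, adversarial=False, eps=0.5, lr=0.1, epochs=200):
    """Gradient descent on logistic loss, optionally on FGSM-perturbed inputs."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        X_batch = X
        if adversarial:
            # FGSM: nudge each input in the sign of the loss gradient w.r.t. x,
            # i.e. the direction that most increases the current loss.
            p = sigmoid(X @ w + b)
            grad_x = (p - y)[:, None] * w
            X_batch = X + eps * np.sign(grad_x)
        p = sigmoid(X_batch @ w + b)
        w -= lr * X_batch.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w_std, b_std = train(X, y)
w_adv, b_adv = train(X, y, adversarial=True)
print(accuracy(w_std, b_std, X, y), accuracy(w_adv, b_adv, X, y))
```

Because the adversarially trained model sees worst-case perturbed copies of its own inputs during training, it tends to widen its decision margin against small input manipulations, which is the intuition behind this defense.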

 

Data Governance: Establish strict data governance policies to ensure the integrity and quality of training data, including data validation, data lineage tracking, and monitoring for data poisoning attempts.
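
A small, purely illustrative Python example of one such validation check: score incoming records against the distribution of a trusted reference batch and flag extreme deviations. The data and threshold are made up for the example.

```python
import statistics

def flag_outliers(reference, new_values, z_threshold=3.0):
    """Return indices of incoming values far outside the trusted distribution."""
    mean = statistics.fmean(reference)
    stdev = statistics.pstdev(reference)
    return [
        i for i, v in enumerate(new_values)
        if stdev > 0 and abs(v - mean) / stdev > z_threshold
    ]

trusted_batch = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]   # previously validated data
incoming_batch = [10.0, 55.0, 9.7]                   # contains one injected extreme value
print(flag_outliers(trusted_batch, incoming_batch))  # [1]
```

A z-score screen like this only catches crude poisoning; subtler attacks are exactly why the lineage tracking and provenance monitoring mentioned above matter as well.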

 

Continuous Monitoring and Response: Implement real-time monitoring and detection systems to identify anomalies, potential attacks, or unauthorized access to AI systems. Develop incident response plans to mitigate and contain any breaches or attacks swiftly.
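
To sketch what real-time anomaly detection can look like at its simplest, here is a hypothetical Python monitor that compares each new metric reading (say, requests per minute to a model endpoint) against a sliding-window baseline. The window size and alert factor are invented for the example.

```python
from collections import deque

class AnomalyMonitor:
    """Alert when a new reading far exceeds the recent sliding-window baseline."""

    def __init__(self, window=5, factor=3.0):
        self.history = deque(maxlen=window)
        self.factor = factor

    def observe(self, value):
        # Baseline is the mean of recent readings; the first reading is its own baseline.
        baseline = sum(self.history) / len(self.history) if self.history else value
        alert = value > self.factor * max(baseline, 1e-9)
        self.history.append(value)
        return alert

monitor = AnomalyMonitor()
readings = [100, 110, 95, 105, 100, 900]  # the final spike could indicate abuse
print([monitor.observe(r) for r in readings])  # [False, False, False, False, False, True]
```

In practice such checks feed the incident response plan: an alert triggers triage and, if confirmed, containment steps such as revoking access or rolling back a model.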

 

Collaboration and Industry Standards: Foster collaboration between AI researchers, industry experts, and policymakers to establish best practices, guidelines, and standards for AI cybersecurity.

As AI continues to revolutionize industries and drive innovation, it is crucial to acknowledge and address the associated cybersecurity risks. By understanding and proactively mitigating these risks, we can unlock the full potential of AI while ensuring the safety, privacy, and integrity of our systems and data. Implementing robust cybersecurity measures and promoting responsible AI practices will pave the way for a secure and trustworthy AI-driven future.

The post Unleashing the Power of AI with Caution: Understanding Cybersecurity Risks appeared first on Cybersecurity Insiders.

New cloud platform strengthens organizations’ cyber resilience

by making real-world threat simulation easier and more accessible

San Francisco, US, 9th November 2022 – Picus Security, the pioneer of Breach and Attack Simulation (BAS), today announced the availability of its next-generation security validation technology. The new Picus Complete Security Validation Platform levels up the company’s attack simulation capabilities to remove barriers to entry for security teams. It enables organizations of any size to automatically validate the performance of security controls, discover high-risk attack paths to critical assets, and optimize SOC effectiveness.

“Picus helped create the attack simulation market, and now we’re taking it to the next level,” said H. Alper Memis, Picus Security CEO and Co-Founder. “By pushing the boundaries of automated security validation and making it simpler to perform, our new platform enables organizations even without large in-house security teams to identify and address security gaps continuously.” 

The all-new-and-improved Picus platform extends Picus’s capabilities beyond security control validation to provide a more holistic view of security risks inside and outside corporate networks. It consists of three individually licensable products:

  • Security Control Validation – simulates ransomware and other real-world cyber threats to help measure and optimize the effectiveness of security controls to prevent and detect attacks.
  • Attack Path Validation – assesses an organization’s security posture from an ‘assume breach’ perspective by performing lateral movement and other evasive actions to identify high-risk attack paths to critical systems and users.
  • Detection Rule Validation – analyzes the health and performance of SIEM detection rules to ensure that SOC teams are reliably alerted to threats and can eliminate false positives. 

A global cybersecurity workforce gap of 3.4 million professionals∗ means automated security validation is now essential to reduce manual workloads and help security teams respond to threats sooner. Recently, the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC) published a joint advisory recommending that organizations test their defenses continually and at scale against the latest techniques used by attackers.

“Insights from point-in-time testing are quickly outdated and do not give security teams a complete view of their security posture,” said Volkan Erturk, Picus Security CTO and Co-Founder. “With the Picus platform, security teams benefit from actionable insights to optimize security effectiveness whenever new threats arise, not once a quarter. With our new capabilities, these insights are now deeper and cover even more aspects of organizations’ controls and critical infrastructure.”

On 15th November 2022, Picus Security is hosting Picus reLoaded, a free virtual event for security professionals that want to learn more about its platform and how to leverage automated security validation. Register to attend and hear from thought leaders from Gartner, Frost & Sullivan, Mastercard, and more.

H. Alper Memis has also published a blog to announce the release to Picus customers.

About Picus Security

Picus Security is the pioneer of Breach and Attack Simulation (BAS). The Picus Complete Security Validation Platform is trusted by leading organizations worldwide to continuously validate security effectiveness and deliver actionable insights to strengthen resilience 24/7.

Picus has offices in North America, Europe and APAC and is supported by a global network of channel and alliance partners.

Picus has been named a ‘Cool Vendor’ by Gartner and is cited by Frost & Sullivan as one of the most innovative players in the BAS market. 

 For more information, visit www.picussecurity.com

∗The (ISC)² Cybersecurity Workforce Study 2022

The post Picus Security brings automated security validation to businesses of all sizes appeared first on Cybersecurity Insiders.

New integration provides SOC teams with rich cloud context they need to detect and investigate threats in the cloud

 

SAN JOSE, Calif., 26 October 2022 – Lacework, the data-driven cloud security company, today announced a new integration with Google Cloud’s Chronicle Security Operations, bringing its cloud-native application protection platform (CNAPP) capabilities to Chronicle deployments. By tapping into rich multicloud runtime alerts from the Lacework Polygraph Data Platform, organizations using Chronicle Security Operations gain better insight into cloud threats, helping them understand, respond to, and remediate incidents more effectively than ever before. Lacework fully integrates multicloud runtime telemetry with Chronicle Security Operations.

 

SOC teams that rely on legacy security solutions, which are based on static, manually-written rules, can’t keep up with the rate and scale of changes in cloud environments. They are then forced to spend an increasing amount of analyst time and energy sifting through an overwhelming volume of low-context alerts. SOC teams need a modern threat management solution that can keep up with the constantly changing nature of the cloud, and allows them — and their company overall — to operate and innovate effectively at scale. 

 

With this integration, organizations using Chronicle Security Operations can now access runtime alerts and anomalous activity from multicloud environments, generated by the Lacework Polygraph Data Platform. The Lacework Polygraph Data Platform uses automation to provide teams with an improved signal-to-noise ratio compared to traditional solutions that are not built for the cloud, without the need for manual intervention. The addition of these high-context alerts allows SOC teams to speed investigation and remediation, and closes the gap between SOC and security teams by embedding Lacework into security playbooks.

 

“Enterprises transforming their security strategies for the cloud require technologies that easily deliver comprehensive visibility across their multicloud environments,” said Sunil Potti, VP/GM of Security, Google Cloud. “Lacework’s integration with Chronicle Security Operations enables organizations to detect and address the right threats via contextual insights that matter the most across their diverse environments.”

 

Key capabilities of this integration include:

  • Anomaly detections from Lacework, covering the cloud control plane, audit logs, and cloud and container instances for Google Cloud, AWS, and Microsoft Azure, are all shared with Chronicle Security Operations.

  • Using Chronicle’s Universal Data Model parsers, customers can easily onboard this integration within their existing Chronicle instance.

  • Customers will be able to create automation, orchestration and response playbooks using Chronicle SOAR to quickly react to and address issues.

 

“Cloud threats are only becoming more sophisticated over time, so it’s critical for security teams to have the right context to make the right decisions to remediate issues quickly,” said Jay Parikh, co-CEO, Lacework. “Through our continued partnership with Google Cloud, we’re making it easier for joint customers to take advantage of the richness of Lacework data so they can get a better understanding of what’s happening across their multicloud environments and continue to innovate with confidence.”

 

The Lacework integration with Chronicle Security Operations will be available to organizations via Google Cloud Marketplace.

 

About Lacework

 

Lacework is the data-driven security company for the cloud. The Lacework Polygraph Data Platform automates cloud security at scale so our customers can innovate with speed and safety. Only Lacework can collect, analyze, and accurately correlate data across an organization’s cloud and Kubernetes environments, and narrow it down to the handful of security events that matter. Customers all over the globe depend on Lacework to drive revenue, bring products to market faster and safer, and consolidate point security solutions into a single platform. Founded in 2015 and headquartered in San Jose, Calif., Lacework is backed by leading investors like Sutter Hill Ventures, Altimeter Capital, D1 Capital Partners, Tiger Global Management, Counterpoint Global (Morgan Stanley), Franklin Templeton, Durable Capital, GV, General Catalyst, XN, Coatue, Dragoneer, Liberty Global Ventures, and Snowflake Ventures, among others. Get started at lacework.com.

The post Lacework Brings Its CNAPP Solution To Google Cloud’s Chronicle Security Operations appeared first on Cybersecurity Insiders.

Cutting-edge security research team debuts research on Versioning in Cloud Environments

Laminar, the leader in public cloud data security, today announced the launch of Laminar Labs, the company’s cutting-edge research team designed to help organizations protect their most sensitive cloud data. Led by Laminar CTO and Co-founder Oran Avraham, the team also includes Laminar Chief Scientist Joey Geralnik and Laminar VP of Data Dan Eldad and will be responsible for discovering, analyzing and designing defenses for emerging cloud data security risks. To mark its debut, Laminar Labs has published its first blog post, “Versioning in Cloud Environments: How It Can Cause Shadow Data & How to Mitigate the Risk.”

The Laminar Labs Team

The Laminar Labs team has decades of collective experience in the Israel Defense Forces and a combined 40+ years of cybersecurity industry experience. Team lead Avraham identified the first iPhone 3G baseband vulnerability at just 17 and has since gone on to win the annual Google Capture the Flag (CTF) competition five times in the past six years. Most recently, Avraham and several Laminar Labs team members won the AWS Security Jam contest at AWS re:Inforce earlier this year. This red team background will translate into practical insights for blue teams around the world.

Laminar Labs has already scanned many petabytes of data to provide meaningful analysis and research that keeps Laminar customers’ public cloud data safe. The team will continue to publish data-driven industry research to provide guidance on how security teams can protect their cloud data.

“While the cloud offers organizations a host of benefits, it also has come with significant security challenges. It’s become increasingly important for data security professionals to be armed with data-driven research to protect their most sensitive cloud data assets. This is why we created Laminar Labs,” said Amit Shaked, CEO and co-founder of Laminar. “It is our hope that our experienced research team can connect the dots for security professionals to protect organizations’ most precious assets.”

Laminar Labs’ First Research Findings

Versioning in AWS S3 buckets, Azure Blob containers, and Google Cloud buckets can create unknown, or “shadow,” data. If that shadow data includes sensitive information, its value in the eyes of attackers increases.

Laminar Labs’ inaugural research, “Versioning in Cloud Environments: How It Can Cause Shadow Data & How to Mitigate the Risk,” provides valuable insights on what shadow data is and its risk to company networks. It also explores how versioning in AWS S3 buckets, Azure Blob Containers and Google Cloud buckets can add to data exposure risk, and how data security professionals can mitigate the risk.
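The mechanism behind this risk is straightforward: in a versioned bucket, overwriting or deleting an object does not destroy earlier versions; they linger as noncurrent versions unless explicitly expired. The toy model below (plain Python, not the AWS/Azure/GCS APIs, and not Laminar's analysis) illustrates the effect.

```python
# Toy model illustrating shadow data from object versioning. This is a
# simplified simulation, not a cloud SDK: a simple delete only adds a
# delete marker, and prior versions of the object remain recoverable.
class VersionedBucket:
    def __init__(self):
        self._versions = {}  # key -> list of (version_id, body or None)

    def put(self, key, body):
        vid = f"v{len(self._versions.get(key, [])) + 1}"
        self._versions.setdefault(key, []).append((vid, body))
        return vid

    def delete(self, key):
        # A simple delete hides the object but keeps every old version.
        self._versions.setdefault(key, []).append(("delete-marker", None))

    def latest(self, key):
        _, body = self._versions[key][-1]
        return body  # None if the newest entry is a delete marker

    def list_versions(self, key):
        return self._versions.get(key, [])

bucket = VersionedBucket()
bucket.put("report.csv", "ssn=123-45-6789")  # sensitive first version
bucket.put("report.csv", "redacted")         # "fixed" by overwriting
bucket.delete("report.csv")                  # simple delete

# The sensitive noncurrent version is still retrievable: shadow data.
shadow = [body for _, body in bucket.list_versions("report.csv") if body]
```

In real cloud storage, the analogous mitigation is a lifecycle policy that expires noncurrent versions, such as S3 lifecycle rules with noncurrent-version expiration or Google Cloud Storage lifecycle conditions on non-live objects.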

For more information and to read the research, visit the Laminar blog.

About Laminar

Laminar’s Cloud Data Security Platform protects data for everything you build and run in the cloud across cloud providers (AWS, Azure, and GCP) and cloud data warehouses such as Snowflake. The platform autonomously and continuously discovers and classifies new datastores for complete visibility, prioritizes risk based on sensitivity and data risk posture, secures data by remediating weak controls, and actively monitors for egress and access anomalies. Designed for the multicloud, the architecture takes an API-only approach, without any agents, and without sensitive data ever leaving your environment. Founded in 2020 by a brilliant team of award-winning Israeli red team experts, Laminar is proudly backed by Insight Partners, Tiger Global, Salesforce Ventures, TLV Partners, and SentinelOne (NYSE:S). To learn more, please visit www.laminarsecurity.com.

The post Laminar Launches Laminar Labs to Shine Light on Shadow Data, Cloud Security Risks appeared first on Cybersecurity Insiders.