In the realm of cybersecurity, where data has become an invaluable asset, precise understanding of technical terms is essential for professionals. Yet, many in the tech field find key data security terms perplexing. 

To address this gap, Kiteworks has analyzed search data to reveal the most frequently misunderstood data security concepts in the U.S. As cyber threats become increasingly sophisticated, mastering these terms is crucial for effective risk management. Kiteworks provides expert insights to clarify these critical concepts and underscores the need for comprehensive data protection strategies in 2024.

The Most Misunderstood Data Security Terms:

Please see the full dataset here.

VPN is the Most Misunderstood Data Security Term in the U.S.

The most misunderstood data security term in the U.S. is “Virtual Private Network (VPN),” which sees an average of 57,840 searches per month or 694,080 annually. Despite its significance in securing online connections and protecting sensitive data, many are unclear about the full scope of VPNs. 

Tim Freestone, Chief Strategy and Marketing Officer at Kiteworks, comments: “A Virtual Private Network (VPN) is essential for ensuring secure and private connections over the internet. A VPN is designed to encrypt your online activities, making it harder for cybercriminals, and even your internet provider, to intercept your data. Nevertheless, VPNs have their limitations: there remains an underlying risk when you open a VPN tunnel into your employer’s network from an untrusted home or public Wi-Fi network.

Understanding VPNs is crucial not just for protecting personal privacy but also for securing sensitive business information, particularly in remote work environments. Many organizations use VPNs as a fundamental layer of their cybersecurity strategy, highlighting their importance in safeguarding against potential breaches and unauthorized access.”

HIPAA is the Second Most Misunderstood Data Security Term

Following closely is the “Health Insurance Portability and Accountability Act (HIPAA),” with 13,700 searches each month or 164,400 annually. Despite its significance in safeguarding sensitive health information, many are unclear about the definition of HIPAA.

“In 2023, healthcare organizations experienced the most data breaches since 2009, with the industry paying the highest average data breach cost compared to other industries since 2010. The HIPAA Privacy Rule is a key federal regulation that establishes national standards for protecting individuals’ medical records and other personal health information.

Understanding HIPAA is not just essential for compliance but also for protecting patients from potential data breaches and protected health information (PHI) loss, which could have severe consequences. Some organizations that don’t work in the healthcare sector still use HIPAA as a measure for the maturity of their data security, signifying its importance.”

Malware Ranks Third Among the Most Misunderstood Data Security Terms

The third most misunderstood data security term is “Malware,” with 13,200 monthly searches and 158,400 annually. Although widely used, the term still causes confusion, making it a critical point of concern.

Freestone clarifies: “Malware, or malicious software, is designed to infiltrate, damage, or disable computers and systems. It encompasses various types, including viruses, ransomware, and spyware. Given the rising sophistication of cyberattacks, understanding malware and its potential impact on an organization’s infrastructure and sensitive data is vital. Failure to recognize the threats posed by malware can lead to devastating breaches and significant financial losses. By protecting their infrastructure against malware, organizations can ensure the systems and data they rely on to function and grow are secured.”

Digital Rights Management (DRM) and Secure File Transfer Protocol (SFTP) rank in the Top 10 Most Misunderstood Data Security Terms

In the top 10, “Digital Rights Management (DRM)” ranks eighth with 5,770 monthly searches or 69,240 annually. DRM, which refers to technologies used to control access to and use of digital content, is often misunderstood despite its widespread application in protecting intellectual property and other sensitive content. “Secure File Transfer Protocol (SFTP)” also makes the list, with 4,950 monthly searches and 59,400 annually. SFTP is a crucial tool for securely transferring files over a network, yet its functionality and benefits are frequently unclear to many users.

“Digital Rights Management (DRM) is a critical tool for safeguarding intellectual property like eBooks, software, and videos, but also increasingly other sensitive, proprietary content that needs to be shared with select partners for short time periods. This can include contracts, proposals, and customer records. DRM works by encrypting the digital content so that only authorized users can access it, restricting how it can be used and distributed. The primary function of DRM is to prohibit content copying or limit the number of people or devices that can access a piece of content.

Secure File Transfer Protocol (SFTP), by contrast, is vital for transferring files securely, reducing the risk of interception and unauthorized access. SFTP is the file transfer tool of choice in many organizations, encrypting both the credentials and the content into an unreadable format. This encryption ensures that sensitive information remains protected even if the data is intercepted during transmission.”
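
To make the SFTP description above concrete, here is a minimal sketch of a scripted SFTP upload using the third-party paramiko library in Python. The hostname, username, key path, and file paths are placeholders, and a production script would verify the server’s host key rather than auto-accepting it.

    import paramiko  # third-party SSH/SFTP library (pip install paramiko)

    client = paramiko.SSHClient()
    # For illustration only: auto-accepting unknown host keys is unsafe in production.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("sftp.example.com", username="transfer-user",
                   key_filename="/home/user/.ssh/id_ed25519")

    sftp = client.open_sftp()
    # Both the authentication exchange and this file transfer travel over the encrypted SSH channel.
    sftp.put("quarterly_report.csv", "/incoming/quarterly_report.csv")
    sftp.close()
    client.close()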

Why Understanding Data Security Terms is Crucial for Organizations

As cyber threats become increasingly frequent and sophisticated, it is crucial for organizations to have a comprehensive understanding of key data security terms to safeguard sensitive information. Knowledge of concepts such as VPNs, HIPAA regulations, and malware empowers companies to protect personal data, ensure compliance, and fortify their defenses against potential breaches.

 

The post The Most Misunderstood Data Security Terms in The U.S. appeared first on Cybersecurity Insiders.

Myth (noun). 1. an ancient story or set of stories, especially explaining the early history of a group of people or about natural events and facts; 2. a commonly believed but false idea. 

Myths in their purest form have been around since ancient times. Stories to help people understand and navigate the world around them. More recently, they’ve become less folklore and more fallacy as people buy into ideas that suit their narrative without any basis in fact. And, perhaps this is never more true than when it comes to cybersecurity. 

Whether it’s willful ignorance or the mistaken belief that a cyber event won’t happen to them, too many companies are operating under a set of misguided beliefs that they are safe, when nothing could be further from the truth. After many years in cybersecurity, most recently in distributed denial of service (DDoS) mitigation solutions, I can assure you no one is safe, especially when it comes to DDoS attacks. I’m sharing a few of the most common myths surrounding DDoS attacks and mitigation in hopes that by arming companies with the facts, they don’t fall victim to the fiction.

1. Nothing to see here. Whereas in 2008 the assertion that certain financial institutions were too big to fail saved them from ruin, the similar but opposite belief held by some companies that they are too small to be noticed by cyberattackers could lead them straight to it. Despite near-weekly evidence to the contrary, these organizations believe they aren’t significant enough to merit a blip on a threat actor’s radar. And while it may be true that they are an unlikely target of a nation-state attack, there are plenty of ne’er-do-wells looking for an easy score, courtesy of an unprotected or underprotected company. If a company has an online presence, it is a potential victim, no matter its size or industry.

2. The total package. Simply implementing a DDoS protection solution is not enough to keep the wolves at bay. In fact, no solution can completely shield a company from potential attacks, and those who claim they can should be avoided at all costs. That’s not to say that DDoS prevention solutions aren’t a worthwhile investment. They are, and they form an essential part of a company’s security posture. While they can mitigate various types of attacks, they cannot guarantee absolute protection. Threat actors are constantly working to outsmart the next, best security solution and are tailoring their tactics to leverage new vulnerabilities to their advantage. Companies need to make sure they are employing a comprehensive approach to security that includes a DDoS solution that limits downtime to seconds, not minutes.

3. One-size-fits-all (or does it?). There’s a misconception that one DDoS protection solution is the same as the next, with price being the main differentiator. Nothing could be further from the truth, however. Different solutions specialize in mitigating different types of attacks and offer varying levels of protection. Before shopping for a DDoS protection solution, organizations must have a solid understanding of their specific needs and choose a service provider accordingly. Ideally, a solution provider should offer options that allow for protection at scale and that can be tailored to suit an organization’s needs now and in the future.

4. Faulty math. Some believe that implementing robust DDoS protection is cost-prohibitive and only necessary for large enterprises. However, DDoS attacks don’t discriminate and target businesses of all sizes. All too often, the cost of mitigation is far lower than the potential losses incurred during an attack in terms of downtime, reputation damage, and lost revenue. In fact, research has found that the average loss to a business under DDoS attack is anywhere from thousands to hundreds of thousands of dollars per hour. Weigh that against the cost of a DDoS protection solution, and eliminating it from your budget is math that doesn’t add up.

5. Firewall insufficiency, bandwidth buster. Firewalls are essential components of network security, but they are not the end-all, be-all when it comes to mitigating DDoS attacks. While firewalls play the important role of gatekeeper, stopping unwanted traffic, many DDoS attacks operate by overwhelming network resources, making them inaccessible to legitimate users. It might follow, then, that the key to success is adding bandwidth; unfortunately, a significant portion of DDoS attacks are non-volumetric in nature, meaning the bandwidth you added to alleviate the problem might just make things worse. Look for specialized DDoS protection services that employ advanced techniques such as traffic filtering, rate limiting, and behavioral analysis to mitigate these attacks effectively.
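
As an illustration of one of those techniques, here is a minimal sketch of per-source rate limiting using a token bucket. It is deliberately simplified, runs in a single process, and is not a substitute for the network-scale filtering and behavioral analysis a dedicated DDoS mitigation service performs; the rate and capacity values are arbitrary placeholders.

    import time

    class TokenBucket:
        """Allow short bursts but cap the sustained request rate from one source."""
        def __init__(self, rate_per_sec: float, capacity: float):
            self.rate = rate_per_sec
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens based on elapsed time, never exceeding the burst capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = {}  # one bucket per source IP address

    def should_serve(source_ip: str) -> bool:
        bucket = buckets.setdefault(source_ip, TokenBucket(rate_per_sec=5, capacity=20))
        return bucket.allow()  # False means drop, delay, or challenge the request

In practice this kind of logic runs at the network edge and is combined with other signals, but it shows why rate limiting can blunt abusive traffic that added bandwidth alone cannot.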

6. Set it and forget it: While it’s tempting to think that once you have DDoS protection measures in place you can go about your business and forget about them, you’d be wrong. Strong DDoS protection demands continued monitoring, maintenance, and updates to keep abreast of evolving threats. Therefore, it’s essential that companies regularly review and update their DDoS mitigation strategy. Attackers constantly develop new methods, and your defenses must evolve accordingly.

7. The call came from inside the house: All too often, organizations focus on protecting themselves from external DDoS attacks while overlooking the importance of protecting their internal networks from attacks that originate from inside the company. Insider threats or compromised devices can launch DDoS attacks that disrupt internal services and operations, so make sure that any DDoS protection solution you consider accounts for both forms of attack.

Whereas the idea that ignorance is bliss might be a balm meant to soothe a wrongdoer’s conscience, the stark reality is that what you don’t know can, in fact, be your undoing. Know the facts, and be prepared.

 

The post Seven Deadly Myths of DDoS Protection appeared first on Cybersecurity Insiders.

As we near the 2024 US presidential election, businesses around the country face an escalating cybersecurity threat that demands immediate and sustained action. According to recent research, two-thirds of employees already report an increase in political emails hitting their work inboxes. This increase doesn’t just clutter mailboxes—it creates a perfect storm for potential ransomware attacks, putting organizations at significant risk. 

Cybercriminals are, at their core, opportunists. They recognize that major public events like elections create an ideal environment for their nefarious activities. During these times, emotions can run high. Americans also tend to pay closer attention to political news and communications. This means workers may be more susceptible to election-related phishing attempts designed to compromise their employers’ IT systems. 

The success of phishing attacks often depends on the attacker’s ability to engineer an emotional response. By tapping into the heightened political atmosphere, cybercriminals try to craft messages that provoke strong reactions, increasing the likelihood that recipients will click on malicious links without proper validation. 

Consider the typical election-related email: it might claim to contain breaking news about a candidate, allege a scandal or promise exclusive insider information. For an employee caught up in the political fervor, the temptation to click could override their usual sense of caution. This momentary lapse in judgment is all a skilled attacker needs to gain a foothold inside an organization’s network. 

The research also highlighted another alarming statistic: more than a third of end users admitted that they’re at least somewhat likely to click on a link in a political campaign email, even if it appears suspicious. And one out of five is unlikely to validate a political campaign email before opening an attachment. 

This lack of caution is troubling on its own, but it gets worse:

Most U.S. workers access personal email on the same devices they use to access work correspondence. This blurring of personal and professional boundaries creates a significant vulnerability for businesses nationwide. An employee engrossed in the latest poll numbers or campaign developments might be less vigilant about cybersecurity best practices, especially if they’re toggling between work tasks and election news. 

The severe consequences of a successful phishing attack that leads to ransomware are numerous, from operational and financial disruption to legal and reputational repercussions. As outlined, these risks are becoming even more pronounced as the election season heats up. It’s crucial organizations bolster their cyber resilience and maintain a heightened state of vigilance to protect against potentially devastating attacks.

A comprehensive approach to heightened cyber resilience should include: 

  • Employee education and awareness – Implement comprehensive training programs that teach staff to recognize and report suspicious emails, particularly those with political content. IT staff should conduct regular phishing simulations to test and reinforce employee best practices and to create a culture of cyber resilience awareness, where employees feel empowered to report potential threats without fear of reprimand. 
  • Robust email security – Deploy advanced email security solutions capable of identifying and quarantining potential threats before they reach employee inboxes. Additionally, protocols like Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM) can reduce the risk of email spoofing, while AI-powered email filtering systems can detect subtle anomalies in message content and sender behavior (a simple way to check whether these records are published is sketched after this list). 
  • Network segmentation and access control – Properly segmenting networks can limit the potential spread of ransomware. Implementing least-privilege access controls also helps ensure employees have access only to the data and systems necessary for their roles. 
  • Comprehensive backup and recovery – Backup and recovery is your last line of defense against threats like ransomware. Maintain up-to-date, clean backups of critical data and systems and ensure you can efficiently and effectively recover from them. All the backups in the world do no good if you can’t recover them. IT leaders should consider AI-powered data protection along with a 3-2-1 backup strategy: at least three copies of backup data on at least two different media with at least one copy stored off-site and on immutable storage.
  • Incident response planning – Develop and regularly update a detailed incident response plan that outlines steps to take in the event of a ransomware attack. Tabletop exercises should be conducted to familiarize key personnel with their roles and responsibilities during and after an incident, while partnerships with cyber resilience firms and legal cybersecurity counsel should be formed before a crisis occurs. 
  • Endpoint protection monitoring – Deploy and maintain up-to-date endpoint protection software on all devices that access company resources. Endpoint detection and response solutions that can quickly identify and contain potential threats should be implemented as part of a zero-trust security model, which assumes no user or device is trustworthy. 
  • Policy enforcement – Develop and enforce clear policies regarding the use of work devices for personal activities, especially during sensitive times like elections. These should include stricter controls on non-work-related web browsing and email use during high-risk periods. 
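
As referenced in the email security item above, the following is a minimal sketch of checking whether a domain publishes SPF and DMARC records, using the third-party dnspython package. The domain is a placeholder, the handling of multi-part TXT records is simplified, and DKIM is omitted because checking it requires knowing the sender’s selector.

    import dns.resolver  # third-party package "dnspython" (pip install dnspython)

    def txt_records(name: str):
        """Return the TXT records for a DNS name, or an empty list if none exist."""
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []

    domain = "example.com"  # placeholder domain
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]

    print("SPF record:", spf[0] if spf else "missing")
    print("DMARC record:", dmarc[0] if dmarc else "missing")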

The convergence of personal political passion and access to critical company networks creates a potent risk that organizations cannot afford to ignore. As we move toward November, businesses must remain vigilant and proactive in their cyber resilience. Leaders should also view this period not just as a time of increased risk, but as an opportunity to strengthen their overall security posture. The steps outlined here to combat election-related ransomware threats will serve organizations long after the polls close, too, creating a more resilient and secure business environment now and in the future.

 

The post Beyond the Campaign Trail: Strengthening Your Business’s Cyber Defenses for Election Season appeared first on Cybersecurity Insiders.

Although the federal government tasks companies with meeting cybersecurity mandates and other forms of regulatory compliance, few seem to cry foul. That’s largely because Washington, D.C., is expected to spend nearly $7 trillion in contracts by the end of the 2024 fiscal year. Those monetary rewards have nearly doubled over the last 10 years and are on track to exceed $8 trillion in 2029.

For defense contractors and other businesses to remain in the government’s good graces, industry leaders must meet and maintain some of the most stringent data security standards. The U.S. Department of Defense (DoD) is currently rolling out the Cybersecurity Maturity Model Certification (CMMC), which overlaps with and differs from the Defense Federal Acquisition Regulation Supplement (DFARS) and the National Institute of Standards and Technology (NIST) framework, particularly NIST SP 800-171. Understanding the differences between CMMC, DFARS, and NIST is essential if the more than 100,000 contractors, as well as subcontractors, that generate revenue from DoD contracts are to remain in compliance.

What is NIST?

Part of the U.S. Department of Commerce, the National Institute of Standards and Technology helps advance American scientific innovation, business competitiveness, and technologies by creating security standards. While its original purpose was to further the country’s economic prosperity, NIST SP 800-171 has been adopted as foundational data security thought leadership. This guidance outlines many of the best practices needed to safeguard data related to our national security.

The NIST SP 800-171 standard has been integrated into DFARS and is also the bedrock of the Pentagon’s CMMC 2.0 mandate. Direct defense contractors and those working in the private sector supply chain must adhere to one of three CMMC cyber hygiene levels or risk being sidelined.

What is CMMC 2.0?

The CMMC model has undergone some modifications since the Pentagon published its 2020 interim rule in the Federal Register. A change in governance resulted in scrapping a five-tiered cybersecurity model in favor of three tiers. Based on NIST SP 800-171 and other data security protocols, CMMC 2.0 brings many of the most stringent cybersecurity measures under one umbrella. Every organization that stores or transmits DoD-related Controlled Unclassified Information (CUI) and Federal Contract Information (FCI) must meet CMMC compliance.

What is DFARS?

The Defense Federal Acquisition Regulation Supplement involves an additional layer of rules that pertain to the Federal Acquisition Regulation, also known as FAR. Rolled out during the 1980s, these supplemental DoD directives came into play when the Pentagon purchased goods, materials, and services. What began as a set of quality-related standards evolved into a set of guidelines designed to also protect national security. Along with wide-reaching product and services regulations, DFARS also has rules for CUI.

For example, the DFARS 7012 clause mandates that defense contractors and subcontractors adequately secure critical DoD data and promptly report any cyberattacks. Private-sector companies operating in the military defense niche must adopt roughly 79 security protocols, disclose cyber incidents, and ensure ongoing systems monitoring of OpSec Information, Export-Controlled Information, and Controlled Technical Information. While there was not necessarily a problem with the evolving DFARS mandate in terms of technical elements, the DoD decided to pull the best of the best measures into one policy.

How Do CMMC, DFARS & NIST Overlap and Differ?

It’s important to keep in mind that both CMMC and DFARS base much of their cybersecurity measures on NIST SP 800-171. If one were to conduct a side-by-side comparison of the 79 DFARS and more than 100 CMMC controls, they would fit into categories such as the following.

  • Configuration Management
  • Critical Incident Response Protocols
  • Cybersecurity Awareness Training
  • Data Storage and Transfer Protections
  • Data and Network Monitoring
  • Network Access Control
  • Risk Assessments
  • Security Audits and Accountability
  • System Login Authentication
  • User Identification and Approval

These NIST security priorities may apply in different fashions to DFARS and CMMC, but they share a common theme. The digital security measures are all designed to deter, detect, and expel threat actors. Beyond the technical NIST differences between DFARS and CMMC, the latter does not allow organizations that possess or transfer highly sensitive information to self-assess without oversight. They must enlist the support of a CMMC Third-Party Assessor Organization (C3PAO) to perform rigorous testing and report the findings to the DoD. In CMMC Level I and some Level II instances, an outfit may follow the self-testing procedures and report that score. Many reach out to a C3PAO to determine which CMMC cyber hygiene level applies, refine the network, and integrate mandated protections.

By contrast, DFARS allowed, perhaps, too many military supply-chain companies to self-assess and trust them to maintain a robust cybersecurity posture. That issue resulted in an unacceptable number of data breaches and stolen national security secrets. Federal officials developed CMMC to effectively override much of the DFARS mandate and ensure ongoing cybersecurity compliance.

How to Comply with CMMC or DFARS

If your organization is currently NIST SP 800-171 compliant, in all likelihood it also meets the DFARS standards. However, your enterprise will still need to demonstrate CMMC 2.0 compliance because the newly minted security measure integrates NIST SP 800-171 plus other wide-reaching requirements.

The best way to accomplish compliance is to onboard a C3PAO that can perform an assessment in light of these regulations and meet the applicable cybersecurity standard.

Author Bio

John Funk is a Creative Consultant at SevenAtoms. A lifelong writer and storyteller, he has a passion for tech and cybersecurity. When he’s not enjoying craft beer or playing Dungeons & Dragons, John can often be found spending time with his cats.

 

 

The post CMMC vs DFARS vs NIST: What Are the Differences? appeared first on Cybersecurity Insiders.

The cybersecurity industry is littered with buzzwords, technologies and acronyms that can often be overwhelming for security professionals doing their best to keep up and ensure their organizations are being adequately protected. Naturally, it’s the leading analyst, research and consulting agencies that security practitioners listen to the most when it comes to making decisions regarding what technology investments to make for the business. 

As one of the leading industry consultancy and research firms, Gartner stated that AI risk and security management was the number one strategic technology trend for 2024. This is understandable, considering the adoption of AI technology within cybersecurity has been rife on both sides of the battlefield, with threat actors actively using AI capabilities to cause more digital destruction while cybersecurity vendors have looked to AI to enhance defenses. 

Gartner’s number two trend from the list was the birth of the Continuous Threat Exposure Management (CTEM) ideology to help counter cybersecurity risk. While it may be another acronym to remember, CTEM is here to stay because it is a valuable process to help organizations continually manage cyber hygiene and risk across all digital environments. Given the rapid expansion of modern digital attack surfaces, having automated and ongoing risk management is necessary to aid today’s security departments. 

CTEM comprises five key stages: scope, discover, prioritize, validate, and mobilize. The objective is to break the process into more manageable components for organizations, allowing security teams to focus on the business-critical aspects first. In fact, the CTEM approach should be considered a priority: it is estimated that by 2026, organizations that prioritize it will be three times less likely to experience a breach, underscoring its critical importance.

What are CTEM’s components? 

At its core, CTEM is defined as “a five-stage approach that continuously exposes an organization’s networks, systems, and assets to simulated attacks to identify vulnerabilities and weaknesses.” It is a proactive approach to cybersecurity that involves continuously assessing and managing an organization’s exposure to cyber threats and is different from traditional vulnerability management approaches which often fail to provide businesses with an efficient detailed plan of action from the findings. 

If anything, security teams are left with long lists of vulnerabilities that need fixing but with blanket remediation guidance, which makes solving the problems and dealing with the real risk even more difficult.

Naturally, many security practitioners will turn to the CVSS (Common Vulnerability Scoring System) for aid because it offers prioritization and evaluation of vulnerabilities in a consumable manner, but where it falls short is in describing the true potential impact to a company if the vulnerability is not remediated. 

This is where CTEM excels because it will help businesses prioritize vulnerabilities based on their significance level. Such information gives clarity on where the security gaps are, allowing clear and actionable improvement plans to be made accordingly. Security teams will gain a new level of comprehension of their external attack surface and how to continuously manage overall threat exposure. CTEM encompasses creating a continuous process of discovery and remediation powered by real-time threat intelligence. With critical risks often hidden within digital infrastructures, continuous monitoring and management are key when following a CTEM blueprint.

Knowing the key stages of CTEM

The CTEM approach consists of five key stages, each playing an important role in protecting an organization (a minimal sketch of how they might chain together follows the list):

1. Scope – allows the business to identify and scope its infrastructure for the critical areas that need to be analyzed and protected.

2. Discovery – after scoping, a list of vulnerable assets is revealed.

3. Prioritization – review the risks flagged and their potential impact on the business.

4. Validation – understand how threat actors can exploit these vulnerabilities, how monitoring systems may react, and if further footholds could be gained. 

5. Mobilization – agree on the resolution with actionable goals and objectives while providing effective reporting to convey the urgency to stakeholders. 
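
To make the flow between these stages concrete, here is a minimal sketch in Python of how they might chain together. The asset names, findings, scores, and thresholds are placeholder assumptions for illustration, not a reference to any particular CTEM product.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        asset: str
        issue: str
        severity: float         # e.g. a CVSS base score
        business_impact: float  # 0-1 weighting decided during scoping

    def scope():
        # 1. Scope: choose the business-critical parts of the attack surface (placeholders).
        return ["payments-api", "vpn-gateway"]

    def discover(assets):
        # 2. Discovery: enumerate weaknesses on the scoped assets (stubbed example data).
        return [Finding("payments-api", "outdated TLS library", 7.5, 0.9),
                Finding("vpn-gateway", "default admin credentials", 9.8, 0.7)]

    def prioritize(findings):
        # 3. Prioritization: rank by severity weighted by business impact.
        return sorted(findings, key=lambda item: item.severity * item.business_impact,
                      reverse=True)

    def validate(findings):
        # 4. Validation: keep findings a simulated attacker could actually exploit (stubbed).
        return [item for item in findings if item.severity >= 7.0]

    def mobilize(findings):
        # 5. Mobilization: turn validated findings into tracked remediation work.
        for finding in findings:
            print(f"remediation ticket: {finding.asset} - {finding.issue} "
                  f"(score {finding.severity * finding.business_impact:.1f})")

    mobilize(validate(prioritize(discover(scope()))))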

While these stages may already be incorporated into an organization’s defenses, they are often siloed or not continuously in sync. For security departments that want to take their organization on the CTEM journey, leveraging security platforms that harness the power of External Attack Surface Management (EASM), Risk-Based Vulnerability Management (RBVM), threat intelligence, and targeted testing is necessary. 

By following the CTEM methodology, organizations can bring these critical components together in a structured approach to systematically address vulnerabilities, prioritize risks, effectively reduce the overall attack surface and protect the digital infrastructure. 

 

The post Cybersecurity Strategy: Understanding the Benefits of Continuous Threat Exposure Management appeared first on Cybersecurity Insiders.

Introduction

Recent NetRise research found that vulnerability risks are, on average, 200 times greater than what traditional network-based vulnerability scanners report!

For years, traditional network-based vulnerability scanning has been a cornerstone of cybersecurity efforts for enterprise organizations. These scanners have played a critical role in identifying potential security weaknesses by analyzing network traffic and detecting known vulnerabilities in devices based on their make, model, and firmware versions. While these tools have been indispensable, they also have significant limitations that leave organizations vulnerable to hidden software risks.

As the cybersecurity landscape evolves, it is becoming increasingly clear that traditional vulnerability scanning methods are inadequate for addressing the complex and dynamic nature of modern software environments. This blog explores the limitations of these traditional methods, highlights findings from the NetRise Supply Chain Visibility & Risk Study, and discusses steps organizations can take to achieve comprehensive software visibility and better manage their vulnerability risks.

The Importance of Vulnerability Risk Management

Vulnerability risk management is a crucial component of any robust cybersecurity strategy. It involves identifying, assessing, and mitigating vulnerabilities to reduce the attack surface and protect against potential threats. Effective vulnerability risk management helps organizations prioritize their security efforts, allocate resources efficiently, and minimize the likelihood of successful cyberattacks.

By systematically identifying and addressing vulnerabilities, organizations can reduce their exposure to threats and improve their overall security posture. However, achieving this requires accurate and comprehensive visibility into all software components and their associated risks: something traditional network-based vulnerability scanning cannot and does not provide.

Why Do Traditional Network-Based Scanners Underreport Software Vulnerabilities?

Traditional network-based vulnerability scanners can underreport the extent of software vulnerabilities due to inherent limitations in their approach. These scanners typically perform surface-level assessments, focusing on known vulnerabilities associated with device make and model names, and possibly firmware versions. They rely on looking up the make, model, and firmware in existing vulnerability databases to generate a list of known vulnerabilities specifically reported for these devices.

However, this approach fails to account for vulnerabilities in deeply embedded software components and third-party libraries that make up the device’s firmware and software stack. Vulnerability scanning from the outside cannot discover these detailed software components and libraries in the code, and thus cannot report on known vulnerabilities for the device that is running those software components.

The difficulty in getting to the entire software stack SBOM (Software Bill of Materials) and corresponding vulnerabilities has led to an attitude of acceptance throughout the industry when it comes to the risk these devices and software can pose in the network. This must change. Organizations need to adopt automated software analysis methods that provide a comprehensive and granular view of all software components and risks, complementing existing vulnerability scanning processes and helping prioritize the full list of vulnerabilities for security teams.

Examples of the Underreporting of Software Vulnerabilities

The most concerning finding from the recent NetRise Supply Chain Visibility & Risk Study is the significant underestimation of software vulnerability risks in networking equipment. The research uncovered that vulnerability risks are, on average, 200 times greater than what traditional network-based vulnerability scanners report. This discrepancy highlights a critical blind spot in current cybersecurity practices.

Read more in the NetRise Supply Chain Visibility and Risk Study, Edition 1: Networking Equipment; Q3 2024

Implications of Underestimation

This finding is particularly concerning because it means organizations have a false sense of security, believing their systems are more secure than they actually are. This false sense of security can lead to inadequate risk management practices and unpreparedness for potential attacks. The study underscores the urgent need for comprehensive software visibility because, without detailed insights into the entire software stack and their vulnerabilities, organizations cannot effectively prioritize and mitigate risks.

The implications of underestimating software vulnerabilities are far-reaching and severe:

1. False sense of security:

Incomplete scanning provides a false sense of security, leading organizations to believe they are more protected than they are. This can result in complacency and a lack of urgency in addressing critical vulnerabilities. At a minimum, organizations should understand their risk levels, even if all they do is explicitly acknowledge and accept these risks. 

2. Unaddressed risks and vulnerabilities:

Undetected vulnerabilities remain unaddressed, leaving systems exposed to potential exploits. These hidden vulnerabilities can be exploited by attackers, leading to significant security breaches.

3. Increased risk of exposure to software supply chain cyberattacks:

Undetected threats can have substantial financial and operational impacts, especially if the company is hit with a supply chain cyberattack that is complex to respond to and remediate.

Steps to Address the Limitations

To address these challenges, organizations must prioritize achieving comprehensive software visibility. The findings from the NetRise study underscore the critical importance of having a detailed understanding of all software components within the supply chain. Here are some basic steps companies should consider:

1. Generate comprehensive SBOMs

Creating detailed software bills of materials (SBOMs) is the foundation of effective supply chain security. SBOMs provide a clear inventory of all software components, including third-party libraries and dependencies. This inventory is essential for identifying and managing risks effectively.

2. Implement automated software risk analysis

Traditional network-based vulnerability scanners often underreport vulnerability information, as we’ve seen. By augmenting these scans with detailed software risk analysis methods, companies can uncover a much more complete risk picture, ensuring a more thorough risk assessment. Automated tools can help generate and analyze SBOMs, providing continuous and up-to-date visibility.
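
As a minimal sketch of what component-level analysis looks like, the snippet below takes one entry from an SBOM-style inventory and checks it against the public OSV.dev vulnerability database. The component name, ecosystem, and version are illustrative values only, and a real pipeline would iterate over every component extracted from the firmware or application.

    import requests  # third-party HTTP client (pip install requests)

    # One entry from an SBOM-style component inventory (illustrative values).
    component = {"name": "jinja2", "ecosystem": "PyPI", "version": "2.4.1"}

    def known_vulnerabilities(comp):
        """Ask the public OSV.dev API for advisories affecting this component version."""
        response = requests.post(
            "https://api.osv.dev/v1/query",
            json={
                "package": {"name": comp["name"], "ecosystem": comp["ecosystem"]},
                "version": comp["version"],
            },
            timeout=10,
        )
        response.raise_for_status()
        return [vuln["id"] for vuln in response.json().get("vulns", [])]

    ids = known_vulnerabilities(component)
    print(f"{component['name']} {component['version']}: "
          f"{len(ids)} known advisories {ids or ''}")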

3. Prioritize risk management

Once comprehensive visibility is achieved, organizations should prioritize vulnerabilities based on factors beyond CVSS scores, such as weaponization and network accessibility. This approach ensures that the most critical threats are addressed first. Feeding this vulnerability information into existing security operations center (SOC) tools ensures it is widely available and actionable.
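
The following is a minimal sketch of that kind of prioritization. The weights, field names, and vulnerability records are assumptions made for illustration, not any vendor’s actual scoring model.

    def priority_score(vuln):
        """Weight a CVSS base score by factors beyond severity alone (illustrative weights)."""
        score = vuln["cvss"]
        if vuln["weaponized"]:         # a public exploit exists or exploitation has been observed
            score *= 2.0
        if vuln["internet_exposed"]:   # the affected system is reachable from outside the network
            score *= 1.5
        return score

    # Placeholder vulnerability records, not real CVE data.
    findings = [
        {"id": "vuln-a", "cvss": 9.8, "weaponized": False, "internet_exposed": False},
        {"id": "vuln-b", "cvss": 7.5, "weaponized": True,  "internet_exposed": True},
    ]

    for f in sorted(findings, key=priority_score, reverse=True):
        print(f["id"], round(priority_score(f), 1))

Note how the lower-severity but weaponized, internet-facing finding outranks the higher CVSS score, which is the point of going beyond severity alone.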

4. Continuous monitoring and updating

Supply chain security is not a one-time effort. Continuous monitoring of software components is essential to stay ahead of emerging threats. Companies should establish processes for ongoing vulnerability assessment and remediation, ensuring that their software inventory is always current, and risks are continuously managed.

By focusing on these steps, organizations can significantly enhance their supply chain security processes, mitigate risks more effectively, and protect their critical assets.

Conclusion

The limitations of traditional network-based vulnerability scanning methods are becoming increasingly apparent in today’s complex cybersecurity landscape. These methods often fail to provide a complete picture of the vulnerabilities within an organization’s software environment, leading to a false sense of security and unaddressed risks. To address these challenges, organizations must adopt more robust vulnerability assessment strategies that include comprehensive software visibility and detailed risk analysis.

By generating comprehensive SBOMs, implementing automated software risk analysis, prioritizing risk management, and maintaining continuous monitoring and updating, organizations can significantly improve their vulnerability management practices and protect against evolving threats. The key takeaway is clear: comprehensive software visibility is essential for effective cybersecurity. Organizations cannot secure what they cannot see, and achieving detailed visibility into all software components is the first step towards a robust and resilient security strategy.

The post The Limitations of Traditional Network-Based Vulnerability Scanning – And the Systematic Underestimation of Software Risks appeared first on Cybersecurity Insiders.

Microsoft’s advanced AI assistant, Copilot, has gained significant traction in corporate environments and is rapidly changing how users interact with data across Microsoft 365 applications. Although Copilot introduces countless new possibilities, it has also brought challenges related to data access and security that must be considered.  

As organizations embrace digital transformation and AI adoption, protecting all information is critical, especially data generated by AI. With increasing reliance on AI and machine learning technologies to streamline operations, increase productivity, and reduce costs, classifying and ensuring adequate access controls to sensitive data is paramount to keeping it safe.  

Ultimately, Copilot has brought four key security issues into organizations. First, its output inherits sensitivity labels from the input, which means that if data is not classified correctly, the output will also be incorrectly classified. For example, if sensitive data used to generate a quarterly financial report is not correctly classified at the input stage, Copilot will generate a comprehensive report including sensitive earnings data yet fail to classify this data as confidential. A report like this could inadvertently be shared with an external stakeholder.  

Copilot also inherits access control permissions from its inputs, so its output carries the same permissions. If data has inappropriate permissioning, sharing, and entitlements, the output will have the same issues, possibly leading to a devastating data breach or loss. Concentric AI’s Data Risk Report shows that a great number of business-critical files are at risk from oversharing, erroneous access permissions, and inappropriate classification, and, unfortunately, can be seen by internal or external users who should not have access.  

Consider this example: An HR manager uses Copilot to create an internal report that includes employees’ personal information, and the source data has overly permissive access controls that allow any department member to view all employee records. As a result, the Copilot-generated report inherits these permissions, and sensitive employee information becomes accessible to all department members, violating privacy policies and potentially leading to legal challenges. 

The third key security issue with Copilot is that company-specific context about sensitivity is not factored into the output. Every company has sensitive data, including financial records, intellectual property, and confidential customer data. However, Copilot is unlikely to factor this context into its decision making around outputs or who should have access to them.  

Imagine a product development team using Copilot to brainstorm new product ideas based on existing intellectual property (IP) and R&D data, with inputs that might include confidential information about upcoming patents. Copilot, lacking context on the company’s sensitivity towards this IP, will incorporate detailed descriptions of these patents in its output. If this output is shared with a broader audience, the company has inadvertently exposed future product plans and risks IP theft. 

Lastly, Copilot output is unclassified, so output that may be sensitive could easily be accessible to anyone. For example, a marketing team could use Copilot to analyze customer feedback, generating a report on customer satisfaction trends. Perhaps the input data contains sensitive customer information, such as criticism of unreleased products. Since Copilot outputs are unclassified by default, the generated report will not flag any of the sensitive customer feedback as confidential. If the report is uploaded to a shared company server without appropriate access restrictions, internal leaks and competitive disadvantage become a significant risk.  

Why we need data security posture management for AI usage 

Data security posture management (DSPM) is an essential prerequisite to deploying and operating Copilot, helping ensure that organizations can adequately balance Copilot’s productivity increases against the need to keep sensitive data protected.   

DSPM empowers organizations to discover sensitive data, gain visibility into where it resides, and determine the types of sensitive data that exist across cloud environments. DSPM provides the ability to identify risks by proactively detecting and assessing business-critical data, thereby preventing potential breaches before they occur. In addition, DSPM uniquely classifies data by tagging and labeling sensitive information. Overall, DSPM helps remediate and protect sensitive information against unauthorized data loss and access.  

As data moves through the network and across structured and unstructured data stores, it is labeled appropriately no matter where it resides. It is then monitored for risks, such as risky sharing, inaccurate entitlements, inappropriate permissions, or incorrect location.

The full potential of Copilot can be unlocked safely with DSPM. When it comes to deploying any type of AI tool, including Copilot, DSPM is critical before, during and after deployment. The risk to sensitive data is high enough without Copilot in the mix; adding it blindly greatly amplifies that risk for organizations. 

DSPM addresses the four security challenges organizations face before, during and after a Copilot deployment. DSPM’s approach to managing risks involves sophisticated natural language processing (NLP) capabilities to accurately categorize data, including outputs from Copilot. This ensures that sensitive information is correctly identified and protected, addressing potential security risks without compromising productivity. 

To address incorrectly classified output due to inherited sensitivity labels, DSPM solutions implement advanced data discovery and classification processes that automatically identify and classify data based on its content and context before it is input into Copilot. DSPM can also continuously monitor data flows, reclassifying data as necessary and ensuring that any data processed by Copilot, and its subsequent outputs, maintains the correct classification levels. By ensuring that all data is accurately classified at the source, DSPM prevents incorrect sensitivity labels from being propagated through Copilot’s outputs.  
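
To illustrate the inheritance problem in its simplest form, here is a minimal sketch of propagating the most restrictive input label to a generated output. The label names, their ordering, and the default are assumptions for illustration; real DSPM products classify on content and context rather than a simple maximum.

    # Ordered from least to most sensitive (illustrative label set).
    LABELS = ["public", "internal", "confidential", "restricted"]

    def output_label(input_labels):
        """Give a generated output the most restrictive label found among its inputs."""
        if not input_labels:
            return "internal"  # conservative default for unlabeled inputs (assumption)
        return max(input_labels, key=LABELS.index)

    source_labels = ["internal", "confidential"]  # labels on documents fed to the assistant
    print(output_label(source_labels))            # -> confidential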

Before data is processed by Copilot, DSPM tools can enforce the principle of least privilege, correcting over-permissive access settings and preventing sensitive outputs from being inadvertently shared or exposed. This proactive approach to permissions management significantly reduces the risk of data breaches and loss. When it comes to inappropriate permissioning, sharing and entitlements, DSPM addresses this challenge by providing granular visibility into data access controls and entitlements across the organization’s data stores. It automatically assesses and adjusts permissions based on the data’s classification, ensuring that only authorized users have access to sensitive information.
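
One simple way to reason about least privilege for derived content is sketched below: only principals allowed to read every input should be allowed to read the output. The group names are placeholders, and real entitlement models are far richer than a set intersection.

    def output_acl(input_acls):
        """Grant access to a derived output only to principals present on every input ACL."""
        acls = [set(acl) for acl in input_acls]
        return set.intersection(*acls) if acls else set()

    report_sources = [
        {"hr-managers", "payroll-team"},        # employee compensation records
        {"hr-managers", "all-employees"},       # general HR policy documents
    ]
    print(output_acl(report_sources))           # -> {'hr-managers'}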

Regarding lack of company context in output sensitivity, advanced DSPM systems leverage sophisticated natural language processing and machine learning algorithms to understand the nuanced context of data, including its relevance to specific business processes and its sensitivity level.

By integrating DSPM with Copilot, organizations can ensure Copilot is informed about company-specific sensitivity context, providing a blueprint for Copilot as it factors in this critical information when generating outputs. This ensures that sensitive data, such as intellectual property or confidential business information, is handled appropriately, maintaining confidentiality and integrity.

Finally, DSPM solutions directly address the challenge of unclassified outputs by automatically classifying all data processed by Copilot, ensuring that outputs are immediately tagged with the appropriate sensitivity labels. This automatic classification extends to Copilot-generated content, ensuring that any sensitive information contained within these outputs is immediately recognized and protected according to its classification.

By enforcing strict classification protocols, DSPM ensures that sensitive outputs are not inadvertently accessible, maintaining strict access controls based on the data’s sensitivity and compliance requirements.

The post Data Security Posture Management (DSPM) is an Important First Step in Deploying Gen AI and Copilot Tools appeared first on Cybersecurity Insiders.

New research by Team Cymru, a global leader in external threat intelligence and exposure management, reveals that 50% of organizations experienced a major security breach in the past year. The “Voice of a Threat Hunter 2024” report, which surveyed 293 cybersecurity professionals, highlights the critical importance of threat hunting programs in mitigating these breaches.

Despite the rise in cyber attacks, the report found that 72% of those who faced a breach credited their threat hunting program with playing a crucial role in preventing or minimizing the impact. This finding underscores the need for organizations to invest in proactive security measures.

David Monnier, Chief Evangelist at Team Cymru, emphasized the significance of these findings: “The report paints a picture of a cybersecurity landscape where no organization is immune, but the robustness of threat hunting programs has proven essential in mitigating the impact of breaches.”

According to the report, organizations that prioritize proactive detection, real-time threat intelligence, and third-party monitoring are better positioned to defend against sophisticated cyber threats. However, challenges remain, with 39% of respondents citing a lack of funding and data as major obstacles to effective threat hunting.

“In today’s evolving threat landscape, investing in the right tools and strategies is critical to success,” Monnier added.

Additional key findings: 

  • The majority say proactive detection of previously unknown threats is their top objective. 
  • 53% say they would quit their job today to go work at an organization that offered better threat hunting tools and technology even if paid less.  
  • The most valuable threat hunting product is network forensic detection, netflow telemetry, raw network telemetry data and/or full packet captures. 
  • The top priority for the next year is expanding third-party monitoring for signals of compromise.

It’s essential for organizations to fortify their cybersecurity defenses by implementing robust threat hunting programs that go beyond their network borders.

Read the full report here: Voice of a Threat Hunter 2024.

 

The post Report Finds 50% of Organizations Experienced Major Breaches in the Past Year appeared first on Cybersecurity Insiders.