[By Gabi Reish, Chief Business Development and Product Officer, Cybersixgill]

In today’s rapidly expanding digital landscape, cybersecurity teams face ever-growing, increasingly sophisticated threats and vulnerabilities. They valiantly try to fight back with advanced threat intelligence, detection, and prevention tools. But many security leaders admit they’re not sure their actions are effective.

In a recent survey1, 79 percent of respondents said they make decisions without insights into their adversaries’ actions and intent, and 84 percent of them worry they’re making decisions without an understanding of their organization’s vulnerabilities and risk.

What’s causing this uncertainty? The skills shortage is certainly one factor. There’s no getting away from this long-standing reality. According to a 2022 report2, some 3.4 million security jobs are unfilled due to a lack of qualified applicants. But there’s far more to the story than a staffing shortage.

The Cyber Threat Intelligence Paradox

Cyber threat intelligence (CTI) aims to understand adversaries and anticipate their actions before they occur so that defenders can prepare accordingly. It gathers information, as comprehensively as possible, about threat actors, their intentions, their methods, and their intended targets.

Cybersecurity teams lack confidence in their actions because of what I term the CTI Paradox: the more you have, the less you know. These teams are flooded with information they can’t easily act upon because they can’t distinguish what’s relevant to their organization from what’s not. Additionally, they often have an overabundance of security tools designed to detect vulnerabilities, threats, intrusions and the like – firewalls, access management, endpoint protection, SIEM, SOAR, XDR, etc. – which they can’t operate efficiently without a clear set of priorities.

To illustrate the point, my company, Cybersixgill, recently conducted a survey of more than 100 CTI practitioners and managers from around the globe. Almost half of the respondents said they remain challenged even with CTI tools at their disposal. Among the issues are the overwhelming volume and irrelevance of data, the difficulty of gaining access to useful sources, and the complexity of integrating intelligence from different solutions.

It’s no surprise then that 82 percent of surveyed security professionals3 view their CTI program as an academic exercise. They buy a product but have no strategy or plan for using it.

While this scenario may sound grim, there are options to help CISOs and their teams make effective use of CTI data and strengthen their cyber defense. Here are some suggestions for getting out of the CTI Paradox and gaining confidence that your organization is foiling cyberattackers effectively and efficiently.

The Four Pillars of Effective CTI 

Fundamentally, a well-functioning security department needs two things: Timely, accurate insights about threats that are relevant to their organization, and the capacity to quickly respond to those threats. The first order of business is devising an overall strategy that reflects the organization’s unique security concerns. Next you need effective CTI that recognizes those concerns. And finally, you need the detection and prevention tools that allow you to take action in response to the relevant insights.

More specifically, resolving the CTI paradox means using CTI tools that provide support through four pillars:

  • Data – information about cyberthreats that matter to the organization
  • Skill sets – tools that match the team’s level of expertise in responding to those threats
  • Use cases – tools that match the types of intelligence that the security team is interested in
  • Compatibility – the fit between a CTI solution and the rest of the security stack

Let’s look at the four pillars, how and why organizations may be experiencing problems, and the best ways to solve them.

Data

Problem: It’s one thing to collect massive amounts of data. It’s another thing to refine that data so that security teams know what is relevant and what is peripheral. While it is fine to be aware of security threats on a global level – both literally and figuratively – companies need to zero in on the threats and vulnerabilities most relevant to their attack surface and prioritize them accordingly.

Solution: Focus on products that analyze and curate information rather than dumping everything on users and expecting them to filter out what is relevant and what’s noise.

If you’re shopping for a solution, make sure the vendor first compiles an exhaustive list of potential threats from a wide range of sources, including underground forums and marketplaces, and that the information is continuously updated in real time. The vendor should then let you pare that list down to a manageable level, using the tool to automatically contextualize and prioritize those threats so you can respond quickly and efficiently.
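To make the curation idea concrete, here is a minimal sketch, in Python, of the kind of filtering and ranking such a layer performs: raw feed items are matched against a hypothetical asset inventory and scored by severity, asset criticality, and evidence of active exploitation. The data model and weighting are illustrative assumptions, not any vendor’s implementation.

# Minimal sketch (not any vendor's implementation): filtering a raw threat feed
# down to items that touch an organization's own assets, then ranking them.
from dataclasses import dataclass

@dataclass
class ThreatItem:
    title: str
    affected_products: set[str]   # e.g. {"apache httpd", "openssl"}
    severity: float               # 0.0 - 10.0, CVSS-like
    exploit_observed: bool        # seen exploited or traded underground

# Hypothetical asset inventory: product -> business criticality (0.0 - 1.0)
ASSETS = {"apache httpd": 0.9, "openssl": 1.0, "wordpress": 0.4}

def curate(feed: list[ThreatItem]) -> list[tuple[float, ThreatItem]]:
    """Keep only threats that overlap with our stack and score them by
    severity, asset criticality, and evidence of active exploitation."""
    scored = []
    for item in feed:
        overlap = item.affected_products & ASSETS.keys()
        if not overlap:
            continue  # irrelevant to our attack surface: drop, don't alert
        criticality = max(ASSETS[p] for p in overlap)
        score = item.severity * criticality * (1.5 if item.exploit_observed else 1.0)
        scored.append((score, item))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    feed = [
        ThreatItem("RCE in apache httpd", {"apache httpd"}, 9.8, True),
        ThreatItem("XSS in some CMS plugin", {"drupal"}, 6.1, False),
    ]
    for score, item in curate(feed):
        print(f"{score:5.1f}  {item.title}")

The point is simply that items with no overlap with your attack surface never reach an analyst, and the items that remain arrive pre-prioritized.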

Skill sets

Problem: Security teams sometimes find themselves working with tools that do not match their cybersecurity skills. A tool that provides access to raw, highly detailed information may be too complex for a more junior practitioner. Another tool may be too simplistic for a security team operating at an advanced level and fail to provide sufficient information for an adequate response.

Solution: Teams need to use CTI tools that match or complement their skill sets. You also want to select tools that match your organization’s security maturity and appetite for data – neither too high nor too low for your needs. Ideally, the tool you use incorporates generative AI geared specifically to threat intelligence data.

Use cases

Problem: Organizations may receive information irrelevant to their primary use cases. CTI vendors typically address a dozen or more intelligence use cases such as brand protection, third-party monitoring, phishing, geopolitical issues, and more. Receiving intelligence to address a use case irrelevant to your organization’s security concerns isn’t helpful.

Solution: Find a solution that matches your use-case needs and provides information that is clear, relevant, and specific to those use cases. For example, if your organization is particularly subject to ransomware, find one that offers the best, most up-to-date information about ransomware threats.

Compatibility

Problem: To adequately handle cyber threat intelligence, an organization needs to be able to consume incoming data, integrate it with other elements of its security stack (SIEM, SOAR, XDR, and whatever other tools are useful for the organization), and take action rapidly. Without this compatibility among tools, organizations may not be able to mitigate threats quickly enough. Additionally, manually porting information from one area to another may become onerous enough that the CTI tool is eventually ignored.

Solution: In this environment, you need to rely on automated responses to threats as much as possible, so make sure whatever CTI tool you acquire integrates seamlessly with your security ecosystem. You’ll want a tool that has the APIs needed to share information readily with the rest of your security stack. Check the vendor’s compatibility list to be certain that the CTI tool will sync with the security tools most important to your organization.
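As a sketch of what that API-level integration might look like, the snippet below pulls recent indicators from a hypothetical CTI endpoint and forwards the high-confidence ones to a hypothetical SIEM ingestion endpoint. The URLs, field names, and tokens are placeholders, not any vendor’s actual API.

# Minimal integration sketch with placeholder endpoints (not a real vendor API).
# Pull fresh indicators from a hypothetical CTI endpoint and forward them to a
# hypothetical SIEM ingestion endpoint so detections can act on them automatically.
import requests

CTI_URL = "https://cti.example.com/api/v1/indicators"      # placeholder
SIEM_URL = "https://siem.example.com/api/v1/threat-intel"  # placeholder
CTI_TOKEN = "REDACTED"
SIEM_TOKEN = "REDACTED"

def sync_indicators(min_confidence: int = 70) -> int:
    resp = requests.get(
        CTI_URL,
        headers={"Authorization": f"Bearer {CTI_TOKEN}"},
        params={"since": "24h"},
        timeout=30,
    )
    resp.raise_for_status()
    indicators = [i for i in resp.json()["indicators"] if i["confidence"] >= min_confidence]

    # Forward only high-confidence indicators; the SIEM handles dedup and expiry.
    push = requests.post(
        SIEM_URL,
        headers={"Authorization": f"Bearer {SIEM_TOKEN}"},
        json={"indicators": indicators},
        timeout=30,
    )
    push.raise_for_status()
    return len(indicators)

if __name__ == "__main__":
    print(f"Synced {sync_indicators()} indicators")

Scheduled to run every few minutes, a job like this keeps detection rules fed with fresh indicators without anyone manually exporting and importing files.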

The CTI Paradox does not have to go unsolved. Curated, contextualized threat intelligence, relevant to an organization’s use cases, eliminates the paralysis that comes from too much data. Well-integrated tools, appropriate for the security teams implementing them, give organizations the defense mechanisms required to detect and respond rapidly and efficiently.

By being smart about threat intelligence and your organizational status and requirements, you can move from doubt and uncertainty to clarity, focus, and effective direction.

Gabi Reish, the chief business development and product officer of Cybersixgill, has more than 20 years of experience in the IT and networking industries, including product management and product/solution marketing.

The post The Cyber Threat Intelligence Paradox – Why too much data can be detrimental and what to do about it appeared first on Cybersecurity Insiders.

“Over the past year, we’ve witnessed significant developments in cybersecurity, including the emergence of generative AI and its ability to enhance organizations’ threat intelligence efforts, and the rise of Threat Exposure Management, a program of consolidation to identify and mitigate risk and strengthen cyber defense proactively,” said Sharon Wagner, CEO of Cybersixgill.

“With these advancements, curated threat intelligence is gaining prominence and accessibility, delivering relevant, contextual data based on a company’s attack surface and the effectiveness of its security stack. As security teams hone their strategies against malicious actors, these trends will play an even bigger role in the coming year and beyond.”

Sharon’s predictions for the top 2024 cybersecurity trends are as follows:

Prediction #1: AI will evolve to become more broadly accessible while cybersecurity vendors continue to address the reliability, diversity, and privacy of data.

  • AI’s value is rooted in the breadth and reliability of data, which Cybersixgill predicts will significantly improve in 2024 as AI vendors advance the richness and fidelity of results.
  • AI will become broadly accessible to practitioners, regardless of their skillset or maturity level.
  • As concerns for data privacy with AI grow, companies will form their own policies while waiting for government entities to enact regulatory legislation. The U.S. and other countries may establish some regulations in 2024, although clear policies may not take shape until 2025 or later.

Prediction #2: AI will be used as an attack tool – and a target. Black hat hackers will increasingly use AI to improve effectiveness, and legitimate use of AI will surface as a prominent attack vector.

  • Cybersixgill believes that in 2024, threat actors will use AI to increase the frequency and accuracy of their activities by automating large-scale cyberattacks, creating duplicitous phishing email campaigns, and developing malicious content targeting companies, employees, and customers.
  • Malicious attacks like data poisoning and vulnerability exploitation in AI models will also gain momentum, causing organizations to unwittingly provide sensitive information to untrustworthy parties. Similarly, AI models can be trained to identify and exploit vulnerabilities in computer networks without detection.
  • Cybersixgill also predicts the rise of shadow generative AI, where employees use AI tools without organizational approval or oversight. Shadow generative AI can lead to data leaks, compromised accounts, and widening vulnerability gaps in a company’s attack surface.

Prediction #3: Tighter regulations and cybersecurity mandates hold the C-suite and Boards accountable for corporations’ cyber hygiene. Companies must prove vulnerability prioritization and risk management with evidence-based data.

  • In 2024, as attack surfaces widen and the frequency and scale of attacks grow, regulatory mandates will hold business leaders more accountable for their organization’s cyber hygiene. The C-suite and other executives will need a clearer understanding of their organization’s cybersecurity policies, processes, and tools. Cybersixgill believes companies will increasingly appoint cybersecurity experts to the Board to fulfill progressively stringent reporting requirements and practice good cyber governance.
  • Changes to the Payment Card Industry’s Data Security Standard (PCI DSS) v. 4.0 will pressure retail, healthcare, and finance companies to meet the new reporting requirements by March 2024. These requirements will drive a greater need for proactive threat intelligence to help mitigate risk, continuously identify gaps, and strengthen cyber hygiene.

Prediction #4: The need for proactive cybersecurity combined with continued tool consolidation will underscore the necessity of cyber threat intelligence in critical business decision-making.

  • Cybersixgill predicts that in 2024, more companies will adopt Threat Exposure Management (TEM), a holistic, proactive approach to cybersecurity, of which cyber threat intelligence (CTI) is a foundational component. As a result, they will need robust CTI solutions delivering focused insights to mitigate business and operational risk significantly.
  • Cybersixgill also predicts that the consolidation of CTI will gain prominence as it combines with other capabilities, including attack surface management, digital risk protection, and AI. CTI will be viewed as a strategic enabler as organizations assess incumbent vendors’ benefits.

Prediction #5: Geopolitical and other issues will broaden attackers’ motivations beyond financial gain, resulting in a growing pool of targets, attack vectors, and tactics.

  • In 2024, 40 national elections will occur worldwide. As threat actors’ motivations stretch beyond financial gain, Cybersixgill predicts an uptick in attacks targeting entities without profit centers, such as schools, hospitals, public utilities, and other essential services, as bad actors aim to gain power and influence and cause general disorder.
  • Cybercriminals will increasingly offer their skills and expertise for hire through ransomware-as-a-service, malware-as-a-service, and DDoS-as-a-service offerings.

  • Affiliate programs will continue to grow as powerful cybercriminal gangs franchise their ransomware technology, scaling operations to a network of lesser-skilled individuals for distribution, making the extortion business accessible and profitable to a larger pool of threat actors.

The post Five Cybersecurity Predictions for 2024 appeared first on Cybersecurity Insiders.

By Benjamin Preminger, Senior Product Manager, Cybersixgill

“You can’t get good help nowadays.” The adage is true for many professions, but exceedingly so for cybersecurity. While cyber-attacks continue to grow in quantity and sophistication each year, most organizations are ill-prepared to defend themselves. That’s primarily because they lack the skilled professionals needed to handle the consistent rush of expanding threats.

Many approaches to closing the skills gap focus on hiring, educating, and training more people – all sound ideas, but the number of open cybersecurity positions waiting to be filled is in the millions worldwide. It will take many years of training and on-the-job experience for humans alone to shore up defenses adequately. In the meantime, generative AI can serve as a highly valuable asset – now and soon – as its power and potential are better understood and applied.

We’ll look at how generative AI – abbreviated to AI for convenience here yet distinct from other forms of artificial intelligence – can help organizations in the short, medium, and long term.

Taking stock of reality

A few statistics quickly paint the picture:

  • US$8 trillion — the estimated total damage inflicted by cybercriminals in 2023, up from US$3 trillion in 2015[1]
  • 3.5 million – the number of unfilled cybersecurity jobs globally in 2023[2]
  • 2.2 – the average cybersecurity maturity level of organizations globally on a 1-5 scale. Translation: Most organizations are reactive and panic when attacked rather than being proactive and preemptive.[3]
  • 72% — the share of IT and cybersecurity professionals who say they use spreadsheets to track and manage security hygiene efforts[4]

Additionally, publicly traded companies now face increased scrutiny by the U.S. Securities and Exchange Commission on how they manage and implement cybersecurity protections. Boards of directors are now legally responsible for ensuring adequate efforts are being made to safeguard the corporations they oversee and, in turn, all of a company’s customers and other stakeholders[5].

Short-term solutions: Help for junior-level security team members (senior level, too)

One other interesting statistic before moving on to solutions: zero percent. That’s the unemployment rate for mid- and senior-level cybersecurity professionals, a figure that’s stayed constant since 2016.[6] Those who know what to do in the face of attacks are already gainfully employed. Those taking unfilled positions are relatively new to their jobs and need guidance that their seasoned peers typically don’t have the time to provide.

Enter AI.

As anyone who has played around with ChatGPT knows, just a few simple prompts can quickly return information that would take skilled and extensive research otherwise. Using large language models, generative AI tools can cull through diverse sources and provide answers that significantly elevate a user’s understanding of a subject almost immediately. If the first response isn’t sufficient, subsequent queries can explain the topic more thoroughly or suggest other directions to head.

While ChatGPT is excellent for answering questions on general-interest issues, it’s not prepared to be a full-blown cybersecurity aid. Its data sources do not include the specialized ones that contain the relevant, up-to-date information needed to keep an organization protected from attacks.

Properly implemented, cybersecurity-specific AI tools can be a godsend for an inexperienced junior cybersecurity person. Rather than taking up their senior colleagues’ time and potentially asking questions they’re embarrassed to pose, junior staff members can use AI as a non-judging helpmate and, in turn, become more valuable to their organization’s cybersecurity efforts (and in the process, also look better in the eyes of their peers and managers).

For more advanced cybersecurity pros, asking questions or giving commands in natural language expedites the analysis process. For example, “Tell me about this CVE” can result in a simple and concise answer that gets to the heart of the issue without having to weed through numerous sources manually. From there, the senior person can take the next step to protect the organization.
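As a rough sketch of how such a natural-language workflow could be wired up, the example below grounds a “Tell me about this CVE” prompt in the organization’s own intelligence before asking the model. It assumes the OpenAI Python client (an OpenAI-compatible chat-completions API) and a hypothetical fetch_internal_intel() helper; neither represents a specific product’s implementation, and the CVE ID and model name are placeholders.

# Sketch only: answering "Tell me about this CVE" with an LLM, grounded in the
# organization's own intelligence. Assumes the OpenAI Python client and a
# hypothetical fetch_internal_intel() helper; not a specific product's design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_internal_intel(cve_id: str) -> str:
    """Placeholder for a lookup against curated CTI (dark-web chatter,
    exploit availability, affected internal assets)."""
    return f"{cve_id}: exploit code shared on two underground forums; affects our VPN gateway."

def ask_about_cve(cve_id: str) -> str:
    context = fetch_internal_intel(cve_id)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a threat-intelligence assistant. Answer concisely "
                        "using only the supplied context; say so if data is missing."},
            {"role": "user",
             "content": f"Tell me about {cve_id}.\n\nContext:\n{context}"},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(ask_about_cve("CVE-2023-12345"))

Grounding the prompt in curated context is what keeps the answer specific to your environment rather than a generic summary.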

While a handful of solutions are on the market, there’s room for further refinement. If you want the AI application to answer intelligence questions about activity in ransomware sites, it needs access to data from ransomware sites. If you want the latest on initial access broker (IAB) markets, the AI must have access to IAB market intelligence. And if you want the AI to answer intel questions about threat actors sharing exploit codes in the underground, you need it to have access to that vulnerability intelligence.

This also becomes a matter of efficiency and relevance. Cybersecurity teams only need to know about threats that could affect their organizations. Accordingly, AI tools work best when they provide organizational context. Due to the sensitive nature of customer assets, these tools need to be constructed to provide the highest levels of security along with their top-tier AI power.

Mid-term solutions: Organizations develop their own AI tools

The promise of open-source AI solutions is that a company can go even further in zeroing in on the data it needs, drawing on its own accumulated history of cybersecurity threats and responses as well as the data most relevant to the issues it faces. In this way, the AI tool carries the “institutional knowledge” traditionally assumed to be in the heads of people who had worked in an organization for their entire careers. The AI learns about external threats, internal best practices, and priority intelligence requirements (PIRs).

Furthermore, many enterprises will prefer creating homegrown solutions for data security and privacy reasons. It’s hard to trust AI when its creators are sitting somewhere in Silicon Valley, with no allegiance to your organization and potentially using your data to train the next version of their own AI solution.

We should emphasize that using open-source models is far from a simple task. You need highly specialized (and highly paid) employees – data scientists, data engineers, and the like – to set up, monitor, and continually feed and support an AI tool worthy of the role.

But such a tool can be even more valuable if those experts include predictive functions. Just as Amazon and other advanced online marketing companies can make buying suggestions to people ordering an item by looking at what other buyers of that item have done, an AI tool could recommend how to respond to a specific threat based on previous experience.
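A toy sketch of that recommendation idea follows: match a new threat description against past incidents by simple token overlap and surface the responses that were used. The incident history here is invented for illustration, and a production system would use embeddings or a trained model rather than Jaccard similarity.

# Toy sketch of the recommendation idea: match a new threat description against
# past incidents by simple token overlap and surface the responses that worked.
HISTORY = [  # hypothetical institutional knowledge
    ("phishing email with credential-harvesting link", "reset credentials, block domain, user awareness note"),
    ("ransomware note found on file server", "isolate host, restore from backup, rotate service accounts"),
    ("exploit attempt against public web app", "apply vendor patch, add WAF rule, review access logs"),
]

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def recommend(new_threat: str, top_n: int = 2) -> list[tuple[float, str]]:
    query = tokens(new_threat)
    ranked = []
    for description, response in HISTORY:
        past = tokens(description)
        similarity = len(query & past) / len(query | past)  # Jaccard similarity
        ranked.append((similarity, response))
    ranked.sort(reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    for score, response in recommend("credential phishing link reported by employee"):
        print(f"{score:.2f}  {response}")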

We expect that the tools that fulfill the requirements of this mid-term stage will be neither generic, off-the-shelf models like ChatGPT nor completely DIY open-source packages. Rather the tool will be designed for cybersecurity-specific use as soon as it’s installed while allowing the security team to fine-tune it for the organization’s unique contextual challenges.

Long-term solutions: AI becomes an autonomous agent

Artificial intelligence, in general, has been widely implemented to do the tasks that humans would rather not do. The same is likely to apply to AI use for cybersecurity matters.

In the cybersecurity sphere, AI-powered autonomous agents will be valuable in removing both the drudgery and overload that security teams sometimes face. This might include an initial triage of alerts, responses to particular events, and any other function in which an autonomous response is appropriate.

Such implementations are not likely to take jobs away from humans. As the threat environment becomes more complex and challenging, more work will need to be done. No system is completely hands-off, given the nature of the threats and the sensitivity of the data to be protected, such as personal information. Consequently, security professionals will always be at the critical intersection of threat and response, making decisions beyond what AI can be expected to do.

How CISOs can benefit from AI

There’s no way to accurately predict how quickly we’ll reach the mid-term level of AI tools geared specifically to one organization or the long-term level of autonomous agents. But this field is accelerating at breakneck speed as investors and savvy technical people see the opportunities inherent in generative AI. Don’t be shocked if the long-term level arrives within the next 18 to 24 months.

In the meantime, CISOs would be wise to cast about for AI tools that can boost their team’s readiness by making their junior-level people more capable and their senior-level people even more efficient. At Cybersixgill, we’re already seeing organizations adopt our AI-driven Cybersixgill IQ at all levels: accelerating their processes, improving workflows, and making their teams up to 10 times more effective as they do so.

It’s also not too early to learn more about how generative AI works and how organizations could benefit from cyber-specific AI that responds specifically to their organizational context and unique attack surface. Even better, cybersecurity leaders would be advised to invest the time and effort to go well beyond a surface-level understanding of AI to take advantage of the defensive asset that it is.

As American AI researcher Eliezer Yudkowsky says, “By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it.”

The author is part of the team leading the development of Cybersixgill IQ, a generative AI tool designed to make threat intelligence accessible to all members of a cybersecurity team, regardless of their level of experience.

Sources:

[1] Cybersecurity Ventures report, released December 2022

[2] Cybersecurity Ventures report, released April 2023

[3] CYE Cybersecurity Maturity Report

[4] Noetic Cyber survey, as reported by Secureframe, June 2023

[5] https://www.sec.gov/files/rules/final/2023/33-11216.pdf

[6] https://cybersecurityventures.com/top-5-cybersecurity-facts-figures-predictions-and-statistics-for-2021-to-2025/

The post Generative AI: Bringing Cybersecurity Readiness to the Broader Market appeared first on Cybersecurity Insiders.

By Michael Angelo Zummo, Threat Intel Expert – Cybersixgill

Phishing tools and services are common and accessible on the underground. We took a close look at one of them and discovered how easy it can be to launch a phishing scheme.

Phishing is a type of cyberattack in which attackers attempt to deceive individuals into divulging sensitive information, such as usernames, passwords, credit card details, and other personal or financial information. This is typically done by posing as a trustworthy entity, such as a reputable company, financial institution, government agency, or even a friend or colleague.

The cyber underground hosts more than just leaked data, credentials, and narcotics. It is also a marketplace for a variety of tools and services to assist threat actors in carrying out their attacks. This includes phishing.

In the past month alone, underground forums and markets hosted over 2,427 conversations about phishing attacks, templates, kits, and services, with another 17,000 on Telegram, including chatter about services and kits for sale (Figure 1).

Figure 1. A phishing service is advertised in the underground.

Phishing templates

Much of the focus on phishing consisted of threat actors seeking or offering templates. For example, on July 26, a threat actor on a dark web forum requested that anyone with a Santander Bank email template private message them for further discussion (Figure 2).

Figure 2. A threat actor requesting an email template to phish Santander Bank.

Furthermore, other threat actors advertised their phishing and scam pages, such as this post on a popular hacking forum (Figure 3).

Figure 3. Threat actors offering their phishing and scam pages for sale on an underground forum.

Phishing Tools and Services

There are a variety of phishing tools and services available to threat actors in the underground. The most notable in the past month was the phishing-as-a-service program EvilProxy, which provides the ability to run phishing attacks with reverse-proxy capabilities that steal credentials and assist in bypassing 2FA (Figure 4).

Figure 4. A threat actor advertising their phishing as a service with EvilProxy.

EvilPhish

For those looking for less expensive options, there are free tools available that can be used for experimentation or in real attacks. For example, EvilPhish is an open-source tool available on GitHub that simply creates an evil twin of a web page and redirects traffic to a local web server hosting the phishing page. To demonstrate how accessible these tools are for threat actors, we installed EvilPhish in our attack box and tried it for ourselves.

EvilPhish is a simple script that copies a web page of your choosing to use as a template for your own phishing page. For the demonstration, we used Cybersixgill’s portal login page and downloaded the HTML to save in our EvilPhish folder. From there, we ran the “./NewPage” command on that file and moved all the files to our WebPages folder (Figure 5).

Figure 5. Copying HTML file to WebPages folder.

Next, we ran the EvilPhish script on our WebPages folder to create the new phishing page (Figure 6).

Figure 6. Running the ./EvilPhish script on the HTML files.

As you can see in the below screenshot, our phishing page ran locally while waiting for users to input their credentials (Figure 7).

Figure 7. A phishing page impersonating Cybersixgill that we created with EvilPhish.

As a test, we inserted “test” for the username and password to see what EvilPhish captured (Figure 8).

Figure 8. Example of captured user credentials entered on our phishing page.

Once we confirmed the tool worked, all a threat actor would need to do is host the page on a public domain and redirect traffic to it through various techniques, such as embedded links, phishing emails, SMS messages, and more. Additionally, the page can be made more convincing with some HTML and CSS modifications.

Conclusion

The cyber underground continues to provide a variety of avenues and opportunities for threat actors to engage in malicious activities. Free, easy-to-use tools are widely available, and actors can deploy them in successful attacks. A curious threat actor with an appetite for cybercrime can inflict a lot of damage.

Fortunately, organizations can take measures to defend themselves. Here are a few tips to proactively defend against phishing attacks.

– Conduct education and awareness training for employees
– Verify email senders and use filters where possible
– Enable two-factor authentication for an extra layer of protection
– Implement typosquatting detection and domain monitoring in your security stack (a simple check is sketched below)
– Monitor underground channels to detect phishing templates, tools, and services targeting your organization with real-time, comprehensive threat intelligence.
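As a starting point for the typosquatting item above, the sketch below flags newly observed domains that closely resemble a hypothetical watch list, using plain string similarity. Real monitoring would also account for homoglyphs, keyboard-adjacent swaps, and newly registered TLD variants.

# Minimal typosquatting check: flag newly observed domains that sit within a
# small edit distance of domains you own. Watch list and threshold are examples.
from difflib import SequenceMatcher

OUR_DOMAINS = ["cybersixgill.com", "example-bank.com"]  # hypothetical watch list

def looks_like_typosquat(candidate: str, threshold: float = 0.85) -> bool:
    """Return True if the candidate closely resembles one of our domains
    without being an exact match."""
    for legit in OUR_DOMAINS:
        ratio = SequenceMatcher(None, candidate.lower(), legit).ratio()
        if candidate.lower() != legit and ratio >= threshold:
            return True
    return False

if __name__ == "__main__":
    for domain in ["cybersixgil.com", "cyber-sixgill.net", "unrelated.org"]:
        print(domain, "->", "suspicious" if looks_like_typosquat(domain) else "ok")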

The post Dangers of Deep Sea Phishing – A Dive Into a Real-World Attack appeared first on Cybersecurity Insiders.

Omer Carmi, VP of Threat Intelligence, Cybersixgill

When I was in elementary school, we had a routine fire drill. The alarm bells would ring, and we were expected to drop everything and run outside as quickly as possible. As a young child, this was frightening, even upsetting, and we initially took it very seriously. The drills continued through our school years, yet we responded in a much different way by the time we reached high school: The alarm bells would ring, we’d shrug, pick up our stuff and shuffle outside for what we knew was just another break from class. We’d become numb to the alarm bell ringing because we knew there was no fire.

When the cybersecurity community deals with every patch day like we dealt with school fire drills, it runs the risk of becoming numb to the severity of some of the vulnerabilities and blind to which vulnerabilities should be prioritized.

Statistics show that 94 percent of disclosed vulnerabilities are never exploited by threat actors. That means IT staff spend valuable time on CVEs that:

  1. Will never be exploited.
  2. Don’t apply to your organization or industry.
  3. Are completely misjudged at the beginning of their life cycles.
  4. Take away attention from the 6 percent of vulnerabilities that will be exploited.

CISOs should expand the scope of vulnerability management programs so they are better able to decide in real-time if a CVE is indeed one of the 6 percent that demands immediate attention.

Taking into account multiple criteria, including the potential impact of a vulnerability and the likelihood of its exploitation, can create a more balanced order of urgency for an organization.
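For illustration only, the sketch below ranks a CVE backlog by combining a CVSS-style severity with an exploitation-likelihood signal and asset criticality. The fields and weights are assumptions, not a published scoring formula.

# Illustrative only: ranking CVEs by likelihood of exploitation and business
# impact rather than severity alone. Fields and weights are assumptions.
from dataclasses import dataclass

@dataclass
class CVE:
    cve_id: str
    cvss: float                 # severity, 0.0 - 10.0
    exploited_in_wild: bool     # e.g. appears in a known-exploited catalog
    underground_chatter: int    # mentions in cybercrime forums (hypothetical feed)
    asset_criticality: float    # 0.0 - 1.0 for the most critical exposed asset

def urgency(cve: CVE) -> float:
    likelihood = 0.1
    if cve.exploited_in_wild:
        likelihood = 1.0
    elif cve.underground_chatter > 50:
        likelihood = 0.6
    impact = (cve.cvss / 10.0) * cve.asset_criticality
    return likelihood * impact  # patch the highest scores first

if __name__ == "__main__":
    backlog = [
        CVE("CVE-A", cvss=9.8, exploited_in_wild=False, underground_chatter=3, asset_criticality=0.2),
        CVE("CVE-B", cvss=7.5, exploited_in_wild=True, underground_chatter=400, asset_criticality=0.9),
    ]
    for cve in sorted(backlog, key=urgency, reverse=True):
        print(f"{urgency(cve):.2f}  {cve.cve_id}")

Note how a "critical" CVE on a low-value, unexploited asset falls below a "high" CVE that is actively exploited against a critical asset, which is exactly the reordering a severity-only approach misses.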

Take, for instance, the recent hype about OpenSSL vulnerabilities earlier this month. Early indicators pointed to a complete apocalypse – some likened the scenario to Heartbleed 2.0. The media picked up on the sense of urgency, and reports of the expected severity traveled worldwide at record speed. All the alarm bells were ringing, but then the severity was downgraded from “critical” to “high.” That’s a perfect example of the fire drill mentality I’m talking about: it’s inefficient, and it depletes our valuable resources if we continue to listen to “the boy who cries critical.” This doesn’t mean we shouldn’t treat vulnerabilities with care; it means we should change the lens we use to examine them.

How can we move away from severity-driven patching cycles and change the fire drill approach to patching? 

Constant patching creates the same feeling as Whack-a-Mole, where a new vulnerability pops up when you’re done patching an old one. Patch, watch for updates, patch, repeat. It never ends.

Let’s say a prominent software company sends out a release rating a CVE as critical, saying it should be immediately patched. Industry media will pick up on that and start ringing the alarm bells, probably reasoning that it’s better to be safe than sorry.

The problem with following the media’s lead is that most software companies base their patch announcements on the potential severity of the CVE (best characterized by CVSS), without considering the probability that this CVE will be successfully exploited. Remember, only 6 percent of vulnerabilities are actually exploited. If you base your patching on a severity-driven approach, you fail to distinguish between a fire drill and the real thing.

Software companies should get better at providing context for the CVEs they are warning us about and highlighting key risk parameters. It’s no longer enough to just offer a severity score. At a minimum, we should also know:

  1. Whether the CVE has already been exploited in the wild.
  2. How much chatter there is about the CVE in cybercrime forums.
  3. Whether exploit code for the CVE has been shared on the dark web.
  4. What other risk factors beyond severity can inform the patching decision.
  5. How critical the assets vulnerable to the CVE are in your environment.

Media outlets, too, should examine their role in creating a fire-drill mentality and give more attention to risk-based parameters, not just severity.

Vulnerability disclosures will still dominate headlines and attention in 2023, because that is the only way to create awareness of new vulnerabilities across the cybersecurity community and the public. This process has a lot of merit.

But the culture shift away from what I call the fire drill mentality has to come from inside cybersecurity departments. It has to come from strong CISOs who understand that a high severity score without any context is not enough to set the alarm bells ringing, and who recognize the negative consequences of sounding them anyway.

The post Firing the Vulnerability Disclosure Fire-Drill Mentality appeared first on Cybersecurity Insiders.