Electricity, transportation, water, communications – these are just some of the systems and assets that keep the world functioning. Critical infrastructure, a complex interconnected ecosystem, props entire countries up and is vital to the functioning of society and the economy. This is why it is under attack. Threat actors, usually nation-state backed, know this very well. By taking down the poorly protected power grid of a city or even a country, cyber attackers can not only cause mass chaos; any threat to the critical infrastructure sectors could have potentially debilitating national security, economic, and public health or safety consequences.

It is evident that cyberattacks targeting critical infrastructure have become the new geopolitical weapon. Across the globe, countries are seeing these attacks rise rapidly. In fact, the North American Electric Reliability Corporation (NERC) reported in early 2024 that the number of vulnerable points in U.S. power grids is increasing at a rate of roughly 60 per day. Additionally, the U.S. Department of Energy found that grid security incidents reached an all-time high in 2023.

But it is not just in the United States that critical infrastructure such as power grids, water supplies, and communications is being targeted. According to a November 2023 report from the International Energy Agency (IEA), weekly global cyberattacks against utilities more than doubled between 2020 and 2022.

So, why are we seeing this rise in critical infrastructure as a target? 

Unlike financially motivated threat actors, hackers targeting these critical systems are not seeking information in order to leverage a ransom. Instead, they are looking for access to the integral puzzle pieces of enemy nations’ power, water and more, for the purposes of disruption, terrorism and/or espionage. The hackers conducting these attacks are typically backed by one of the big four nation-states: China, Russia, Iran and North Korea.

There have been several of these attacks over the years, each with terrifying implications, though thankfully none has been fully successful. In 2021, the Colonial Pipeline was famously hit in a huge ransomware attack. Because the pipeline supplies a significant portion of the gas and fuel used on the East Coast of the United States, four states declared states of emergency when the pipeline was forced offline for 11 days. The attack was carried out by the Russia-linked hacker group DarkSide and is just one example of note.

The serious reality is that critical infrastructure around the world is under near-constant attack, even when it is not in the news. According to Forescout Research – Vedere Labs, between January 2023 and January 2024, critical infrastructure was attacked more than 420 million times across 163 countries. While the U.S. has been the main target, many other countries, including the UK, Germany and Japan, have also been heavily impacted.

These rising attacks come in the context of the larger cybersecurity war in progress. In May 2023, the U.S. government determined that an intrusion impacting a U.S. port had come from a Chinese government-backed hacking group. Indeed, the investigators tasked with looking into this intrusion found that several other networks had been hit, including some within the telecommunications sector in Guam. Guam hosts a U.S. military base that would likely be a primary point of American response in the case of a Chinese invasion of Taiwan. The intrusion relied on a web shell allowing remote access to servers; had it gone undetected, it likely would have been aimed at electric grids, gas utilities, communications, maritime operations and transportation systems — all with the goal of crippling military operations.

For organizations that supply even the smallest amount of support in the enormously interconnected global infrastructure network, it is high time to get serious about protecting society as we know it. So far, critical infrastructure attacks have yet to be truly catastrophic. However, at the rate these attacks are increasing, the next level of global disruption is inevitable.

It is also important to note that it is not just major infrastructure organizations that need to be concerned, but the smaller businesses that make up the vast network of utilities, electricity, water, power and more. These businesses can be exploited as the entry point by sufficiently crafty and malicious nation-state-backed cyber actors.

Governments and diplomats must understand geopolitical cybersecurity risks. In addition, businesses and individuals must make it a priority to comprehend what these attacks put at risk and how to prevent them, because in the end, it is individuals who will be impacted.

As in physical wars, it is going to be the citizens who pay the price. If one of these critical infrastructure attacks is successful enough to cause a catastrophe, it is going to be the people who will suffer from a lack of water, loss of power or other resources. For this reason, it is the people who must spearhead a shift toward global cybersecurity preparedness.

The post The New Geopolitical Weapon: The Impact of Cyberattacks Against Critical Infrastructure appeared first on Cybersecurity Insiders.

The latest data loss, involving background check company MC2 Data, saw the sensitive information of more than 100 million people in the US leaked, exposing millions to computer-enabled crimes such as identity theft. A popular cybersecurity news website recently discovered an unprotected 2.2TB database containing personal information such as employment history, criminal records, phone numbers, and addresses. This incident has raised many questions about data management practices at large corporations.

Having Data Loss Prevention (DLP) measures in place is no longer just a choice; it is a necessity for enterprises.

Understanding Data Loss Prevention (DLP)

Data Loss Prevention solutions address threats involving the communication, storage, or modification of sensitive data. Given increasingly strict rules and requirements and the constant rise in both the number and effectiveness of cyber threats, DLP tools have become critical for any organization, no matter its size. This blog explains why DLP is crucial in today’s environment.

As large quantities of data are produced over time, exposure widens, allowing hackers to more easily access a company’s most vital data. An effective DLP solution prevents this by identifying important data assets within an organization and protecting them before any breach occurs.

What Should You Look for in a DLP Solution?

1. Automated Response to Threats 

Preventive, proactive cybersecurity offers constant monitoring, immediate threat detection, and fast reaction. This proactive approach helps organizations keep threats from entering the network and therefore minimizes the occurrence of breaches.

2. Integration Across Multiple Channels

A strong DLP solution ought to protect data at every touchpoint, including messaging, network, cloud, and endpoints.

3. Policy Flexibility and Customization

Since regulations concerning data protection differ across industries and geographic locations, it’s important for the DLP solution to provide policies that can be customized. This allows organizations to align with multiple current protection standards and set up specific permissions to further secure data.

4. Risk-Based Data Classification and Access Management

A good DLP solution must treat information according to the risk associated with it and the permission options available for handling and sharing it. Look for platforms with built-in machine learning and response analysis that offer complete visibility.
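As a rough illustration of this kind of risk-based handling, the sketch below tags text with a risk tier using simple pattern matching. The patterns and tiers are simplified assumptions for the example; a real DLP platform layers machine learning on top of far richer rule sets.

```python
import re

# Illustrative patterns and tiers only; real DLP rule sets are far richer.
PATTERNS = {
    "ssn": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "high"),
    "credit_card": (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "high"),
    "email": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "medium"),
}
RANK = {"low": 0, "medium": 1, "high": 2}

def classify(text: str) -> str:
    """Return the highest risk tier matched anywhere in the text."""
    risk = "low"
    for pattern, tier in PATTERNS.values():
        if pattern.search(text) and RANK[tier] > RANK[risk]:
            risk = tier
    return risk

print(classify("Contact jane@example.com, SSN 123-45-6789"))  # -> high
```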

5. Easy-to-Use GUI and Real-Time Information Processing

An easy-to-use GUI improves operational efficiency, and real-time data enables fast, correct decisions during an incident. Opt for a solution with a decision automation tool paired with an analytics dashboard that provides an intelligent view of active threats, making it easier to respond to alerts in a timely manner.

6. Scalability and Adaptability 

As organizations evolve, the DLP solutions they employ must be able to scale with them.

7. Incident Reporting and Forensics

The incident reporting feature of a good DLP solution traces the source and consequences of an information breach. It should allow organizations to capture and identify every aspect of an attack and support remediation.

8. Advanced Encryption and Data Masking

Encryption and data masking play an important role in securing information when it is stored or transmitted. DLP solutions should therefore use strong encryption so that data remains protected even if it is intercepted.
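For a sense of what masking looks like in practice, here is a minimal sketch that hides all but the last four digits of a card number before a record leaves a trusted boundary. The field and format are assumptions for illustration; production DLP applies format-preserving masking and tokenization across many data types.

```python
def mask_pan(pan: str, visible: int = 4) -> str:
    """Replace all but the last `visible` digits of a card number with '*'."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - visible) + "".join(digits[-visible:])

print(mask_pan("4111 1111 1111 1234"))  # -> ************1234
```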

9. Anomaly Detection Using Behavioral Analytics

DLP solutions today should incorporate behavioral analysis features, since users who behave anomalously may be up to no good.

10. Regulatory Compliance and Audit Capabilities

It is crucial to remain compliant with data privacy laws, and the DLP solution should help achieve that.

Conclusion 

When selecting a DLP solution, strive to find an all-inclusive, customizable, and easy-to-operate platform. Fidelis Network® comes with all the features that define today’s cybersecurity protection, making it an industry leader. Fidelis Security’s patented Deep Session Inspection® technology gives you the ability to investigate threats and stop sessions that violate policies, with details about who is sending and receiving data and what type of data is being sent. Don’t wait any longer; it’s time to protect your organization’s data from cyber attackers.

The post Protecting Privacy in a Data-Driven World: What should you look for in a DLP Solution? appeared first on Cybersecurity Insiders.

Generative AI software and features are being shoehorned in across all industries, and come with both typical and unique security concerns. By establishing a flexible software security review framework, organizations can improve security posture and avoid being overwhelmed by countless requests for new technology. At a high level, the process of evaluating GenAI software asks takes the following form, adjusted to fit particular organizational needs (a minimal code sketch of this checklist follows the list):

  • Define your own organization’s risk profile and threat environment.
  • Understand the software use case and request, as well as what data the software will touch and the scope of exposure.
  • Confirm the software is in active development or at least active maintenance, and not stale/unmaintained.
  • Research the vulnerability history of the software and company, and their responsiveness to security issues.
  • Access Trust Center materials to understand the deeper context (SOC2, ISO27001, pentest reports, BCDR).
  • Analyze the company’s data processing addendum and its DPA update notification protocol.
  • Repeat these analysis steps for any plugins or extensions.
  • Ask specific questions about the extraction of data for training datasets.
  • If no chart is provided, chart out the data flow between your systems and theirs to understand how complex the paths are, and how much attack surface area they represent.
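One way to keep such reviews standardized and replicable is to encode the steps above as a pass/fail checklist that gates approval. The sketch below is a minimal illustration under our own assumptions; the step names, the vendor, and the all-or-nothing gating are not from any standard framework.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    step: str
    passed: bool
    notes: str = ""

@dataclass
class SoftwareReview:
    vendor: str
    items: list = field(default_factory=list)

    def approved(self) -> bool:
        # Gate approval on every step; a single open concern blocks the ask.
        return all(item.passed for item in self.items)

review = SoftwareReview("ExampleGenAI Inc.")  # hypothetical vendor
review.items += [
    ReviewItem("Use case, data scope, and exposure understood", True),
    ReviewItem("Actively developed or maintained", True),
    ReviewItem("Trust Center docs reviewed (SOC2, ISO27001, pentests, BCDR)", True),
    ReviewItem("DPA and subprocessor list acceptable", False,
               "Subprocessor in unapproved region"),
    ReviewItem("Written answer on training-data extraction", False),
    ReviewItem("Plugins and extensions vetted", True),
]
print("Approved:", review.approved())  # -> Approved: False
```

Recording notes alongside each failed step keeps the review auditable and gives the vendor a concrete remediation list.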

Defense by In-Depth Inquiry

When public and private sector institutions depend on your data for critical defense and investigation contexts, there is very little room for error. And as a first principle of Security Operations in particular and Information Technology in general, it is critical to set up, maintain, and re-evaluate standardized and replicable processes for situations like security reviews of new software asks from the rest of the company. This is especially true with Software-as-a-Service (SaaS) offerings that provide less visibility than traditional on-premises software.

Enter the Artificial Intelligence hype cycle and the glut of Generative AI (GenAI) functions being shoehorned into nearly every application on the market, as well as entirely new GenAI-powered services not available previously. This new wrinkle of GenAI further complicates the security review process for a number of reasons. Among them: 

  • The novelty of the GenAI business sector compared to more traditional models brings with it uncertainty about some technology impacts and direct concerns about others.
  • The complicated nature of GenAI data processing, with underlying and often interweaving data flows in and out of multiple companies, increases data exposure across multiple entities.
  • The extractive nature of many GenAI services, which feed all available data into training datasets as pristine training data becomes scarcer, often with a regular disregard for user consent, since GenAI models cannot consume their own output as training data.

Given these and other concerns, our Security Operations team has completed a large number of software security evaluations according to a general framework that seems to be working well so far. We’ll go into it here in the hopes that it can help other organizations inform their own software review processes and reinforce their security posture in an uncertain and fast-changing environment.

(For those SecOps practitioners looking solely for advice related to Generative AI, you may want to skip directly to the section titled “Grab a Coffee and Hit the Books” as it discusses specifics like Data Processing Addendums.)

The Importance of Self-Evaluation

Any security review, whether it’s an internal vulnerability, geopolitical concern, or external new software ask, must be rooted in a regular and accurate assessment of your own organization, company, or institution. There is no “One Size Fits All” risk profile – take a look at the industry you’re in and the threats it faces, as well as the data and services you’re responsible for protecting. A shoe sales company charged with protecting employees, customers, and commerce data has a very different footprint here than a company that provides intelligence feeds to investigators and defenders. A public safety agency has a very different footprint than a private incident response company. Understand where you sit and what the environment looks like around you, and let it inform your security posture.

Also, prior to taking on the review of external software, ensure you understand the new request and its use cases. What data will be actively exposed to this new software, and how critical is it? Is it the crown jewels of customer data, proprietary source code, or sensitive business operations, or is it aggregated open-source data available elsewhere? This is especially important when it comes to SaaS offerings that integrate with multiple other services. If the software wraps around Salesforce or GitHub, you’ve got to go deep.

Can I Get a Vibe Check?

Start with open-source research around the software and company. Ensure the product is actively developed or at least maintained as needed, with security updates prioritized. If it’s stale and the last update was a year ago, it should not be a contender for active use within any of your workflows. Threat environments and vulnerability ecosystems simply change too fast at this point. 

Evaluate the company’s response to security issues, whether in the media or through GitHub. Were they responsive or dismissive? Does their update cadence dovetail with your own risk profile, or is it too spaced out to trust for your purposes? Also evaluate company maturity. If they lack a trust center and data processing documentation, and/or if their documentation or support system lives on Discord, that is not a mature solution. Not every company possesses the resources for these steps, but always remember: not every company is going to fit well with your company’s risk management posture.

Grab a Coffee and Hit the Books

Now it’s time to focus on the deeper documentation that’s found in places like a Trust Center. Any software you’re evaluating should be able to make available documents like a SOC2 or ISO27001 certificate, recent penetration testing reports or attestations, and elements of their business continuity/disaster recovery plan. There are reasons some businesses may not be able to produce all of these, but if they cannot provide any, allow your skepticism to deepen. 

For the above-mentioned documents, be sure to check effective dates to ensure they are not years old; and if the company provides multiple years of something like pentest reports, take time to leaf through them. See if the same vulnerabilities show up year after year, or if the company is serious and responsive enough to remediate discovered vulnerabilities during the course of the testing period, which is no small feat. Pay attention to whether the software company retains formal pentesting firms, skips from one firm to another across multiple years, or only engages in unfocused and less effective testing such as advertised testing periods on bug bounty platforms. All of this speaks, on some level, to what’s going on underneath the surface.

For Generative AI in particular, one of the most crucial documents you’ll have access to is the company’s Data Processing Addendum (DPA), which should be public and easy to access and understand. This is a legal document usually established as an addendum to the terms and conditions of a service that covers data processing according to one or several geolocated standards such as GDPR. The DPA should also list all data subprocessors that a company has contracted with, their location, and a general description of their function in relation to your data. Pay attention to the geolocation and breadth of data exposure, and ensure it meets or exceeds your risk management needs. Some DPAs have five or six subprocessors; some have dozens. Some companies only contract subprocessors in the US or EU; some include countries you may not want to come within miles of your data. 

For extra points, if you analyze the DPA of each subprocessor in turn, you see the first- and second-order extent of your data exposure. It is not usually a pleasant sight.

Reading through the DPA, pay special attention to the standards under which data is held. More mature organizations will stick to US and EU best practices, especially GDPR, whereas companies you should avoid will use boilerplate language that points to non-US/non-EU data processing and “equivalent standards.” If the implication of a DPA is that your data can be sent off to completely different regions of the world with no sworn legal protections, it is time to find a different solution. The DPA will also provide background information on the standards the software company holds its subprocessors to. In this section, what you want to see is language along the lines of “a written agreement with each subprocessor that imposes data protection obligations no less protective of personal data than those set out in our agreement with our Clients.” Addendums lacking that kind of language often provide purposeful loopholes for subprocessors that are actually “data partners” – and are probably extracting data for unstated purposes without your consent.

More than any other SaaS segment I’ve performed security reviews for, services with Generative AI components have complex and problematic data subprocessor lists. Tracking back through third- and fourth-order subprocessor lists, you quickly find many of the smaller companies are just white label packages for the larger GenAI firms, and most of the larger GenAI firms are cross-connected with each other. You also find recurring patterns and recurring single points of deep exposure, such as data warehouse Snowflake – if that name sounds familiar, it’s because multiple Snowflake datastores were continuously scraped by unauthorized third parties, sometimes for months, resulting in a swarm of pivot compromises for companies storing data there as well as those relying on those companies as vendors. 

Before completing a security review, you should ensure you understand the data exposure created by the new software’s subprocessor list, as well as specifics on data geolocation and any possible data shifts outside of approved regions. Also ensure the DPA specifies how and when the subprocessor list will be updated, and how notifications occur. Forthright companies proactively email customers about subprocessor changes, with prescribed periods to opt out before the changes take effect. Questionable companies require you to somehow monitor the DPA page for updates yourself.
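Where a vendor offers no change notifications, a crude stopgap is a scheduled job that hashes the DPA page and flags any difference. This is a minimal sketch assuming a placeholder URL and print-based alerting; dynamic page elements can cause false positives, so a real job would hash only the relevant section.

```python
import hashlib
import urllib.request

DPA_URL = "https://vendor.example.com/legal/dpa"  # placeholder URL
STATE_FILE = "dpa_hash.txt"

def fetch_hash(url: str) -> str:
    """Hash the raw page body; hashing just the subprocessor table is more robust."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_changes() -> None:
    current = fetch_hash(DPA_URL)
    try:
        with open(STATE_FILE) as f:
            previous = f.read().strip()
    except FileNotFoundError:
        previous = ""  # first run: just record a baseline
    if current != previous:
        print("DPA page changed - review the subprocessor list")  # swap in real alerting
        with open(STATE_FILE, "w") as f:
            f.write(current)

if __name__ == "__main__":
    check_for_changes()  # run daily via cron or another scheduler
```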

Another specific callout is training data. If you are left with any questions whatsoever as to whether your data will be extracted or analyzed for training datasets, ask the specific question and get the answer in writing. More than a few companies provide robust-looking data policies that leave specific loopholes in place and avoid answering when asked – make it a key piece of your inquiry, and make it clear that approval hinges upon the company’s answer.

Repeat the process for any plugins, extensions, or other add-ons your internal use case inquiry identified. If you thoroughly vet the web app but the Gmail plugin is trivially compromised, your data is still gone.

Conclusion

We are in a liminal period – the old signs have fallen before us, and new trails must be blazed as Generative AI software and features crowd most markets. But we aren’t at a place of stability or certainty. While we move through what’s likely to be the horseless carriage phase of Generative AI, Security Operations and similar teams must move carefully and deliberately, ask hard questions, and analyze dull documents. Establishing flexible frameworks for software security reviews that pay special attention to trust-related and data processing documentation eases the burden and helps inform critical business decisions as we all adapt to changing conditions while seeking to arm our colleagues with the best technology available. 

 

The post Generative AI software and features are being shoehorned in across all industries appeared first on Cybersecurity Insiders.

Distributed Denial-of-Service (DDoS) attacks flood target networks with an overwhelming number of requests all at once, resulting in a denial of service that can shut down internet connectivity across all verticals. They are particularly troublesome since attacks continually evolve to overcome existing defensive measures.

From 2022 to 2023, Radware reported a 120% increase in DDoS attacks, along with a 60% increase in large attack vectors and a staggering 770% increase in malicious web transactions. The rise in DDoS attack scale is partly due to the availability of large-scale Internet of Things (IoT) botnets — networks of compromised devices collaborating in attacks. These sophisticated assaults leverage vast botnet networks, made possible by the proliferation of IoT devices.

These attacks are not mere inconveniences. They have substantial repercussions. The costs of downtime alone can be staggering, averaging $6,130 per minute or $367,800 per hour. Beyond financial losses, successful DDoS attacks can also harm reputations and lead to regulatory violations. Any organization with an online presence is susceptible.

Types of DDoS Attacks Vary

DDoS attacks come in various forms, continually evolving to bypass countermeasures. Volumetric attacks flood networks with data, crippling operations. Application layer (L7) attacks target specific applications, slowly draining resources. Protocol attacks exploit network protocol vulnerabilities. However, zero-day DDoS attacks are the most challenging to detect since they are entirely new and lack pre-existing signatures.

Attack tactics can also vary. For example, carpet-bombing attacks target multiple addresses at once, while burst attacks strike suddenly, delivering intense but short-lived traffic surges that sometimes repeat at intervals. SSL floods, meanwhile, overwhelm servers with numerous SSL handshakes. All of these deplete network or server resources.

With 31% of organizations facing daily or weekly attacks and 60% encountering attacks monthly, the challenge is substantial, particularly amid a shortage of cybersecurity experts. Given the diversity of DDoS threats and the anticipation of new forms of attack, comprehensive protection is essential. Solutions must defend both network and application layers against current and future attacks.

Real-Time Monitoring Key

An effective DDoS protection strategy hinges on real-time monitoring, enabling rapid identification of attack signatures — whether known or new.

A DDoS protection solution should be easy to deploy and equipped with real-time monitoring for quick detection of a range of attacks. Malicious traffic must be stopped before it reaches your network edge by rerouting it to minimize or prevent disruption, often without you even realizing an attack is in progress.
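To make that concrete, one common building block of real-time detection is a smoothed baseline of request rates that flags sudden surges. The sketch below uses an exponentially weighted moving average with an illustrative multiplier; production systems combine many such signals across network and application layers.

```python
def make_detector(alpha: float = 0.2, threshold: float = 3.0):
    """Flag a sample when it exceeds `threshold` times the smoothed baseline."""
    baseline = None

    def observe(requests_per_sec: float) -> bool:
        nonlocal baseline
        if baseline is None:
            baseline = requests_per_sec  # seed the baseline with the first sample
            return False
        alert = requests_per_sec > threshold * baseline
        if not alert:
            # Only fold normal samples into the baseline, so attacks don't skew it.
            baseline = alpha * requests_per_sec + (1 - alpha) * baseline
        return alert

    return observe

detect = make_detector()
for rate in [100, 110, 95, 105, 900]:  # the last sample simulates a burst attack
    print(rate, "ALERT" if detect(rate) else "ok")
```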

A holistic approach is crucial to safeguarding against a wide array of advanced DDoS attacks. Advanced solutions detect, analyze, and mitigate both sophisticated and emerging DDoS threats. They incorporate behavior-based detection powered by Machine Learning (ML) and Artificial Intelligence (AI) to recognize zero-day (unknown) attacks and dynamically adjust defenses based on the specific context of the attack. With access to global detection networks, these solutions automatically deploy updates to protect against new threats. Benefits of these advanced services include:

  • Comprehensive 360-degree defense: Consistent protection against both existing and emerging threats across all environments and entry points is achieved with advanced technologies, combined with threat intelligence. This consistency keeps businesses secure, regardless of network setups or deployment scenarios.
  • Intelligent protection: AI and ML algorithms enable automated, real-time defenses that evolve with new attack vectors, providing adaptive protection against both known and unknown attack types.
  • Fast detection and mitigation: Rapid identification and response are crucial to counter DDoS attacks efficiently. Advanced algorithms detect new attack patterns in real time, ensuring proactive defense against evolving threats.
  • User-friendly portal: A customer portal offering real-time reports and attack insights gives users visibility and control over their network security.
  • Fully managed expert service: A fully managed DDoS protection service provides peace of mind, with security experts available 24/7/365 to offer assistance during attacks and guidance during non-attack periods.
  • Minimized latency with ISP: Leveraging your Internet Service Provider (ISP) for DDoS protection integrates the solution directly into the ISP’s network core, allowing for the fastest possible detection and mitigation. This eliminates the extra hops associated with external scrubbing centers, reducing latency and ensuring optimal network performance during an attack. Additionally, using your ISP streamlines support, with a single team managing both internet service and DDoS mitigation.

With attacks on the rise, organizations can stay ahead of cybercriminals by leveraging DDoS protection backed by advanced technology, intelligent defense mechanisms, and comprehensive support. Proactive and robust defense against the ever-evolving landscape of DDoS cyber threats is critical for safeguarding critical internet connectivity.

The post A Holistic Approach to Security: 6 Strategies to Safeguard Against DDoS Attacks appeared first on Cybersecurity Insiders.

One year after Hamas attacked Israel on October 7, geopolitical tensions undoubtedly continue to impact various aspects of life in Israel. Yet, as they have so many times before, the people of Israel continue to show their resilience. In much the same way, Israeli technology has proven that it too has a level of resilience unmatched in the world, and that challenges are opportunities for success rather than barriers. Israel is known for breeding world-class cybersecurity technology and startups, and while some might expect Israeli innovation to diminish amidst adversity, the Israeli tech sector is unwavering, and the seeds are being planted for the next big wave of innovation coming out of Israel in 2025.

Turning Conflict into Opportunity 

Since its inception over 70 years ago, Israel has faced constant threats, and despite this has remained innovative and adaptable. A large reason Israel generates so many cutting-edge cybersecurity startups is, in fact, these very threats and the hands-on experience Israelis gain defending against them in military units like Unit 8200. This unit, part of the Israel Defense Forces, is charged with Israel’s cyber defense, and is among the best in the world at it. Having battled some of the most advanced cyber threat actors in the world while serving in Unit 8200, and wanting to create commercial solutions to defend against them, many alumni transition into the private sector to found successful startups. An early security pioneer, Check Point Software Technologies, which created the game-changing FireWall-1 software, was born out of technologies developed for national defense. It has been followed by many other success stories, including Palo Alto Networks, Wiz and SentinelOne. Born out of conflict, Israeli tech thrives because of its ability to adapt and find success through adversity.

Driving Innovation and Investment Amidst Challenges 

Historically, Israeli companies founded in times of threat and turmoil have outperformed companies founded during less challenging times. Research from Startup Nation Central shows that the success rate (as measured by the ability to go public, be acquired, or reach valuations of over $1B) of companies that raised funds during the previous conflicts of 2006 and 2014 was higher than that of companies raising funds in conflict-free periods. Today, despite the challenges the nation is enduring, Israel has seen an unceasing flow of investments and acquisitions. Since October 7, 2023, the Israeli tech ecosystem has seen 577 private investment rounds and raised a total of $7.8 billion in funding, with 18 companies each raising over $100 million. These achievements indicate investor confidence in the long-term potential of Israel’s innovation landscape.

For example, Dig and Talon were both acquired by Palo Alto Networks, for a combined value of $1 billion, just days after October 7. Both Israeli companies were founded less than four years before their acquisition. Google, meanwhile, recently attempted to acquire Wiz, the Israeli cybersecurity startup focused on protecting organizations from cloud threats, for a whopping $23 billion. Dig, Talon, and Wiz prove that the Israeli cybersecurity market continues to earn the confidence of technology powerhouses around the world, and we can thus expect the exits and investments to continue uninterrupted.

Israeli Collaboration

Israel’s reputation as a leading cyber nation is bolstered by strong collaboration within its cybersecurity ecosystem. Partnerships between startups, established companies, and government entities illustrate the strength and cohesiveness of the Israeli cyber community. Israeli cybersecurity innovations have a significant global impact, protecting critical infrastructure and enterprises worldwide.

Government support has played a crucial role in nurturing the growth of the Israeli tech ecosystem. The Israeli government invests heavily in cybersecurity through research and development grants, startup acceleration programs, and public-private partnerships. In fact, the Israeli tech sector accounts for 20% of the country’s economy, with 400,000 Israelis in the tech workforce. These partnerships have been crucial in maintaining the Israeli tech sector’s momentum through times of hardship, enabling it to maintain its role as Israel’s main growth driver.

Forging Ahead 

The country’s ability to innovate in response to challenges is the cornerstone of Israel’s success. Real-world experiences like the current war, while challenging, have strengthened the resolve of Israeli entrepreneurs, and Israel continues to produce the most revolutionary cybersecurity technology in the world. The resilience and strength demonstrated in the year since October 7, 2023 is telling, and will inspire future generations, ensuring that Israel remains at the vanguard of technological advancement. The next wave of innovation is being molded by the lessons learned during these hard times, leading to even more robust and effective cybersecurity solutions.

The post One Year Later: The Israeli Tradition of Resilience appeared first on Cybersecurity Insiders.

For businesses, managing the various risks that come with third-party relationships has become a critical function of the organization and a matter of complying with the law. However, organizations are still determining the most essential aspects of an effective third-party risk management (TPRM) program.

One pillar of any successful program is the vendor risk assessment questionnaire, a document created to evaluate the risks associated with vendors and business partners – and the partners they do business with. 

In gauging third-party risk, organizations should learn as much about their partners and vendors as possible. The questionnaire is a way to find potential weaknesses in their security, privacy, and compliance practices by evaluating policies, controls and supporting evidence of those controls. 

Risk assessment and mitigation begin with information gathering. The questionnaire is the key to getting an inside-out, trust-based view of a vendor’s security posture. Questionnaires help an organization answer critical questions, such as:

  • Does this vendor have acceptable risk controls?
  • Are there risks with this vendor that require remediation?
  • Are there compensating controls in place for identified risks?

Questionnaires may just be one piece of the TPRM puzzle, but they are an extremely useful mechanism for getting a detailed internal perspective of third-party risk.

Choosing the right questionnaire

Creating TPRM assessment questionnaires from scratch is something few organizations have the time, resources, or expertise to accomplish. That’s why many choose an industry-standard template, such as the Standard Information Gathering (SIG) questionnaire or, for healthcare organizations, the H-ISAC questionnaire. These templates offer a good starting point: they are based on established frameworks and address critical areas like data security, operational resilience and compliance with the law.

While these questionnaires vary, many include these standard building blocks:

  • Vendor policies on data protection.
  • Compliance with standards, laws and regulations.
  • Access management, information privacy, incident response and other security controls.
  • Security measures related to both digital and physical infrastructure.
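To make those building blocks concrete, a questionnaire can be modeled as weighted sections whose answers roll up into a single score. The section names, weights, and responses below are illustrative assumptions, not the actual SIG or H-ISAC structure.

```python
# Illustrative weights; real frameworks like SIG define their own structure.
SECTIONS = {
    "data_protection_policies": 0.30,
    "regulatory_compliance": 0.25,
    "security_controls": 0.30,  # access mgmt, privacy, incident response
    "infrastructure_security": 0.15,  # digital and physical
}

def vendor_score(answers: dict) -> float:
    """Weighted average of per-section scores (0 = poor, 1 = strong)."""
    return sum(weight * answers.get(section, 0.0)
               for section, weight in SECTIONS.items())

acme = {  # hypothetical vendor responses, normalized to 0..1
    "data_protection_policies": 0.9,
    "regulatory_compliance": 0.8,
    "security_controls": 0.6,
    "infrastructure_security": 0.8,
}
print(f"Vendor score: {vendor_score(acme):.2f}")  # -> Vendor score: 0.77
```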

Another advantage of industry-standard questionnaires is that vendors, those who will be answering the questions, are likely already familiar with them and will be ready to give detailed responses. Instead of settling for the cookie-cutter approach that often comes with templates, organizations should adapt them to meet the specific needs of their business, adjusting as needed for risk tolerance, industry, and regulatory requirements. This ensures the questionnaire will collect relevant, accurate, and timely information.

However, like most things that are important in business, the questionnaires that help an organization gauge risk come with their own set of challenges.

Questionnaires and their challenges

Organizations must surmount a series of challenges to get risk-assessment questionnaires to reach their full potential. Questionnaires, for example, can be:

Work-intensive: Completing a questionnaire can be time-consuming, especially if an organization has numerous vendors. Creating, distributing, and analyzing risk assessment questionnaires takes dedicated resources and expertise.

A snapshot, not a movie: Security questionnaires offer a limited glimpse of a vendor’s security profile at a certain point in time. However, the nature of risk changes constantly, and new vulnerabilities can arise after a questionnaire has been completed and filed away.

Supply chain complexity: Interconnected supply chains mean organizations must assess the risks associated with third-party and fourth-party vendors, adding complexity to the risk management process.

Vendor fatigue: Vendors may delay or deprioritize completing questionnaires, as they may be fatigued from filling out so many. This can slow the timeline for assessing their risks.

To combat this fatigue, organizations can streamline questionnaires with AI programs that automatically populate a new questionnaire by pulling from an older one or extracting details from sources like SOC2 reports or ISO Statements of Applicability. Tailoring questionnaires to the vendor’s specific role can also lessen the burden and boost engagement. And using automated workflows for follow-ups can relieve more of the burden still.

How to get the best use of questionnaires

Once an organization has pushed through the challenges and created a robust questionnaire for risk management, it’s time to put it to use. Below are tips on how to get the best use of it:

Refrain from settling for a fixed and rigid questionnaire. Don’t fall prey to “analysis paralysis” in trying to create a perfect questionnaire. The one-and-done approach doesn’t suffice given the dynamic nature of risk. Information starts getting stale the moment a questionnaire is completed, so be aware that maintaining real-time risk knowledge and awareness takes continuous evaluation.

Be ready to customize. An organization should be able to import or create items for review as the assessment progresses, with customization options for adding questions as more unique needs are identified.

Regularly reassess third parties. Risk assessments should be repeated regularly, especially for vendors that bring extra risk. How often you reassess depends on how critical the vendor is to your operations and on the sensitivity of the data they handle. Organizations may need to reassess their vendors annually, or more often in highly regulated industries, depending on compliance requirements.

Risk evolves rapidly in our digital and connected world, so a vendor’s security posture can easily change as new vulnerabilities, incidents, or changes in business processes come to light. That’s why automation and continuous monitoring are essential to stay ahead of such changes. 

Next steps in the process

A robust third-party risk management program begins with a risk assessment questionnaire. These documents can be paired with real-time security monitoring, automated risk management products, and continuous vendor monitoring to manage and mitigate third-party risk most effectively.

Tools and strategies in the right combination will help any organization mitigate the risks that come with a large ecosystem of vendors, ensuring the business stays secure.

TPRM best practices should always include using real-time monitoring to assess vendor performance continuously and validate the effectiveness of controls “in the wild,” reassessing vendors regularly to ensure their security measures are still effective, and customizing your questionnaire to mirror the unique risks each vendor brings.

However, every successful TPRM program begins with something simpler: the risk-assessment questionnaire.

The post Top Strategies for Using Vendor Risk Questionnaires to Strengthen Cybersecurity appeared first on Cybersecurity Insiders.

The financial strain on businesses is growing at an alarming rate, largely as a result of escalating cybercrimes. The financial implications of cyberattacks are becoming impossible to ignore.

The increasing frequency and sophistication of these threats demand a more strategic approach to cybersecurity investment, yet many organisations continue to underestimate the financial consequences of a breach.

The financial toll of cybercrime can be divided into direct and indirect impacts. Direct costs include the immediate loss of revenue due to downtime. A business can grind to a halt in the aftermath of an attack, often requiring weeks to restore operations.

The high costs

The cost of recovery, including professional support to restore systems, investigate the breach, and work with regulators, is another major direct hit to the bottom line.

The indirect costs, however, can be just as devastating, if not more so. Many people do not understand how severe the indirect effects of a successful cyber compromise will be on the business.

The most immediate indirect impact is the erosion of trust among customers, partners, and the public. A loss of trust often leads to a significant loss of business, as customers may turn away permanently.

Further indirect costs arise from regulatory reporting requirements and the protective measures necessary to safeguard individuals affected by the breach. These additional expenses can accumulate rapidly.

The true cost of a cyberattack extends far beyond ransom payments, regulatory fines, and recovery costs; it reaches into the personal lives of employees, affecting mental health and well-being. A cyber-attack is extremely stressful to the business and those responsible for recovery, which can lead to burnout and prolonged stress-related absences from work.

The cybersecurity investment gap

Despite the mounting risk, many organisations continue to under-invest in cybersecurity. I see a disproportionate under-investment in relation to the risk of cybercrime. This mismatch between risk and investment is a critical issue for CFOs.

While some boards may approve increased spending on cybersecurity, this spending is often ineffective, with a focus on isolated solutions rather than a comprehensive strategy.

The problem is that many business leaders still view cybersecurity as a technology issue. Cybersecurity is not really about technology; it is about managing digital risk through a structured, resilience-based approach.

Technology is only an enabler; true resilience comes from understanding the broader risks and implementing a strategic framework that covers all aspects of digital risk.

Minimising financial damage

Prevention, as the saying goes, is better than cure. For businesses, this means building a robust cyber resilience framework. We will never stop attackers from trying to attack, but an effective framework can help businesses detect and respond to threats before they cause significant damage.

Security comes from visibility – resilience provides visibility, and visibility gives us the capability to respond.

By ensuring total visibility across all parts of a cyber resilience framework, organisations can detect potential attacks early, limiting the financial damage. The sooner a threat is identified, the easier it is to contain, reducing the potential for widespread disruption.

Aligning cybersecurity with financial strategy

One of the key challenges for CFOs is aligning cybersecurity investments with their overall financial strategy. The focus needs to shift from the cost of individual cybersecurity tools to the value of preventing cyber incidents in the first place.

Let’s rather focus on what your business does to make money. By understanding how cyberattacks can disrupt revenue streams and harm customer relationships, business leaders can better justify the necessary investment in cybersecurity.

The financial impact of a cyberattack is not limited to the cost of recovery. Most businesses will face at least two weeks of downtime, followed by months of ongoing disruption. During this time, businesses lose not only revenue but also market share, as competitors swoop in to capture dissatisfied customers.

In many cases, 30% of customers will no longer want to do business with a company that has been breached. By calculating these potential losses, businesses can gain a clearer picture of the true cost of cyber risk.
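A back-of-the-envelope calculation shows how quickly this adds up. The sketch below combines the two-week downtime and 30% churn figures cited here with an assumed annual revenue; the revenue figure is a placeholder for illustration, not data from this article.

```python
annual_revenue = 50_000_000        # assumed mid-size business, USD
daily_revenue = annual_revenue / 365

downtime_days = 14                 # "at least two weeks of downtime"
churn_rate = 0.30                  # "30% of customers will no longer..."

downtime_loss = downtime_days * daily_revenue
churn_loss = churn_rate * annual_revenue  # first-year revenue at risk

print(f"Downtime loss:  ${downtime_loss:,.0f}")  # ~ $1.9M
print(f"Churn exposure: ${churn_loss:,.0f}")     # $15M
print(f"Total exposure: ${downtime_loss + churn_loss:,.0f}")
```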

Incident response planning

A comprehensive incident response plan is essential for reducing the financial impact of cybercrime. Being prepared is crucial. Regularly reviewing and testing incident response plans can help organisations respond more effectively when an attack occurs, reducing both the direct and indirect costs of a breach.

Building cyber resilience into the business also includes regular awareness training and cybersecurity drills. These exercises help employees understand their role in protecting the business, creating a culture of vigilance that strengthens the organisation’s overall defences.

The rising cost of cybercrime is placing significant financial pressure on CFOs. While many organisations still under-invest in cybersecurity, the true cost of a breach – from lost revenue and reputational damage to regulatory fines and personal stress – far outweighs the expense of building a robust, resilience-based cybersecurity framework.

By shifting focus from technology solutions to strategic risk management, businesses can reduce their exposure to cyber threats and protect their bottom line.

The post Financial impact of cybercrime appeared first on Cybersecurity Insiders.

The rapid adoption of Generative AI (GenAI) tools in both personal and enterprise settings has outpaced the development of robust security measures. The immense pressure on practitioners to quickly deploy GenAI solutions often leaves security as an afterthought. Cybersecurity experts, who prioritize the protection of data confidentiality, integrity, and availability, are increasingly raising alarms about the potential vulnerabilities of GenAI.

GenAI’s Achilles’ Heel: Where Vulnerabilities Lie

GenAI systems are susceptible to several security risks, including:

  • False Information Generation: GenAI models can be manipulated to produce misleading or inaccurate information, potentially damaging reputations or leading to poor decision-making.
  • Data Exfiltration: Malicious actors can exploit vulnerabilities in GenAI systems to extract sensitive data, posing significant risks to privacy and confidentiality.
  • Privacy Violations: The use of personal data in training GenAI models raises concerns about privacy and the potential for misuse or unauthorized access to this information.

A major challenge is the lack of transparency surrounding the maintenance, monitoring, and governance of many GenAI applications. Enterprise organizations that integrate with SaaS platforms utilizing GenAI services must thoroughly vet these providers to ensure adequate technical and security due diligence, particularly focusing on data flow monitoring.

Additionally, because GenAI has greatly reduced the difficulty of digital replication, it is much easier for threat actors to convincingly replicate voice, video and images. Considering the amount of digital content that exists, for both individuals and enterprises, the potential for damage to personal or business brands through manipulated digital content is a growing concern.

Both enterprise and personal users of these tools should be very concerned about the various threats posed by GenAI. In my experience, many SaaS providers are unprepared for the additional exposure these systems can create.

Traditional Defenses Fall Short

Traditional antivirus and cybersecurity products are ill-equipped to address the unique challenges posed by GenAI. These tools rely on identifying known threats through signatures, hashes, or other identifiers, which are ineffective against the constantly evolving nature of GenAI models.

The immense size and complexity of these models also make them difficult to scan for vulnerabilities, unlike traditional software. Thus, new and more sophisticated security tools, such as User and Entity Behavior Analytics (UEBA) and automated model red teaming, are necessary to preemptively address GenAI security risks. UEBA can help identify when a user or model is acting anomalously and flag admins to potentially malicious activity, while automated red-teaming tools can stress-test various components of GenAI services before deployment to ensure they generate appropriate content.
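As a simplified illustration of the baselining at the heart of UEBA, a per-user z-score over recent activity can flag behavior far outside that user’s norm. The metric, window, and threshold here are assumptions for the example; real UEBA products model many signals per user and entity.

```python
import statistics

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than `z_threshold` std devs above the baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current > mean
    return (current - mean) / stdev > z_threshold

# Daily megabytes downloaded by one user; the last value mimics exfiltration.
baseline = [120, 95, 130, 110, 105, 125, 100]
print(is_anomalous(baseline, 4000))  # -> True
```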

Navigating the GenAI Security Landscape

While leading GenAI providers like OpenAI, Google, and Microsoft are investing heavily in security, smaller vendors may not have the resources or expertise to adequately protect their systems. Therefore, it is crucial for organizations to conduct thorough security audits of their vendors and their controls.

Key areas to focus on include:

  • Data Monitoring: Ensure vendors have robust mechanisms in place to monitor and control data flow in and out of GenAI systems, including comprehensive audit records of GenAI transactions.
  • Transparency: Demand clear documentation and explanations of how GenAI models are trained, the data sets used, and any inherent biases or limitations.
  • Employee Training: Upskill employees to identify and report potential security issues related to GenAI use and misuse.

Proactive Measures for Enterprise Security

To effectively address GenAI security concerns, enterprise organizations should take a proactive approach:

1. Establish a GenAI Security Framework: Develop comprehensive policies and procedures for the secure use and management of GenAI tools.

2. Conduct Regular Security Audits: Regularly assess the security posture of GenAI vendors and their solutions.

3. Implement Continuous Monitoring: Monitor GenAI systems for anomalies and potential security breaches.

4. Invest in Advanced Security Tools: Explore and adopt innovative security tools specifically designed to address GenAI risks.

5. Foster a Culture of Security Awareness: Educate employees about GenAI security risks and promote best practices for safe usage.

By taking these steps, enterprise organizations can harness the power of GenAI while mitigating its potential risks, ensuring a secure and successful integration of this transformative technology.

The post The Dark Side of GenAI: Cybersecurity Concerns for the Enterprise appeared first on Cybersecurity Insiders.

Gather ’round and let us reveal a tale that will send shivers down your spine. 

Picture this: In the dark cyber realm, a shadowy figure stumbles upon a treasure trove of secrets, unguarded and exposed. A 2.2TB database left wide open, filled with the personal information of over 100 million Americans. This was not just any ordinary find; it was a Pandora’s box of digital horrors. 

This vast database, belonging to the background check company MC2 Data, held the essence of individuals’ lives—names, addresses, phone numbers, legal records, and employment histories. The leak impacted nearly one-third of the U.S. population due to a simple error: the database was left unprotected, without even a password.

Cybercriminals rejoiced, finding a goldmine of information ready for exploitation. Imagine the social engineering attacks possible with such details. Social engineering attacks are manipulative tactics used by cybercriminals to deceive individuals into divulging confidential information or performing actions that compromise security. The data of PrivateRecords subscribers and the individuals they had compiled information on were laid bare for such malicious actors. 

Remember the lessons this tale imparts. In the age of digital wonders, even the smallest oversight can unleash nightmares upon millions. Stay vigilant, guard your secrets well, and let this story serve as a cautionary tale for all. 

For a deeper dive into this chilling narrative and its far-reaching implications, Clyde Williamson, Senior Product Security Architect at Protegrity, discusses the importance of data protection and privacy:

“Looking into their background, MC2 Data owns and operates several websites like PrivateRecords.net that have access to 12 billion public records from thousands of scraped online sources. This information, taken and compiled without any knowledge or consent of those involved, is then used to create background reports. Even more concerning, MC2 Data didn’t even put data security or bare-minimum password protection to this information. So not only are there millions of Americans whose data was scraped and put together without their permission, but now it’s all out there waiting to be picked up by anyone who wants it.  

Companies like MC2 Data operate this way so they don’t have to receive personal data directly from individuals. While these types of services are often used by potential employers or loan departments, that’s not the case 100% of the time. Anyone could be using these types of services for any purpose imaginable. Unfortunately, this breach likely impacts both those who subscribed to this service and the people whose data was compiled without their consent.   

These background checks don’t just include contact information or address history, either. Instead, we’re looking at deeply personal information such as an individual’s social media profiles, family members, marital and divorce status, and much more. This breach goes beyond business checks and lands squarely as prime social engineering attack fodder for cybercriminals. 

In their hands, this type of information can easily be used to scam unsuspecting parents, siblings, friends and other people close to you into sending threat actors their whole life savings on your behalf. MC2 Data did the hard part for such criminals by amassing, storing, and then failing to protect this hoard of public information – in fact, they left the door wide open for them to waltz in and take it freely and neatly.

Regardless of whether this was an accidental move on MC2 Data’s part or, at worst, a deliberate act of negligence, this incident highlights how poorly organizations understand data security despite having the means to access such vast amounts of sensitive data. This failure to secure even basic authorized access is frankly alarming and highlights the inadequacy of U.S. laws in handling citizens’ data, which are not equipped for the challenges of the 21st century.

The focus must shift from merely complying with outdated regulations to embracing the true spirit of data security, because no organization is a data Fort Knox. Our regulations need to value transparency and data de-identification with true data protection strategies like encryption and tokenization, which ensure even when data is stolen it’s useless to threat actors looking to abuse it.”

The post Cyber Nightmare: The Haunting Reality of an Unprotected Database appeared first on Cybersecurity Insiders.

The best part about a competition is a worthy opponent, but what happens when the fellow contender is engineered to never lose?

The idea of artificial general intelligence (AGI) has emerged amid the artificial intelligence (AI) explosion. AGI is a theoretical pursuit to develop AI with human-like behavior such as self-understanding, self-teaching, and a level of consciousness. AI technologies currently operate within a set of pre-determined parameters. For example, AI models can be trained to paraphrase content but often lack the reasoning abilities to generalize what they’ve learned.

While AGI may seem like a far-off ideal, it is closer than we think. The history of computerized chess algorithms can give us a glimpse into what might be right around the corner.

The Checkmates That Changed the World

Until the mid-20th century, chess was an area where human creativity and intuition reigned supreme over computers. In 1957, the first functional chess-playing program was developed at IBM, and by the 1980s, programs had achieved a level of play that rivaled even the greatest human chess minds. In 1989, IBM’s Deep Thought set the stage for computers to challenge some of the best human players when it defeated several grandmasters.

The 1990s saw two of the most famous competitions between humans and machines —with World Champion Garry Kasparov taking on IBM’s Deep Blue. In his initial match with Deep Blue in 1996, Kasparov emerged as the victor. But, in the 1997 rematch, the upgraded Deep Blue defeated Kasparov 3.5-2.5 — the first time a reigning champion lost to a computer. This event marked a turning point for AI and illustrated that machines could outperform humans in specific, deeply intellectual tasks.

Computer chess continued to evolve with the introduction of self-learning algorithms. Initially, chess engines relied heavily on human-crafted algorithms and databases of previous games. With the introduction of the self-learning algorithm AlphaZero, which used reinforcement learning to teach itself, we began to see a level of superhuman play. By playing millions of games against itself, learning and improving with each iteration, AlphaZero was able to surpass the capabilities of the world’s best engines like Stockfish in just a few hours. In a 100-game match against Stockfish, the best human-developed chess engine, AlphaZero went undefeated, winning 28 games and drawing 72.

Today, AlphaZero boasts standard Elo ratings of over 3500, while the best human players are only around 2850. The odds of a human champion defeating a top engine? Less than 1%. In fact, experts widely believe that no human will ever again beat an elite computer chess algorithm.
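That figure follows from the standard Elo expected-score formula, E = 1 / (1 + 10^((Rb - Ra) / 400)). A quick check with the ratings above, noting that expected score counts draws as half a point, so the outright win probability is lower still:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

human, engine = 2850, 3500
print(f"{elo_expected_score(human, engine):.3f}")  # -> 0.023 expected score per game
```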

Learning to Expect Sudden Jumps in GenAI’s Capabilities

Chess’ evolution offers valuable insights into the development of other AI technologies, particularly Generative AI (GenAI). Both fields have shifted from relying on human-crafted strategies to adopting self-learning systems.

Modern Large Language Models (LLMs), like GPT-4, can process vast amounts of data through unsupervised learning and perform a wide range of tasks autonomously. This suggests we are on the cusp of witnessing exponential growth in AGI. The progression we’ve seen—where slow, incremental advances suddenly give way to explosive improvements—serves as a clear indicator of AGI’s potential. With AGI, this may not only result in outperforming humans in specific tasks but rapidly evolving to handle a broader range of cognitive functions independently.

The technical drivers behind these leaps are already emerging. LLMs like GPT-4 have shown an ability to scale unsupervised learning, enhancing performance across multiple domains with minimal human input. The architecture’s ability to process and generate massive amounts of data in parallel accelerates the learning cycle. As these systems are provided with more computational power and data, the likelihood of rapid and dramatic improvements becomes even more probable.

This is not a gradual evolution but an exponential one. Once a general AI system reaches a critical threshold in its learning capabilities, it could swiftly surpass human intelligence across various fields. Preparing for this rapid inflection point is not only a technical challenge but also a strategic imperative for organizations seeking to leverage AI responsibly. That’s why establishing robust ethical frameworks and implementing technical safeguards now is essential.

Unsupervised AI Learning in the Real World

While AI-powered chess may be all fun and games (pun intended), the implications in the real world of autonomous learning, a giant step toward AGI, are far from benign. Here are a few examples:

1. In 2016, Microsoft’s AI chatbot for Twitter, Tay, quickly turned offensive when exposed to unfiltered data. Soon after it launched, people started communicating with the bot using misogynistic and racist content. Tay learned from these conversations and began repeating similar-sounding statements back to users.

2. A few months after ChatGPT was launched, adversaries began claiming that they had used the technology to create malware, phishing emails, or powerful social engineering campaigns.

3. When the U.S. military began integrating AI into wargames, they were surprised to see that the preferred outcomes from OpenAI’s technology were extremely violent. In multiple replays of a simulation, the AI chose to launch nuclear attacks.

We’ve opened Pandora’s box, and can’t shut it again — so what do we do?

Reconciling the Benefits of AI With Its Potential Risks

At every step of technological advancements, there has been fearmongering and concern. While some trepidation is valid, we can’t return to a world where AI does not exist — nor should we want to. To reap the benefits of AI (and even AGI), we must address ethical concerns and build robust security frameworks.

Here’s my call to action for organizations:

1. Experiment with Gen AI now. The fear is largely in the unknown. Get to know how AI can benefit your organization and begin to get comfortable with the technology.

2. Learn about the risks. It’s no secret that AI comes with several security risks. Security teams should dedicate time to learning about the latest threats. The OWASP Top 10 for Large Language Models is a great place to start.

3. Prepare a policy on Gen AI. Have representatives from each department of your organization come together to determine how you use Gen AI. Decide which apps are okay versus not okay. Then, write it down and share it with the whole company so everyone is on the same page.

Chess showed us what AGI might look like in the future. By acknowledging the dangers of AI and taking the right steps to protect ourselves, we can ride the tidal wave of innovation rather than be caught in the undertow. After all, a little challenge from a worthy opponent presents us with an opportunity to learn and improve.

The post A Checkmate That Couldn’t Lose: What Chess Has Taught Us About the Nature of AI appeared first on Cybersecurity Insiders.