[By Tyler Shields, Vice President at Traceable AI]

As we step into 2024, the landscape of API security is at a critical juncture. The previous year witnessed a significant escalation in API-related breaches, impacting diverse organizations and bringing to light the critical vulnerabilities in API security. This surge not only accentuated the essential role of APIs in our digital ecosystem but also catalyzed a much-needed shift in focus towards their security. And with regulatory bodies like the FFIEC now acknowledging APIs as distinct attack surfaces, the stage is set for a deeper understanding and reinforcement of API defenses.

Looking ahead, the key question is: what new trends and challenges will define the realm of API security in 2024?

The Explosive Growth of APIs: Brace for Impact

As we approach 2024, the digital landscape is poised to witness an exponential growth in API usage, a trend that signifies a profound transformation in how digital services are deployed and interconnected. This surge is not merely a quantitative increase but a qualitative shift, reflecting the deeper integration of digital technologies in organizational operations. The transition to cloud computing, still far from completion, is a key driver of this expansion. As organizations continue moving applications and workloads into the cloud, we’re seeing a consequential shift in infrastructure. This shift, often referred to as the atomization of applications, involves breaking down applications into smaller, more manageable components, each potentially interfacing through its own API.

This next phase of cloud transformation is expected to dramatically increase the number of APIs, as these atomized applications require extensive intercommunication. While this growth facilitates greater flexibility and scalability in digital operations, it also introduces the challenge of API sprawl, where organizations struggle to manage the sheer volume of APIs within their ecosystems. However, the primary focus for 2024 remains on the sheer scale of API integration and deployment. As APIs become more central to organizational infrastructure, they create new opportunities for innovation and efficiency, but also raise critical concerns in security and management. The ability to effectively harness this growth, balancing the benefits with the complexities it introduces, will be a defining factor in the success of digital strategies in the coming year.

Emerging Threats in Data Quantity and Storage, and Role of AI

As we navigate the digital transformation, a critical challenge emerges in the realm of data quantity and storage, exacerbated by the exponential growth of API-driven communication. This issue transcends the traditional cybersecurity approach of merely blocking attackers or direct attacks against APIs. The real challenge lies in managing the colossal volumes of data amassed from extensive API interactions, now centralized in vast digital repositories. The pivotal question is: how do we ensure that this data is accessed exclusively by appropriate, authenticated, and authorized personnel? Moreover, how do we prevent sensitive data from being exposed to unauthorized individuals or systems?

This dilemma is not just about securing data; it’s about redefining how we perceive and handle cybersecurity. The complexity is magnified when we consider the role of AI in this landscape. AI models, which are increasingly integral to our digital ecosystem, require training on large data sets. The volume of data used for this purpose has skyrocketed, and the computational capacity used to train these models has been doubling roughly every six months since 2010. In this context, AI becomes more than a technological tool; it represents a new paradigm of API interaction, where AI systems, accessing data via APIs, pose complex questions and analyses.

This scenario presents a multifaceted challenge. On one hand, we have the ‘data in’ aspect, involving the influx of information into these systems. On the other, there’s the ‘data out’ component, where the output and its implications, particularly regarding privacy and fraud, become a concern. For instance, the potential for AI to ask questions or rephrase queries in ways that might inadvertently breach privacy or security protocols illustrates the intricate nature of this challenge.

Addressing these issues requires a nuanced approach to authentication, authorization, and privacy. The complexity of ensuring the security and integrity of data, both incoming and outgoing, in these vast, interconnected systems cannot be overstated. It’s a formidable task, yet not insurmountable. API security technologies stand at the forefront of this challenge, poised to develop solutions that can effectively navigate and secure this intricate web of data interactions. As we look towards the next three years, the evolution of these technologies will be pivotal in shaping a secure digital future, where data security is not just a feature but a foundational aspect of our cybersecurity infrastructure.

2024: The Year of API Breaches

This prediction isn’t unfounded, considering the recent statistics revealing that 60% of organizations reported an API-related data breach in the past two years, with a staggering 74% of these involving at least three API-related incidents.

This trend underscores a critical reality: APIs have become the universal attack vector in the digital world. Beyond the traditional realms of social engineering and cloud misconfiguration attacks, which themselves often leverage APIs, it’s becoming increasingly challenging to identify cyber attacks that don’t have their roots in API vulnerabilities.

APIs are rapidly evolving into the superhighway of digital communication within our infrastructure. As their usage broadens and becomes more complex, the necessity for robust API security measures escalates. In 2024, we anticipate that API security will no longer be an afterthought but a fundamental standard in cybersecurity strategies, pivotal in preventing the next wave of major digital breaches.

Contextual Intelligence – The Keystone of API Security in 2024

In 2024, a key driver in enhancing API security will be the comprehensive collection and analysis of data to create context. This approach marks a significant evolution in our security techniques, shifting from traditional perimeter-based defenses to a more nuanced understanding of each interaction within the API ecosystem. The focus is on securing the vast quantities of data that flow in and out through APIs by meticulously gathering and analyzing the surrounding data of each request to build a rich context that allows deeper analysis. This involves a detailed examination of the APIs themselves – their structure, expected data flow, and typical usage patterns. It also includes identifying what constitutes normal and abnormal behaviors within these interactions. By aggregating this information into a contextual dataset, we can apply advanced AI analysis to discern broader results and subtle anomalies.
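
To make the idea of contextual baselining concrete, here is a minimal Python sketch of one such check: building a per-endpoint baseline of response sizes and flagging requests that fall far outside it. The endpoint name, field values, and threshold are illustrative assumptions, not any vendor’s implementation; a production system would track many more dimensions (callers, schemas, timing, geography) and feed them into richer models.

```python
from statistics import mean, pstdev

# Hypothetical per-endpoint history of response sizes (bytes), e.g. collected
# from an API gateway log. Endpoint names and values are illustrative only.
history = {
    "/v1/accounts": [18_000, 21_000, 19_500, 20_200, 18_700],
}

def is_anomalous(endpoint, bytes_out, sigma=3.0):
    """Flag a request whose response size sits far outside the endpoint's norm."""
    sizes = history.get(endpoint, [])
    if len(sizes) < 2:
        return False  # not enough context yet to judge
    mu, sd = mean(sizes), pstdev(sizes)
    return sd > 0 and abs(bytes_out - mu) > sigma * sd

# A single request pulling ~2.4 MB from an endpoint that normally returns ~20 KB
# is exactly the kind of "should this data be leaving?" question raised above.
print(is_anomalous("/v1/accounts", 2_400_000))  # True
print(is_anomalous("/v1/accounts", 19_800))     # False
```

Even this crude baseline illustrates the shift from binary checks (“is the caller authenticated?”) toward behavioral questions (“should this caller be pulling this much data from this endpoint?”).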

This shift in strategy represents a move from basic, binary security queries – such as “Are you authenticated?” or “Is this connection secure?” – to more complex, AI-driven interrogations that mimic human analytical skills. Questions like “Does this data transaction contain any information that should not be leaked?” or “Is this pattern of API use indicative of a potential security threat?” become central to our security protocols. This level of inquiry requires a deep understanding of API interactions, far beyond surface-level authentication checks.

The future of API security, therefore, hinges on the ability of security technologies to amass and intelligently analyze the richest and most comprehensive sets of contextual data. The technologies that excel in capturing this depth and breadth of information will be best equipped to navigate the sophisticated security landscape of 2024, ensuring robust protection against increasingly complex threats.

The Bottom Line

The trends we’ve identified call for a proactive reimagining of cybersecurity strategies, where the focus shifts from reactive defense to anticipatory resilience. This evolution demands more than just technological upgrades; it requires a paradigm shift in our understanding of digital ecosystems. The integration of AI, the management of sprawling APIs, and the safeguarding of vast data repositories are not isolated tasks but parts of a cohesive strategy to fortify our digital infrastructures. In this context, the insights from 2024 serve as a beacon, guiding us towards a future where cybersecurity is dynamic, intelligent, and integral to the fabric of our digital existence.

As we navigate these waters, the real measure of success will be our ability to not just defend against emerging threats but to adapt and thrive in an ever-evolving digital landscape.

The post API Security in 2024: Navigating New Threats and Trends appeared first on Cybersecurity Insiders.

[By Brett Bzdafka, principal product manager at Blumira]

Businesses today face an ever-increasing number of cyberattacks, with potential financial impacts often in the 7-figure range. Despite this threat, only 55% of organizations have some form of cyber insurance, and only 19% have coverage for cyber events beyond $600,000. The high cost of premiums, which surged in 2022, might contribute to the low percentage of organizations with sufficient coverage.

As the cybersecurity landscape continues to evolve, businesses must carefully evaluate their risk exposure and consider ways to invest in comprehensive cyber insurance policies that truly meet their needs without breaking the bank.

Understanding the Role of Cyber Insurance

Cyber insurance is a financial safeguard against the repercussions of cyberattacks and data breaches. This coverage extends to the expenses associated with data recovery, system restoration and the aftermath of a security breach. Legal actions from affected parties, regulators or business partners following a cyberattack can incur significant costs, all of which cyber insurance can thankfully alleviate.

Cyber insurance policies commonly encompass incident response services, enabling organizations to enlist security experts for breach investigation and mitigation. Some coverage extends beyond mere recovery, encompassing the implementation of security enhancements and measures to prevent future events.

Incorporating cyber insurance is an integral component of a holistic risk management strategy for any organization. It’s important for decision-makers and legal counsel to carefully consider the business’s unique needs and risks when choosing a cyber insurance policy.

Let’s delve into the cybersecurity strategies that IT professionals can adopt to reduce insurance expenses and identify a policy that aligns most effectively with their unique needs.

5 Ways to Lower Costs

IT experts can actively contribute to lowering cyber insurance expenses by showcasing a robust dedication to cybersecurity, effective risk management, regulation adherence and threat awareness.

1. Proactive Risk Management Strategies: Regular risk assessments empower IT professionals to pinpoint vulnerabilities and deploy effective measures for risk mitigation. Consider this: A staggering 98% of organizations globally are linked to breached third-party vendors. To counter the potential impact of a compromised partner or vendor, IT teams should seek to understand the vendor’s cybersecurity protocols. It’s imperative that vendors align with cybersecurity best practices and standards, preventing unauthorized access to sensitive information.

Through a meticulous vetting process of these entities, IT teams fortify their defenses and showcase an unwavering commitment to ongoing enhancements in cybersecurity practices and technologies. Such proactive risk management efforts often translate into more favorable insurance rates from providers who appreciate the dedication to risk prevention.

2. Robust Security Measures: Deploying advanced security solutions like firewalls, intrusion detection systems, encryption and systematic updates plays a pivotal role in curbing cyber insurance expenses by fortifying an organization’s overall cybersecurity framework. Insurers often factor in risk levels when determining premiums, and the efficacy of security software can significantly alleviate potential risks.

Security software doesn’t just safeguard against external threats – it aids in pinpointing and remedying vulnerabilities within other software, applications, systems and networks. The routine execution of vulnerability assessments and patch management, facilitated by reliable security software, contributes to a more resilient environment, diminishing the susceptibility to cyber exploitation.

3. Automated Threat Detection: Implementing security software with advanced threat detection capabilities empowers organizations to identify and respond to security incidents swiftly. Advanced threat detection tools can continuously analyze network traffic, system logs, and user behavior to identify abnormal patterns that may indicate a security breach. By promptly detecting these anomalies, organizations can initiate a rapid response to investigate and contain the threat before it escalates. Timely incident response can constrict the scope and impact of a cyberattack, potentially mitigating the financial losses associated with such incidents.

Companies that utilize modern technologies to monitor for and respond to threats may find insurers willing to extend discounts and offer lower premiums as a recognition of their commitment to cybersecurity.

4. Cybersecurity Compliance: Adopting cybersecurity standards fosters a proactive approach to managing cyber risks and can positively impact cyber insurance costs. Organizations can establish and uphold robust security protocols by aligning with recognized standards and abiding by a structured framework for compliance.

IT teams should prioritize compliance with industry-specific standards and regulatory requirements pertinent to their business sector. Noteworthy cybersecurity standards like ISO 27001, NIST Cybersecurity Framework and PCI DSS offer guidelines for identifying and mitigating cyber risks. By embracing these standards, organizations incorporate best practices into their risk management strategies, lowering the probability of security incidents that necessitate insurance claims.

5. Workforce Education: Despite the evolving technological landscape, the human element remains the primary contributor to cybersecurity incidents, accounting for 74% of total breaches. Well-intentioned employees may inadvertently contribute to security incidents if they lack awareness or training. Incorporating ongoing cybersecurity education is essential because it empowers employees to identify and address potential threats.

A robust training program allows employees to build skills to avoid common pitfalls that could lead to breaches. When the workforce understands cyber risks and best practices, they become less vulnerable to manipulation and less likely to make costly mistakes. Investing in employee education strengthens institutional resilience and signals to partners like insurance providers that the organization takes risk management seriously. Ultimately, equipping staff with knowledge and tools through training fosters a culture of collective responsibility for cybersecurity.

Don’t Wait. Now’s the Time to Prioritize Cyber Insurance.

Given the current threat environment, businesses must strengthen security measures and secure sufficient cyber insurance coverage. The path to lowering cyber insurance costs begins by implementing thorough security measures that diminish risks, and signal to insurers a dedication to addressing potential threats. Taking a proactive approach empowers organizations to protect their digital assets and obtain more economical cyber insurance coverage in the ever-evolving and intricate cybersecurity landscape.

About the Author

Brett Bzdafka is the principal product manager at Blumira. Brett has more than 10 years of leadership experience delivering SaaS solutions that solve real-world problems for small to medium-sized businesses (SMBs). Brett is committed to understanding SMB and IT leaders’ needs by gathering customer insights to shape Blumira’s product roadmap. Previously, Brett served as Group Product Manager at BoxCast, where he led the product team and agile teams to scale their SaaS live streaming solution. With experience in sales, project management, and product development, Brett knows firsthand what it takes to build products that customers love.

The post 5 Ways to Counteract Increasing Cyber Insurance Rates appeared first on Cybersecurity Insiders.

[By Brett Walkenhorst, Ph.D., CTO, Bastille]

Zero Trust has been an important paradigm for advancing network security for almost 15 years, incorporating tenets that move beyond perimeter-based control toward a multi-layered approach that seeks to minimize risk in the modern world. Although the paradigm is complex, the basic idea behind Zero Trust is to shift our mindset from defending our perimeter to assuming an attacker has already penetrated it. This requires us to bring visibility to our network, limit access to network resources, and automate the response to incidents aided by analytics.

As a security community, we are making progress implementing this paradigm shift, but one key area that is often overlooked is the wireless attack surface of our networks. Without addressing the wireless problem, our Zero Trust posture is incomplete.

The Wireless Problem

Wireless devices number in the tens of billions worldwide, and their presence continues to grow. Interfaces include Wi-Fi, cellular, Bluetooth, IoT, and others. Much of this wireless space is not monitored or even visible within our security tools. These unaccounted devices may include shadow IT equipment, industrial control systems (ICS), personal/corporate smartphones, peripherals, wearables, and many more. All of these devices have the potential to connect to our networks in some way, and yet their wireless interfaces are largely unmonitored. In our efforts to shift to a Zero Trust mindset, it is critical that we bring visibility to these wireless technologies in addition to the wired components of our networks.

Wireless devices are ubiquitous. They utilize electromagnetic waves to communicate with each other and with network infrastructure. These waves travel at the speed of light; they penetrate walls and other physical barriers, bypassing our physical security perimeter; and they are invisible to the eye. As critical 1’s and 0’s modulate these invisible waves, we must find ways to make them more visible to defend against the many vulnerabilities that exist within these wireless protocols. Over 2,000 wireless CVEs have been published within the last 10 years alone. And that is only what has been discovered. The trend of these discoveries is one of exponential growth. Clearly, the wireless attack surface is an area of growing concern.

The forms of wireless-based attacks vary widely. They include machine-in-the-middle (MitM) attacks to crack credentials and/or compromise clients/peripherals; denial of service (DoS), eavesdropping, malware injection, data exfiltration, and many more. Many affordable tools (both hardware and software) exist that have lowered the barrier to entry for people to conduct such attacks. Attack devices include Wi-Fi pineapples (Evil Twin attack devices), O.MG and USB Ninja cables and Wi-Fi Rubber Duckies (wireless-controlled keystroke injection and exfiltration cables/dongles), wireless network interface controller (NIC) dongles, Bluetooth development kits/dongles, software-defined radio kits/dongles, and more. The more sophisticated devices are typically around $100, but many very capable devices can be purchased for $10 or less.

Bringing Visibility to the Invisible Wireless Attack Surface

Wireless signals use electromagnetic (EM) waves to communicate. EM waves are invisible, but electronic systems can be built to both create and detect them. To make those invisible waves visible, then, we need a suitably capable detector. While radio technology has been around since the late 19th century, modern developments involving higher frequencies and digital modulation have made wireless communication increasingly efficient and effective, allowing us to use different bands of the EM spectrum to support tens of billions of devices speaking many different protocols. A wireless detection system must be equally capable by employing modern tools like software-defined radio technology and highly-capable processors to digitally demodulate and decode the many wireless packets from many protocols.

A wireless detection system should include multiple broadband, multi-channel software-defined radio sensors to detect multiple wireless signals simultaneously. The sensors must digitally decode the headers of many wireless packets in parallel to extract metadata for individual wireless detections, and then feed their data to a central server to localize the emissions in space. In this way, the system can detect and locate all wireless emissions within a facility. This gives a user visibility into the wireless signals in terms of their temporal, spatial, and behavioral characteristics. But visibility is only the first step. To enable Zero Trust, we need to add analytics and automate the response.

Analytical tools transform data into actionable insights. Applied to wireless data, we need to identify unhealthy behaviors in the wireless transmissions, classify their severity, and provide tools for users to take action. Dimensions over which we can analyze wireless devices include time, space, and many dimensions of behavior. The metadata available from the wireless packet headers offers a rich set of data from which we can infer connectivity, device information, data transmission volume, and much more. Once a particular behavior is identified, we need to alert on that behavior and automate the response if appropriate. Such automation should include the ability to shut off a device’s network access, disable certain functions of the device, populate an alert list in a security operations center (SOC), issue an incident response alert, focus physical security cameras on a specific area, flash a light, lock a door, and many other actions. To enable those kinds of responses, the wireless detection system must be able to readily integrate with a host of other security tools including security information and event management (SIEM) systems, security orchestration automation and response (SOAR), network access control (NAC) systems, unified endpoint management (UEM) systems, physical access control systems, etc.
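
As a simple illustration of the kind of behavioral analytics described above, the sketch below counts RTS and connection-request frames per transmitter in a window of decoded packet metadata and raises an alert on outliers, one of the example detections listed later in this article. The metadata format, threshold, and alert hook are assumptions for illustration only; a real system would route such alerts through SIEM/SOAR/NAC integrations rather than printing them.

```python
from collections import Counter

# Hypothetical decoded-frame metadata from wireless sensors over one capture
# window; field names and values are illustrative, not any product's format.
packets = (
    [{"src_mac": "AA:BB:CC:00:11:22", "type": "rts"}] * 800        # chatty device
    + [{"src_mac": "DD:EE:FF:33:44:55", "type": "conn_req"}] * 12  # normal device
)

RTS_THRESHOLD_PER_WINDOW = 500  # illustrative tuning value

def flag_noisy_transmitters(frames):
    """Count RTS/connection-request frames per transmitter and flag outliers,
    the misconfiguration/DoS indicator mentioned later in this article."""
    counts = Counter(f["src_mac"] for f in frames if f["type"] in ("rts", "conn_req"))
    return [mac for mac, n in counts.items() if n > RTS_THRESHOLD_PER_WINDOW]

for mac in flag_noisy_transmitters(packets):
    # Stand-in for an integration call (SIEM, SOAR, NAC, physical security).
    print(f"ALERT: excessive RTS/connection requests from {mac}")
```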

Building a wireless detection system like the one described above is not trivial. The hardware is specialized and highly capable, but the software/firmware is the real key to making such a system viable. Some challenges include:

  • Differentiating among 4G/5G cellular devices and localizing them individually
    • This is simple enough for certain control channel packets such as a random-access channel (RACH) packet, but these are few and far between, so we would miss most of the relevant data
    • It is extremely difficult to do with general traffic channel packets
  • Detecting and locating Bluetooth devices when they’re connected to other devices
    • Bluetooth signals hop in frequency when they’re in a connected state, so many single-channel sniffers aren’t capable of seeing them
    • Differentiating and locating individual devices in a Bluetooth network requires sniffing the entire spectrum and teasing out which packets belong to which device
  • Accurate localization of indoor signals
    • Indoor environments have a lot of noise and multipath (EM waves bouncing off of various physical materials)
    • Localization accuracy of 10m is reasonable to achieve, but it’s not very actionable
    • Accuracy of 1-3m is much more useful but very challenging to achieve

Building such a detection and localization system is obviously challenging, but the effort yields a lot of value. In terms of implementing a Zero Trust architecture, it is essential, but some examples may help to motivate the value. The following are just a few examples of things that have been detected and located by such a system:

  • A USB Ninja cable on an executive floor of a Fortune 10 company
    • This cable is a hacking tool that looks and acts like a standard USB cable
    • It can wirelessly connect to a controller to enable an attacker to inject keystrokes and exfiltrate data from a target system
  • A laptop connected to a server in a secure data center beaconing Wi-Fi and Bluetooth packets
  • An active, unencrypted Zigbee transceiver in industrial chillers that had wired access to the core network inside a data center
  • Excessive RTS and connection request packets from devices indicating misconfiguration and/or a potential DoS condition
  • Intermittent WEP encryption advertised through beacons from an access point that otherwise used WPA2 encryption
    • WEP is a very old Wi-Fi encryption scheme that was cracked in 2001
    • No access point should ever be using it
  • Bluetooth-enabled RFID readers that were susceptible to a wireless DoS attack that could shut down physical access to the facility
  • Fitbits, phones, smartwatches, and many other devices are detected on a daily basis in various government and secure commercial facilities where the presence of such devices is prohibited due to security concerns

The ability to detect these kinds of threats allows operators to identify potential problems before they become incidents and take corrective action. For many of the examples above, physical security interdiction is the appropriate response, and the wireless detection system’s ability to locate the wireless devices spatially is critical. For others, some action to correct device misconfiguration or simply shutting down a specific wireless mode is sufficient. For such cases, the system’s ability to identify device details such as MAC address, device name, and manufacturer, and to integrate with a UEM/NAC system is all that is needed to identify and correct the problem. Whatever the case, a wireless detection solution can not only provide real-time monitoring of the wireless attack surface to identify incidents as they occur, but also serve to shore up an organization’s security posture to prevent attacks from occurring at all.

Solving the Wireless Problem

Wireless devices are ubiquitous, vulnerable to attack, and invisible to most security tools. Their growing presence and vulnerability along with the trend toward democratizing RF hacking tools and capabilities necessitates improved vigilance on the part of network administrators and the entire security industry. While it is challenging to create systems that can monitor these wireless signals, such tools are becoming increasingly available with continuously improving capabilities.

The ability to detect, localize, analyze, and respond to wireless threats is the next phase in the implementation of Zero Trust. It is time to plug this increasingly dangerous gap in our network security posture.

The post Wireless Visibility: The MUST for Zero Trust appeared first on Cybersecurity Insiders.

[By John Anderson, Enterprise Information Security Manager, Lands’End]

Securing electronic messaging services, particularly when utilizing third-party services, is crucial for maintaining the integrity and security of your communications. Limiting who can send on your behalf is crucial to maintaining email reputation, security, and governance, ensuring that your communications are trusted by others while preventing unauthorized senders from spoofing your identity and ruining your reputation.

Industry recommendations are to limit outbound messages from your official sending domain to a single relay point. This can be provided by a specially configured secured email relay solution or a third-party messaging security solution, such as Microsoft, Mimecast, Proofpoint et al. It is essential that all third-party messaging partners relay messages through your configured secured email relay to present a single point of reference that can now have DKIM, SPF, DMARC, and other messaging standards (BIMI) applied uniformly. This will improve your overall reputation in the public messaging industry and allow you to track and remediate any potential issues.

There are multiple security, process, and business integrity reasons why you should not add Third Party Partners to your SPF records. These include but are not limited to the following:

  • Managing multiple partners within your SPF records requires constant attention and risks missing removals or changes in the business direction.
  • Your SPF record may become too large and cause lookup failures, which impact delivery rates.
  • Third-Party partners can inadvertently send messages out with your domain signature that are not authorized or related to your business.
  • You are unable to verify what messages were sent by the third party and to whom. This may earn your domain a bad reputation score as a source of unsolicited spam.
  • Third-Party partners may suffer a breach, and this now becomes your breach.
  • You may lose customers’ confidence and have reduced opening rates for your messages.

Here are some best practices to ensure correct DKIM, SPF, DMARC, and overall security standards:

  • “Choose a Reputable Proxy Service Provider”: Ensure that the third-party proxy service provider you choose has a good reputation for security and reliability. Look for providers with a history of maintaining high standards of security compliance.
  • “Implement DKIM, SPF, and DMARC”: These are essential email authentication protocols for preventing email spoofing and phishing attacks.
    • “DKIM (DomainKeys Identified Mail)”: Sign outgoing messages with digital signatures to verify the sender’s domain.
    • “SPF (Sender Policy Framework)”: Define which IP addresses are allowed to send emails on behalf of your domain.
    • “DMARC (Domain-based Message Authentication, Reporting, and Conformance)”: Specifies how your domain’s emails should be handled if they fail authentication checks.
    • “BIMI (Brand Indicators for Message Identification)”: BIMI adds a verified sender logo that appears next to your message in the inbox.
  • “Configure DNS Records”: Ensure that your DNS records are correctly configured to support DKIM, SPF, and DMARC. The DNS records should include the necessary public keys, SPF records, and DMARC policies (a concrete example follows this list).
  • “Monitor Email Traffic”: Regularly monitor your email traffic to detect any anomalies or suspicious activities. This includes monitoring for failed authentication attempts, unusual message volumes, and unexpected changes in email patterns.
  • “Enforce TLS Encryption”: Require Transport Layer Security (TLS) encryption for all incoming and outgoing emails. This ensures that emails are transmitted securely over the internet and are protected from eavesdropping and interception.
  • “Implement Multi-factor Authentication (MFA)”: Require users to authenticate using multiple factors such as passwords, biometrics, or security tokens. This adds an extra layer of security to prevent unauthorized access to email accounts.
  • “Regular Security Audits and Penetration Testing”: Conduct regular security audits and penetration testing to identify and address any vulnerabilities in your email infrastructure. This helps ensure that your systems are up to date with the latest security patches and configurations.
  • “Employee Training and Awareness”: Educate employees about email security best practices, including how to recognize phishing attempts and other email-based threats. Regular training sessions and awareness programs can help prevent security incidents caused by human error.
  • “Review Proxy Service Agreements”: Thoroughly review the service agreements with your proxy service provider to ensure that they comply with your organization’s security requirements and standards. Pay attention to clauses related to data privacy, security, and compliance.
  • “Stay Informed About Emerging Threats”: Keep up to date with the latest developments in email security threats and best practices. Subscribe to security newsletters, participate in industry forums, and collaborate with other organizations to share information about emerging threats and vulnerabilities.
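
To ground the DNS configuration item above, here is a small, self-contained Python sketch showing what the three core TXT records might look like for a placeholder domain, plus a rough check on the SPF concern raised earlier: RFC 7208 limits SPF records to 10 DNS-lookup-triggering terms, a limit that is easy to exceed when every partner gets its own include. The domain, selector, and key values are placeholders, and the parser is deliberately simplistic.

```python
# Placeholder TXT records; none of these values are real.
SPF   = "v=spf1 include:relay.example-provider.com -all"              # example.com
DKIM  = "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."        # selector1._domainkey.example.com
DMARC = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"  # _dmarc.example.com

def spf_lookup_terms(spf_record):
    """Return the terms that trigger DNS lookups; RFC 7208 caps these at 10."""
    lookup_mechanisms = {"include", "a", "mx", "ptr", "exists", "redirect"}
    terms = []
    for term in spf_record.split()[1:]:            # skip the v=spf1 version tag
        name = term.lstrip("+-~?").split(":")[0].split("=")[0].split("/")[0]
        if name in lookup_mechanisms:
            terms.append(term)
    return terms

terms = spf_lookup_terms(SPF)
print(f"SPF lookup-triggering terms: {len(terms)} -> {terms}")
assert len(terms) <= 10, "SPF record risks a permerror from too many DNS lookups"

# Funnelling all partners through one secured relay keeps this to a single
# include, instead of one include (and one point of failure) per third party.
```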

By following these best practices, you can enhance the security of your electronic messaging services when using third-party proxy services and ensure compliance with DKIM, SPF, DMARC, BIMI, and other security standards.

The post Recommended Practices for Enterprise Electronic Messaging Security and Governance appeared first on Cybersecurity Insiders.

[By Avkash Kathiriya, Senior Vice President, Research and Innovation at Cyware]

There was a time when managed security service providers (MSSPs) were perceived as expensive outsourced options to replace or bolster internal security teams with a one-size-fits-all approach. Fortunately, those days are long gone. Now they offer advanced sets of technologies backed up with in-depth expertise, giving access to sophisticated solutions that customers can’t, or don’t want to, manage themselves. Regarded as trusted, knowledgeable partners, MSSPs are increasingly consulted by clients for advice on emerging security concerns. Many are already benefiting from a wide range of options including firewalls, vulnerability patching, endpoint security, SIEM and identity management.

More recently MSSPs have started adding advanced detection and response capabilities to their portfolios, as well as threat intelligence as-a-service. Not a moment too soon for those facing a barrage of security alerts and trying to pinpoint which ones pose the greatest risk. According to Gartner, security and risk managers struggle to know what threats constitute genuine concerns for their organisation and lack an accurate view of their own threat landscape.

While threat intelligence holds key indicators to identify and pre-empt attacks, sifting through the bewildering array and volume of data to find them is beyond many security teams, especially in smaller organisations. To make matters worse, the data arrives in all kinds of formats from internal and external feeds, such as reports, articles, emails, PDFs and documents. Attempting to assimilate and turn this information into a usable format is a mammoth task in itself.

GuidePoint Security’s senior director of digital forensics and incident response and threat intel, Tony Cook, agrees that managing threat intelligence can overwhelm small and medium-sized security teams, saying it typically requires a level of expertise and complex systems that are only practical for large enterprises with specialised threat intel analysts.

Replacing endless alerts with top priorities

Constantly raising their game to meet evolving requirements, MSSPs have been working to reduce the burden of endless security alerts and false positives, with the ambition of providing targeted and timely remediation advice. Historically, this has required an ever-growing number of skilled analysts to work through copious volumes of data to assess threats before tailoring responses appropriate to specific environments. Today’s modern threat intelligence platforms (TIPs), by contrast, can eliminate much of this tedious, formerly manual, and error-prone work.

At the outset, aggregation is automatically handled by the TIP ingesting raw data from a myriad of sources. Whether incoming data is already structured in a machine-readable format or unstructured, like documents and text, it all goes through a normalisation process. Any duplicates, inconsistencies and redundancies are cleaned up, and each piece of threat information is given relevant attributes and context. The TIP then correlates the enriched data by piecing together factors that might seem unconnected if viewed in isolation but can be indicators of multi-faceted attacks. Based on this analysis, the most severe threats are prioritised for further investigation by security analysts. TIPs can also be configured to trigger automated responses and remediation actions to suppress threats before they cause further harm.
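
As a rough illustration of that flow, the sketch below runs a handful of fabricated indicators through normalise, de-duplicate, enrich and prioritise steps. The indicator values, feed names and scoring are invented for illustration only; a real TIP ingests standardised formats such as STIX and applies far richer enrichment and correlation.

```python
# Toy threat-intel feed; indicator values and sources are fabricated.
raw_feed = [
    {"indicator": "198.51.100.7 ", "type": "ip",     "source": "feed-a"},
    {"indicator": "198.51.100.7",  "type": "ip",     "source": "feed-b"},
    {"indicator": "evil.example",  "type": "domain", "source": "partner-report"},
]

def normalise(item):
    # Clean up formatting differences between feeds.
    return {**item, "indicator": item["indicator"].strip().lower()}

def deduplicate(items):
    seen, unique = set(), []
    for item in items:
        key = (item["indicator"], item["type"])
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

def enrich(item):
    # Stand-in for context lookups (sightings, campaigns, matches against assets).
    item["sightings"] = 2 if item["type"] == "ip" else 1
    return item

def priority(item):
    return item["sightings"] * 10   # toy scoring only

for item in sorted((enrich(i) for i in deduplicate(map(normalise, raw_feed))),
                   key=priority, reverse=True):
    print(item["indicator"], item["type"], "priority:", priority(item))
```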

Instead of trawling through a sea of data and wasting valuable time on false positives, security analysts have access to timely, actionable intelligence to forestall attacks, essential for MSSPs serving clients with diverse security needs.

Cook considers this advanced level of threat intelligence as-a-service will be welcomed enthusiastically by MSSPs, explaining that it will become a crucial part of helping customers detect and mitigate emerging threats and vulnerabilities that could otherwise disrupt or bring down their networks. He adds, “By identifying indicators of compromise before a full-scale attack can occur, businesses can minimise the likelihood of serious security incidents, and the associated financial, operational and reputational damage.”

Well-suited to a shared services model, a TIP also supports integration with a wide range of security tools, helping MSSPs streamline and orchestrate responses. This enables more efficient and consistent threat management processes across all clients, while still ensuring comprehensive protection can be customised to each one’s individual requirements and desired security posture.

Intelligence sharing offers valuable synergies

Cook sees the goal of threat intelligence as helping organisations make better and faster decisions, based on timely analysis and contextual data. He maintains that, “Making this technology available to a wider market, MSSPs can help businesses strengthen cybersecurity, safeguard their assets, and enhance their overall resilience when facing new threats.”

Industry-specific information sharing and analysis centres (ISACs) and other types of sharing hubs have also taken on the mantle of collating and distributing vital threat information to their members. Covering a wide range of industries including financial services, healthcare, energy, manufacturing, and education, some communities are also introducing two-way communication so members can quickly feed back real-world intelligence to help the wider group.

MSSPs can play their part here too by helping to supply the technical platforms for many of these communities that are keen to receive actionable threat intelligence, as well as contribute information to a collective security model that supports and protects each other.

Similar to ISACs, MSSPs can also create their own customer-specific intel-sharing and collaboration communities, using technology such as an advanced threat intel exchange platform to share real-time threat data across their customer base.

The post Threat Intelligence as-a-Service: As good or better than D-I-Y? appeared first on Cybersecurity Insiders.

How many tools do you use to protect your network from cyberattacks? That’s a puzzling question to answer.

A typical enterprise Security Operations Center (SOC) employs a diverse array of security tools to safeguard against cyber threats. This includes Security Information and Event Management (SIEM) for log analysis, firewalls for network traffic control, and Intrusion Detection and Prevention Systems (IDS/IPS). Antivirus solutions, Endpoint Detection and Response (EDR), and Vulnerability Scanners address malware and system vulnerabilities. Identity and Access Management (IAM) controls user permissions, while tools like Security Orchestration, Automation, and Response (SOAR) streamline incident response. Encryption, deception technology, and threat intelligence platforms bolster defense.

If you were to just consider the enterprise vulnerability management program, it could include asset discovery, a vulnerability scanner, an assessment tool, vulnerability prioritization software, patch management, configuration management, incident response integration, collaboration tools for security and IT to work together, automation, reporting, integration with log management tools, and more.

Even with the increasing number of tools, the number of security risks your vulnerability scans detect is not reducing at all. But why?

What are the Culprits for Ineffective IT Security and the Rising Number of Security Risks?

Cyber attackers are continuously becoming more sophisticated, employing innovative and lethal methods to breach infrastructure. Broken, piecemeal countermeasures will not hold them back. It is crucial for us to respond with equally innovative, if not more robust, cybersecurity tools and measures. Most important is a continuous approach to cyber-attack prevention.

So, what are the biggest culprits limiting our IT security? Why is the number of security risks not decreasing? Our security tools themselves!

  • Ineffective Disjoint Security Solutions:
    Vulnerability and exposure management plays the most significant role in building a powerful base for effective IT security. So why is it split across disjoint tools?
    A common occurrence in the industry is creating new and redundant tools that are micro-solutions to micro-problems instead of addressing the mother problem: cyberattacks. With most security solutions being just multiple tools wrapped together with janky integrations, their effectiveness in detecting and remediating security risks is mediocre at best.
  • Lengthy Detection and Response Cycle:
    Did you know that the average time taken to detect and remediate a vulnerability is 65 days? With detection and response cycles this lengthy, the chance of a threat actor exploiting that security risk rises with every passing day.
    Unfortunately, this is a byproduct of the lengthy duration of security scans and the time consumed by remediation tools to mitigate the risks.
  • Missing Integration & Automation Capabilities:
    As previously mentioned, security tools already take enough time to detect and mitigate risks. But with the rising number of vulnerabilities, the entire process increases in duration even more. With multiple tools developed by different vendors, integrating them and then automating the entire process becomes a Herculean task. The lack of proper integration and automation of the process reduces the effectiveness of your IT security.

So, how do we overcome these challenges and ensure a strong security posture for your organization?

The Weakness Angle: Overcoming IT Security Challenges

A change in the way we perform vulnerability & exposure management, a change in the ineffective tools, and a change in the fundamental IT security framework can be a game-changer. The weakness angle is the change we need.

Every attack is the exploitation of a weakness. This is the fundamental fact we must keep in mind to prevent cyberattacks and protect your IT infrastructure effectively.

The weakness perspective, simply put, is the study of your devices, your network, your data, your software, your users and their privileges, your security controls, your attack surface, your threats, and potential attackers to find potential weaknesses. It encompasses all the devices, applications, users, data, and security controls of the network.

Actively look for these weaknesses and security risks, the actual root causes of cyberattacks, and you’ll see a drastic change in your organization’s security posture.

But how do we do it?

Continuous Vulnerability and Exposure Management: A Necessity!

Continuous Vulnerability and Exposure Management (CVEM) is the new way of performing vulnerability management. By incorporating the weakness perspective and making incremental yet significant improvements to the vulnerability and exposure management process, CVEM is the shot in the arm all modern IT security teams need to improve their IT security.

With the weakness perspective at the crux, CVEM introduces a broader scope in the detection of security risks. Be it software vulnerabilities or CVEs, misconfigurations, posture anomalies, asset exposures, or missing configurations, all potential risks are looked at and mitigated.

Integration and automation play a critical role in CVEM. Integration of the different steps of the mitigation process, from detection and assessment to prioritization and remediation, leads to a streamlined and smooth security risk management process.
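
A minimal sketch of what that unified view might look like in practice: one normalized record type covering several classes of weakness, feeding a single detect, assess, prioritize, remediate loop. The asset names, severity values, and the placeholder CVE identifier are illustrative assumptions, not output from any particular product.

```python
from dataclasses import dataclass

# Illustrative only: one normalized "weakness" record so that CVEs,
# misconfigurations, and exposures land in a single queue instead of three tools.
@dataclass
class Weakness:
    asset: str
    kind: str        # "cve" | "misconfiguration" | "exposure" | "posture"
    reference: str   # e.g. a CVE ID, a benchmark rule, or an open-port note
    severity: float  # 0-10, however the detecting tool scores it

findings = [
    Weakness("web-01", "cve",              "CVE-XXXX-XXXX (placeholder)", 9.1),
    Weakness("db-02",  "misconfiguration", "password auth left enabled",  6.5),
    Weakness("cam-07", "exposure",         "RTSP port open to internet",  7.2),
]

# One detect -> assess -> prioritize -> remediate loop over every class of risk.
for w in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{w.kind:>16}] {w.asset}: {w.reference} (severity {w.severity})")
```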

Integrating the process end to end allows unified solutions to become effective. Automation then becomes easier, which in turn reduces the laborious work of detection and remediation.

This makes the entire process faster, leading to speedy response and reduced duration of the vulnerability mitigation cycle as well!

Conclusion

While it is difficult to digest, the ineffectiveness of modern vulnerability and exposure management solutions is the hard truth we must accept. Adding to the issue is the ever-expanding attack surface, making the life of IT security teams difficult. But there’s light at the end of the tunnel.

Better tools, automation, and integration are the key game changers that can take your IT security to the next level. Continuous Vulnerability and Exposure Management is the glue that ties it all together.

Read more about CVEM

The post Continuous Vulnerability and Exposure Management: Unifying Detection Assessment and Remediation for Elevated IT Security appeared first on Cybersecurity Insiders.

In today’s fast-paced digital world, keeping your IT assets safe is more important than ever. Imagine having a superhero that can spot and fix problems with your IT infrastructure in the blink of an eye.

With cyber threats growing in complexity and sophistication, organizations must adopt proactive measures to safeguard their digital assets.

One key aspect of this security strategy is the implementation of an integrated risk prioritization system for faster remediation.

Why Quick Remediation Matters

Let’s talk about why fast remediation is crucial. Cyber threats can be sneaky and might strike at any moment. If you take too long to respond, it could lead to a catastrophe. Traditional ways of dealing with these issues can be slow and might miss the mark.

The Problem with Old-School Fixes

Legacy ways of dealing with vulnerabilities are like a slow stroll when you need to sprint. These methods often involve looking for problems only every now and then, leaving your digital defenses open between checks. There is also just too much data and too many vulnerabilities to sort through, making it hard to figure out what needs fixing first.

Understanding the Need for Faster Remediation of Vulnerabilities

Before delving into the intricacies of integrated risk prioritization, it’s crucial to understand what faster remediation means. In the digital realm, speed is of the essence. Threats can materialize in a matter of seconds, and delayed response times can lead to devastating consequences. Lightspeed, or faster, remediation refers to the swift and agile process of identifying, assessing, and mitigating cybersecurity risks in real time.

Traditional risk management approaches often involve time-consuming manual assessments, leaving organizations vulnerable in the interim. Lightspeed remediation, on the other hand, demands a dynamic and integrated approach that adapts to the ever-changing threat landscape.

Key Components of Integrated Risk Prioritization

  1. Continuous Monitoring: Instead of relying on periodic assessments, integrated risk prioritization is built on continuous, real-time monitoring of assets and threats. This real-time data collection helps you identify and respond to potential risks as they emerge, so you can prioritize and mitigate what matters the most.
  2. Data Integration: The integration of data from various sources, such as vulnerability scanners and threat intelligence feeds, provides a comprehensive understanding of the risk landscape. This integrated data serves as the foundation for accurate risk prioritization.
  3. Asset-Based Prioritization: One key aspect of integrated risk prioritization is adopting an asset-based approach. Not all assets within an organization are created equal, and some are more critical to operations than others. By identifying and prioritizing assets based on their importance, cybersecurity teams can direct their efforts toward the most critical ones (see the sketch after this list).
  4. Automated Remediation: Once risks are prioritized, automated remediation processes can be triggered to address identified vulnerabilities promptly. This automation significantly reduces the response time and minimizes the window of exposure to potential threats.
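
As a brief illustration of asset-based prioritization combined with an automated remediation hook, the toy example below weights each finding's severity by asset criticality and exposure before deciding whether to queue an automated fix. The asset names, weights, threshold, and the remediation stub are all illustrative assumptions, not a prescribed scoring model.

```python
# Toy asset-weighted risk scoring; values are placeholders for illustration.
ASSET_CRITICALITY = {"payment-gateway": 1.0, "hr-portal": 0.6, "test-vm": 0.2}

findings = [
    {"asset": "test-vm",         "cvss": 9.8, "exposed": False},
    {"asset": "payment-gateway", "cvss": 7.5, "exposed": True},
    {"asset": "hr-portal",       "cvss": 5.3, "exposed": False},
]

def risk_score(f):
    exposure_boost = 1.5 if f["exposed"] else 1.0
    return f["cvss"] * ASSET_CRITICALITY.get(f["asset"], 0.5) * exposure_boost

def auto_remediate(f):
    # Stand-in for triggering a patch/config job via an orchestration tool.
    print(f"Queued remediation for {f['asset']} (score {risk_score(f):.1f})")

for f in sorted(findings, key=risk_score, reverse=True):
    if risk_score(f) >= 7.0:
        auto_remediate(f)   # act first on what matters most
    else:
        print(f"Backlog: {f['asset']} (score {risk_score(f):.1f})")
```

Note how the exposed, business-critical asset with a medium CVSS score outranks the highly rated vulnerability on a low-value test machine, which is the point of weighting by asset importance.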

Benefits of Integrated Risk Prioritization

  1. Real-Time Threat Mitigation: By continuously monitoring and prioritizing risks in real-time, organizations can respond swiftly to emerging threats, reducing the likelihood of successful cyber-attacks.
  2. Resource Optimization: Integrated risk prioritization allows organizations to allocate resources efficiently by focusing on the most critical vulnerabilities. This targeted approach enhances the overall cybersecurity posture while minimizing operational overhead.
  3. Comprehensive Risk Visibility: Integrated risk prioritization provides a holistic view of an organization’s risk landscape, allowing for informed decision-making and strategic planning.

In conclusion, integrated risk prioritization is a game-changer in the realm of quick remediation. As organizations face increasingly sophisticated cyber threats, the ability to assess and prioritize risks in real time is essential for effective cybersecurity. By leveraging advanced technologies and adopting a strategic approach, organizations can stay ahead of threats and enhance cyber resilience in the digital age.

The post Integrated Risk Prioritization for Lightspeed Remediation appeared first on Cybersecurity Insiders.

[By Shlomi Yanai]

It’s rather obvious to most in the IT sector that cybercriminals consistently and successfully exploit stolen or weak online identities to gain unauthorized access to businesses of all types. It’s these identities in an enterprise that are clearly the pathway for online attacks. But the irony remains that many identity and security leaders don’t yet recognize that it’s not enough to invest in identity security controls like Active Directory, SSO, MFA, PAM, etc. if an organization does not invest in making sure such tools are delivering the required protection.

Only focusing on what’s happening within the realm of identity and access management is a failing strategy. And that’s because identities, both human and machine, are everywhere in an enterprise – there are countless instances of unprotected and unmanaged identities across cloud, SaaS, and on-premises. They’re often far from the confines of identity infrastructure controls, yet cybercriminals can just as easily exploit them.

Attackers take full advantage of the fact that humans are human. Yes, some internal bad actors exist, but identity exposures are often created because of people, process, and technology challenges. For example, to maintain a competitive advantage, R&D teams are tasked with introducing new applications and services at warp speed. If the processes for rolling out new applications aren’t sufficiently coordinated across the organization, identity security blind spots can be created, such as production systems that aren’t managed by any directory or applications that can be accessed without MFA by a local account with an extremely easy-to-crack password. Even if processes are well aligned, identity blind spots can happen as changes to systems are made and new people join the organization.

Beyond blind spots, the sheer complexity of an organization’s identity and security technology stack can lead to misconfigurations that weaken the identity security controls put in place. A common example here is an exposure introduced by MFA misconfigurations, such as applications where MFA is not enforced due to session token duration issues or applications where step-up MFA to sensitive applications is not functioning as expected.

Service accounts are another common source of misconfiguration challenges. A common bad practice is associating a service account with a human user. This creates potential security risks, such as unauthorized access to the service account if the human user’s credentials are compromised. Furthermore, if the human user leaves the organization or changes roles, the service account could be left entirely unmanaged.
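
As a small illustration of how such a service-account exposure might be surfaced, the sketch below cross-references a service-account inventory against the status of the human owners the accounts are tied to. The data structures and field names are assumptions for illustration; in practice this information would come from directory and IAM exports or an ISPM tool.

```python
# Illustrative inventories; field names and values are placeholders.
human_users = {
    "jdoe":   {"active": True},
    "asmith": {"active": False},   # has left the organization
}

service_accounts = [
    {"name": "svc-backup",  "owner": "asmith"},   # orphaned: owner is gone
    {"name": "svc-ci",      "owner": "jdoe"},     # risky: tied to a human at all
    {"name": "svc-billing", "owner": None},       # owned by a team, not a person
]

for sa in service_accounts:
    owner = sa["owner"]
    if owner is None:
        continue  # team-owned accounts are outside this particular check
    status = human_users.get(owner, {"active": False})
    if not status["active"]:
        print(f"{sa['name']}: owner '{owner}' is inactive -> likely unmanaged account")
    else:
        print(f"{sa['name']}: tied to human user '{owner}' -> review the association")
```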

The reality of identity blind spots and misconfigurations demands that security and IT teams must have real-time visibility of all identities that exist and their activities. After all, that arms them with the ability to discover and resolve identity exposures proactively and respond to cyberthreats that target identities and identity systems.

To achieve this needed visibility, enterprises should consider integrated solutions that combine Identity security posture management (ISPM) and identity threat detection and response (ITDR). ISPM provides continuous monitoring to enable organizations to discover and resolve identity exposures before a threat actor can exploit them, maintain the resiliency of their identity systems, and improve day-to-day identity operations. ITDR solutions help enterprises quickly detect and respond to cyber threats that target user identities and identity-based systems in real-time. By providing an identity-focused lens, ITDR complements other threat detection and response systems to reduce the time it takes to identify and respond to identity-based threats.

An organization can have all the latest automated tools and costly security investments, but without eyes on everything from local accounts and MFA misconfigurations to something as simple as dormant accounts or unsanctioned SaaS services, identities can remain unchecked and still provide the main doorway for attackers. Just recently, it was announced that Microsoft fell victim to a password spray attack that ultimately compromised a legacy non-production test account that afforded cybercriminals the permissions they needed to access some executive email accounts. If a tech giant such as that can find itself vulnerable to such identity-related attacks, it’s clear that greater visibility is required.

So, the goal for IT leadership should NOT be to change their approach to cybersecurity radically but simply to add a layer of deep visibility into identity activities with ISPM and ITDR that can work in tandem with existing security investments.

Shlomi Yanai is CEO and Co-Founder of Maryland-based AuthMind (www.authmind.com), an identity-first security provider that protects an organization’s identity infrastructure and detects identity-based threats in real-time.

The post Unseen Threats: Identity Blind Spots and Misconfigurations in Cybersecurity appeared first on Cybersecurity Insiders.

[By Richard Bird, Chief Security Officer, Traceable]

In the wake of the devastating cyber-attack on Kyivstar, Ukraine’s largest telecommunications service provider, it’s time for a blunt conversation in the boardrooms of global enterprises. As someone who has navigated the cybersecurity landscape for over 30 years, I’ve witnessed numerous security breaches, but the Kyivstar incident is a watershed moment. This isn’t just a breach; it’s a complete obliteration of a company’s internal infrastructure. And it happened to a company that was on high alert, operating in a war zone, and had heavily invested in cybersecurity.

The breach, attributed to the Russian military spy unit Sandworm, didn’t just disrupt services; it decimated Kyivstar’s core, wiping out thousands of virtual servers and causing communications chaos across Ukraine. The attackers demonstrated a frightening capability to exfiltrate a vast amount of personal data, including device location data, SMS messages, and potentially data that could lead to Telegram account takeover. This level of devastation doesn’t happen without exploiting fundamental weaknesses, and it points to a glaring oversight in many current cybersecurity strategies: the underestimation of API vulnerabilities.

Despite Kyivstar’s significant security investments, it’s evident that APIs and Layer 7 were not prioritized. This is a critical mistake that many are making. CEOs and CISOs around the world need to take their heads out of the sand. The Kyivstar breach is a clear demonstration of the catastrophic potential of modern cyber-attacks. It’s no longer about if your defenses will be breached, but when and how devastating it will be. The traditional approach to cybersecurity is no longer sufficient. We need to rethink our strategies, with a particular focus on securing APIs and fortifying every layer of our digital infrastructure.

The attack on Kyivstar took out mobile and home internet service for as many as 24 million people, signaling not just a corporate disaster but a national emergency. The financial implications were staggering, with nearly $100 million in revenue loss, underscoring the severe economic repercussions of such breaches. This incident should be a massive wake-up call. We’re not talking about mere data theft or temporary disruptions. The Russians have demonstrated that they can take down an entire company, exploiting the same vulnerabilities that threaten enterprises globally.

In response, hackers linked to Ukraine’s main spy agency breached computer systems at a Moscow-based internet provider, signaling a tit-for-tat in the cyber domain between Russia and Ukraine.

This escalation is not just a regional issue but a global one, serving as a stark warning to the West about the capabilities and intentions of state-sponsored cyber groups like Sandworm.

The Bottom Line

CEOs and CISOs around the world need to take their heads out of the sand. The Kyivstar breach is a clear demonstration of the catastrophic potential of modern cyber-attacks. It’s no longer about if your defenses will be breached, but when and how devastating it will be. The traditional approach to cybersecurity is no longer sufficient. We need to rethink our strategies, with a particular focus on securing APIs and fortifying every layer of our digital infrastructure.

The Kyivstar incident is a stark reminder of the evolving and increasingly destructive nature of cyber threats. As industry leaders, we must recognize this as a turning point and act swiftly to reinforce our defenses. It’s time to move beyond complacency and address the critical vulnerabilities that can lead to the downfall of our enterprises. The message is clear: bolster your cybersecurity or risk severe consequences. The choice is ours.

The post The Kyivstar Breach and Its Implications for Global Cybersecurity appeared first on Cybersecurity Insiders.

In a newly released study from International Data Corporation (IDC) and cybersecurity company Exabeam, research shows companies globally are struggling with visibility when it comes to defending against cyberattacks.

Fifty-seven percent of surveyed companies experienced significant security incidents in the last year that required extra resources to remediate — shining a glaring light on program gaps caused by dedicated but overburdened teams lacking key, automated threat detection, investigation, and response (TDIR) resources. North America experienced the highest rate of security incidents (66%), closely followed by Western Europe (65%), then Asia Pacific and Japan (APJ) (34%). Research for the Exabeam report, The State of Threat Detection, Investigation and Response, November 2023, was conducted by IDC on behalf of Exabeam and includes insights from 1,155 security and IT professionals spanning these three regions.

The findings reveal a significant gap between self-reported security measures and reality. Despite 57% of interviewed organizations reporting significant security incidents, over 70% said they performed better in 2023 than in 2022 on cybersecurity key performance indicators (KPIs) such as mean time to detect, investigate, respond, and remediate. The overwhelming majority of organizations (over 90%) believe they have a good or excellent ability to detect cyberthreats, and 78% believe their organizations have a very effective process to investigate and mitigate threats. These inflated confidence levels are creating a false sense of security and likely putting organizations at risk. A continued lack of full visibility and complete TDIR automation capabilities, which survey respondents also reported, may explain the discrepancy.

“While we aren’t surprised by the contradictions in the data, our study in partnership with IDC further opened our eyes to the fact that most security operations teams still do not have the visibility needed for overall security operations success. Despite the varied TDIR investments they have in place, they are struggling to thoroughly conduct comprehensive analysis and response activities,” said Steve Moore, Exabeam Chief Security Strategist and Co-founder of the Exabeam TEN18 cybersecurity research and insights group. “Looking at the lack of automation and inconsistencies in many TDIR workflows, it makes sense that even when security teams feel they have what they need, there is still room to improve efficiency and velocity of defense operations.”

Security Operations Are in a Visibility Crisis

Organizations globally report that they can “see” or monitor only 66% of their IT environments, leaving ample room for blind spots, including in the cloud. While no organization is immune from adversarial advances, this lack of full visibility means organizations may be blind to attacker activity in the unmonitored portions of their environments.

“Despite having the lowest number of security incidents, APJ reports the lowest visibility of all regions at 62%, signaling that these teams may be missing and failing to report incidents as a result,” noted Samantha Humphries, Senior Director of International Security Strategy, Exabeam. “With business transformation initiatives moving operations to the cloud and an ever-increasing number of edge connections, lack of visibility will likely continue to be a major risk point for security teams in the year ahead.”

TDIR Automation Lags

With TDIR representing the prevailing workflow of security operations teams, more than half (53%) of global organizations have automated 50% or less of their TDIR workflow, which contributes to the share of their time spent on TDIR (57%). Unsurprisingly, respondents continue to want a strong TDIR platform that includes investigation and remediation automation, yet hesitation to automate remains.

“As attackers increase their pace, enterprises will have to overcome their reluctance to automate remediation, which often stems from concern over what might happen without a human approving the process,” said Michelle Abraham, Research Director for IDC’s Security and Trust Group. “Organizations should embrace all the helpful expertise they can find, including automation.”

The Greatest TDIR Needs for 2024 and Beyond

When organizations were asked about the TDIR management areas where they require the most help, 36% of organizations expressed the need for third-party assistance in managing their threat detection and response, citing the challenge of handling it entirely on their own. This highlights a growing opportunity for the integration of automation and AI-driven security tools. The second most identified need, at 35%, was a desire for an improved understanding of normal user, entity, and peer group behavior within their organization, demonstrating a demand for TDIR solutions equipped with user and entity behavior analytics (UEBA) capabilities. These solutions should ideally minimize the need for extensive customization while offering automated timelines and threat prioritization.
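To illustrate what UEBA-style peer-group baselining can look like in its simplest form, here is a small, hypothetical sketch that flags users whose daily activity deviates sharply from the rest of their peer group. The event counts, the leave-one-out baseline, and the z-score threshold are illustrative assumptions for this example; they are not drawn from the report or from any vendor's analytics.

# Hypothetical UEBA-style peer-group check (illustrative only): flag users whose
# daily event count deviates sharply from a leave-one-out baseline of their peers.
from statistics import mean, stdev

def flag_peer_group_outliers(daily_events_by_user: dict[str, int],
                             z_threshold: float = 3.0) -> list[str]:
    outliers = []
    for user, count in daily_events_by_user.items():
        # Baseline is the rest of the peer group, leaving the user in question out.
        peers = [c for u, c in daily_events_by_user.items() if u != user]
        if len(peers) < 2:
            continue
        mu, sigma = mean(peers), stdev(peers)
        if sigma == 0:
            continue  # no spread in the baseline; skip in this toy example
        if (count - mu) / sigma > z_threshold:
            outliers.append(user)
    return outliers

# Example: one account suddenly generates far more authentication events than its peers.
peer_group = {"alice": 12, "bob": 9, "carol": 11, "dave": 10, "eve": 95}
print(flag_peer_group_outliers(peer_group))  # prints ['eve']

Production UEBA systems model many behaviors and entities over time rather than a single daily count, but the core idea of comparing each identity against a learned baseline and prioritizing the outliers is the same.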

“As organizations continue to improve their TDIR processes, their security program metrics will likely look worse before they get better. But the tools exist to put them back on the front foot,” continued Moore. “Because AI-driven automation can aid in improving metrics and team morale, we’re already seeing increased demand to build even more AI-powered features. We expect the market demand for security solutions that leverage AI to continue in 2024 and beyond.”

The organizations surveyed for the report represent North America (Canada, Mexico, and the United States), Western Europe (UK and Germany), and APJ (Australia, New Zealand, and Japan), across a range of industries.

The State of Threat Detection, Investigation, and Response 2023 report can be found here.

The post New Study Shows Over Half of Organizations Experienced Significant Security Incidents in The Last Year appeared first on Cybersecurity Insiders.