[By Mike Walters, President and co-founder of Action1]

Two years have passed since the cybersecurity world was rocked by the discovery of Log4Shell, a critical vulnerability in the Log4j library. First discovered on December 9, 2021, this legendary flaw exposed hundreds of thousands of systems to potential attacks. Jen Easterly, head of the Cybersecurity and Infrastructure Security Agency (CISA), called it “the most serious flaw” she has seen in her decades-long career. Since Log4Shell emerged, bad actors have been spreading various payloads through this vulnerability, including coin miners, botnets, and malware that helped them establish backdoors and carry out other illegal activities. The most notorious threats that have used Log4Shell are Dridex and Conti.

Even today, Log4Shell remains a haunting presence in the digital realm, demanding the attention of cybersecurity professionals. As we approach the second anniversary of Log4Shell, let’s delve into the ongoing dangers it poses, the measures organizations should take to protect themselves, and the broader question of whether vulnerabilities in common libraries will continue to rise.

Understanding Log4Shell and Its Enduring Impact

Log4j, a logging library fundamental to Java-based applications, had harbored the Log4Shell vulnerability for years before its official discovery. With Java running on billions of systems, including IoT devices and critical infrastructure, the vulnerability’s reach is extensive. Log4Shell abuses Log4j’s ability to resolve JNDI lookups against remote servers, such as LDAP endpoints, without proper validation, letting attackers execute arbitrary Java code or access sensitive information.

This vulnerability, assigned a critical CVSS score of 10 and tracked as CVE-2021-44228, affected major companies such as Microsoft, Amazon, and IBM. Well into 2023, its effects linger. CISA has recently warned organizations that threat actors still frequently use the Log4Shell exploit in their attacks because it is easy to discover through vulnerability scanning and open-source research. The agency advises organizations to prioritize patching Log4Shell in their environments.

The 2023 Arctic Wolf Labs research found that Log4j was one of the top five external software exploits leveraged by threat actors in 2022. According to Tenable, 72% of organizations remained vulnerable to Log4Shell in October 2022. It is unlikely that this percentage has fallen much since then.

Why Log4Shell Persists as a Threat

The Log4Shell vulnerability presents a unique set of challenges for detection and remediation. Although the patch itself is easy to install, identifying every system vulnerable to Log4Shell within complex infrastructures remains a formidable task. The difficulty arises from the extensive use of the Log4j library across a wide range of infrastructures and applications, both directly and through third-party integrations.

Within this landscape, there exists a multitude of vulnerable software titles, numbering in the hundreds. Some of this software has regrettably been forgotten over time, slipping under the radar of traditional vulnerability management solutions. Even custom, homebrew software often relies on the Log4j library, further complicating the detection process.

Crucially, the task of detection should not be entrusted solely to the software itself. A more effective approach is direct examination of the library files themselves, specifically the lib and jar files, by third-party solutions. This shift in focus helps identify Log4Shell in software where it would not be apparent through standard software-level scans.

Despite concerted efforts over the past two years to mitigate the risks associated with Log4Shell, significant gaps persist in our defenses. It is incumbent upon software companies to play a pivotal role in enforcing the security-by-design approach.

Firstly, software companies should take proactive steps by implementing specific scripted detections. Using languages such as PowerShell or Python, they can develop detection mechanisms tailored to their own software utilizing the Log4j library.
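As a rough sketch of what such a scripted detection could look like in Python (the scan root is illustrative, and this simplified version does not recurse into JARs nested inside “fat” JARs), the script below flags JAR files that bundle the JndiLookup class at the heart of Log4Shell, the same class Apache’s mitigation guidance recommends removing from unpatchable deployments:

```python
import zipfile
from pathlib import Path

# Presence of this class indicates a potentially vulnerable Log4j 2.x build;
# Apache's mitigation advice for systems that cannot be patched is to remove it.
INDICATOR = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def find_vulnerable_jars(root: str) -> list[Path]:
    """Walk a directory tree and flag JAR files that bundle JndiLookup."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if INDICATOR in zf.namelist():
                    hits.append(jar)
        except (zipfile.BadZipFile, OSError):
            continue  # unreadable or corrupt archive: skip it, don't abort the scan
    return hits
```

Running `find_vulnerable_jars("/opt")` (or wherever your applications live) yields candidate files for manual review; a PowerShell equivalent can combine `Get-ChildItem -Recurse` with the .NET `System.IO.Compression` classes.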

Secondly, software companies must adopt a software composition analysis approach during vulnerability scanning. This technique goes beyond merely identifying the software itself and its version: it extends to detecting the libraries the software uses, providing a comprehensive view of potential vulnerabilities. While some vulnerability management (VM) products already possess this capability, not all solutions are equipped for this level of analysis.

The Future of Library Vulnerabilities

In September of this year, a vulnerability (CVE-2023-4863) emerged in libwebp, a library used for handling WebP bitmap images. Though not identical, it drew comparisons to Log4Shell.

First, similar to Log4j’s role in Java-based applications, libwebp is indispensable for displaying WebP-formatted images. Its widespread use elevates the risk, potentially affecting a vast array of software. Second, both vulnerabilities earned a critical severity rating of 10.0 on the CVSS scale.

Just as Log4j allowed remote code execution, libwebp’s flaw, a heap buffer overflow, lets maliciously crafted image files write beyond expected memory boundaries, opening the door to unauthorized access, data leaks, and other malicious activity.

In both cases, initial assessments underestimated the extent of the vulnerabilities. Libwebp’s impact initially seemed confined to Google Chrome but extended further. Similarly, Log4Shell was initially associated with web services but later revealed its reach across multiple software types. Notably, both vulnerabilities were quickly exploited by threat actors after disclosure.

The parallel between the libwebp incident and Log4j/Log4Shell suggests a potential trend in the proliferation of vulnerabilities in common libraries.

Conclusion: The Path Forward

To rid ourselves of vulnerabilities like Log4Shell in the future, a security-by-design strategy is paramount. Software vendors should regularly update all libraries used in their products. Software consumers must remain vigilant: scanning internet-facing hosts for vulnerabilities, remediating what they find, running regular penetration tests, and keeping a properly configured Web Application Firewall (WAF) in place.

As we approach the second anniversary of Log4Shell’s discovery, its enduring presence serves as a stark reminder of the ever-evolving cybersecurity landscape. By learning from the lessons it presents, we can better prepare for the challenges of tomorrow and secure our digital environments against the next Log4Shell.

The post Log4Shell: A Persistent Threat to Cybersecurity – Two Years On appeared first on Cybersecurity Insiders.

[By Eitan Worcel, CEO and co-founder, Mobb.ai]

While organizations are expected to do as much as possible to secure their software applications, expecting developers to write secure code sets both parties up for failure. The root of the issue is that secure coding isn’t typically taught in the schools where developers learn the basics, and when companies prize speed above everything else, processes and well-planned security architecture get pushed aside so developers can deliver fast. Even if organizations provide security training or require third-party certificates, it’s not enough to override the core reason developers are hired in the first place: to create and build the technology we rely on to advance our society.

Coding is an art form as much as it is a computer science. The creative nature of code, paired with the rigidity of security, brings to light a crucial oversight in the industry: expecting developers to excel at secure coding from the get-go, without a foundational emphasis on it, is not just impractical; it’s unrealistic. For secure coding to become the norm, organizations need to take on the responsibility of making security an organic part of the development process, which also means investing time in proper threat modeling and building good security architecture. Only then can organizations ensure that innovation isn’t stifled by security concerns.

The Reality of Secure Coding Expectations

The industry’s long-standing belief that on-the-job training is sufficient for developers to master writing secure code and incorporate the skill into their day-to-day workload overlooks several key realities. Firstly, as I mentioned above, secure coding is often not included in the standard educational curriculum for developers, which means it isn’t a skill they become deeply familiar with during their early learning phases. Secondly, the day-to-day demands of their roles do not typically require a continuous engagement with secure coding practices.

This creates a disconnect where embedding secure coding into a developer’s routine, even with multiple training sessions, remains an ambitious and unlikely goal. Training, while valuable, doesn’t necessarily transform developers into security experts. This gap between expectations and reality is highlighted in Secure Code Warrior’s ‘The challenges (and opportunities) to improve software security’ 2022 whitepaper. The findings are telling: 33% of developers are uncertain about what makes their code vulnerable, and 63% find the art of writing secure code challenging.

Where Companies Miss the Mark

The ‘State of Developer-Driven Security’ 2022 survey has indicated a glaring gap in the industry. Despite 75% of managers acknowledging the need for more training in security frameworks and encouraging developers to learn or adopt secure coding practices, many companies still fail to incorporate these standards into their hiring practices or job descriptions. If secure coding isn’t identified as a key hiring criterion or a defined responsibility within roles, employers can’t then expect developers to make it a priority.

However, the industry is beginning to recognize this discrepancy. A notable 82% of managers have started showing a preference for hiring developers who already possess secure coding skills, but only 66% of managers look at secure coding skills when assessing new hires and only 44% evaluate those skills via a written test. This shift points to a broader issue: the divergence between industry expectations and the practical reality of software development. Secure coding is a specialized skill that demands ongoing practice and support beyond theoretical knowledge.

Embracing a New Standard in Software Development

The evolution of software development hinges on effectively integrating security into its core development processes. Educational institutions have a pivotal role in this transformation, as they are responsible for instilling foundational skills in future developers. This approach aims to nurture a new generation of developers for whom security is a natural and essential element of software creation, establishing a foundation where innovation intrinsically includes security considerations.

In parallel with these educational efforts, businesses have a crucial role in shaping an environment conducive to secure coding. This responsibility extends beyond integrating security into operational and recruitment strategies; it also involves conducting threat modeling and adopting tools that make securing code simple and aligned with developers’ core skill sets and workflows. Embedding security technology into processes, instead of expecting and relying on human compliance, allows businesses to align their pursuit of creative innovation with a steadfast commitment to security. This balance is needed to achieve a future in which technological breakthroughs are not only pioneering but also securely engineered by design.

The post Stop Expecting Developers to Write Secure Code appeared first on Cybersecurity Insiders.

By Christoph Nagy, SecurityBridge

So your SAP system has been breached.

While this is not an unusual occurrence, it’s still a serious issue that needs your immediate attention. Since SAP is one of the most widely used systems by organizations around the globe and houses a great deal of business-critical and thus valuable information, hackers constantly search for backdoors and vulnerabilities to exploit.

The more time that elapses before the breach is dealt with, the longer hackers have access to the data your company houses in the SAP platform, and the more damage they can do.

The first step is to determine where the cybersecurity breach occurred, and then walk through the steps of addressing it. And when the immediate attack is dealt with, putting in place resources to prevent it from happening again is a wise course of action. Let’s start with the kinds of SAP breaches that might befall your company.

The Most Common Attack Vectors

We’re defining a breach as any exploitation of a system’s vulnerabilities resulting in unauthorized access to that system and its data. The most common (and sizable) damages to a successfully attacked company are financial (fines, the cost of addressing the breach, and other expenses) and reputational. Customers are less likely to stick around when they don’t feel their business or confidential data is being safeguarded properly.

When a breach occurs, it’s most likely tied to one of the following:

Vulnerabilities in code. All applications are subject to vulnerabilities, and it’s possible for custom SAP applications to provide a window for attackers to access the overall system.

Unapplied security patches. Patches for SAP applications are extremely important, since they address known flaws that could be exploited in a breach attempt. Companies that delay implementing these patches leave themselves exposed.

System misconfigurations. When settings in an SAP application are misconfigured—or keep unused functions active—attackers can exploit this mistake and gain unauthorized access. You see this most often when applications are left on default settings or someone goes in and makes changes that they shouldn’t.

Inside jobs. Occasionally, someone who already has some level of access, like an employee, can clear a path for attackers to gain entry into the system. More often than not, it’s the employee’s account, not the employee themselves, that causes the breach: the account can be taken over by bad actors through phishing or social engineering tactics. The MGM Resorts/Caesars breach provides a perfect example of this type of attack.

How to Respond to an Attack

When you’ve identified where the threat has come from and what vulnerability has been exploited, it’s time to take decisive action. Reacting quickly but also in the right way will help reestablish your company’s security posture. For most breaches, the following steps will be the most effective means of getting a handle on the situation:

  • Lock down any compromised user accounts and cut off access to the network and system by any third parties such as partners or clients that are involved in the attack. If such a tactical approach doesn’t work, you might need to isolate the full SAP system, going into full lockdown or cutting off its access to the internet so unauthorized users can’t keep finding their way in while you address the issue.
  • Put together a team of stakeholders—executives, your best tech leads, SAP admins, and any other experts available—to assess the damage of the threat and make a plan to deal with it.
  • Make sure to preserve all SAP logs relating to security and put them under forensic analysis. It can be useful to examine these logs, such as the Security Audit Log, Java audit log, and HANA audit log, within the timeframe of the attack.
  • Use those logs to assess the details of the vulnerability that was exploited and identify the critical events and activity patterns during the key time periods.
  • Install fixes and patches as needed to shore up vulnerabilities and adopt the appropriate security configurations to stop the attack and prevent that specific vulnerability from being exploited again.
  • Only then should you return, one application at a time, to normal SAP operations. Monitor your SAP security logs following this return to make sure operations are now secure.

While all of the above is happening, be sure to comply with all legal requirements for communications with affected or relevant parties. Especially if there is ever a legal investigation into your company’s actions during and after a breach, transparency and timely notification to affected parties, so they can take appropriate action, will work in your favor.

Future Actions

Once the immediate threat is over, most companies should shift to prevention mode: making it so such a breach can’t happen again. Perhaps those fixes and patches can be extended to other SAP applications. Following NIST and other common SAP security frameworks is recommended.

Further SAP process improvements can help provide preventative measures or early alerts of a potential attack. Some features can detect anomalies in SAP systems or include automation capabilities that can make changes to protect a system on the fly. You can even set up the capability to alert users when their credentials might be compromised—like if they were just used to sign in from an unusual geographical location or were exposed due to a hack elsewhere. In those cases, contacting the SAP security team immediately could make a big difference in preventing authorized accounts from being misused.

There’s never a good time to experience an SAP breach, but companies that have a plan to address it quickly and effectively will fare better in both the short and long term than those that don’t. SAP’s systems are critical for many companies, so ensuring the strongest possible security posture for those applications is an equally critical task that organizations should prioritize.

Christoph Nagy has 20 years of working experience within the SAP industry. He has utilized this knowledge as a founding member and CEO of SecurityBridge, a global SAP security provider serving many of the world’s leading brands and now operating in the U.S. Through his efforts, the SecurityBridge Platform for SAP has become renowned as a strategic security solution for automated analysis of SAP security settings and detection of cyber-attacks in real time. Prior to SecurityBridge, Nagy applied his skills as an SAP technology consultant at Adidas and Audi.

The post A Guide to Handling SAP Security Breaches appeared first on Cybersecurity Insiders.

NEW RESEARCH: Artificial Intelligence and Machine Learning Can Be Used to Stop DAST Attacks Before They Start

Within cloud security, one of the most prevalent tools is dynamic application security testing, or DAST. DAST is a critical component of a robust application security framework, identifying vulnerabilities in your cloud applications, either pre- or post-deployment, that can be remediated for a stronger security posture.

But what if the very tools you use to identify vulnerabilities in your own applications can be used by attackers to find those same vulnerabilities? Sadly, that’s the case with DASTs. The very same brute-force DAST techniques that alert security teams to vulnerabilities can be used by nefarious outfits for that exact purpose.

There is good news, however. A new research paper written by Rapid7’s Pojan Shahrivar and Dr. Stuart Millar and published by the Institute of Electrical and Electronics Engineers (IEEE) shows how artificial intelligence (AI) and machine learning (ML) can be used to thwart unwanted brute-force DAST attacks before they even begin. The paper Detecting Web Application DAST Attacks with Machine Learning was presented yesterday to the specialist AI/ML in Cybersecurity workshop at the 6th annual IEEE Dependable and Secure Computing conference, hosted this year at the University of South Florida (USF) in Tampa.

The team designed and evaluated AI and ML techniques to detect brute-force DAST attacks during the reconnaissance phase, effectively preventing 94% of DAST attacks and eliminating the entire kill-chain at the source. This presents security professionals with an automated way to stop DAST brute-force attacks before they even start. Essentially, AI and ML are being used to keep attackers from even casing the joint in advance of an attack.

This novel work is the first application of AI in cloud security to automatically detect brute-force DAST reconnaissance with a view to an attack. It shows the potential this technology has in preventing attacks from getting off the ground, plus it enables significant time savings for security administrators and lets them complete other high-value investigative work.

Here’s how it is done: using a real-world dataset of millions of events from enterprise-grade apps, a random forest model is trained on tumbling windows of time to generate aggregated event features from source IPs. In this way the model learns the characteristics of a DAST attack, such as the number of unique URLs visited per IP or the number of payloads per session. This avoids the conventional threshold approach, which is brittle and causes excessive false positives.
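A toy illustration of that windowing idea (an assumption-laden sketch, not Rapid7’s actual pipeline: the feature set, the five-minute window, and the synthetic traffic below are invented for demonstration) might look like this with pandas and scikit-learn:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic traffic: a scanner IP probing many distinct URLs per window,
# and a normal client revisiting a handful of pages.
rows = []
for w in range(6):  # six tumbling (non-overlapping) 5-minute windows
    base = pd.Timestamp("2023-01-01 10:00") + pd.Timedelta(minutes=5 * w)
    for i in range(150):
        rows.append((base, "10.0.0.9", f"/probe/{w}/{i}", 1))  # DAST-like burst
    for i in range(30):
        rows.append((base, "10.0.0.5", f"/page/{i % 5}", 0))   # ordinary browsing
events = pd.DataFrame(rows, columns=["ts", "src_ip", "url", "is_dast"])

# Aggregate per source IP within each window: scanners touch far more
# unique URLs in a short burst than legitimate clients do.
feats = (events.set_index("ts")
         .groupby([pd.Grouper(freq="5min"), "src_ip"])
         .agg(unique_urls=("url", "nunique"),
              requests=("url", "size"),
              label=("is_dast", "max"))
         .reset_index())

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats[["unique_urls", "requests"]], feats["label"])
```

In production the feature vector would be far richer (payloads per session, response codes, and so on), but the shape of the approach is the same: window, aggregate per IP, classify.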

This is not the first time Millar and team have made major advances in the use of AI and ML to improve the effectiveness of cloud application security. Late last year, Millar published new research at AISec in Los Angeles, the leading venue for AI/ML cybersecurity innovations, into the use of AI/ML to triage vulnerability remediation, reducing false positives by 96%. The team was also delighted to win AISec’s highly coveted Best Paper Award, ahead of the likes of Apple and Microsoft.

A complimentary pre-print version of the paper Detecting Web Application DAST Attacks with Machine Learning is available on the Rapid7 website.

In cybersecurity, the arms race between defenders and attackers never ends. New technologies and strategies are constantly being developed, and the struggle between security measures and hacking techniques persists. In this never-ending battle, Carl Froggett, the CIO of cybersecurity vendor Deep Instinct, provides an insightful glimpse into the changing landscape of cyber threats and innovative ways to tackle them.

A changing cyber threat landscape

According to Froggett, the fundamental issue that many organizations are still grappling with is the basic hygiene of technology. Whether it’s visibility of inventory, patching, or maintaining the hygiene of the IT environment, many are still struggling.

But threats are growing beyond these fundamental concerns. Malware, ransomware, and the evolution of threat actors have all increased in complexity. The speed of attacks has changed the game, requiring much faster detection and response times.

Moreover, the emergence of generative AI technologies like WormGPT has introduced new threats such as sophisticated phishing campaigns utilizing deepfake audio and video, posing additional challenges for organizations and security professionals alike.

From Signatures to Machine Learning – The Failure of Traditional Methods

The security industry’s evolution has certainly been a fascinating one. From the reliance on signatures during the ’80s and ’90s to the adoption of machine learning only a few years ago, the journey has been marked by continuous adaptation and an endless cat-and-mouse game between defenders and attackers. Signature-based endpoint security, for example, worked well when threats were fewer and well defined, but the Internet boom and the proliferation and sophistication of threats necessitated a much more sophisticated approach.

Traditional protection techniques, such as endpoint detection and response (EDR), are increasingly failing to keep pace with these evolving threats. Even machine learning-based technologies that replaced older signature-based detection techniques are falling behind. A significant challenge lies in finding security solutions that evolve as rapidly as the threats they are designed to combat.

Carl emphasized the overwhelming volume of alerts and false positives that EDR generates, revealing the weaknesses in machine learning, limited endpoint visibility, and the reactive nature of EDR that focuses on blocking post-execution rather than preventing pre-execution.

Machine learning provided a much-needed leap in security capabilities. By replacing static signature based detection with dynamic models that could be trained and improved over time, it offered a more agile response to the evolving threat landscape. It was further augmented with crowdsourcing and intelligent sharing, and analytics in the cloud, offering significant advancements in threat detection and response.

However, machine learning on its own isn’t good enough – as evidenced by the rising success of attacks. Protection levels would drop off significantly without continuous Internet connectivity, showing that machine learning based technologies are heavily dependent on threat intelligence sharing and real-time updates. That is why the detect-analyze-respond model, although better than signatures, is starting to crumble under the sheer volume and complexity of modern cyber threats.

Ransomware: A Growing Threat

A glaring example of this failing model can be seen in the dramatic increase of ransomware attacks. According to Zscaler, there was a 40% increase in global ransomware attacks last year, with half of those targeting U.S. institutions. Machine learning’s inadequacy is now becoming visible, with 25 new ransomware families identified using more sophisticated and faster techniques. The reliance on machine learning alone has created a lag that’s unable to keep pace with the rapid development of threats.

“We must recognize that blocking attacks post-execution is no longer enough. We need to be ahead of the attackers, not trailing behind them. A prevention-first approach, grounded in deep learning, doesn’t just block threats; it stops them before they can even enter the environment,” Carl added.

The Deep Learning Revolution

The next evolutionary step, according to Froggett, is deep learning. Unlike machine learning, which discards a significant amount of available data and requires human intervention to assign weights to specific features, deep learning uses 100% of the available data. It learns like humans, allowing for prediction and recognition of malware variants, akin to how we as humans recognize different breeds of dogs as dogs, even if we have never seen the specific breed before.

Deep learning’s comprehensive approach takes into account all features of a threat, right down to its ‘DNA,’ as Froggett described it. This holistic understanding means that mutations or changes in the surface characteristics of a threat do not confound the model, allowing for a higher success rate in detection and prevention. Deep learning’s ability to learn and predict without needing constant updates sets it apart as the next big leap in cybersecurity.

Deep Instinct utilizes these deep learning techniques for cybersecurity. Unlike traditional crowd-sourcing methods, their model functions as if it’s encountering a threat for the first time. This leads to an approach where everything is treated as a zero-day event, rendering judgments without relying on external databases.

One interesting aspect of this deep learning approach is that it isn’t as computationally intensive as one might think. Deep Instinct’s patented model, which operates in isolation without using customer data, is unique in its ability to render verdicts swiftly and efficiently. In contrast to other machine learning-based solutions, Deep Instinct’s solution is more efficient, lowering latency and reducing CPU and disk IOPS. The all-contained agent makes their system quicker to return verdicts, emphasizing speed and efficiency.

Deep Instinct focuses on preventing breaches before they occur, changing the game from slow detection and response to proactive prevention.

“The beauty of our solution is that it doesn’t merely detect threats; it anticipates them,” Froggett noted during our interview. Here’s how:

  1. Utilizing Deep Learning: Leveraging deep learning algorithms, the product can discern patterns and anomalies far beyond traditional methods.
  2. Adaptive Protection: Customized to the unique profile of each organization, it offers adaptable protection that evolves with the threat landscape.
  3. Unprecedented Accuracy: By employing state-of-the-art deep learning algorithms, the solution ensures higher accuracy in threat detection, minimizing false positives.

Advice for Security Professionals: Navigating the Challenging Terrain

Froggett’s advice for security professionals is grounded in practical wisdom. He emphasizes the need for basic IT hygiene such as asset management, inventory, patching, and threat analysis. Furthermore, the necessity of proactive red teaming, penetration testing, and regular evaluation of all defense layers cannot be overstated.

The CIO also acknowledges the challenge of the “shift left” phenomenon, where central control in organizations is declining due to rapid innovation and decentralization. The solution lies in balancing business strategies with adjusted risk postures and focusing on closing the increasing vulnerabilities.

Conclusion: A New Era of Prevention

The current trajectory of cybersecurity shows that reliance on machine learning and traditional techniques alone is not enough. With the exponential growth in malware and ransomware, coupled with the increased sophistication of attacks using generative AI, a new approach is needed. Deep learning represents that revolutionary step.

The future of cybersecurity lies in setting aside what we think we know and embracing new, adaptive methodologies such as deep learning, ushering in a new era of prevention-first security.

The post The Evolution of Security: From Signatures to Deep Learning appeared first on Cybersecurity Insiders.

By Dotan Nahum, Head of Developer-First Security at Check Point Software Technologies

In an era where data breaches and cybersecurity attacks are rampant, secure software design has become not only a matter of technical proficiency but a crucial component of corporate responsibility. This has led to a significant rise in the importance of secure design patterns: recurring solutions to common problems in software design that account for security.

Secure design does not simply mean building software that works as intended. It means creating a system that continues to operate correctly under malicious attack, safeguarding the system’s data and its users’ privacy. It’s a proactive approach that prevents potential security flaws rather than a reactive one where developers patch vulnerabilities after exploitation.

From Start to Finish: The Importance of Consistency and Security

Secure design patterns are not mere add-ons or isolated fixes; rather, they are foundational paradigms that guide developers in designing secure software from the ground up. Traditional software development often relies on reactive security measures to patch vulnerabilities after they are discovered. However, secure design patterns promote a proactive approach to security by mitigating potential threats during the initial design phase. By building security into the core of the software architecture, developers can significantly reduce the likelihood of vulnerabilities, enhance the system’s overall resilience, and maintain consistent security measures across multiple projects.

7 Steps to Implement Secure Design Patterns Today

Implementing secure design patterns is not a one-time task. It’s an ongoing process that evolves as new security threats and mitigation techniques emerge. The key is to create a culture of security in your organization where every member understands the importance of security and their role in maintaining it. These seven steps provide a solid foundation, but true security requires constant vigilance, learning, and adaptation.

Use Design Patterns that Promote Security

Several design patterns inherently enhance the security of a system. For instance, the Proxy Pattern can add an additional layer of protection when accessing sensitive data or communicating with external services. The Factory Pattern helps to instantiate objects in a controlled manner, reducing the chances of improper instantiation that could lead to vulnerabilities.
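As an illustrative sketch (the class and method names below are invented for this article, not tied to any particular framework), the Proxy Pattern might gate access to a sensitive resource like this:

```python
class SensitiveStore:
    """Backing store holding data that must never be read anonymously."""
    def read(self, key):
        return f"secret:{key}"

class AccessProxy:
    """Proxy Pattern: stands in front of SensitiveStore and adds an
    authorization check the caller cannot bypass."""
    def __init__(self, store, allowed_users):
        self._store = store
        self._allowed = set(allowed_users)

    def read(self, user, key):
        if user not in self._allowed:
            raise PermissionError(f"{user} may not read {key}")
        return self._store.read(key)

proxy = AccessProxy(SensitiveStore(), allowed_users={"alice"})
```

Because every read goes through the proxy, the authorization rule lives in one place instead of being scattered across call sites.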

Adopt the Principle of Least Privilege (PoLP)

The principle of least privilege (PoLP) is a crucial part of secure design that should be reviewed regularly. It entails that a user (or a process) should only have the bare minimum privileges necessary to perform a task, and no more. Implementing PoLP can limit the potential damage caused by errors or security breaches. In the design phase, consider the roles and privileges each component needs and restrict excess rights proactively.
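One way to make PoLP concrete in code (the privilege vocabulary and names here are hypothetical) is to have each task declare the single privilege it needs and reject any caller who does not hold it:

```python
from functools import wraps

def requires(privilege):
    """Grant access only if the calling principal holds the one
    privilege this task needs -- nothing broader is assumed."""
    def deco(fn):
        @wraps(fn)
        def wrapper(principal, *args, **kwargs):
            if privilege not in principal.get("privileges", set()):
                raise PermissionError(f"missing privilege: {privilege}")
            return fn(principal, *args, **kwargs)
        return wrapper
    return deco

@requires("reports:read")
def view_report(principal, report_id):
    return f"report {report_id}"

# A reporting user carries only the single privilege the task needs.
report_viewer = {"name": "bob", "privileges": {"reports:read"}}
```

Granting the bare minimum this way limits the blast radius if any one principal is compromised.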

Implement Input Validation and Sanitization

Improperly validated and unsanitized user input is a standard gateway for attackers: injecting malicious code or data into your system can have catastrophic consequences, as in XSS and SQL injection attacks. Apply strict validation patterns to every input field in your application, and sanitize data to neutralize or remove any potentially harmful elements before processing it.
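A minimal sketch of both halves of this advice, using an allow-list regex for validation and a parameterized query so user data never becomes part of the SQL text (the schema and pattern here are illustrative only):

```python
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")  # strict allow-list

def validate_username(raw):
    """Reject anything outside the allow-list rather than trying to
    strip 'bad' characters out of arbitrary input."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def find_user(conn, raw_username):
    name = validate_username(raw_username)
    # Parameterized query: the driver keeps data out of the SQL text,
    # defeating injection even if validation were somehow bypassed.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    row = cur.fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
```

Validation and parameterization are complementary layers: either alone blocks the classic injection string, but defense in depth assumes one layer may fail.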

Use Secure Communication Protocols

Secure data transmission is critical to safeguard sensitive information from interception and unauthorized access. Use secure communication protocols like HTTPS and TLS to encrypt data during transit. You can implement secure design patterns like the ‘Decorator’ pattern to encapsulate secure communication logic within relevant modules.
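The Decorator idea can be sketched as a wrapper that refuses any non-TLS URL, so transport policy is enforced in one module (the client classes below are stand-ins, not a real HTTP library):

```python
class HttpClient:
    """Minimal stand-in for a transport; a real one would perform I/O."""
    def get(self, url):
        return f"GET {url}"

class TlsOnlyClient:
    """Decorator Pattern: wraps any client with the same interface and
    refuses plaintext URLs, keeping secure-transport logic in one place."""
    def __init__(self, inner):
        self._inner = inner

    def get(self, url):
        if not url.startswith("https://"):
            raise ValueError("plaintext transport refused: " + url)
        return self._inner.get(url)

client = TlsOnlyClient(HttpClient())
```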

Monitor and Update Dependencies Regularly

Stay vigilant about the security of third-party libraries and dependencies used in your software projects. Regularly monitor for security updates and patches and promptly address any known vulnerabilities. The ‘Observer’ pattern can assist in maintaining a dynamic and responsive approach to monitoring and updating dependencies.
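As a sketch of the Observer idea applied here (the class names are invented for illustration), a patch queue can subscribe to a feed of dependency advisories and react to each one as it arrives:

```python
class DependencyFeed:
    """Subject: publishes advisories about third-party packages."""
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)

    def publish(self, package, advisory):
        for obs in self._observers:
            obs.notify(package, advisory)

class PatchQueue:
    """Observer: records packages that need a prompt update."""
    def __init__(self):
        self.pending = []

    def notify(self, package, advisory):
        self.pending.append((package, advisory))

feed = DependencyFeed()
queue = PatchQueue()
feed.subscribe(queue)
feed.publish("log4j", "CVE-2021-44228")
```

New observers (ticketing, alerting, dashboards) can be added without touching the feed, which is what makes the monitoring approach dynamic and responsive.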

Adopt Secure Coding Standards

Secure coding standards provide developers with guidelines to prevent common programming errors that can lead to security vulnerabilities. Some reliable sources include the CERT Secure Coding Standards or OWASP Secure Coding Practices. Following these standards ensures the codebase maintains a strong foundation against security flaws and reinforces good coding practices.

Continuous Security Testing and Auditing

Designing and developing secure software is not enough; continuous security testing is key to maintaining robust security. Regularly conduct penetration testing, static code analysis, and security audits to identify potential vulnerabilities. Additionally, consider implementing security as part of your DevOps process (DevSecOps), integrating security checks into the continuous integration and delivery (CI/CD) pipeline.

Remember, the cost of ignoring secure design patterns can be immense, leading to financial losses and damage to an organization’s reputation and trust. As we continue to digitize and interconnect every aspect of our lives, secure design is more than a good practice – it is a fundamental necessity for software development in the 21st century.

Dotan Nahum is the Head of Developer-First Security at Check Point Software Technologies (https://spectralops.io). Dotan was co-founder and CEO of SpectralOps, which was acquired by Check Point Software. He is an experienced, hands-on technologist and a major open-source contributor, with deep expertise in React, Node.js, Go, React Native, distributed systems, and infrastructure (Hadoop, Spark, Docker, AWS, etc.).


The post 7 Steps to Implement Secure Design Patterns – A Robust Foundation for Software Security appeared first on Cybersecurity Insiders.

In the complex field of application security, the challenges surrounding open source software security require innovative solutions. In a recent interview with Varun Badhwar, Founder and CEO of Endor Labs, he provided detailed insights into these specific issues and how Endor Labs is positioning itself to tackle them head-on.

The Broken State of Application Security

Software developers currently spend more than half their time investigating an overwhelming number of security alerts and maintaining tools in CI/CD pipelines. Badhwar characterizes the problem:

“Application security is fundamentally broken today – engineering teams are constantly being asked to deploy numerous AppSec tools in the CI/CD pipeline, which creates substantial work for developers, slows down feature delivery, and adds friction.”

Endor Labs aims to mitigate this productivity tax by focusing on OSS security, with a goal to reduce 80% of vulnerability noise.

Open Source Security and Endor Labs’ Innovative Approach

Open source software (OSS) makes up a significant portion of modern application code, sometimes exceeding 90%. While fostering efficiency and collaboration, it also introduces vulnerabilities if not managed correctly.

Challenges in Open Source Security:

  1. Proliferation of OSS Components: With 80-90% of application code being borrowed from open source repositories, it’s essential to know what components are being used and how.
  2. False Positives: Traditional security tools generate an overwhelming number of false positives, creating a massive burden on developers.
  3. Incompleteness and Inaccuracy: Existing tools often lack insight into how open source code is being used, resulting in both noisy and incomplete risk assessments.
  4. Transitive Dependencies and Reputation Risks: Hidden vulnerabilities and dependencies are often overlooked, posing a latent threat to security.

Endor Labs’ Approach to Open Source Security

Endor Labs’ pioneering approach focuses on actual risks and utilization patterns within OSS. This empowers DevSecOps teams to prioritize risks, secure CI/CD pipelines, and meet compliance objectives like SBOMs. Their methodology includes:

  1. Intelligent Analysis: By understanding exactly how developers are using open source code, Endor Labs pinpoints the actual risks. 90% of code in modern applications is open source software, yet only 12% of that code is actually used within applications. Endor Labs replaces the existing breed of Software Composition Analysis (SCA) solutions that lack context on what parts of the code developers are actually using.
  2. Evidence-Driven Insights: Endor Labs employs an evidence-driven approach that assesses the true impact and risk of vulnerabilities based on how code is being used, rather than blanket evaluations.
  3. Eliminating Noise: By focusing on what matters, Endor Labs eliminates up to 80% of the noise associated with traditional tools, saving developers’ time.
  4. Tackling Hidden Risks: The solution addresses hidden dangers like vulnerabilities present in transitive dependencies, uncovering risks that might otherwise be missed. Endor Labs research reveals that 95% of vulnerabilities live in transitive dependencies, yet most organizations have no visibility into them.
  5. Holistic View of Risk: Endor Labs provides a comprehensive view of risk by evaluating not just the code but also the reputation and potential hazards associated with using specific open source components.
  6. Regulatory Compliance: With open source being labeled a national security issue, Endor Labs ensures that their approach aligns with regulatory requirements, including initiatives like Software Bill of Materials.

Endor Labs’ approach to open source and application security is not only revolutionary but necessary in today’s interconnected development lifecycle. By focusing on actual risks, reducing noise, and providing a comprehensive and intelligent analysis, they are shaping the future of how organizations manage and secure their applications and open source components.

Advice to Organizations and Developers

For organizations and developers, the future lies in consolidating the DevSecOps toolchain, simplifying tool deployments, and prioritizing the risks that matter. In the interview, Varun provided actionable guidance to both developers and organizations:

  1. Embrace Open Source While Ensuring Security: Utilize the benefits of open source software, but with a focus on security and compliance. Implement intelligent tools that understand how code is being used, thereby reducing noise and pinpointing real threats.
  2. Streamline Development Pipelines: Avoid overcomplication and duplication by consolidating the DevSecOps toolchain. Choose tools that simplify deployments, enforce consistent security policies, and enable building software that is “secure by default.”
  3. Foster Collaboration Between Teams: Work towards aligning engineering and security teams, viewing them as internal partners. Focus on real issues that matter most, creating a synergy that enhances overall productivity and security.
  4. Adhere to Regulatory Requirements: Stay abreast of regulatory standards such as Software Bill of Materials (SBOMs), recognizing the importance of transparency and compliance, especially as open source security continues to be a national concern.
  5. Adopt a ‘Trust but Verify’ Approach: Balance the use of open source with vigilant verification of its security. Encourage a development model that leverages OSS benefits without slowing down the development process, promoting a secure and innovative environment.

Endor Labs is at the forefront of reshaping how we approach application security. With a new $70 million round of funding and a clear mission to enable developers to be more productive without compromising on security, they are leading the way toward a more secure and efficient future in software development.

For more information on Endor Labs, visit https://www.endorlabs.com

The post Reducing the Productivity Tax in Open Source Software Security – A Deep Dive with Varun Badhwar of Endor Labs appeared first on Cybersecurity Insiders.

By Richard Bird, Chief Security Officer at Traceable

In the ever-evolving landscape of cybersecurity, it’s concerning to witness a persistent rise in breaches. The underlying issue? The consistent sidelining of API security. Despite the transformative role APIs play in modern digital infrastructures, they remain an underestimated component in many security strategies. This oversight isn’t merely a lapse; it’s a gaping vulnerability. Without vigilant monitoring and robust protection, APIs become inviting gateways for adversaries seeking unauthorized access.

In 2022, the digital realm witnessed a stark reminder of this vulnerability. Twitter, rebranded as X, succumbed to an API breach, leading to the exposure of data for 5.4 million users. This incident wasn’t an isolated one. Optus, a prominent telecom entity, encountered a ransomware attack initiated through an API vulnerability. The aftermath of their decision not to pay the ransom was the compromise of data for 10 million individuals, both past and present customers.

As we navigate the latter half of 2023, the horizon remains clouded with challenges. For a brighter, more secure future, it’s imperative that we introspect, drawing insights from past API breaches.

To chart a path forward, we must dissect recent API breaches, identifying critical areas of focus that will fortify businesses against future threats.

JumpCloud

Breach Overview: JumpCloud, an enterprise software company, faced a sophisticated attack from nation-state hackers. These adversaries exploited vulnerabilities to access the system, leading JumpCloud to reset customer API keys as a precautionary measure. The breach raised concerns about the security measures in place, especially when dealing with nation-state actors who possess advanced capabilities.

Lesson: Third-party solution providers can be a significant risk vector, especially when they’re targeted by highly skilled adversaries.

Prevention: It’s crucial to conduct thorough security assessments of third-party vendors and ensure they adhere to stringent security standards. Additionally, monitoring and real-time threat detection can help in early identification of such sophisticated attacks.

T-Mobile

Breach Overview: In January 2023, T-Mobile found itself at the center of a cybersecurity storm, disclosing a data breach that impacted approximately 37 million customers. A malicious actor exploited a specific API, gaining unauthorized access. Alarmingly, this breach came on the heels of a previous incident, despite T-Mobile’s substantial investments in bolstering their cybersecurity defenses. The intruder maintained access for over six weeks, starting from late November 2022, before the breach was detected and addressed.

Lesson: Even with recent security enhancements, organizations can remain vulnerable, especially when they lack comprehensive visibility and control over their API inventory.

Prevention: Organizations should implement continuous API monitoring, adopt zero-trust policies for sensitive data access, and employ advanced threat detection mechanisms that can discern between legitimate and malicious API traffic patterns.
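A toy illustration of discerning traffic patterns (a real system would model endpoints, payload shapes, and timing, not just request volume, and the baseline figure here is arbitrary): flag clients whose request counts dwarf an expected per-window baseline, the kind of sustained scraping profile a weeks-long API breach exhibits.

```python
from collections import Counter

def flag_scrapers(requests, baseline_per_client=100):
    """Count requests per client within a window and flag anyone far
    above the expected baseline as a scraping candidate."""
    counts = Counter(client for client, _endpoint in requests)
    return {c for c, n in counts.items() if n > baseline_per_client}

traffic = [("app-1", "/v1/profile")] * 40 + [("scraper", "/v1/profile")] * 500
```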

Cisco

Breach Overview: Cisco, a tech giant, identified a critical vulnerability in its SD-WAN vManage software. This vulnerability allowed unauthorized API access, enabling attackers to send crafted API requests, potentially retrieving or manipulating information. The issue was not just about unauthorized access but also the potential manipulation of network configurations.

Lesson: Even industry leaders can have lapses, emphasizing the importance of continuous vigilance.

Prevention: Strict access controls for APIs are essential. Organizations should also invest in automated vulnerability scanning tools and ensure that security patches are applied promptly.

Razer

Breach Overview: Razer, a renowned tech company, faced two significant security incidents. The recent one involved a potential data leak after claims of stolen source code and encryption keys. Previously, in 2020, a misconfiguration by an IT vendor left sensitive data exposed, highlighting the risks associated with third-party integrations.

Lesson: Gaps in oversight and third-party integrations can introduce vulnerabilities, making a robust, continuous security review mechanism essential.

Prevention: Regular security audits and third-party risk assessments are crucial. All configurations, especially those by external parties, should undergo rigorous security checks.

QuickBlox

Breach Overview: QuickBlox, a platform offering chat and video calling solutions, had critical vulnerabilities in its software development kit and APIs. These vulnerabilities could allow attackers to access and steal the personal data of millions of users. The breach underscored the challenges of securing modern software architectures, especially when they are widely used across industries.

Lesson: As software architectures evolve, they can introduce new vulnerabilities if not designed with a security-first mindset.

Prevention: A security-first approach in software development is essential. Regular updates, patches, and security training for developers can help in minimizing such vulnerabilities.

The Bottom Line? Holistic Data Security is Non-Negotiable

APIs are the universal attack vector and demand our undivided attention. Their integral role in bridging various data layers makes them both invaluable and, if overlooked, perilous. A cybersecurity strategy that sidelines API security is akin to building a fortress but leaving the main gate unguarded. As we architect our future security blueprints, it’s essential to adopt a holistic approach, encompassing every facet of our digital infrastructure. And while innovation propels us forward, the wisdom gleaned from past breaches must serve as our guiding beacon, ensuring that history’s pitfalls aren’t repeated.

The post API Breaches Are Rising: To Secure the Future, We Need to Learn from the Past appeared first on Cybersecurity Insiders.

By Doug Dooley, COO, Data Theorem

The rise of cloud-native applications has revolutionized the way businesses operate, enabling them to scale rapidly and stay agile in a fast-paced digital environment. However, the increasing reliance on Application Programming Interfaces (APIs) to connect and share data between disparate systems has also brought new risks and vulnerabilities to the forefront. With every new API integration, the attack surface of an organization grows, creating new opportunities for attackers to exploit vulnerabilities and gain access to sensitive data.

This article will attempt to shed some more light on:

  • API Attack Surfaces
  • Shadow APIs
  • Zombie APIs
  • API Protection

APIs have become the backbone of modern digital ecosystems, allowing organizations to streamline operations, automate processes, and provide seamless user experiences. They are the data transporters for all cloud-based applications and services. APIs act as intermediaries between applications, enabling them to communicate with each other and exchange data. They also provide access to critical services and functionality in your cloud-based applications. If an attacker gains access to your APIs, they can easily bypass security measures and gain access to your cloud-based applications, which can result in data breaches, financial losses, and reputational damage. For hackers looking to have the best return on investment (ROI) of their time and energy for exploiting and exfiltrating data, APIs are one of the best targets available today.

It’s clear these same APIs that enable innovation, revenue, and profits also create new avenues for attackers to achieve successful data breaches for their own financial gains. As the number of APIs in use grows, so does the attack surface of an organization. According to a recent industry study by Enterprise Strategy Group (ESG) titled “Securing the API Attack Surface”, the majority (75%) of organizations typically change or update their APIs on a daily or weekly basis, creating a significant challenge for protecting the dynamic nature of API attack surfaces.

API security is critical because APIs are often the weakest link in the security chain of modern applications. Developers often prioritize speed, features, functionality, and ease of use over security, which can leave APIs vulnerable to attacks. Additionally, cloud-native APIs are often exposed directly to the internet, making them accessible to anyone. This can make it easier for hackers to exploit vulnerabilities in your APIs and gain access to your cloud-based applications. As evidence, the same ESG study also revealed that nearly all (92%) organizations have experienced at least one security incident related to insecure APIs in the past 12 months, while the majority (57%) have experienced multiple such incidents during the past year.

One of the biggest challenges in protecting an API environment is the proliferation of Shadow APIs. Shadow APIs are APIs that are used by developers or business units without the knowledge or approval of IT security teams. These APIs can be created by anyone with the technical knowledge to build them, and because they are not managed by the IT department they are often not subject to the same security controls and governance policies as officially sanctioned APIs.

Shadow APIs lack clarity of priority, ownership, and security policy controls. They often have a business purpose, such as supporting features in mobile and web applications, but no one is sure whether these APIs are running in production or non-production, who the clear owners are, and which security policy controls should be applied to protect them from attack. For example, a developer may create an API to streamline a workflow, or a business unit may create an API to integrate a third-party application. However, when these APIs are not properly vetted, tested, and secured, they can pose a significant risk to the organization. Shadow APIs can introduce vulnerabilities, such as unsecured endpoints, weak authentication mechanisms, and insufficient access controls, which attackers can exploit to gain unauthorized access to sensitive data.

Another challenge facing organizations is the emergence of Zombie APIs. Zombie APIs are APIs that are no longer in use but are still active on the network and running in the cloud. These APIs can be left over from legacy systems, previous versions of the API, or retired applications; or they may have been created by developers who have since left the organization. Zombie APIs can be particularly dangerous because they may not be monitored or secured, making them vulnerable to exploitation.

While Zombie APIs do not have a clear business purpose, they consume resources, can add an expense for organizations, and create additional attack surface. For example, a Zombie API can be an older version of an API that is no longer connected to its original application but left in place for potential backward compatibility reasons. However, over time that legacy API is forgotten, yet the underlying resources (compute, storage, databases) that fuel the API’s operations are left running without proper oversight, maintenance, and security hardening. Attackers can use these APIs to gain unauthorized access to sensitive data, bypass security controls, and launch lateral movement attacks against other systems on the network. Zombie APIs can also be used to launch Server-Side Request Forgery (SSRF) or remote code execution (RCE) attacks, which can bring down entire systems and cause significant damage to an organization’s reputation, as seen with the Capital One breach and Log4Shell global exploits, respectively.

To mitigate the risks posed by Shadow and Zombie APIs, organizations must take a proactive approach to API management and security. This includes developing a comprehensive API management strategy that includes security controls, active monitoring, and reporting capabilities.

One key aspect of API management is the establishment of a centralized API inventory catalog. This catalog should include all approved APIs, along with information about their functionality, usage, and security controls. This can help IT and security teams identify Shadow APIs and Zombie APIs, as well as track and monitor API usage to ensure compliance with governance policies.
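A catalog also enables a simple diff against observed traffic. In this sketch (endpoint names invented for illustration), endpoints seen in gateway logs but absent from the approved catalog are Shadow API candidates, while cataloged endpoints with no observed traffic are Zombie candidates:

```python
def find_shadow_apis(catalog, observed):
    """Compare the approved API catalog against endpoints actually seen
    in traffic logs; each discrepancy warrants investigation."""
    catalog, observed = set(catalog), set(observed)
    return {
        "shadow": observed - catalog,   # active but never approved
        "zombie": catalog - observed,   # approved but apparently unused
    }

catalog = {"/v1/users", "/v1/orders", "/v1/legacy-export"}
observed = {"/v1/users", "/v1/orders", "/v1/quick-report"}
```

In practice the "observed" side would be fed continuously from gateway or traffic telemetry, so the diff stays current as the API estate changes.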

Another important aspect of API management is the implementation of security controls. These may include encryption, access controls, authentication mechanisms, and threat detection and response capabilities. Security controls should be implemented at all layers of the API stack, from the application layer to the transport and infrastructure service layers, to ensure that APIs are protected against a wide range of attacks.

In addition, organizations should also implement scanning, observability, dynamic analysis and reporting capabilities to detect and respond to API-related threats. This may include real-time scanning of API usage, logging and run-time analysis of API activity, and alerting and reporting capabilities to notify IT and security teams of potential threats.

When it comes to securing APIs and reducing attack surfaces, Cloud Native Application Protection Platform (CNAPP) is a newer security framework that provides security specifically for cloud-native applications by protecting them against various API attack threats. CNAPPs do three primary jobs: (1) artifact scanning in pre-production; (2) cloud configuration and posture management scanning; (3) run-time observability and dynamic analysis of applications and APIs, especially in production environments. With CNAPP scanning pre-production and production environments, an inventory list of all APIs and software assets is generated. If the dynamically generated inventory of cloud assets has APIs connected to them, Shadow or Zombie APIs can be discovered. As a result, CNAPPs help to identify these dangerous classes of APIs and help to add layers of protection to prevent them from causing harm and exposure from vulnerable API attack surfaces.

Ultimately, the key to managing the risks posed by expanding API attack surfaces with Shadow and Zombie APIs is to take a proactive approach to API management and security. When it comes to cloud security, CNAPP is well suited for organizations with cloud-native applications, microservices, and APIs that require application-level security. API security is a must-have when building out cloud-native applications, and CNAPP offers an effective approach for protecting expanding API attack surfaces, including those caused by Shadow and Zombie APIs.

The post Shadow APIs and Zombie APIs are Common in Every Organizations’ Growing API Attack Surface appeared first on Cybersecurity Insiders.

In an era where digital transformation accelerates and cyber threats proliferate rapidly, the role of effective threat modeling in software development is becoming more critical. Traditional methods of threat modeling often fall short, as they are often labor-intensive, inconsistent, and challenging to scale across large or dynamic application portfolios. Recognizing this gap, IriusRisk set out to redefine the threat modeling landscape, pioneering an automated threat modeling solution that enables organizations to put secure design directly in the hands of the engineers building the software.

Understanding Threat Modeling

Threat modeling, a proactive approach to identifying, managing, and mitigating potential security threats at design time, plays a crucial role in the cybersecurity lifecycle of applications. It involves predicting attacker behavior, identifying potential security vulnerabilities in a system, and defining effective countermeasures. From sophisticated cyber-attacks to simple configuration errors, threat modeling seeks to preemptively address a broad range of potential threats to applications.

The Traditional Approach to Threat Modeling

Traditionally, threat modeling has been a manual, expertise-heavy process. Techniques like STRIDE, PASTA, or Trike have been used to predict threat scenarios. However, these methods often require significant investment in skilled talent, are time-consuming, and can lead to inconsistencies in the threat model output. This manual process struggles to scale with the increasing complexity of application portfolios and the speed of modern development cycles, creating a pressing need for a more efficient solution.

Enter IriusRisk: Revolutionizing Threat Modeling

This is where IriusRisk enters the scene. IriusRisk’s platform is designed to overcome the shortcomings of manual threat modeling. It combines an inference-based rules engine with a knowledge base of security design patterns and countermeasures. As IriusRisk Co-Founder and CEO, Stephen de Vries puts it, “Our engine uses rules to identify architectural patterns, and then applies the corresponding risk patterns to very quickly produce a repeatable and consistent threat model of a given diagram.”

The Mechanics of IriusRisk’s Threat Modeling Platform

The IriusRisk platform embraces a design-first approach, starting with the ingestion of an application’s design, which can be manually added or imported from various architectural design tools such as Visio, Terraform or Lucid Charts. Once the design is ingested, the platform’s rule-based engine applies a set of predefined rules corresponding to various components and data flows within the system. Based on this, a comprehensive threat model is automatically generated, detailing potential security threats and suggesting appropriate countermeasures, tailored to the system’s unique design and the organization’s requirements for security.
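The flavor of such a rules engine can be sketched in a few lines; the rule schema, flow fields, and threat names below are invented for illustration and are not IriusRisk’s actual format:

```python
# Each rule matches an architectural pattern in the ingested design and
# attaches a threat plus a suggested countermeasure.
RULES = [
    {
        "when": lambda flow: flow["dest_type"] == "database",
        "threat": "SQL injection via untrusted input",
        "countermeasure": "parameterized queries + input validation",
    },
    {
        "when": lambda flow: flow["crosses_trust_boundary"],
        "threat": "data interception in transit",
        "countermeasure": "TLS on all boundary-crossing flows",
    },
]

def threat_model(flows):
    """Apply every rule to every data flow: the same diagram in always
    yields the same threat model out, which is the repeatability a
    rules engine buys over ad-hoc manual analysis."""
    findings = []
    for flow in flows:
        for rule in RULES:
            if rule["when"](flow):
                findings.append(
                    (flow["name"], rule["threat"], rule["countermeasure"])
                )
    return findings

diagram = [
    {"name": "web->db", "dest_type": "database", "crosses_trust_boundary": False},
    {"name": "mobile->api", "dest_type": "service", "crosses_trust_boundary": True},
]
```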

IriusRisk and DevSecOps: A Seamless Integration

Integration into DevSecOps practices is a critical aspect of the IriusRisk platform. The platform aligns threat modeling with the software development lifecycle (SDLC), enabling developers to identify and rectify potential threats early in the development process. Moreover, it can be seamlessly incorporated into Continuous Integration/Continuous Deployment (CI/CD) pipelines and interacts efficiently with other development and security tools, thereby reinforcing a proactive and holistic security culture.

IriusRisk’s innovation hasn’t gone unnoticed by industry experts. The platform has received high praise for its approach to automated threat modeling, its ability to scale, and its seamless integration into modern development workflows.

Six Essential Best Practices for Threat Modeling

Below are six best practices that will fortify your threat modeling process and enable a robust, resilient application security posture.

  1. Embrace Automation: Leverage automation to streamline and standardize threat modeling. It minimizes human error, saves time, and optimizes resource allocation, facilitating consistent security practices across projects.
  2. Embed Security in the Development Lifecycle: Incorporate threat modeling into the early stages of the software development lifecycle. This approach ensures potential security threats are identified and addressed from the get-go, significantly reducing the cost and effort of mitigating them later.
  3. Continuous Update and Review: Just as software development is an iterative process, so too should be threat modeling. Review and update your models regularly, particularly when significant changes are made to the system, to ensure continuous security coverage.
  4. Empower Developers with Security Knowledge: Providing developers with the tools and knowledge to identify and mitigate security threats fosters a proactive security culture and reduces the burden on security teams.
  5. Prioritize Threats Based on Real-world Impact: All threats are not created equal. Prioritize identified threats based on their potential impact and the likelihood of exploitation to allocate resources effectively.
  6. Use Standardized Frameworks and Libraries: Adopting standardized frameworks and libraries such as STRIDE, PASTA or VAST offers a structured approach to identifying, classifying, and addressing threats. These frameworks have been tested and refined by the cybersecurity community and are regularly updated to address evolving threats. Their widespread use also offers the advantage of community support and shared learning.

In conclusion, threat modeling is a fundamental cornerstone of a comprehensive cybersecurity strategy. In our evolving digital landscape, embracing automation, such as that offered by IriusRisk, becomes pivotal to identify, address, and mitigate potential threats proactively. As the speed of software delivery is ever more important, an automated, continuous threat modeling process is no longer a luxury but a necessity for better protection and sustainable cybersecurity resilience.

The post Ensuring Cyber Resilience: The Critical Role of Threat Modeling in Software Security appeared first on Cybersecurity Insiders.