The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Analyzing an organization’s security posture through the prism of a potential intruder’s tactics, techniques, and procedures (TTPs) provides actionable insights into the exploitable attack surface. This visibility is key to stepping up the defenses of the entire digital ecosystem or its layers so that the chance of a data breach is reduced to a minimum. Penetration testing (pentesting) is one of the fundamental mechanisms in this area.

The need to probe a network's architecture for weak links through offensive methods emerged alongside the "perimeter security" philosophy. While pentesting has largely bridged that gap, the effectiveness of this approach is often hampered by a crude understanding of its goals and of how ethical hackers work, which skews companies' expectations and leads to frustration down the line.

The following considerations will give you the big picture in terms of prerequisites for mounting a simulated cyber incursion that yields positive security dividends rather than being a waste of time and resources.

Eliminating confusion with the terminology

Some corporate security teams may find it hard to distinguish a penetration test from related approaches such as red teaming, vulnerability testing, bug bounty programs, as well as emerging breach and attack simulation (BAS) services. They do overlap in quite a few ways, but each has its unique hallmarks.

Essentially, a pentest is a manual process that boils down to mimicking an attacker’s actions. Its purpose is to find the shortest and most effective way into a target network through the perimeter and different tiers of the internal infrastructure. The outcome is a snapshot of the system’s protections at a specific point in time.

In contrast to this, red teaming focuses on exploiting a segment of a network or an information / operational technology (IT/OT) system over an extended period. It is performed more covertly, which is exactly how things go during real-world compromises. This method is an extremely important prerequisite for maintaining OT cybersecurity, an emerging area geared toward safeguarding industrial control systems (ICS) at the core of critical infrastructure entities.

Vulnerability testing, in turn, aims to pinpoint flaws in software and helps understand how to address them. Bug bounty programs are usually limited to mobile or web applications and may or may not match a real intruder’s behavior model. In addition, the objective of a bug bounty hunter is to find a vulnerability and submit a report as quickly as possible to get a reward rather than investigating the problem in depth.

BAS is the newest technique on the list. It follows a “scan, exploit, and repeat” logic and pushes a deeper automation agenda, relying on tools that execute the testing with little to no human involvement. These projects are continuous by nature and generate results dynamically as changes occur across the network.

By and large, there are two things that set pentesting apart from adjacent security activities. Firstly, it is done by humans and hinges on manual offensive tactics, for the most part. Secondly, it always presupposes a comprehensive assessment of the discovered security imperfections and prioritization of the fixes based on how critical the vulnerable infrastructure components are.

Choosing a penetration testing team worth its salt

Let’s zoom into what factors to consider when approaching companies in this area, how to find professionals amid eye-catching marketing claims, and what pitfalls this process may entail. As a rule, the following criteria are the name of the game:

  • Background and expertise. The portfolio of completed projects speaks volumes about ethical hackers’ qualifications. Pay attention to customer feedback and whether the team has a track record of running pentests for similar-sized companies that represent the same industry as yours.
  • Established procedures. Learn how your data will be transmitted, stored, and for how long it will be retained. Also, find out how detailed the pentest report is and whether it covers a sufficient scope of vulnerability information along with severity scores and remediation steps for you to draw the right conclusions. A sample report can give you a better idea of how comprehensive the feedback and takeaways are going to be.
  • Toolkit. Make sure the team leverages a broad spectrum of cross-platform penetration testing software that spans network protocol analyzers, password-cracking solutions, vulnerability scanners, and forensic analysis tools. A few examples are Wireshark, Burp Suite, John the Ripper, and Metasploit.
  • Awards and certifications. Some of the industry certifications recognized across the board include Certified Ethical Hacker (CEH), Certified Mobile and Web Application Penetration Tester (CMWAPT), GIAC Certified Penetration Tester (GPEN), and Offensive Security Certified Professional (OSCP).

The caveat is that some of these factors are difficult to formalize. Reputation isn’t an exact science, nor is expertise based on past projects. Certifications alone don’t mean a lot without the context of a skill set honed in real-life security audits. Furthermore, it’s challenging to gauge someone’s proficiency in using popular pentesting tools. When combined, though, the above criteria can point you in the right direction with the choice.

The “in-house vs third-party” dilemma

Can an organization conduct penetration tests on its own or rely solely on the services of a third-party organization? The key problem with pentests performed by a company’s security crew is that their view of the supervised infrastructure might be blurred. This is a side effect of being engaged in the same routine tasks for a long time. The cybersecurity talent gap is another stumbling block as some organizations simply lack qualified specialists capable of doing penetration tests efficiently.

To get around these obstacles, it is recommended to involve external pentesters periodically. In addition to ensuring an unbiased assessment and leaving no room for conflict of interest, third-party professionals are often better equipped for penetration testing because that’s their main focus. Employees can play a role in this process by collaborating with the contractors, which will extend their security horizons and polish their skills going forward.

Penetration testing: how long and how often?

The duration of a pentest usually ranges from three weeks to a month, depending on the objectives and size of the target network. Even if the attack surface is relatively small, it may be necessary to spend extra time on a thorough analysis of potential entry points.

Oddly enough, the process of preparing a contract between a customer and a security services provider can be more time-consuming than the pentest itself. In practice, various approvals can last from two to four months. The larger the client company, the more bureaucratic hurdles need to be tackled. When working with startups, the project approval stage tends to be much shorter.

Ideally, penetration tests should be conducted whenever the target application undergoes updates or a significant change is introduced to the IT environment. When it comes to a broad assessment of a company’s security posture, continuous pentesting is redundant – it typically suffices to perform such analysis two or three times a year.

Pentest report, a goldmine of data for timely decisions

The takeaways from a penetration test should include not only the list of vulnerabilities and misconfigurations found in the system but also recommendations on the ways to fix them. Contrary to some companies’ expectations, these tend to be fairly general tips since a detailed roadmap for resolving all the problems requires a deeper dive into the customer’s business model and internal procedures, which is rarely the case.

The executive summary outlines the scope of testing, discovered risks, and potential business impact. Because this part is primarily geared toward management and stakeholders, it has to be easy for non-technical folks to comprehend. This is a foundation for making informed strategic decisions quickly enough to close security gaps before attackers get a chance to exploit them.

The description of each vulnerability unearthed during the exercise must be coupled with an evaluation of its likelihood and potential impact according to a severity scoring system such as CVSS. Most importantly, a quality report has to provide a clear-cut answer to the question “What to do?”, not just “What’s not right?”. This translates to remediation advice where multiple hands-on options are suggested to handle a specific security flaw. Unlike the executive summary, this part is intended for IT people within the organization, so it gets into a good deal of technical detail.
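
To make the prioritization side concrete, here is a minimal Python sketch of how report findings might be triaged by CVSS v3.x base score before fixes are scheduled; the findings, scores, and suggested fixes below are invented for illustration.

```python
# Hypothetical pentest findings with CVSS v3.x base scores (illustrative only).
findings = [
    {"title": "SQL injection in login form", "cvss": 9.8, "fix": "Use parameterized queries"},
    {"title": "Legacy TLS 1.0 enabled", "cvss": 6.5, "fix": "Disable outdated protocols"},
    {"title": "Verbose server banner", "cvss": 3.1, "fix": "Suppress version headers"},
]

def severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "Informational"

# Triage: tackle the most severe issues on the most critical assets first.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{severity(f["cvss"]):<8} {f["cvss"]:>4}  {f["title"]} -> {f["fix"]}')
```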

The bottom line

Ethical hackers follow the path of a potential intruder – from the perimeter entry point to specific assets within the digital infrastructure. Not only does this strategy unveil security gaps, but it also shines a light on the ways to resolve them.

Unfortunately, few organizations take this route to assess their security postures proactively. Most do it for the sake of a checklist, often to comply with regulatory requirements. Some don’t bother until a real-world breach happens. This mindset needs to change.

Of course, there are alternative methods to keep abreast of a network's security condition. Security Information and Event Management (SIEM), Security Orchestration, Automation, and Response (SOAR), and vulnerability scanners are a few examples. The industry is also increasingly embracing AI and machine learning models to enhance the accuracy of threat detection and analysis.

Still, penetration testing holds its ground in the cybersecurity ecosystem. That's because no automated tool can think like an attacker, and the human touch makes any protection vector more meaningful to corporate decision makers.

The post Looking at a penetration test through the eyes of a target appeared first on Cybersecurity Insiders.

AT&T Cybersecurity is committed to providing thought leadership to help you strategically plan for an evolving cybersecurity landscape. Our 2023 AT&T Cybersecurity Insights™ Report: Edge Ecosystem is now available. It describes the common characteristics of an edge computing environment, the top use cases and security trends, and key recommendations for strategic planning.

Get your free copy now.

This is the 12th edition of our vendor-neutral and forward-looking report. During the last four years, the annual AT&T Cybersecurity Insights Report has focused on edge migration, and past reports have documented that journey.

This year’s report reveals how the edge ecosystem is maturing along with our guidance on adapting and managing this new era of computing.

Watch the webcast to hear more about our findings.

The robust quantitative field survey reached 1,418 professionals in security, IT, application development, and line of business from around the world. The qualitative research tapped subject matter experts across the cybersecurity industry.

At the onset of our research, we set out to find the following:

  1. Momentum of edge computing in the market.
  2. Collaboration approaches to connecting and securing the edge ecosystem.
  3. Perceived risk and benefit of the common use cases in each industry surveyed.

The results focus on common edge use cases in seven vertical industries – healthcare, retail, finance, manufacturing, energy and utilities, transportation, and U.S. SLED – and deliver actionable advice for securing and connecting an edge ecosystem, including working with external trusted advisors. Finally, the report examines cybersecurity and the broader edge ecosystem of networking, service providers, and top use cases.

As with any piece of primary research, we found some surprising and some not-so-surprising answers to these three broad questions.

Edge computing has expanded, creating a new ecosystem

Because our survey focused on leaders who are using edge to solve business problems, the research revealed a set of common characteristics that respondents agreed define edge computing.

  • A distributed model of management, intelligence, and networks.
  • Applications, workloads, and hosting closer to users and digital assets that are generating or consuming the data, which can be on-premises and/or in the cloud.
  • Software-defined (which can mean the dominant use of private, public, or hybrid cloud environments; however, this does not rule out on-premises environments).

Understanding these common characteristics is essential as we move to an even further democratized version of computing, with an abundance of connected IoT devices that will process and deliver data with velocity, volume, and variety unlike anything we've previously seen.

Business is embracing the value of edge deployments

The primary use cases of the industries we surveyed evolved from the previous year, as the table below illustrates. This indicates that businesses are seeing positive outcomes and continue to invest in new models enabled by edge computing.

Industry | 2022 Primary Use Case | 2023 Primary Use Case
--- | --- | ---
Healthcare | Consumer Virtual Care | Tele-emergency Medical Services
Manufacturing | Video-based Quality Inspection | Smart Warehousing
Retail | Loss Prevention | Real-time Inventory Management
Energy and Utilities | Remote Control Operations | Intelligent Grid Management
Finance | Concierge Services | Real-time Fraud Protection
Transportation | n/a | Fleet Tracking
U.S. SLED | Public Safety and Enforcement | Building Management

A full 57% of survey respondents are in proof of concept, partial, or full implementation phases with their edge computing use cases.

One of the most pleasantly surprising findings is how organizations are investing in security for edge. We asked survey participants how they were allocating their budgets for the primary edge use cases across four areas – strategy and planning, network, security, and applications.

The results show that security is clearly an integral part of edge computing. This balanced investment strategy shows that the much-needed security for ephemeral edge applications is part of the broader plan.

Edge project budgets are notably nearly balanced across four key areas:

  • Network – 30%
  • Overall strategy and planning – 23%
  • Security – 22%
  • Applications – 22%

A robust partner ecosystem supports edge complexity

Across all industries, external trusted advisors are being called upon as critical extensions of the team. During the edge project planning phase, 64% are using an external partner. During the production phase, that same number increases to 71%. These findings demonstrate that organizations are seeking help because the complexity of edge demands more than a do-it-yourself approach.

A surprise finding comes in the form of the changing attack surface and changing attack sophistication. Our data shows that DDoS (Distributed Denial of Service) attacks are now the top concern (when examining the data in the aggregate vs. by industry). Surprisingly, ransomware dropped to eighth place out of the eight attack types surveyed.

The qualitative analysis points to an abundance of organizational spending on ransomware prevention over the past 24 months and enthusiasm for ransomware containment. However, ransomware criminals and their attacks are relentless. Additional qualitative analysis suggests cyber adversaries may be cycling different types of attacks. This is a worthwhile issue to discuss in your organization. What types of attacks concern your team the most?

Building resilience is critical for successful edge integration

Resilience is about adapting quickly to a changing situation. Together, resilience and security address risk, support business needs, and drive operational efficiency at each stage of the journey. As use cases evolve, resilience gains importance, and the competitive advantage that edge applications provide can be fine-tuned. Future evolution will involve more IoT devices, faster connectivity and networks, and holistic security tailored to hybrid environments.

Our research finds that organizations are fortifying and future-proofing their edge architectures and adding cyber resilience as a core pillar. Empirically, our research shows that as the number of edge use cases in production grows, there is a strong need and desire to increase protection for endpoints and data. For example, the use of endpoint detection and response grows by 12% as use cases go from ideation to full implementation.

Maturing in the understanding of edge use cases, and of what it takes to protect them actively, is a journey that every organization will undertake.

Key takeaways

You may not realize you’ve already encountered edge computing – whether it is through a tele-medicine experience, finding available parking places in a public structure, or working in a smart building. Edge is bringing us to a digital-first world, rich with new and exciting possibilities.

By embracing edge computing, you'll help your organization gain important, and often competitive, business advantages. This report is designed to help you start and further the conversation. Use it to develop a strategic plan that includes these key development areas.

  • Start developing your edge computing profile. Work with internal line-of-business teams to understand use cases. Include key business partners and vendors to identify initiatives that impact security.
  • Develop an investment strategy. Bundle security investments with use case development. Evaluate investment allocation. The increased business opportunity of edge use cases should include a security budget.
  • Align resources with emerging security priorities. Use collaboration to expand expertise and lower resource costs. Consider creating edge computing use case experts who help the security team stay on top of emerging use cases.
  • Prepare for ongoing, dynamic response. Edge use cases rapidly evolve once they show value. Use cases require high-speed, low-latency networks as network functions and cybersecurity controls converge.

A special thanks to our contributors for their continued guidance on this report

A report of this scope and magnitude comes together through a collaborative effort of leaders in the cybersecurity market.

Thank you to our 2023 AT&T Cybersecurity Insights Report contributors!

To help start or advance the conversation about edge computing in your organization, use the infographic below as a guide.

[Infographic: 2023 AT&T Cybersecurity Insights Report]

The post Securing the Edge Ecosystem Global Research released – Complimentary report available appeared first on Cybersecurity Insiders.


The California Privacy Rights Act (CPRA) was passed in November 2020. It amends the 2018 California Consumer Privacy Act (CCPA) introduced in response to rising consumer data privacy concerns. It has significantly impacted data collection and handling practices, giving consumers more control over how businesses handle their data.

Companies were given until January 1st, 2023, to achieve compliance. This article will discuss the key requirements of the CPRA and provide practical tips for companies to implement the necessary changes to ensure compliance.

What is the California Privacy Rights Act (CPRA)?

The CPRA is California’s most technical privacy law to date. It resembles the EU’s older and more popular General Data Protection Regulation (GDPR). The main difference is that the GDPR framework focuses on legal bases for data processing. On the other hand, the CPRA relies on opt-out consent.

The CPRA builds on the six original consumer rights introduced by the CCPA in 2018. As a reminder, the CCPA rights are:

  • The right to know what personal information is being collected by a business
  • The right to delete that personal information
  • The right to opt in or opt out of the sale of personal information
  • The right of non-discrimination for using these rights
  • The right to initiate a private cause of action – limited to data breaches

CPRA created two additional rights:

  • The right to correct inaccurate personal information
  • The right to limit the use and disclosure of sensitive information

The CPRA also introduced the California Privacy Protection Agency (CPPA), which is the privacy enforcement agency for the new regulations.

How does CPRA impact business operations?

Data collection is a nearly universal activity for companies in the 21st century. Significant changes to data collection and handling practices can cause slight disruptions in operations. For example, the new regulations force businesses to re-evaluate their service provider and contractor relationships. Service providers and contractors, regardless of location, must abide by the same laws when dealing with businesses in California.

Since enforcement action is possible even when there has not been a breach, businesses must quickly understand their CPRA obligations and implement reasonable security procedures.

How much does non-compliance cost?

Non-compliance with CPRA regulations results in financial penalties, depending on the nature of the offenses.

  • The penalty for a mistake is $2,000 per offense
  • The penalty for a mistake resulting from negligence is $2,500 per offense
  • The penalty for knowingly disregarding regulations is $7,500 per offense

Since the penalties are on a “per offense” basis, costs of non-compliance can easily reach millions, particularly in the event of a data breach.
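
To see how quickly "per offense" penalties compound, consider the following deliberately simple Python illustration; the number of affected records is invented.

```python
# CPRA penalties apply per offense, so a breach touching many consumers'
# records compounds quickly. The record count below is hypothetical.
PENALTY_MISTAKE = 2_000      # per offense
PENALTY_NEGLIGENT = 2_500    # per offense, resulting from negligence
PENALTY_INTENTIONAL = 7_500  # per offense, knowing disregard

affected_records = 10_000    # invented figure for illustration

print(f"Negligent:   ${affected_records * PENALTY_NEGLIGENT:,}")    # $25,000,000
print(f"Intentional: ${affected_records * PENALTY_INTENTIONAL:,}")  # $75,000,000
```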

7 Step CPRA checklist for compliance

Process the minimal amount of personal information

The CPRA introduces the data minimization principle. Businesses should only obtain the personal information they need for processing purposes. If you collect more data than you need, it's time to update your collection practices. The collected data must be stored securely. A reputable cloud storage solution is an excellent way to keep consumer data.

Update your privacy policy and notices

With the eight consumer rights now established by the CCPA and CPRA, you must change your privacy policy to abide by these regulations. Adequate policy notices for consumers should accompany the policy changes. You must provide the notices at the starting point of data collection. To re-purpose any already-collected data, you must first get consent.

Establish a data retention policy

To comply with the retention requirements of the CPRA, you must delete the personal data you no longer need. Establishing a data retention policy is a great first step towards compliance. The policy should include the categories of collected information, their purpose, and the time you plan to store it before deletion.
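
A retention schedule can start as something as simple as a machine-readable policy. The following Python sketch is a minimal illustration with hypothetical categories and retention periods; it is not legal guidance.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical retention schedule: category -> (purpose, retention in days).
RETENTION_POLICY = {
    "order_history":   ("fulfillment and returns", 365 * 2),
    "support_tickets": ("customer service", 365),
    "marketing_prefs": ("email campaigns", 180),
}

def is_expired(category: str, collected_on: date, today: Optional[date] = None) -> bool:
    """Return True if a record has outlived its retention period and is due for deletion."""
    today = today or date.today()
    _, retention_days = RETENTION_POLICY[category]
    return today > collected_on + timedelta(days=retention_days)

# A support ticket collected 400 days ago is past its 365-day window.
print(is_expired("support_tickets", date.today() - timedelta(days=400)))  # True
```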

Review contracts with service providers

Service providers must abide by the same regulations. That’s why any third-party contracts must include adequate measures for handling data to ensure its protection and security. Service providers must notify you if they can no longer comply with your requirements.

Take actions to prevent a data breach

Compliance with regulations is only the first step in consumer data protection. You should also take steps to improve your cyber resilience and minimize the chances of a data breach. Ensure employees use modern tools such as password managers to protect their online accounts. Train employees to recognize common scams attackers use to gain access.

You should also consider regular risk assessments and cybersecurity audits to identify system vulnerabilities. Knowing your risks will help you make the necessary changes to protect your data.

Make it easy for customers to opt out or limit data sharing

The CPRA requires businesses to provide consumers with links where they can change how they wish their data to be handled. Consumers must be able to opt out of the sale or sharing of their data. Additionally, consumers have the right to limit the use of sensitive information such as geolocation, health data, document numbers, etc.

Don’t retaliate against customers who exercise their rights

Retaliation against customers who exercise their CPRA rights clearly violates the new regulations. Customers have rights, and you must comply with them to avoid financial punishment.

Final thoughts

California businesses must comply with CPRA regulations. We also see other states implementing the same or similar data protection frameworks. Even if you’re not based in California, understanding these new laws and how they impact your business operations will help you start implementing positive changes.

The post The CPRA compliance checklist every business should follow in 2023 appeared first on Cybersecurity Insiders.


Introduction

Artificial Intelligence (AI) is the mimicry of certain aspects of human behaviour, such as language processing and decision-making, using Large Language Models (LLMs) and Natural Language Processing (NLP).

LLMs are a specific type of AI that analyse and generate natural language using deep learning algorithms. AI programs are made to think like humans and mimic their actions without being biased or influenced by emotions.

LLMs give systems the ability to process large data sets and provide a clearer view of the task at hand. AI can be used to identify patterns, analyse data, and make predictions based on the data provided to it. It can also power chatbots, virtual assistants, language translation, and image processing systems.

Some major AI providers are ChatGPT by OpenAI, Bard by Google, Bing AI by Microsoft, and Watson by IBM. AI has the potential to revolutionize various industries, including transportation, finance, healthcare, and more, by making fast, accurate, and informed decisions with the help of large datasets. In this article we will talk about certain applications of AI in healthcare.

Applications of AI in healthcare

There are several applications of AI that have been implemented in the healthcare sector and have proven quite successful.
Some examples are:

Medical imaging: AI algorithms are being used to analyse medical images such as X-rays, MRI scans, and CT scans, helping radiologists identify abnormalities and make more accurate diagnoses. For example, Google's DeepMind has shown accuracy comparable to that of human radiologists in identifying breast cancer.

Personalised medicine: AI can be used to generate insights on biomarkers, genetic information, allergies, and psychological evaluations to personalise the best course of treatment for patients.

This data can be used to predict how the patient will react to various courses of treatment for a certain condition. This can minimize adverse reactions and reduce the costs of unnecessary or expensive treatment options. Similarly, it can be used to treat genetic disorders with personalised treatment plans. For example, Deep Genomics is a company using AI systems to develop personalised treatments for genetic disorders.

Disease diagnosis: AI systems can be used to analyse patient data, including medical history and test results, to make more accurate and earlier diagnoses of life-threatening conditions like cancer. For example, Pfizer has collaborated with different AI-based services to diagnose ailments, and IBM Watson uses NLP and machine learning algorithms in oncology to develop treatment plans for cancer patients.

Drug discovery: AI can be used in R&D for drug discovery, making the process faster. AI can remove certain constraints present in drug discovery processes for novel chronic diseases. A sped-up process could help save millions of patients worldwide while being both cost and time efficient.

Per McKinsey research, there are around 270 companies working in AI-driven drug discovery, with around 50% situated in the US. In addition, McKinsey has identified Southeast Asia and Western Europe as emerging hubs in this space. For example, Merck & Co. is working with the help of AI to develop a new treatment for Alzheimer's disease.

What to expect in the future

We have seen a revolution in the fields of machine learning and AI over the past few years. We now have LLMs and image processing systems that deliver faster, more efficient, and better-prioritized results, helping clinicians make decisions more accurately and provide the best possible patient care.

Properly trained AI systems need not be biased, which is why it's important to develop them ethically. The effectiveness of these systems depends on the specific application and implementation.

AI systems can be biased if they are trained on biased data, so it is important to ensure that the data these models are trained on is diverse and representative. The implementation of AI in healthcare areas such as drug discovery is still in its early stages, and it will see continued growth going forward.

The post The role of AI in healthcare: Revolutionizing the healthcare industry appeared first on Cybersecurity Insiders.

Going to RSA next week? If you don't know, it's a huge cybersecurity conference held at Moscone Center in San Francisco, CA. If you're going, please stop by the AT&T Cybersecurity booth and check us out. It's at #6245 in the North Hall. Remember to bring a picture ID for RSA check-in; otherwise, you'll have to go back to your hotel and get it.

The RSA theme this year is “Stronger Together” which sounds like a great plan to me!

The details

So, the details: AT&T Cybersecurity will be at RSA Conference 2023 (San Francisco, April 24-27), in booth 6245 in the North Hall. We'll have a 10' digital wall, four demo stations, and a mini-theater for presentations.

What can you expect to see in the AT&T Cybersecurity booth?

The AT&T Cybersecurity booth will be a hub of activity with demo stations, presentations, and other social networking activities. Our goal is to help you address macro challenges in your organization such as:

  • Proactive and effective threat detection and response
  • Modernizing network security
  • Protecting web applications and APIs
  • Engaging expert guidance on cybersecurity challenges

Demo stations

Come check out our four demo stations that will provide you an opportunity to meet and talk with AT&T Cybersecurity pros. Our demos are highlighting:

  • Managed XDR
  • Network Modernization
  • Web Application and API Security (WAAP)
  • AT&T Cybersecurity Consulting

In-booth mini-theatre

The AT&T Cybersecurity booth includes a mini-theater where you can relax and enjoy presentations every 15 minutes, plus get one of our limited-edition AT&T Cybersecurity mini-backpacks for all of your RSA memorabilia.

Join us for presentations about:

  • 2023 AT&T Cybersecurity Insights Report: Edge Ecosystem

Hot off the press for RSA, the 2023 AT&T Cybersecurity Insights Report is our annual thought leadership research. Learn how seven industries are using edge computing for competitive business advantages, what the perceived risks are, and how security is an integral part of the next generation of computing.

  • The Endpoint Revolution

Understand today’s “endpoint revolution” and the multi-layered preventative and detective controls that should be implemented to secure your organization.

  • Modernizing Network Security

Learn more about the modernization of enterprise security architectures and consolidation of multiple security controls, including those crucial to supporting hybrid work and the migration of apps and data to cloud services.

  • Alien Labs Threat Intelligence

Learn how the AT&T Alien Labs threat intelligence team curates intelligence based on global visibility of indicators of compromise into threats and tactics, techniques, and procedures of cybercriminals.

  • Next Generation Web Application and API Protection (WAAP) Security

Learn how WAAP is expanding to include additional features and how a service provider can help guide you to the right solution. The WAAP market is diverse and includes DDoS protection, bot management, web application protection, and API security.

  • Empowering the SOC with Next Generation Tools

Learn how a new era of operations in security and networking is creating more efficiency in the SOC.

Events

Monday, April 24

2023 AT&T Cybersecurity Insights Report: Edge Ecosystem

Report launch – attend a mini-theater presentation for your copy 

Monday, April 24

Cloud Security Alliance Panel: 8:00 AM – 3:00 PM Pacific Moscone South 301-304
Featuring AT&T Cybersecurity’s Scott Scheppers discussing cybersecurity employee recruitment and retention.

Cloud Security Alliance Mission Critical summit RSAC 2023
(Open to RSA registrants) – All Day

Wednesday, April 26

Happy Hour at the AT&T Cybersecurity Booth N6245: 4:30 – 6:00 PM Pacific

Join us for networking and refreshments after a long day at the conference.

Wednesday, April 26

Partner Perspectives Track Session: 2:25 – 3:15 PM Pacific Moscone South 155
Cutting Through the Noise of XDR – Are Service Providers an Answer? Presented by AT&T Cybersecurity’s Rakesh Shah

As you can see, we have an exciting RSA week planned! We look forward to seeing and meeting everyone at the conference!

The post Get ready for RSA 2023: Stronger Together appeared first on Cybersecurity Insiders.

This is the second blog in the series focused on PCI DSS, written by an AT&T Cybersecurity consultant. See the first blog relating to IAM and PCI DSS here.

There are several issues implied in the PCI DSS Standard and its associated Report on Compliance which are rarely addressed in practice. I see this frequently in the penetration and vulnerability test reports that I have to assess.

Methodology

First off is a methodology which matches the written policies and procedures of the entity seeking the assessment. I frequently see the methodology dictated by the provider, not by the client. As a client you should be asking (possibly different providers) at minimum for:

  • Internal and external network vulnerability testing
  • Internal and external penetration testing for both application and network layers
  • Segmentation testing
  • API penetration testing
  • Web application vulnerability testing.

Application

Each of these types of tests then needs to be applied to all appropriate in-scope elements of the cardholder data environment (CDE). Generally, you will provide either a list of URLs or a list of IP addresses to the tester. PCI requires that all publicly reachable assets associated with payment pages be submitted for testing. In as much as dynamic IP assignment is very common, especially in Cloud environments, ensure that you are providing a consistent set of addressing information across quarterly testing orders.
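
Because cloud-hosted assets often sit behind dynamic IP assignment, one way to keep quarterly scope submissions consistent is to resolve the in-scope hostnames at order time and submit both names and current addresses. Here is a minimal Python sketch; the hostnames are hypothetical placeholders.

```python
import socket

# Hypothetical in-scope hostnames for the CDE's public payment pages.
SCOPE_HOSTNAMES = ["pay.example.com", "checkout.example.com"]

# Resolve each name at order time so the tester receives the addresses the
# hosts actually use this quarter, even under dynamic (cloud) IP assignment.
for host in SCOPE_HOSTNAMES:
    try:
        _, _, addresses = socket.gethostbyname_ex(host)
        print(host, "->", ", ".join(addresses))
    except socket.gaierror:
        print(host, "-> unresolved; investigate before submitting the scope")
```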

ASV scans

Make sure that the Approved Scanning Vendor (ASV) scans are attested scans, both by you and the ASV, and that the scan report shows enough detail to know what was scanned and the results. The first two summary pages are rarely enough for the assessor to work with since they may give a quantity of assets scanned and a quantity found, but no specific information on what was scanned.  

Report inclusions

You will need to specify to the testing provider that each of the reports must include:

  • The tester’s credentials and training record showing appropriate training within the prior 12 months
  • If it’s an internal resource performing the tests, explain in the report how they are independent of the organization managing the equipment being tested. (Admins report to CIO, testers report to CTO, for instance, although that could mean testers and developers were in the same organization and not necessarily independent).
  • The date of the previous test completion, to prove "at least quarterly" (or annual) execution – a simple cadence check is sketched after this list.
  • The dates of the current test execution.
  • Dates of remediation testing and exactly what it covered, along with a summary of the new results (just rewriting the old results is very difficult for the Qualified Security Assessor (QSA) to recognize at assessment time).
  • All URLs and IP addresses covered, and explain any accommodations made for dynamic DNS assignments such as in the cloud platforms, any removals, or additions to the inventory from the previous test (deprecated platforms, in-maintenance and therefore undiscovered, cluster additions, etc.). Any assets that were under maintenance during the scheduled test must have a test performed on them as soon as they come back online, or they could languish without testing for substantial periods.
  • Explain any resources, for which results are included in the report, but are not in fact part of the scope of the CDE and therefore may not need the remediations that an in-scope device does need (e.g., printers on CDE-adjacent networks).
  • Explanations of why any issues found, and deemed failures, by the testing are not in fact germane to the overall security posture. (This may be internally generated, rather than part of the test report).
  • Suspected and confirmed security issues that arose during the previous year are listed by the tester in the report with a description as to how the testing confirmed that those issues remain adequately remediated. At a minimum, anything addressed by the Critical Response Team should be included here.
  • Any additional methodology to confirm the PCI requirements (especially for segmentation, and how the testing covered all segmentation methods in use).
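
As an illustration of that cadence check, here is a small Python sketch with hypothetical dates; note that the 92-day reading of "at least quarterly" is a common rule of thumb, and you should confirm the acceptable interval with your QSA.

```python
from datetime import date

# Hypothetical completion dates pulled from two successive test reports.
previous_test = date(2023, 1, 16)
current_test = date(2023, 4, 10)

gap_days = (current_test - previous_test).days

# "At least quarterly" is commonly read as no more than ~92 days between
# tests; confirm the acceptable interval with your QSA.
if gap_days > 92:
    print(f"Gap of {gap_days} days exceeds a quarterly cadence - flag it")
else:
    print(f"Gap of {gap_days} days satisfies a quarterly cadence")
```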

PCI DSS 4.0 additions

In future PCI DSS 4.0 assessments, the testers must also prove that their test tools were up to date and capable of mimicking all current and emerging attacks. This does not mean another 100 pages of plugin revisions that a QSA cannot practically compare to anything. A new paradigm for test and system-under-test component revision level validation will have to be developed within the testing industry.

Credentialed internal vulnerability scans are also required by PCI DSS 4.0 requirement 11.3.1.2. This requires creating the role(s) and privilege(s) to be assigned to the test userID, including a sufficient level of privilege to provide meaningful testing without giving the test super-user capabilities, per requirement 7. Management must authorize enabling the accounts created for testing and must validate the role and the credentials every six months. Requirement 8 controls also apply to the credentials created for testing. These include, but are not limited to, 12-character minimum passwords, unique passwords, monitoring of the activity of the associated userID(s), and disabling the account(s) when not in use.

The post PCI DSS reporting details to ensure when contracting quarterly CDE tests appeared first on Cybersecurity Insiders.

This is the fourth blog in the series focused on PCI DSS, written by an AT&T Cybersecurity consultant. See the first blog relating to IAM and PCI DSS here. See the second blog on PCI DSS reporting details to ensure when contracting quarterly CDE tests here. The third blog on network and data flow diagrams for PCI DSS compliance is here.

Requirement 6 of the Payment Card Industry (PCI) Data Security Standard (DSS) v3.2.1 was written before APIs became a big thing in applications, and therefore largely ignores them.

However, the Secure Software Standard  and PCI-Secure-SLC-Standard-v1_1.pdf from PCI have both begun to recognize the importance of covering them.

The OWASP API Security Project, a subgroup of the Open Web Application Security Project (OWASP), issued a top 10 flaws list specifically for APIs in 2019. Ultimately, if APIs exist in, or could affect the security of, the CDE, they are in scope for an assessment.

API testing transcends traditional firewall, web application firewall, SAST and DAST testing in that it addresses the multiple co-existing sessions and states that an application is dealing with. It uses fuzzing techniques (automated manipulation of data fields such as session identifiers) to validate that those sessions, including their state information and data, are adequately separated from one another.

As an example: consumer-A must not be able to access consumer-B’s session data, nor to piggyback on information from consumer-B’s session to carry consumer-A’s possibly unauthenticated session further into the application or servers. API testing will also ensure that any management tasks (such as new account creation) available through APIs are adequately authenticated, authorized and impervious to hijacking.
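
To illustrate the kind of session-separation check involved, here is a minimal Python sketch using the requests library; the base URL, bearer token, and order ID are hypothetical placeholders, not a complete test suite.

```python
import requests

# Hypothetical target and test data for two differently privileged users.
BASE_URL = "https://api.example.com"
TOKEN_A = "token-for-consumer-a"  # placeholder bearer token for consumer-A
ORDER_OF_B = "order-9912"         # resource ID belonging to consumer-B

# Consumer-A requests consumer-B's order; a well-designed API must refuse.
resp = requests.get(
    f"{BASE_URL}/v1/orders/{ORDER_OF_B}",
    headers={"Authorization": f"Bearer {TOKEN_A}"},
    timeout=10,
)

# Anything other than a 403/404 suggests broken object level authorization,
# the top issue on the OWASP API Security Top 10 (API1:2019).
assert resp.status_code in (403, 404), f"BOLA suspected: got {resp.status_code}"
```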

Even in an API with just 10 methods, there can be more than 1,000 tests that need to be executed to ensure all the OWASP top 10 issues are protected against. Most such testing requires the swagger file (API definition file) to start from, and a selection of differently privileged test userIDs to work with.

API testing will also potentially reveal that some useful logging, and therefore alerting, is not occurring because the API is not generating logs for those events, or the log destination is not integrated with the SIEM. The API may thus need some redesign to make sure all PCI-required events are in fact being recorded (especially when related to access control, account management, and elevated privilege use). PCI DSS v4.0 has expanded the need for logging in certain situations, so ensure tests are performed to validate the logging paradigm for all required paths.

Finally, both internal and externally accessible APIs should be tested because least-privilege for PCI requires that any unauthorized persons be adequately prevented from accessing functions that are not relevant to their job responsibilities.

AT&T Cybersecurity provides a broad range of consulting services to help you out in your journey to manage risk and keep your company secure. PCI-DSS consulting is only one of the areas where we can assist. Check out our services.

The post Application Programming Interface (API) testing for PCI DSS compliance appeared first on Cybersecurity Insiders.

This is the third blog in the series focused on PCI DSS, written by an AT&T Cybersecurity consultant. See the first blog relating to IAM and PCI DSS here. See the second blog on PCI DSS reporting details to ensure when contracting quarterly CDE tests here.

PCI DSS requires that an "entity" have up-to-date cardholder data (CHD) flow and networking diagrams to show the networks that CHD travels over.

Googling "enterprise network diagram examples" and "enterprise data flow diagram examples" turns up several examples that you can refine to fit whatever drawing tools you currently use, starting from whichever best resembles your current architecture.

The network diagrams are best when they include both a human recognizable network name and the IP address range that the network segment uses. This helps assessors to correlate the diagram to the firewall configuration rules or (AWS) security groups (or equivalent).

Each firewall or router within the environment and any management data paths also need to be shown (to the extent that you have control over them).

You must also show (because PCI requires it) the IDS/IPS tools and both transaction logging and overall system logging paths. Authentication, anti-virus, backup, and update mechanisms are other connections that need to be shown. Our customers often create multiple diagrams to reduce the complexity of having everything in one.

Both types of diagrams need to include each possible form of ingestion and propagation of credit card data, and the management or monitoring paths, to the extent that those paths could affect the security of that cardholder data.

Using red to signify unencrypted data, blue to signify data you control the seeding or key generation mechanism for and either decrypt or encrypt (prior to saving or propagation), brown to signify DUKPT (Derived Unique Key per Transaction) channels, and green to signify data you cannot decrypt (such as P2PE) also helps you and us understand the risk associated with various data flows. (The specific colors cited here are not mandatory, but recommendations borne of experience).
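
If you maintain diagrams as code, this color convention is easy to encode. Below is a minimal sketch using the third-party graphviz Python package (assuming both the package and the Graphviz binaries are installed); the nodes and labels are illustrative only.

```python
from graphviz import Digraph  # pip install graphviz; needs Graphviz binaries

# Color convention from the text: red = unencrypted, blue = encryption you
# control, brown = DUKPT channels, green = data you cannot decrypt (P2PE).
g = Digraph("cardholder_data_flow")
g.edge("Consumer", "WAF", color="blue", label="1. TLS1.2")
g.edge("WAF", "Web server", color="blue", label="2. TLS1.2")
g.edge("Web server", "Order mgmt", color="red", label="3. unencrypted")
g.edge("Payment gateway", "Consumer", color="green", label="4. iFrame (P2PE)")
g.render("cardholder_data_flow", format="png", cleanup=True)
```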

As examples:

In the network diagram:

In the web order case, there would be a blue data path from the consumer through your web application firewall and perimeter firewall to your web servers using standard TLS1.2 encryption, since it is based on your website's certificate.

There may be a red unencrypted path between the web server and the order management server/application. Then there would be a blue data path from your servers to the payment gateway using encryption negotiated by the gateway. This would start with TLS1.2, which might then use an iFrame to initiate a green data path directly from the payment provider to the consumer to receive the card data, bypassing all your networking and systems. Finally, there would be a blue return from the payment provider to your payment application with the authorization completion code.

In the data flow diagram:

An extremely useful addition to most data flow diagrams is a numbered sequence of events with the number adjacent to the arrow in the appropriate direction.

In its most basic form, that sequence might look like this:

  1. Consumer calls into ordering line over POTS line (red – unencrypted)
  2. POTS call is converted to VOIP (blue – encrypted by xxx server/application)
  3. Call manager routes to a free CSR (blue-encrypted)
  4. Order is placed (blue-encrypted)
  5. CSR navigates to payment page within the same web form as a web order would be placed (blue-encrypted, served by the payment gateway API)
  6. CSR takes credit card data and enters it directly into the web form. (blue-encrypted, served by the payment gateway API)
  7. Authorization occurs under the payment gateway’s control.
  8. Authorization success or denial is received from the payment gateway (blue-encrypted under the same session as step 5)
  9. CSR confirms the payment and completes the ordering process.

This same list could form the basis of a procedure for the CSRs for a successful order placement. You will have to add your own steps for how the CSRs must respond if the authorization fails, or the network or payment page goes down.

Remember all documentation for PCI requires a date of last review, and notation of by whom it was approved as accurate. Even better is to add a list of changes, or change identifiers and their dates, so that all updates can be traced easily. Also remember that even updates which are subsequently reverted must be documented to ensure they don’t erroneously get re-implemented, or forgotten for some reason, thus becoming permanent.

The post Guidance on network and data flow diagrams for PCI DSS compliance appeared first on Cybersecurity Insiders.


The cloud has revolutionized the way we do business. It has made it possible for us to store and access data from anywhere in the world, and it has also made it possible for us to scale our businesses up or down as needed.

However, the cloud also brings with it new challenges. One of the biggest challenges is just keeping track of all of the data that is stored in the cloud. This can make it difficult to identify and respond to security incidents.

Another challenge is that the cloud is a complex environment. There are many different services and components that can be used in the cloud, and each of these services and components stores different types of data in different ways, which further complicates identifying and responding to security incidents.

Finally, since cloud systems scale up and down much more dynamically than anything we've seen in the past, the data we need to understand the root cause and scope of an incident can disappear in the blink of an eye.

In this blog post, we will discuss the challenges of cloud forensics and incident response, and we will also provide some tips on how to address these challenges.

How to investigate a compromise of a cloud environment

When you are investigating a compromise of a cloud environment, there are a few key steps that you should follow:

  1. Identify the scope of the incident: The first step is to identify the scope of the incident. This means determining which resources were affected and how the data was accessed.
  2. Collect evidence: The next step is to collect evidence. This includes collecting log files, network traffic, metadata, and configuration files.
  3. Analyze the evidence: The next step is to analyze the evidence. This means looking for signs of malicious activity and determining how the data was compromised.
  4. Respond to the incident and contain it: The next step is to respond to the incident. This means taking steps to mitigate the damage and prevent future incidents. For example, with a compromise of an EC2 system in AWS, that may include turning off the system or updating the firewall to block all network traffic, as well as isolating any associated IAM roles by adding a DenyAll policy (see the sketch after this list). Once the incident is contained, you will have more time to investigate safely and in detail.
  5. Document the incident: The final step is to document the incident. This includes creating a report that describes the incident, the steps that were taken to respond to the incident, and the lessons that were learned.
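
As a minimal illustration of the containment step above, the following Python sketch uses boto3 to stop a compromised EC2 instance and attach a deny-all inline policy to its IAM role; the instance ID and role name are placeholders, and a real runbook would cover more than this.

```python
import json
import boto3

# Hypothetical identifiers for an EC2 compromise in AWS; substitute your own.
INSTANCE_ID = "i-0123456789abcdef0"
ROLE_NAME = "compromised-app-role"

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Contain the host: stopping it preserves the EBS volumes for later imaging.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])

# Isolate the associated IAM role with an explicit deny-all inline policy,
# which overrides any allows the attacker might still be relying on.
deny_all = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="incident-deny-all",
    PolicyDocument=json.dumps(deny_all),
)
```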

What data can you get access to in the cloud?

Getting access to the data required to perform an investigation and find the root cause is often harder in the cloud than on-prem. That's because you often find yourself at the mercy of the data the cloud providers have decided to let you access. That said, there are a number of different resources that can be used for cloud forensics, including:

  • AWS EC2: Data you can get includes snapshots of the volumes and memory dumps of the live systems. You can also get CloudTrail logs associated with the instance. (A snapshot example follows this list.)
  • AWS EKS: Data you can get includes audit logs and control plane logs in S3. You can also get the Docker file system, which is normally a layered filesystem using the overlay2 storage driver. You can also get the Docker logs from containers that have been started and stopped.
  • AWS ECS: You can use ECS Exec (aws ecs execute-command) to grab files from the filesystem and memory.
  • AWS Lambda: You can get CloudTrail logs and previous versions of the Lambda function.
  • Azure Virtual Machines: You can download snapshots of the disks in VHD format.
  • Azure Kubernetes Service: You can use “command invoke” to get live data from the system.
  • Azure Functions: A number of different logs such as “FunctionAppLogs”.
  • Google Compute Engine: You can access snapshots of the disks, downloading them in VMDK format.
  • Google Kubernetes Engine: You can use kubectl exec to get data from the system.
  • Google Cloud Run: A number of different logs such as the application logs.
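
Following up on the EC2 bullet above, here is a minimal boto3 sketch that snapshots every EBS volume attached to a (hypothetical) compromised instance so the evidence survives even if the instance is terminated or auto-scaled away.

```python
import boto3

# Hypothetical instance ID; substitute the compromised host's ID.
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Find every EBS volume attached to the instance and snapshot it for evidence.
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            volume_id = mapping["Ebs"]["VolumeId"]
            snap = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"Forensic snapshot of {volume_id} ({INSTANCE_ID})",
            )
            print("Created", snap["SnapshotId"], "from", volume_id)
```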

Figure 1: The various data sources in AWS

Tips for cloud forensics and incident response

Here are a few tips for cloud forensics and incident response:

  • Have a plan: The first step is to have an explicit cloud incident response plan. This means having a process in place for identifying and responding to security incidents in each cloud provider, understanding how your team will get access to the data and take the actions they need.
  • Automate ruthlessly: The speed and scale of the cloud means that you don't have the time to perform steps manually, since the data you need could easily disappear by the time you get round to responding. Use the automation capabilities of the cloud to set up rules ahead of time to execute as many as possible of the steps of your plan without human intervention. (A simplified example follows this list.)
  • Train your staff: The next step is to train your staff on how to identify and respond to security incidents, especially around those issues that are highly cloud centric, like understanding how accesses and logging work.
  • Use cloud-specific tools: The next step is to use the tools that are purpose built to help you to identify, collect, and analyze evidence produced by cloud providers. Simply repurposing what you use in an on-prem world is likely to fail.
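
To sketch what "automate ruthlessly" can look like in practice, here is a simplified Python Lambda handler that could be wired to GuardDuty findings via an EventBridge rule; the severity threshold and the containment action are illustrative choices, not prescriptions.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Auto-contain an EC2 instance named in a GuardDuty finding.

    Intended to be invoked by an EventBridge rule matching GuardDuty
    findings; the fields below follow the EC2 finding event shape.
    """
    detail = event["detail"]
    instance_id = detail["resource"]["instanceDetails"]["instanceId"]
    severity = detail["severity"]

    # Only act automatically on high-severity findings; route the rest to a human.
    if severity >= 7.0:
        ec2.stop_instances(InstanceIds=[instance_id])
        print(f"Contained {instance_id} (severity {severity})")
    else:
        print(f"Finding on {instance_id} (severity {severity}) routed for triage")
```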

If you are interested in learning more about my company, Cado Response, please visit our website or contact us for a free trial.

The post Cloud forensics – An introduction to investigating security incidents in AWS, Azure and GCP appeared first on Cybersecurity Insiders.

In times of economic downturn, companies may become reactive in their approach to cybersecurity management, prioritizing staying afloat over investing in proactive cybersecurity measures. However, it’s essential to recognize that cybersecurity is a valuable investment in your company’s security and stability. Taking necessary precautions against cybercrime can help prevent massive losses and protect your business’s future.

As senior leaders revisit their growth strategies, it's an excellent time to assess where they are on the cyber-risk spectrum and how significant the complexity costs have become. These will vary across business units, industries, and geographies. In addition, there is a new delivery model for cybersecurity: pay-as-you-go, use-what-you-need access to a cyber talent pool, along with tools and platforms that enable simplification.


It’s important to understand that not all risks are created equal. While detection and incident response are critical, addressing risks that can be easily and relatively inexpensively mitigated is sensible. By eliminating the risks that can be controlled, considerable resources can be saved that would otherwise be needed to deal with a successful attack.

Automation is the future of cybersecurity and incident response management. Organizations can rely on solutions that can automate an incident response protocol to help eliminate barriers, such as locating incident response plans, communicating roles and tasks to response teams, and monitoring actions during and after the threat.

Establish Incident Response support before an attack

In today's rapidly changing threat environment, consider an Incident Response Retainer service, which can give your organization a team of cyber crisis specialists on speed dial, ready to take swift action. Choose a provider who can support your organization at every stage of the incident response life cycle, from cyber risk assessment through remediation and recovery.

Effective cybersecurity strategies are the first step in protecting your business against cybercrime. These strategies should include policies and procedures that can be used to identify and respond to potential threats and guidance on how to protect company data best. Outlining the roles and responsibilities of managing cybersecurity, especially during an economic downturn, is also essential.

Managing vulnerabilities continues to be a struggle for many organizations today. It’s essential to move from detecting vulnerabilities and weaknesses to remediation. Cybersecurity training is also crucial, as employees unaware of possible risks or failing to follow security protocols can leave the business open to attack. All employees must know how to identify phishing and follow the principle of verifying requests before trusting them.

Penetration testing is an excellent way for businesses to reduce data breach risks, ensure compliance, and assure their supplier network that they are proactively safeguarding sensitive information. Successful incident response requires collaboration across an organization’s internal and external parties.

A top-down approach, in which senior leadership promotes a strong security culture, encourages every department to do its part in case of an incident. Responding to a cloud incident requires understanding the differences between the visibility and control you have with on-premises resources and what you have in the cloud, which is especially important given the prevalence of hybrid models.

Protective cybersecurity measures are essential for businesses, especially during economic downturns. By prioritizing cybersecurity, companies can protect their future and safeguard against the costly consequences of a successful cyberattack.


The post Improving your bottom line with cybersecurity top of mind appeared first on Cybersecurity Insiders.