This is the third blog in the series on PCI DSS, written by an AT&T Cybersecurity consultant. See the first blog, on IAM and PCI DSS, here. See the second blog, on the PCI DSS reporting details to ensure when contracting quarterly CDE tests, here.

PCI DSS requires that an “entity” maintain up-to-date cardholder data (CHD) flow and network diagrams showing the networks that CHD travels over.

Googling “enterprise network diagram examples” and “enterprise data flow diagram examples” returns several examples that you can refine to fit the drawing tools you currently use and to best resemble your current architecture.

Network diagrams are best when they include both a human-readable network name and the IP address range that the network segment uses. This helps assessors correlate the diagram with the firewall configuration rules or (AWS) security groups (or equivalent).

Each firewall or router within the environment and any management data paths also need to be shown (to the extent that you have control over them).

You must also show (because PCI requires it) the IDS/IPS tools and both transaction logging and overall system logging paths. Authentication, anti-virus, backup, and update mechanisms are other connections that need to be shown. Our customers often create multiple diagrams to reduce the complexity of having everything in one.

Both types of diagrams need to include each possible form of ingestion and propagation of credit card data, and the management or monitoring paths, to the extent that those paths could affect the security of that cardholder data.

A color convention also helps you and us understand the risk associated with the various data flows:

  • Red – unencrypted data
  • Blue – data for which you control the seeding or key-generation mechanism and either decrypt or encrypt (prior to saving or propagation)
  • Brown – DUKPT (Derived Unique Key Per Transaction) channels
  • Green – data you cannot decrypt (such as P2PE)

(The specific colors cited here are not mandatory, but recommendations borne of experience.)

As examples:

In the network diagram:

In the web-order case, there would be a blue data path from the consumer through your web application firewall and perimeter firewall to your web servers, using standard TLS 1.2 encryption based on your website’s certificate.

There may be a red, unencrypted path between the web server and the order management server/application. Then there would be a blue data path from your servers to the payment gateway, using encryption negotiated by the gateway. This would start with TLS 1.2, which might then use an iframe to initiate a green data path directly from the payment provider to the consumer to receive the card data, bypassing all your networking and systems. Finally, there would be a blue return path from the payment provider to your payment application with the authorization completion code.

In the data flow diagram:

An extremely useful addition to most data flow diagrams is a numbered sequence of events with the number adjacent to the arrow in the appropriate direction.

In its most basic form, that sequence might look like:

  1. Consumer calls into ordering line over POTS line (red – unencrypted)
  2. POTS call is converted to VOIP (blue – encrypted by xxx server/application)
  3. Call manager routes to a free CSR (blue-encrypted)
  4. Order is placed (blue-encrypted)
  5. CSR navigates to payment page within the same web form as a web order would be placed (blue-encrypted, served by the payment gateway API)
  6. CSR takes credit card data and enters it directly into the web form. (blue-encrypted, served by the payment gateway API)
  7. Authorization occurs under the payment gateway’s control.
  8. Authorization success or denial is received from the payment gateway (blue-encrypted under the same session as step 5)
  9. CSR confirms the payment and completes the ordering process.

This same list could form the basis of a procedure for the CSRs for a successful order placement. You will have to add your own steps for how the CSRs must respond if the authorization fails, or the network or payment page goes down.

Remember that all PCI documentation requires a date of last review and a notation of who approved it as accurate. Even better, add a list of changes, or change identifiers and their dates, so that all updates can be traced easily. Also remember that even updates that are subsequently reverted must be documented, to ensure they aren’t erroneously re-implemented or forgotten and thus become permanent.

The post Guidance on network and data flow diagrams for PCI DSS compliance appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

The cloud has revolutionized the way we do business. It has made it possible for us to store and access data from anywhere in the world, and it has also made it possible for us to scale our businesses up or down as needed.

However, the cloud also brings with it new challenges. One of the biggest is simply keeping track of all the data stored in the cloud, which can make it difficult to identify and respond to security incidents.

Another challenge is that the cloud is a complex environment. Many different services and components can be used in the cloud, and each of them stores different types of data in different ways, which further complicates investigation and response.

Finally, since cloud systems scale up and down far more dynamically than anything we’ve seen in the past, the data we need to understand the root cause and scope of an incident can disappear in the blink of an eye.

In this blog post, we will discuss the challenges of cloud forensics and incident response, and we will also provide some tips on how to address these challenges.

How to investigate a compromise of a cloud environment

When you are investigating a compromise of a cloud environment, there are a few key steps that you should follow:

  1. Identify the scope of the incident: The first step is to identify the scope of the incident. This means determining which resources were affected and how the data was accessed.
  2. Collect evidence: The next step is to collect evidence. This includes collecting log files, network traffic, metadata, and configuration files.
  3. Analyze the evidence: The next step is to analyze the evidence. This means looking for signs of malicious activity and determining how the data was compromised.
  4. Respond to the incident and contain it: The next step is to respond to the incident. This means taking steps to mitigate the damage and prevent future incidents. For example, with a compromise of an EC2 system in AWS, that may include turning off the system or updating the firewall to block all network traffic, as well as isolating any associated IAM roles by attaching a DenyAll policy. Once the incident is contained, you will have more time to investigate safely and in detail.
  5. Document the incident: The final step is to document the incident. This includes creating a report that describes the incident, the steps that were taken to respond to the incident, and the lessons that were learned.
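The containment step above benefits most from preparation. As a rough sketch (the role name is hypothetical, and the boto3 call is shown as a comment so the snippet runs without AWS credentials), the DenyAll policy for isolating a compromised IAM role could be built like this:

```python
import json

def build_deny_all_policy() -> dict:
    """Build an IAM inline policy that denies every action on every resource."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAllDuringInvestigation",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
            }
        ],
    }

policy = build_deny_all_policy()
print(json.dumps(policy, indent=2))

# Attaching it to a compromised role (hypothetical role name) would look like:
# import boto3
# boto3.client("iam").put_role_policy(
#     RoleName="compromised-app-role",
#     PolicyName="DenyAllDuringInvestigation",
#     PolicyDocument=json.dumps(policy),
# )
```

Because an explicit Deny overrides any Allow, attaching this inline policy freezes the role without having to untangle its existing permissions mid-incident.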

What data can you get access to in the cloud?

Getting access to the data required to find the root cause is often harder in the cloud than on-prem. That’s because you often find yourself at the mercy of whatever data the cloud providers have decided to let you access. That said, there are a number of resources that can be used for cloud forensics, including:

  • AWS EC2: Data you can get includes snapshots of the volumes and memory dumps of the live systems. You can also get the CloudTrail logs associated with the instance.
  • AWS EKS: Data you can get includes audit logs and control plane logs in S3. You can also get the Docker file system, which is normally a layered filesystem called overlay2, as well as the Docker logs from containers that have been started and stopped.
  • AWS ECS: You can use “aws ecs execute-command” to grab files from the filesystem and memory.
  • AWS Lambda: You can get CloudTrail logs and previous versions of the Lambda function.
  • Azure Virtual Machines: You can download snapshots of the disks in VHD format.
  • Azure Kubernetes Service: You can use “command invoke” to get live data from the system.
  • Azure Functions: A number of different logs such as “FunctionAppLogs”.
  • Google Compute Engine: You can access snapshots of the disks, downloading them in VMDK format.
  • Google Kubernetes Engine: You can use kubectl exec to get data from the system.
  • Google Cloud Run: A number of different logs such as the application logs.
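Whichever source you collect from, label the evidence consistently so it can be traced back to the investigation. A minimal sketch for AWS volume snapshots (the case ID, analyst, and volume ID are made-up examples, and the actual boto3 call is left as a comment):

```python
from datetime import datetime, timezone

def forensic_snapshot_tags(case_id: str, analyst: str, source_volume: str) -> list:
    """Tags to attach to an evidence snapshot so it traces back to a case."""
    return [
        {"Key": "ForensicCase", "Value": case_id},
        {"Key": "CollectedBy", "Value": analyst},
        {"Key": "SourceVolume", "Value": source_volume},
        {"Key": "CollectedAt", "Value": datetime.now(timezone.utc).isoformat()},
    ]

tags = forensic_snapshot_tags("IR-2023-0042", "jdoe", "vol-0abc123")

# Creating the snapshot itself (hypothetical volume ID) would use boto3:
# import boto3
# boto3.client("ec2").create_snapshot(
#     VolumeId="vol-0abc123",
#     Description="Evidence for IR-2023-0042",
#     TagSpecifications=[{"ResourceType": "snapshot", "Tags": tags}],
# )
```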

AWS data sources

Figure 1: The various data sources in AWS

Tips for cloud forensics and incident response

Here are a few tips for cloud forensics and incident response:

  • Have a plan: The first step is to have an explicit cloud incident response plan. This means having a process in place for identifying and responding to security incidents in each cloud provider, understanding how your team will get access to the data and take the actions they need.
  • Automate ruthlessly: The speed and scale of the cloud mean that you don’t have time to perform steps manually; the data you need could easily disappear by the time you get around to responding. Use the automation capabilities of the cloud to set up rules ahead of time that execute as many steps of your plan as possible without human intervention.
  • Train your staff: The next step is to train your staff on how to identify and respond to security incidents, especially around those issues that are highly cloud centric, like understanding how accesses and logging work.
  • Use cloud-specific tools: The next step is to use tools purpose-built to help you identify, collect, and analyze evidence produced by cloud providers. Simply repurposing what you use in the on-prem world is likely to fail.
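To make the automation point concrete, one simple pattern is to pre-map finding types to pre-approved first responses so a rule can fire without waiting for a human. The finding names and actions below are illustrative assumptions, not any provider’s real schema:

```python
# Map alert/finding types to the pre-approved automated first response.
# These names are illustrative; real findings come from your detection tooling.
PLAYBOOK = {
    "CryptoMining": "snapshot_then_stop_instance",
    "CredentialExfiltration": "attach_deny_all_to_role",
    "MalwareOnHost": "isolate_security_group",
}

def first_response(finding_type: str) -> str:
    """Return the pre-approved action, defaulting to paging a human."""
    return PLAYBOOK.get(finding_type, "page_on_call_analyst")

print(first_response("CryptoMining"))      # snapshot_then_stop_instance
print(first_response("SomethingNovel"))    # page_on_call_analyst
```

The default branch matters: anything your team has not pre-approved still goes to a person, so automation speeds up the known cases without removing judgment from the unknown ones.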

If you are interested in learning more about my company, Cado Response, please visit our website or contact us for a free trial.

The post Cloud forensics – An introduction to investigating security incidents in AWS, Azure and GCP appeared first on Cybersecurity Insiders.

In times of economic downturn, companies may become reactive in their approach to cybersecurity management, prioritizing staying afloat over investing in proactive cybersecurity measures. However, it’s essential to recognize that cybersecurity is a valuable investment in your company’s security and stability. Taking necessary precautions against cybercrime can help prevent massive losses and protect your business’s future.

As senior leaders revisit their growth strategies, it’s an excellent time to assess where they are on the cyber-risk spectrum and how significant the complexity costs have become. These will vary across business units, industries, and geographies. In addition, there is a new delivery model for cybersecurity: pay-as-you-go, use-what-you-need access to a cyber talent pool, along with tools and platforms that enable simplification.


It’s important to understand that not all risks are created equal. While detection and incident response are critical, addressing risks that can be easily and relatively inexpensively mitigated is sensible. By eliminating the risks that can be controlled, considerable resources can be saved that would otherwise be needed to deal with a successful attack.

Automation is the future of cybersecurity and incident response management. Organizations can rely on solutions that can automate an incident response protocol to help eliminate barriers, such as locating incident response plans, communicating roles and tasks to response teams, and monitoring actions during and after the threat.

Establish Incident Response support before an attack

In today’s rapidly changing threat environment, consider an Incident Response Retainer service, which gives your organization a team of cyber crisis specialists on speed dial, ready to take swift action. Choose a provider who can support your organization at every stage of the incident response life cycle, from cyber risk assessment through remediation and recovery.

Effective cybersecurity strategies are the first step in protecting your business against cybercrime. These strategies should include policies and procedures that can be used to identify and respond to potential threats and guidance on how to protect company data best. Outlining the roles and responsibilities of managing cybersecurity, especially during an economic downturn, is also essential.

Managing vulnerabilities continues to be a struggle for many organizations today. It’s essential to move from detecting vulnerabilities and weaknesses to remediation. Cybersecurity training is also crucial, as employees unaware of possible risks or failing to follow security protocols can leave the business open to attack. All employees must know how to identify phishing and follow the principle of verifying requests before trusting them.

Penetration testing is an excellent way for businesses to reduce data breach risks, ensure compliance, and assure their supplier network that they are proactively safeguarding sensitive information. Successful incident response requires collaboration across an organization’s internal and external parties.

A top-down approach, in which senior leadership fosters a strong security culture, encourages every department to do its part in case of an incident. Responding to a cloud incident requires understanding the differences between the visibility and control you have over on-premises resources and what you have in the cloud, which is especially important given the prevalence of hybrid models.

Protective cybersecurity measures are essential for businesses, especially during economic downturns. By prioritizing cybersecurity, companies can protect their future and safeguard against the costly consequences of a successful cyberattack.


The post Improving your bottom line with cybersecurity top of mind appeared first on Cybersecurity Insiders.


What is e-mail?

E-mail, also referred to as electronic mail, is an internet service that allows people and digital services to transmit messages (letters) in electronic form across the Internet. To send and receive e-mail messages, an individual or service needs an e-mail address, which is generally in the format emailaddress@domain.com. E-mail is a more reliable, fast, and inexpensive form of messaging in both personal and professional environments.

What are e-mail headers?

E-mail headers are metadata attached to every email sent or received across the internet; they contain important information required for the delivery of emails, such as:

  • Sender’s IP address
  • Server the email came through
  • Domain the email originated from
  • SPF (Sender Policy Framework)
  • DKIM
  • DMARC
  • Time of sending and receiving the email message
  • Other important information required to validate the authenticity of the email received

Using e-mail header analysis, users can identify whether an e-mail is legitimate or a scam. To view email headers in most clients, right-click on the message and choose “show original” or “view source.”
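Once you have the raw message source, the headers can be parsed programmatically. A small sketch using Python’s standard-library email module on a made-up message:

```python
from email import message_from_string
from email.utils import parseaddr

# A made-up raw message, as copied from "show original" in a mail client.
raw = """\
From: "Support" <support@example.com>
To: user@example.org
Subject: Password reset
Message-ID: <abc123@mail.example.com>
Received: from mail.example.com ([192.0.2.10])

Body text here.
"""

msg = message_from_string(raw)
display_name, address = parseaddr(msg["From"])
print(address)                  # support@example.com
print(msg["Message-ID"])        # <abc123@mail.example.com>
print(msg.get_all("Received"))  # every Received hop, in header order
```

Walking the Received headers from top to bottom traces the delivery path back toward the originating server, which is often the first step in spotting a spoofed message.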

Metadata

Now, let us understand what metadata is and why the metadata associated with email communications is so important.

Metadata: Metadata is data that provides information about other data. For example, email headers provide information about an email communication.

SPF: Sender Policy Framework is a DNS-based email authentication mechanism. SPF is a TXT record configured in a domain’s DNS records; it lists the IP addresses and domain names that are authorized to send email for that domain. The recipient can check the SPF result in the email headers to verify whether the email originated from the specified IP addresses or domain names.
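A published SPF record is just a TXT string, so its mechanisms are easy to pull apart. A hedged sketch (the record below is an example; a real check would fetch the TXT record with a DNS library and evaluate the connecting IP against it):

```python
def parse_spf(record: str) -> dict:
    """Split an SPF TXT record into its mechanisms and its 'all' policy."""
    parts = record.split()
    assert parts[0] == "v=spf1", "not an SPF record"
    mechanisms = [p for p in parts[1:] if not p.endswith("all")]
    policy = next((p for p in parts[1:] if p.endswith("all")), None)
    return {"mechanisms": mechanisms, "all": policy}

# Example record: one authorized network, one included domain, hard fail for the rest.
record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all"
print(parse_spf(record))
# {'mechanisms': ['ip4:192.0.2.0/24', 'include:_spf.example.com'], 'all': '-all'}
```

The trailing qualifier is the policy for everyone else: “-all” asks receivers to fail unlisted senders outright, while “~all” only soft-fails them.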

DKIM: DomainKeys Identified Mail is a cryptographic method that uses a digital signature to sign and verify emails. This allows the receiver’s mailbox to verify that the email was sent by an authenticated user/owner of the domain. When an email is sent from a DKIM-configured domain, the sending server generates hashes of the message and signs them with a private key held only by the sender. The recipient recomputes the hashes over the received content and checks them against the signature (using the public key published in DNS), so the recipient can verify that the email was not manipulated or tampered with.
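The body-hash half of DKIM can be illustrated with the standard library: the signer hashes the canonicalized body and carries the result in the signature’s bh= tag, and the receiver recomputes it. This sketch skips canonicalization and the actual public-key signature:

```python
import base64
import hashlib

def body_hash(body: bytes) -> str:
    """Compute the base64 SHA-256 body hash, as carried in DKIM's bh= tag."""
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

sent = b"Hello, this is the message body.\r\n"
bh_at_signing = body_hash(sent)

# The verifier recomputes the hash over the received body; a mismatch
# means the body was altered in transit.
received = b"Hello, this is the message body.\r\n"
print(body_hash(received) == bh_at_signing)   # True

tampered = b"Hello, this is the MODIFIED body.\r\n"
print(body_hash(tampered) == bh_at_signing)   # False
```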

DMARC: Domain-based Message Authentication, Reporting and Conformance is an email standard that protects email senders and recipients from spam, spoofing, and phishing. DMARC indicates that an email is protected by SPF and DKIM; if either check fails to match the records, DMARC tells the receiver what to do with the message, such as quarantine or reject it. Configuring DMARC in DNS requires that SPF and DKIM are configured first.

Message ID: The Message ID is a unique identifier attached to each email received; every email has a unique Message ID.

E-mail header analysis has been used in criminal investigations to track down suspects and in civil litigation to prove the authenticity of emails. It’s also used by businesses to combat modern email attacks like email spoofing.

There are various tools available for email header analysis, however, free tools may have limited capabilities.

The post E-mail header analysis appeared first on Cybersecurity Insiders.


The global COVID-19 pandemic has left lasting effects on the workplace across all sectors. With so many people required to stay home, businesses in every field turned to remote work to open new possibilities for staying connected across distances. Now that the pandemic has largely subsided, many working environments have transitioned into a new hybrid workplace style. With this new approach to the office, employers and IT specialists have had to adapt to the increased risk of cybersecurity breaches within the company context. 

The first security measure businesses adopted during the pandemic was using VPNs that allowed employees to work remotely while still enjoying connectivity and security. Despite their popularity, however, VPN authentication can grant malicious third parties unrestricted network access and allow them to compromise an organization’s digital assets. 

To combat these vulnerabilities, organizations must consider establishing hybrid workplace network security. Investing in organizational cybersecurity means investing in the organization’s future; now, cybersecurity is as essential for the continuity and success of a business as the lock on its front door was once considered to be. 

This article will discuss types of network security breaches to watch out for. Then we will review practices you can adopt to establish hybrid workplace security and mitigate the risk of granting malicious third parties unrestricted network access.

Three types of hybrid network security breaches to watch out for

There are multiple potential gaps in every hybrid workplace network, including interpersonal communications, outdated software, and uninformed employees. Cybersecurity breaches at even a very small scale can grant hackers access to sensitive information, which could lead to the leakage of important data. 

This is a serious problem as, according to recent surveys, 45% of companies in the United States have been faced with data leakage in the past. With hybrid and remote workplaces becoming increasingly normal, workplace network security must become a priority. 

Here are three types of security breaches to watch out for. 

1. Phishing attacks

One type of cybersecurity attack is phishing. Phishing involves a hacker attempting to trick employees or co-workers into revealing sensitive information, granting access to protected files, or inadvertently downloading malicious software. 

Phishing is enacted by hackers who successfully adopt an employee’s personality, writing style, or company presence. According to recent statistics, 80% of breaches involve compromised identities, which can have a domino effect, leading to larger-scale company-wide cybersecurity breaches. 

2. Ransomware attacks

A second variety of cybersecurity breaches is ransomware. Ransomware is an attack where hackers encrypt files on a company’s network and demand payment to restore access. In other words, they gain private access to the workplace network and then essentially hold it hostage, demanding a “ransom” to prevent leaking any sensitive work data that might be stored there. 

Phishing can be used as an initial method of accessing a network so that hackers can then install ransomware. 

3. Man-in-the-Middle attacks

A third type of cybersecurity breach is a man-in-the-middle attack, where a hacker intercepts and alters communications between two parties to steal data or manipulate transactions. A man-in-the-middle attack can also be a type of phishing breach.  

Six practices to establish hybrid workplace security

The most effective overall approach to combating potential cyberattacks is establishing a comprehensive, multifaceted system of defenses. 

The combination of different approaches, such as widespread workplace cybersecurity education paired with awareness about making smart purchasing decisions, can shore up the defenses before an attack. Meanwhile, introducing specific preventive cybersecurity measures will guarantee a more robust cybersecurity structure across the workplace in case of a malicious incident.

 Here are six specific practices to establish hybrid workplace security. 

1. Choose trustworthy vendors

Part of running a business is working within a broader network of vendors, contractors, and clients. One way to establish cybersecurity from the outset is to carefully and thoroughly vet every business partner and vendor before working with them. Before signing a company-wide phone contract, for example, look for business phone services that come with features such as enhanced cyber protection and cyberattack insurance. 

When your business or employees request or send money online, they should use specific transfer sources as instructed. Employers should look for bank transfers that come with digital security encryption and protection against chargebacks to prevent breaches during the transaction. 

2. Adopt alternative remote access methods

Since breaches of company networks protected by VPNs are becoming increasingly common, seeking out alternative remote access methods is a good way to ensure the ongoing security of the workplace network. 

Software-defined perimeter, or SDP, uses a cloud-based approach so that each device can be easily synced across geographic barriers. A software-defined perimeter relies on identity authentication before connecting users and, as such, acts as a virtual barrier around every level of access. 

3. Introduce zero-trust network access (ZTNA)

Zero-trust network access means that every single request to access the company network, including all employee requests, must pass several layers of authentication before being granted. This way, all employees, both in-person and remote, will have to engage with the same advanced-level security protocols.  

Zero-trust network access also means that every device is analyzed and confirmed so hackers or bad actors attempting to impersonate an employee can be tracked and identified. 

4. Enact company-wide cybersecurity training programs

Create training documents that are easily accessible to both in-person and remote employees. 

Regular training on the latest cybersecurity protocols and procedures is an important way to maintain constant awareness of cybersecurity threats among your entire staff and establish clear and direct actions employees can take if they suspect they have been targeted by a bad actor. 

Since phishing is one of the top methods of cyberattacks in the workplace, the better informed that employees at every level of the company are, the more secure the workplace will be. 

5. Conduct regular cybersecurity tests

For hybrid companies, identifying potential vulnerabilities and weak spots in the cybersecurity system is key to preventing effective attacks.

Instruct the in-house IT team to conduct regular cybersecurity tests by launching false phishing campaigns and attempting to simulate other hacking strategies. If your hybrid business does not have an entire IT team, hire outside cybersecurity consultants to analyze the state of your company’s current cybersecurity defenses. 

IT experts should also be consulted to determine the best cybersecurity software for your business. All software and hardware should be updated regularly on every workplace device, and employees should be encouraged to update the software on their smartphones and other personal devices that might be used for work purposes. 

Since software updates contain the latest cybersecurity measures, they are essential to cyber risk management in the hybrid workplace. 

6. Install security software on all workplace devices 

In addition to the protection provided by personnel and alternative access networks, every workplace device should be equipped with adequate cybersecurity protective software. Installing a firewall on every workplace computer and tablet can protect the core of each hard drive from malware that may have been accidentally installed. 

A strong firewall can protect against any suspicious activity attempts within the company network. By providing a powerful firewall coupled with secure remote access methods, the entire workplace network should be secured from attempts at illicit access by cybercriminals with malicious intent. 

Data diodes are another viable method of securing the network; similar to software firewalls, data diodes work less like an identity barrier and more like a physical separator. While firewalls analyze and vet each incoming action request, data diodes function by separating distinct aspects of each electronic transaction or interaction. So even in case of a system failure, the main result would be a total lack of connectivity between parts, ensuring that cybercriminals would still be prevented from accessing company information. 

Final thoughts

Since a hybrid workplace encompasses both in-person and remote employees at the same time, hybrid companies face a unique set of challenges. Each cybersecurity policy must incorporate both types of employees, which can be difficult to enact across the board. 

To instill preventive measures that can thwart attempts at phishing, ransomware, malware, identity theft, and other malicious attacks, hybrid companies can boost their workplace training programs and install higher-level security software. These measures will help to prevent attacks and minimize damage in the case of a cybersecurity breach so that sensitive personal and company data will be protected no matter what. 

The post How to establish network security for your hybrid workplace appeared first on Cybersecurity Insiders.

AT&T Cybersecurity is committed to providing thought leadership to help you strategically plan for an evolving cybersecurity landscape. Our 2023 AT&T Cybersecurity Insights™ Report: Edge Ecosystem is now available. It describes the common characteristics of an edge computing environment, the top use cases and security trends, and key recommendations for strategic planning.

Get your free copy now.

This is the 12th edition of our vendor-neutral and forward-looking report. During the last four years, the annual AT&T Cybersecurity Insights Report has focused on edge migration. Past reports have documented how we

This year’s report reveals how the edge ecosystem is maturing along with our guidance on adapting and managing this new era of computing.

Watch the webcast to hear more about our findings.

The robust quantitative field survey reached 1,418 professionals in security, IT, application development, and line of business from around the world. The qualitative research tapped subject matter experts across the cybersecurity industry.

At the onset of our research, we set out to find the following:

  1. Momentum of edge computing in the market.
  2. Collaboration approaches to connecting and securing the edge ecosystem.
  3. Perceived risk and benefit of the common use cases in each industry surveyed.

The results focus on common edge use cases in seven vertical industries – healthcare, retail, finance, manufacturing, energy and utilities, transportation, and U.S. SLED – and deliver actionable advice for securing and connecting an edge ecosystem, including external trusted advisors. Finally, the report examines cybersecurity and the broader edge ecosystem of networking, service providers, and top use cases.

As with any piece of primary research, we found some surprising and some not-so-surprising answers to these three broad questions.

Edge computing has expanded, creating a new ecosystem

Because our survey focused on leaders who are using edge to solve business problems, the research revealed a set of common characteristics that respondents agreed define edge computing.

  • A distributed model of management, intelligence, and networks.
  • Applications, workloads, and hosting closer to users and digital assets that are generating or consuming the data, which can be on-premises and/or in the cloud.
  • Software-defined (which can mean the dominant use of private, public, or hybrid cloud environments; however, this does not rule out on-premises environments).

Understanding these common characteristics is essential as we move to an even more democratized version of computing, with an abundance of connected IoT devices that will process and deliver data with velocity, volume, and variety unlike anything we’ve previously seen.

Business is embracing the value of edge deployments

The primary use case in each industry we surveyed evolved from the previous year. This shows that businesses are seeing positive outcomes and continue to invest in new models enabled by edge computing.

Industry | 2022 Primary Use Case | 2023 Primary Use Case
Healthcare | Consumer Virtual Care | Tele-emergency Medical Services
Manufacturing | Video-based Quality Inspection | Smart Warehousing
Retail | Loss Prevention | Real-time Inventory Management
Energy and Utilities | Remote Control Operations | Intelligent Grid Management
Finance | Concierge Services | Real-time Fraud Protection
Transportation | n/a | Fleet Tracking
U.S. SLED | Public Safety and Enforcement | Building Management

A full 57% of survey respondents are in proof of concept, partial, or full implementation phases with their edge computing use cases.

One of the most pleasantly surprising findings is how organizations are investing in security for edge. We asked survey participants how they were allocating their budgets for the primary edge use cases across four areas – strategy and planning, network, security, and applications.

The results show that security is clearly an integral part of edge computing. This balanced investment strategy shows that the much-needed security for ephemeral edge applications is part of the broader plan.

Edge project budgets are notably nearly balanced across four key areas:

  • Network – 30%
  • Overall strategy and planning – 23%
  • Security – 22%
  • Applications – 22%

A robust partner ecosystem supports edge complexity

Across all industries, external trusted advisors are being called upon as critical extensions of the team. During the edge project planning phase, 64% are using an external partner; during the production phase, that number increases to 71%. These findings demonstrate that organizations are seeking help because the complexity of edge demands more than a do-it-yourself approach.

A surprise finding comes in the form of the changing attack surface and changing attack sophistication. Our data shows that DDoS (Distributed Denial of Service) attacks are now the top concern (when examining the data in the aggregate vs. by industry). Surprisingly, ransomware dropped to eighth place out of eight attack types.

The qualitative analysis points to an abundance of organizational spending on ransomware prevention over the past 24 months and enthusiasm for ransomware containment. However, ransomware criminals and their attacks are relentless. Additional qualitative analysis suggests cyber adversaries may be cycling different types of attacks. This is a worthwhile issue to discuss in your organization. What types of attacks concern your team the most?

Building resilience is critical for successful edge integration

Resilience is about adapting quickly to a changing situation. Together, resilience and security address risk, support business needs, and drive operational efficiency at each stage of the journey. As use cases evolve, resilience gains importance, and the competitive advantage that edge applications provide can be fine-tuned. Future evolution will involve more IoT devices, faster connectivity and networks, and holistic security tailored to hybrid environments.

Our research finds that organizations are fortifying and future-proofing their edge architectures and adding cyber resilience as a core pillar. Empirically, our research shows that as the number of edge use cases in production grows, there is a strong need and desire to increase protection for endpoints and data. For example, the use of endpoint detection and response grows by 12% as use cases go from ideation to full implementation.

Maturity in understanding edge use cases and what it takes to actively protect them is a journey that every organization will undertake.

Key takeaways

You may not realize you’ve already encountered edge computing – whether it is through a tele-medicine experience, finding available parking places in a public structure, or working in a smart building. Edge is bringing us to a digital-first world, rich with new and exciting possibilities.

By embracing edge computing, you’ll help your organization gain important, and often competitive business advantages. This report is designed to help you start and further the conversation. Use it to develop a strategic plan that includes these key development areas.

  • Start developing your edge computing profile. Work with internal line-of-business teams to understand use cases. Include key business partners and vendors to identify initiatives that impact security.
  • Develop an investment strategy. Bundle security investments with use case development. Evaluate investment allocation. The increased business opportunity of edge use cases should include a security budget.
  • Align resources with emerging security priorities. Use collaboration to expand expertise and lower resource costs. Consider creating edge computing use case experts who help the security team stay on top of emerging use cases.
  • Prepare for ongoing, dynamic response. Edge use cases rapidly evolve once they show value. Use cases require high-speed, low-latency networks as network functions and cybersecurity controls converge.

A special thanks to our contributors for their continued guidance on this report

A report of this scope and magnitude comes together through a collaborative effort of leaders in the cybersecurity market.

Thank you to our 2023 AT&T Cybersecurity Insights Report contributors!

To help start or advance the conversation about edge computing in your organization, use the infographic below as a guide.

Cybersecurity Infographic Insights Report

The post Securing the Edge Ecosystem Global Research released – Complimentary report available appeared first on Cybersecurity Insiders.

This is the first of a series of consultant-written blogs around PCI DSS.

Many organizations have multiple IAM schemes that they forget about when it comes to a robust compliance framework such as PCI DSS.

There are, at minimum, two schemes that need to be reviewed, but consider if you have more from this potential, and probably incomplete, list:

  • Cloud service master account management (E.g., AWS (Amazon Web Services), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI))
  • Name Service Registrars (E.g., GoDaddy, Network Solutions)
  • DNS service (E.g., Akamai, Cloudflare)
  • Certificate providers (E.g., Entrust, DigiCert)
  • IaaS (Infrastructure as a Service) and SaaS (Software as a Service) accounts (E.g., Digital Realty, Equinix, Splunk, USM Anywhere (USMA), Rapid7)
  • Servers and networking gear administrative account management (Firewalls, routers, VPN, WAF, load balancer, DDoS prevention, SIEM, database, Wi-Fi)
  • Internal user account management, (Active Directory, LDAP or equivalent, and third parties who may act as staff augmentation or maintenance and repair services, API accesses)
  • Consumer account management (often self-managed in a separate database, using a different set of encryption tools and privileges or capabilities from staff logins).
  • PCI DSS v4.0 expands the requirement to all system, automated access, credentialed testing, and API interfaces, so those need to be considered too.

Bottom line: in whatever fashion someone or something validates their authorization to use the device, service, or application, that authorization must be mapped to the role and privileges afforded to that actor. The goal is to ensure that each actor is provisioned with the least privilege needed to complete its intended function(s) and can be held accountable for its actions.
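In code terms, that mapping boils down to a deny-by-default lookup from actor to role to privileges. The sketch below is purely illustrative; the role names, privilege strings, and actors are made up for this example and are not prescribed by PCI DSS:

```python
# Hypothetical least-privilege mapping: each actor (human or automated)
# maps to one role, and each role to an explicit allow-list of privileges.
ROLE_PRIVILEGES = {
    "developer": {"read_code", "write_code", "read_test_data"},
    "dba": {"read_db", "write_db", "manage_schema"},
    "payment_api": {"submit_transaction"},  # automated actor gets its own role
}

ACTOR_ROLES = {
    "asmith": "developer",
    "svc-payments": "payment_api",
}

def is_authorized(actor: str, privilege: str) -> bool:
    """Deny by default: an actor holds only the privileges of its mapped role."""
    role = ACTOR_ROLES.get(actor)
    if role is None:
        return False
    return privilege in ROLE_PRIVILEGES.get(role, set())

print(is_authorized("asmith", "write_code"))  # True
print(is_authorized("asmith", "write_db"))    # False: outside the developer role
```

Because every decision funnels through one function, each grant is both bounded (least privilege) and attributable (the actor name can be logged alongside the decision).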

As many of the devices as possible should be integrated into a common schema, since having multiple devices with local-only admin accounts is a recipe for disaster.

If privilege escalation is possible from within an already-authenticated account, the mechanism by which that occurs must be thoroughly documented and monitored (logged) too.

PCI DSS Requirement 7 asks the assessor to review the roles and access privileges and groupings that individuals could be assigned to, and that those individuals are specifically authorized to have those access rights and roles. This covers both physical and logical access.

Requirement 9 asks specifically about business-based need and authorization for visitors gaining physical access to any sensitive areas. Frequent visitors such as janitors and HVAC maintenance must be remembered when writing policy and procedures and when conferring access rights for physical access.

Requirement 8 then asks the assessor to put together the roles, privileges, and assignments with actual current staff members, and to validate that the privileges those staff currently hold were authorized and match what was approved. This is one of the few forever requirements of PCI DSS, so if paperwork conferring and authorizing access for any individuals or automation has been lost, it must be re-created to show authorization of the current access rights and privileges.
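One way to support that validation is to diff the privileges each login currently holds (for example, exported from Active Directory) against what the authorization paperwork grants. A hedged sketch, with purely illustrative data:

```python
# Compare authorized privileges (from paperwork) against actual privileges
# (from a directory export) and report discrepancies per user.
def audit_privileges(authorized: dict, actual: dict) -> dict:
    """Return per-user findings: privileges held but never authorized, and
    authorized privileges not currently granted (possibly stale paperwork)."""
    findings = {}
    for user in set(authorized) | set(actual):
        granted = set(actual.get(user, ()))
        approved = set(authorized.get(user, ()))
        excess, missing = granted - approved, approved - granted
        if excess or missing:
            findings[user] = {"unauthorized": sorted(excess),
                              "missing": sorted(missing)}
    return findings

authorized = {"asmith": ["read_code", "write_code"]}
actual = {"asmith": ["read_code", "write_code", "prod_db_admin"]}
print(audit_privileges(authorized, actual))
# {'asmith': {'unauthorized': ['prod_db_admin'], 'missing': []}}
```

Run regularly, a report like this surfaces exactly the gaps an assessor will look for before the assessment does.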

PCI DSS v4.0 requires much more scrutiny of APIs – which are a growing aspect of application programming. The design engineers need to ensure that APIs and automated processes are given, or acquire, their own specific, unique, authorization credentials, and the interface has session control characteristics that are well-planned, documented, and managed using the same schema created for Requirement 7. Cross-session data pollution and/or capture must be prevented. If the API is distributed as a commercial off-the-shelf (COTS) product, it cannot have default credentials programmed in, but the installation process must ask for, or create and store appropriately, strong credentials for management and use.
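An install-time step along those lines might look like the following sketch. The file location, PBKDF2 parameters, and overall flow are assumptions for illustration, not requirements of the standard:

```python
# Illustrative installer step: create a strong, unique admin credential
# instead of shipping a default one, and store only a salted hash of it.
import hashlib
import json
import os
import secrets

def provision_admin_credential(path: str) -> str:
    """Generate a random admin secret at install time; persist only its hash."""
    token = secrets.token_urlsafe(32)            # ~256 bits of entropy
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 600_000)
    with open(path, "w") as f:
        json.dump({"salt": salt.hex(), "hash": digest.hex()}, f)
    os.chmod(path, 0o600)                        # restrict to the service owner
    return token  # shown once to the installer, never stored in the clear
```

The key property is that no two installations share a credential, and the product itself never contains one.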

Requirements 1 and 6 both impact role and privilege assignments as well, since the separation of duties between development and production, in both networking and code deployment, is becoming blurred in today’s DevSecOps and agile world. However, PCI’s standard remains strict and requires such separations, which challenges very small operations. The intent is that no one person (or login ID) should have end-to-end control of anything, and no one should be reviewing, QA’ing, or authorizing their own work. This might mean a small organization needs to contract one or more reviewers1 if one person does development and another does deployment.

Even in larger organizations, where developers sometimes need access to live production environments to diagnose specific failures, they must not use the same login ID as they use for development. An organization could choose asmith as the developer login ID and andys as the administrative login ID for the same person, to ensure privilege escalations are deliberately bounded and easily trackable (per Requirement 10). Also, no one should ever use elevated privileges to perform their day-to-day job; elevations should be used for point tasks and dropped as soon as they are no longer needed.

Next, third parties allowed into your cardholder data environment (CDE) – for maintenance purposes for instance – must always be specifically authorized to be there (physically or logically) and monitored while they are there. Most SIEM tools these days monitor everything indiscriminately, but PCI also says their access must be cut off as soon as it is no longer needed.

That might mean time-bounding their logical access, and it does mean escorting them while they are present. Staff must also be empowered and encouraged to challenge people with no badge, or no escort, and to escort them out of any sensitive area until their escort can be reunited with them. If your staff has access to customer premises where PCI-sensitive data is present, (either physically or logically) they must conduct themselves in like manner.
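Time-bounding logical access can be as simple as recording an explicit expiry per third-party account and denying authentication once it passes. A deny-by-default sketch with made-up account names:

```python
# Illustrative time-bounded access: each third-party account carries an
# explicit expiry; anything unlisted or expired is denied.
from datetime import datetime, timedelta, timezone

third_party_access = {
    "hvac-vendor": datetime.now(timezone.utc) + timedelta(hours=4),
}

def access_allowed(account: str) -> bool:
    """Deny by default; allow only unexpired, explicitly granted accounts."""
    expiry = third_party_access.get(account)
    return expiry is not None and datetime.now(timezone.utc) < expiry

print(access_allowed("hvac-vendor"))  # True within the 4-hour window
print(access_allowed("old-vendor"))   # False: never granted
```

The same idea maps onto directory tooling (account expiration dates) so that cut-off does not depend on someone remembering to disable the account.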

PCI DSS v4.0 also adds a requirement that any normally automated process that can be used interactively (e.g. for debugging) must log any of the interactive usage that occurs, with the appropriate individual’s attribution.
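A normally automated job can detect interactive use by checking whether it is attached to a terminal and, if so, record who is driving it. A minimal sketch; the record format and logger configuration are assumptions:

```python
# Sketch: a normally automated job that detects interactive (debug) use
# and logs it with the invoking individual's attribution.
import getpass
import logging
import sys

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def attribution_record():
    """Return an attribution record when run interactively, else None."""
    if not sys.stdin.isatty():
        return None  # normal automated invocation: nothing extra to log
    return {"user": getpass.getuser(), "argv": list(sys.argv)}

record = attribution_record()
if record is not None:
    logging.info("INTERACTIVE use: user=%s argv=%s",
                 record["user"], record["argv"])
# ... the job's normal automated processing would continue here ...
```

Scheduled runs produce no extra entries, while a human stepping in at a terminal leaves an attributed trail.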

Lastly, PCI DSS 4.0 adds credentialed testing using high access privileges for requirement 11 (although not necessarily administrative privilege), which requires those credentials to be designed into the overall requirement 7 schema and subjected to the requirement 8 restrictions and constraints.

1Reviewers are secure-code reviewers and security-trained functional QA staff.

The post Identity and Access Management (IAM) in Payment Card Industry (PCI) Data Security Standard (DSS) environments. appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

If cyber threats feel like faceless intruders, you’re only considering a fraction of the risk. Insider threats pose a challenge for organizations, often catching them by surprise as they focus on securing the perimeter.

There is a bright side, however. Understanding the threat landscape and developing a security plan will help you to mitigate risk and prevent cyber incidents. When designing your strategy, be sure to account for insider threats.

What is an insider threat?

Perhaps unsurprisingly, insider threats are threats that come from within your organization. Rather than bad actors from the outside infiltrating your network or systems, these risks refer to those initiated by someone within your organization – purposefully or as a result of human error.

There are three classifications of insider threats:

  • Malicious insider threats are those perpetrated purposefully by someone with access to your systems. This may include a disgruntled employee, a scorned former employee, or a third-party partner or contractor who has been granted permissions on your network.
  • Negligent insider threats are often a matter of human error. Employees who click on malware links in an email or download a compromised file are responsible for these threats.
  • Unsuspecting insider threats technically come from the outside. Yet, they rely on insiders’ naivety to succeed. For example, an employee whose login credentials are stolen or who leaves their computer unguarded may be a victim of this type of threat.

Keys to identifying insider threats

Once you know what types of threats exist, you must know how to detect them to mitigate the risk or address compromises as quickly as possible. Here are four key ways to identify insider threats:

Monitor

Third parties are the risk outliers that, unfortunately, lead to data compromise all too often. Monitoring and controlling third-party access is crucial to identifying insider threats, as contractors and partners with access to your networks can quickly become doorways to your data.

Consider monitoring employee access as well. Security cameras and keystroke logging are methods some companies may choose to monitor movement and usage, though they may not suit every organization.

Audit

Pivotal to risk mitigation – for insider threats or those outside your network – is an ongoing auditing process. Regular audits will help you understand typical behavior patterns and identify anomalies should they arise. Automated audits can run based on your parameters and schedule without much intervention from SecOps. Manual audits are also valuable for ad hoc reviews of multiple or disparate systems.

Report

A risk-aware culture is based on ongoing communication about threats, risks, and what to do should issues arise. It also means establishing a straightforward process for whistleblowing. SecOps, try as they might, cannot always be everywhere. Get the support of your employees by making it clear what to look out for and where to report any questionable activity they notice. Employees can also conduct self-audits with SecOps’ guidance to assess their risk level.

Best practices for prevention

Prevention of insider threats relies on a few key aspects. Here are some best practices to prevent threats:

Use MFA

The low-hanging fruit in security is establishing strong authentication methods and defining clear password practices. Enforce strong, unique passwords, and ensure users must change them regularly. Multifactor authentication (MFA) will protect your network and systems if a user ID or password is stolen or compromised.
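As an illustration of how TOTP-based MFA verifies a code, here is an RFC 6238 implementation using only the Python standard library; the secret and expected code are the RFC's published test values, not real credentials:

```python
# RFC 6238 TOTP: the algorithm behind most authenticator apps.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute the TOTP code: HOTP (RFC 4226) over a 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

The server compares the computed code against the one the user submits, typically allowing one step of clock drift in either direction.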

Screen candidates and new hires

Granted, bad actors have to start somewhere, so screening and background checks do not eliminate every threat. Still, it’s helpful to have processes in place to screen new hires, so you know to whom you’re granting access to your systems. Depending on the nature of the relationship, this best practice may also apply to third-party partners, contractors, and vendors.

Define roles and access

This may seem obvious to some, yet it’s often overlooked. Each user or user group in your organization should have clearly defined roles and access privileges relevant to their needs. For example, your valuable data is left exposed if entry-level employees have carte blanche across your network. Ensure roles and access levels are well-defined and upheld.

Have a straightforward onboarding and offboarding process

Most organizations have a clear and structured onboarding process for registering and bringing users online. Your onboarding process should include clear guidelines for network usage, an understanding of what will happen in the case of a data compromise (deliberate or accidental), where to report issues, and other security measures.

Just as important as onboarding – if not more so – is the offboarding process. Languishing user accounts pose a major security risk: they lie dormant and unmonitored, and no one in the organization will notice if such an account is being used. Ensure swift decommissioning of user accounts when employees leave the organization.
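Detecting languishing accounts can be automated by comparing each account's last login against an idle threshold. The 90-day cutoff and account names below are illustrative assumptions:

```python
# Flag accounts whose last login is older than an idle threshold, as a
# backstop for offboarding that was missed or never triggered.
from datetime import datetime, timedelta, timezone

def dormant_accounts(last_login: dict, max_idle_days: int = 90) -> list:
    """Return account names idle longer than the threshold, sorted."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return sorted(u for u, seen in last_login.items() if seen < cutoff)

now = datetime.now(timezone.utc)
logins = {"active-user": now - timedelta(days=3),
          "former-employee": now - timedelta(days=200)}
print(dormant_accounts(logins))  # ['former-employee']
```

A scheduled run of a check like this turns "no one will notice" into a weekly report someone must act on.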

Secure infrastructure

Apply strict access controls to all physical and digital access points across your organization. Use least privileged access to limit accessibility, as recommended above. Opt for stronger verification measures, including PKI cards or biometrics, particularly in more sensitive business areas. Secure desktops and install gateways to protect your environment from nodes to the perimeter.

Establish governance procedures

Security requires everyone’s participation, yet organizations need buy-in from key leadership team members and a nominated person or team to hold the reins. Establishing a governance team and well-defined procedures will ensure attention to security risks at all times and save valuable time should a breach occur.

The tools of the trade

“Organizations must be able to address the risks from malicious insiders who intentionally steal sensitive data for personal reasons as well as users who can accidentally expose information due to negligence or simple mistakes.”

Thankfully, you don’t have to do it all alone. With a data-aware insider threat protection solution, you can rest with the peace of mind that you – and your network – are safe.

The post How Can You Identify and Prevent Insider Threats? appeared first on Cybersecurity Insiders.

Going to RSA next week? If you don’t know, it’s a huge cybersecurity conference held at Moscone Center in San Francisco, CA. If you’re going, please stop by the AT&T Cybersecurity booth and check us out. It’s at #6245 in the North Hall. Remember to bring a picture ID for RSA check-in, otherwise you’ll have to go back to your hotel and get it.

The RSA theme this year is “Stronger Together” which sounds like a great plan to me!

The details

So, the details: AT&T Cybersecurity will be at RSA Conference 2023 (San Francisco, April 24-27), in booth 6245 in the North Hall. We’ll have a 10’ digital wall, four demo stations, and a mini theatre for presentations.

What can you expect to see in the AT&T Cybersecurity booth?

The AT&T Cybersecurity booth will be a hub of activity with demo stations, presentations, and other social networking activities. Our goal is to help you address macro challenges in your organization such as:

  • Pro-active and effective threat detection and response
  • Modernizing network security
  • Protecting web applications and APIs
  • Engaging expert guidance on cybersecurity challenges

Demo stations

Come check out our four demo stations that will provide you an opportunity to meet and talk with AT&T Cybersecurity pros. Our demos are highlighting:

  • Managed XDR
  • Network Modernization
  • Web Application and API Security (WAAP)
  • AT&T Cybersecurity Consulting

In-booth mini-theatre

The AT&T Cybersecurity booth includes a mini-theater where you can relax and enjoy presentations every 15 minutes, plus pick up one of our limited-edition AT&T Cybersecurity mini-backpacks for all of your RSA memorabilia.

Join us for presentations about:

  • 2023 AT&T Cybersecurity Insights Report: Edge Ecosystem

Hot off the press for RSA, the 2023 AT&T Cybersecurity Insights Report is our annual thought leadership research. Learn how seven industries are using edge computing for competitive business advantages, what the perceived risks are, and how security is an integral part of the next generation of computing.

  • The Endpoint Revolution

Understand today’s “endpoint revolution” and the multi-layered preventative and detective controls that should be implemented to secure your organization.

  • Modernizing Network Security

Learn more about the modernization of enterprise security architectures and consolidation of multiple security controls, including those crucial to supporting hybrid work and the migration of apps and data to cloud services.

  • Alien Labs Threat Intelligence

Learn how the AT&T Alien Labs threat intelligence team curates intelligence based on global visibility of indicators of compromise into threats and tactics, techniques, and procedures of cybercriminals.

  • Next Generation Web Application and API Protection (WAAP) Security

Learn how WAAP is expanding to include additional features and how a service provider can help guide you to the right solution. The WAAP market is diverse and includes DDoS protection, bot management, web application protection, and API security.

  • Empowering the SOC with Next Generation Tools

Learn how a new era of operations in security and networking is creating more efficiency in the SOC.

Events

Monday, April 24

2023 AT&T Cybersecurity Insights Report: Edge Ecosystem

Report launch – attend a mini-theater presentation for your copy 

Monday, April 24

Cloud Security Alliance Panel: 8:00 AM – 3:00 PM Pacific Moscone South 301-304
Featuring AT&T Cybersecurity’s Scott Scheppers discussing cybersecurity employee recruitment and retention.

Cloud Security Alliance Mission Critical summit RSAC 2023
(Open to RSA registrants) – All Day

Wednesday, April 26

Happy Hour at the AT&T Booth N6245: 4:30 – 6:00 PM Pacific

Join us for networking and refreshments after a long day at the conference.

Wednesday, April 26

Partner Perspectives Track Session: 2:25 – 3:15 PM Pacific Moscone South 155
Cutting Through the Noise of XDR – Are Service Providers an Answer? Presented by AT&T Cybersecurity’s Rakesh Shah

As you can see, we have an exciting RSA week planned! We look forward to seeing and meeting everyone at the conference!

The post Get ready for RSA 2023: Stronger Together appeared first on Cybersecurity Insiders.

By Pat McGarry, CTO of ThreatBlockr

There are two indisputable facts about the cybersecurity industry right now. One, we are still in the middle of a massive staffing crisis. Two, one of the biggest drivers of this staffing crisis is burnout of security professionals.

A recent study indicates up to 84% of cybersecurity professionals are experiencing burnout. Personally, I was surprised that number wasn’t closer to 100, given what these men and women face on a day-to-day basis.

The past three years have been the gift that keeps on giving to threat actors. Threat surfaces widened with the rise of remote and hybrid work, networks became more vulnerable, and breaches became big business on the dark web.

The technologies we deploy to protect our data have been overwhelmed by a flood of malicious traffic and security teams are forced to respond to more and more alerts from more and more tools, worried that one misstep could result in disaster. Security professionals are not set up for success, which explains why there are 3.4 million cybersecurity roles unfilled worldwide. This is unsustainable.

We can’t keep throwing more of the same kinds of security technologies onto our networks and expecting different results. Threat Blocking-as-a-Service (TBaaS) gets you different results.

Instead of chasing after ever-changing attacks and threats, TBaaS focuses on known threat actors. This model blocks traffic entering the network as well as calls and traffic back out, all autonomously. Importantly, this type of enforcement can only be accomplished by leveraging massive amounts of cyber intelligence to get the clearest picture possible of who the threat actors attacking our networks, users, and data are.

The impact of TBaaS on networks and their security teams is felt instantly. We know that 30-50% of the traffic hitting a security stack comes from IP addresses of known threat actors. Blocking it yields an immediate improvement in your security posture while providing a significant boon to the performance of the rest of the security stack. This also eases the pressure on security teams significantly.
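Mechanically, blocking known threat actors reduces to matching each connection's source address against intelligence-derived network blocklists. A minimal sketch using Python's standard library; the blocklisted CIDRs are documentation ranges, not real intelligence:

```python
# Match inbound source addresses against known-threat-actor CIDR blocks.
import ipaddress

# Illustrative blocklist (RFC 5737 documentation ranges, not real intel).
BLOCKLIST = [ipaddress.ip_network(c)
             for c in ("203.0.113.0/24", "198.51.100.0/25")]

def is_blocked(ip: str) -> bool:
    """Return True when the source falls inside any blocklisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKLIST)

print(is_blocked("203.0.113.77"))  # True: inside a blocklisted /24
print(is_blocked("192.0.2.10"))    # False: not on any list
```

Production systems do this lookup at line rate with specialized data structures, but the decision itself is exactly this simple: known-bad source, dropped packet.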

The idea of TBaaS – using cyber intelligence to block known threat actors from entering or exiting the network – is so simple that people assume their security stack technologies are already doing that. Unfortunately, without TBaaS, they aren’t. Threat Blocking-as-a-Service stands on five pillars that make it effective:

  • Visibility
  • Defense
  • Risk management
  • Consolidation
  • Budget

Every other tool in the modern security stack might offer one, two, or maybe three of these pillars, but TBaaS is the only one that combines all of them. Let’s dive into why this holistic approach makes such a difference.

Visibility

The threats coming in and out of our networks are constantly changing. Where we patch for one type of attack, threat actors deftly evolve new ones, each time adding layers of obfuscation and complexity. Most of these threat actors are well-funded – often by nation-states – which is of course why they have the resources to inflict such harm and adapt their methods so rapidly.

However, the constant in this discussion is not the “what” of the attacks but rather the “who.” Who is sending these attacks? And where are they?

The cyber intelligence community comprises government, open source, and private enterprises that research answers to those two pivotal questions. The TBaaS model is based on the idea of “the more intelligence the better” and ingests intelligence feeds and lists from anywhere, with up-to-the-minute updates. This provides as much visibility as possible into the threat landscape, which in turn allows for significant network, user, and data protection.
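Ingesting feeds and lists "from anywhere" implies normalizing and deduplicating them into a single indicator set. A toy sketch with made-up indicators:

```python
# Merge several threat intel feeds into one deduplicated indicator set.
def merge_feeds(*feeds):
    """Union many feeds, normalizing whitespace and case, dropping blanks."""
    merged = set()
    for feed in feeds:
        merged.update(i.strip().lower() for i in feed if i.strip())
    return merged

gov_feed = ["203.0.113.7", "evil.example"]
commercial_feed = ["EVIL.example", "198.51.100.9 "]
print(sorted(merge_feeds(gov_feed, commercial_feed)))
# ['198.51.100.9', '203.0.113.7', 'evil.example']
```

Real pipelines add indicator typing, confidence scoring, and expiry, but the normalize-then-union core is the same.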

Defense

Currently, the majority of threat intelligence is leveraged in the “detect/respond/recover” functions of a security stack. Make no mistake: utilizing threat intelligence in this space is essential. However, failing to leverage the full power of threat intelligence ahead of a breach has left systems open to breaches. As such, TBaaS is very much a “left of boom” technology.

Utilizing massive amounts of cyber intelligence to block traffic to and from known threat actors is the true defense for any network, and the second pillar of TBaaS.

Risk management

One of the most pivotal concepts in cybersecurity is redundancy: we create overlapping protections so one piece’s failure doesn’t mean system failure. For decades, however, the “identify and protect” piece has been filled by one single technology: the firewall. Firewalls were never built to handle the volume of traffic now thrown at them, nor the amount of encrypted traffic they must parse.

The TBaaS model instead welcomes other tools and technologies, but also reduces risk by creating a true protection model.

Consolidation

No matter how great all your technologies are, if they aren’t talking to each other you’re headed for disaster. Another pillar of TBaaS is the consolidation of information: not just ingesting and acting on cyber intelligence, but also feeding its own actions and logs into the rest of the security stack to utilize. This type of data consolidation can reduce multiple alerts as well as aid in the “detect/respond/recover” phases if an unknown threat makes its way into the network.

Budget

One of my colleagues loves to ask people when making cybersecurity budget decisions: what is your budget for ransom? Because the truth is, unless you’re actively blocking known threat actors, it’s not a matter of if a breach happens, but when, and how often.

Cybersecurity budgets are tight, which is why another pillar of the TBaaS model is budgetary value. Of course, the solution itself should be affordable, but it also alleviates other issues causing budget headaches.

  • Autonomous. It operates and updates without staff having to monitor it, reducing the strain on the security team.
  • Reduced known-bad traffic. Blocking known-bad traffic before it reaches the security stack optimizes the performance of the rest of the stack.
  • Fewer alerts. This relieves the burden placed on expensive in-house cybersecurity staff and helps avoid alert fatigue.

Clearly, what we’re doing as an industry isn’t working very well. Threat Blocking-as-a-Service is a paradigm shift in the industry to solve that conundrum. Sometimes it’s the simplest solutions that we can’t believe we weren’t already doing. By focusing on stopping the threat actors, by definition you stop all of the threats they present. That is Threat Blocking-as-a-Service.

The post Pillars of Threat Blocking-as-a-Service appeared first on Cybersecurity Insiders.