We’re pleased to announce the availability of the 2023 AT&T Cybersecurity Insights Report™: Focus on State and Local Government and Higher Education in the United States (US SLED). It looks at the edge ecosystem, surveying US SLED leaders, and provides benchmarks for assessing your edge computing plans. This is the 12th edition of our vendor-neutral and forward-looking report. Last year’s Focus on US SLED report documented trends in securing the data, applications, and endpoints that rely on edge computing (get the 2022 report).

Get the complimentary 2023 report.

The robust quantitative field survey reached 1,418 security, IT, application development, and line of business professionals worldwide. The qualitative research tapped subject matter experts across the cybersecurity industry.

At the outset of our research, we set out to examine the following:

  • The momentum edge computing has in the market.
  • Approaches to connecting and securing the edge ecosystem – including the role of trusted advisors in achieving edge goals.
  • The perceived risk and perceived benefit of the common use cases in each industry surveyed.

The results focus on common edge use cases in seven vertical industries – healthcare, retail, finance, manufacturing, energy and utilities, transportation, and US SLED – delivering actionable advice for securing and connecting an edge ecosystem, including working with external trusted advisors. Finally, it examines cybersecurity and the broader edge ecosystem of networking, service providers, and top use cases. For this Focus on US SLED, 178 respondents represented the vertical.

The role of IT is shifting, embracing stakeholders at the ideation phase of development.

Edge computing is a transformative technology that brings together various stakeholders and aligns their interests to drive integrated outcomes. The emergence of edge computing has been fueled by a generation of visionaries who grew up in the era of smartphones and limitless possibilities. Look at the infographic below for a topline summary of key findings.

In this paradigm, the role of IT has shifted from being the sole leader to a collaborative partner in delivering innovative edge computing solutions. In addition, we found that US SLED leaders are budgeting differently for edge use cases. These two things, along with an expanded approach to securing edge computing, were prioritized by our respondents in the 2023 AT&T Cybersecurity Insights Report: Edge Ecosystem.

In 2023, US SLED respondents’ primary edge use case is building management, which involves hosted HVAC applications, electricity and utility monitoring applications, and various sensors for large buildings. This is just the beginning of the evolution in the public sector to increase the value of public investments so that every dollar goes a bit further. In higher education, edge use cases include immersive and interactive learning and solutions, such as real-time feedback, that help faculty be more accessible.

Edge computing brings the data closer to where decisions are made.

With edge computing, the intelligence required to make decisions, the networks used to capture and transmit data, and the use case management are distributed. Because nothing is backhauled to a central processing area such as a data center, this distribution means things work faster and delivers a near-real-time experience.

With this level of complexity, it’s common to re-evaluate decisions regarding security, data storage, or networking. The report shares the trends emerging as US SLED embraces edge computing. One area examined is expense allocation, and what we found may surprise you. The research reveals the allocation of investments across overall strategy and planning, network, application, and security for the anticipated use cases that organizations plan to implement within the next three years.

How to prepare for securing your edge ecosystem.

Develop your edge computing profile. It is essential to break down the barriers that typically separate the internal line of business teams, application development teams, network teams, and security teams. Technology decisions should not be made in isolation but rather through collaboration with a diverse group of stakeholders. Understanding the capabilities and limitations of all stakeholders makes it easier to identify gaps in evolving project plans.

The edge ecosystem is expanding, and expertise is available to offer solutions that address cost, implementation, risk mitigation, and more. Including expertise from the broader SLED edge ecosystem increases the chances of outstanding performance and alignment with organizational goals.

Develop an investment strategy. During edge use case development, organizations should carefully determine where and how much to invest. Think of it as part of monetizing the use case. Building security into the use case from the start allows the organization to consider security as part of the overall project cost. It’s important to note that no one-size-fits-all solution can provide complete protection for all aspects of edge computing. Instead, organizations should consider a comprehensive and multi-layered approach to address the unique security challenges of each use case.

Increase your compliance capabilities. Regulations in the public sector and for education can vary significantly. This underscores the importance of not relying solely on a checkbox approach or conducting annual reviews to help ensure compliance with the growing number of regulations. Keeping up with technology-related mandates and helping to ensure compliance requires ongoing effort and expertise. If navigating compliance requirements is not within your organization’s expertise, seeking outside help from professionals specializing in this area is advisable.

Align resources with emerging priorities. External collaboration allows organizations to utilize expertise and reduce resource costs. It goes beyond relying solely on internal teams within the organization. It involves tapping into the expanding ecosystem of edge computing experts who offer strategic and practical guidance. Engaging external subject matter experts (SMEs) to enhance decision-making can help prevent costly mistakes and accelerate deployment. These external experts can help optimize use case implementation, ultimately saving time and resources.

Build in resilience. Consider approaching edge computing with a layered mindset. Take the time to ideate on various “what-if” scenarios and anticipate potential challenges. For example, what measures exist if a private 5G network experiences an outage? Can data remain secure when utilizing a public 4G network? How can business-as-usual operations continue in the event of a ransomware attack?

Successful SLED edge computing implementations require a holistic approach encompassing collaboration, compliance, resilience, and adaptability. By considering these factors and proactively engaging with the expertise available, organizations can unlock the full potential of edge computing to deliver improved outcomes, operational efficiency, and cost-effectiveness.

The post Get the AT&T Cybersecurity Insights Report: Focus on US SLED appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

APIs, formally known as application programming interfaces, occupy a significant position in modern software development. They have revolutionized how web applications work by enabling applications, containers, and microservices to exchange data and information smoothly. Developers can link APIs with multiple software systems or other internal systems, helping businesses interact with their clients and make informed decisions.

Despite the countless benefits, hackers can exploit vulnerabilities within APIs to gain unauthorized access to sensitive data, resulting in data breaches, financial losses, and reputational damage. Therefore, businesses need to understand the API security threat landscape and the best ways to mitigate those threats.

The urgent need to enhance API security 

APIs enable data exchanges among applications and systems and help in the seamless execution of complex tasks. But as the average number of APIs rises, organizations often overlook their vulnerabilities, making them a prime target for hackers. The State of API Security Q1 2023 report found that attacks targeting APIs had increased 400% during the preceding six months.

Security vulnerabilities within APIs compromise critical systems, resulting in unauthorized access and data breaches like the Twitter and Optus API breaches. Cybercriminals can exploit these vulnerabilities to launch various attacks such as authentication attacks, distributed denial-of-service (DDoS) attacks, and malware attacks. API security has emerged as a significant business issue, as another report predicted that by 2023, API abuses would become the most frequent attack vector causing data breaches and that 50% of data theft incidents would stem from insecure APIs. As a result, API security has become a top priority for organizations to safeguard their data; insecure APIs may cost businesses as much as $75 billion annually.

Why does API security still pose a threat in 2023?

Securing APIs has always been a daunting task for most organizations, mainly because of misconfigurations within APIs and the rise in cloud data breaches. As the security landscape has evolved, API sprawl has become the top threat to API security. API sprawl is the uncontrolled proliferation of APIs across an organization and is a common problem for enterprises with multiple applications, services, and development teams.

As more APIs are created, the attack surface expands and becomes an attractive target for hackers. The issue is that APIs are not always designed with security standards in mind. This leads to a lack of authorization and authentication, exposing sensitive data like personally identifiable information (PII) or other business data.

API sprawl produces shadow and zombie APIs that further threaten API security. A zombie API is an exposed, abandoned, outdated, or forgotten API that expands the API security threat landscape. These APIs proved helpful at some point but were later replaced by newer versions. As organizations work on building new products or features, they leave existing APIs to wander unattended in the application environment, allowing threat actors to penetrate a vulnerable API and access sensitive data.

By contrast, shadow APIs are third-party APIs developed without proper oversight that remain untracked and undocumented. Enterprises that fail to protect against shadow APIs face reliability issues, unwanted data loss, penalties for non-compliance, and increased operational costs.

Moreover, the emergence of new technologies like the Internet of Things (IoT) has introduced more difficulty in maintaining API security. With more devices connected to the internet that can be accessed remotely, any inadequate security measures can lead to unauthorized access and potential data breaches. In addition, generative AI algorithms can pose security challenges. Hackers can use AI algorithms to detect the vulnerabilities within the APIs and launch targeted attacks.

Best practices to improve API security amid rising threats

API security has become a critical concern for organizations and requires a holistic cybersecurity approach to mitigate the threats and vulnerabilities. Developers and security teams must come forward and collaborate to implement the best practices like the ones mentioned below to improve API security:

Discover all the APIs

API discovery is crucial in uncovering modern API security threats like zombie and shadow APIs. Security teams are trained to protect mission-critical APIs, but discovering internal, external, and third-party APIs is also vital to enhancing API security. Organizations should invest in automated API discovery tools that detect every API endpoint and provide visibility into which APIs are live, where they are located, and how they function.

Developers should also monitor the API traffic by integrating API gateways and proxies that may indicate the presence of shadow APIs. In addition, creating policies that define how the APIs are documented, used, and managed further helps locate unknown or vulnerable APIs.
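As a hedged illustration of the log-monitoring idea above, the sketch below compares endpoints observed in gateway access logs against a documented inventory to surface possible shadow APIs. The log format, file name, and endpoint list are assumptions made for the example, not a prescribed implementation.

```python
# Hedged sketch: diff endpoints seen in API gateway access logs against a
# documented inventory to surface potential shadow APIs. The combined-log-style
# request line and the file/endpoint names below are illustrative assumptions.
import re

DOCUMENTED_ENDPOINTS = {"/api/v1/orders", "/api/v1/customers", "/api/v1/invoices"}
LOG_LINE = re.compile(r'"(?P<method>GET|POST|PUT|PATCH|DELETE) (?P<path>\S+)')

def find_shadow_endpoints(log_path: str) -> set[str]:
    observed = set()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            m = LOG_LINE.search(line)
            if m:
                # Strip query strings so /api/v1/orders?id=1 matches /api/v1/orders.
                observed.add(m.group("path").split("?")[0])
    return observed - DOCUMENTED_ENDPOINTS

if __name__ == "__main__":
    for endpoint in sorted(find_shadow_endpoints("gateway_access.log")):
        print("Undocumented endpoint observed:", endpoint)
```

Anything the sketch flags would then be triaged: documented, secured, or retired.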

Assess all APIs via testing

As API security threats become more prevalent, security teams can’t rely on common testing methods. They need to adopt advanced security testing methods like SAST (static application security testing). SAST is a white-box testing method that identifies vulnerabilities and remediates security flaws within the source code. Providing immediate feedback to developers allows them to write secure code, which ultimately leads to secure applications. However, as this testing cannot detect vulnerabilities outside the code, security teams should consider complementing it with other security testing tools like DAST, IAST, or XDR to improve security standards.

Adopt a Zero Trust security framework

A Zero Trust framework assumes that no user, device, or API call is inherently trustworthy. Users must authenticate and be authorized before accessing data, which plays a vital role in reducing the attack surface. In addition, by leveraging Zero Trust architecture (ZTA), APIs can be segmented into smaller units, each with its own set of authentication, authorization, and security policies. This gives security architects more control over API access and enhances API security.
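The sketch below illustrates the per-request authentication and authorization idea in a minimal form, assuming the PyJWT library and an HS256 shared secret; the secret, paths, and scopes are illustrative only and not a recommended production setup.

```python
# Minimal per-request verification sketch for a segmented API, assuming the
# PyJWT library (pip install PyJWT). Key handling, paths, and scopes here are
# illustrative assumptions, not a production configuration.
import jwt  # PyJWT

SECRET = "replace-with-a-managed-secret"
REQUIRED_SCOPE = {"/billing": "billing:read", "/admin": "admin:write"}

def authorize_request(path: str, token: str) -> bool:
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # unauthenticated callers are rejected by default
    needed = REQUIRED_SCOPE.get(path)
    return needed is not None and needed in claims.get("scope", "").split()

if __name__ == "__main__":
    demo_token = jwt.encode({"sub": "svc-invoicing", "scope": "billing:read"},
                            SECRET, algorithm="HS256")
    print(authorize_request("/billing", demo_token))  # True
    print(authorize_request("/admin", demo_token))    # False
```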

API posture management

API posture management is another great way to help organizations detect, monitor, and minimize potential security threats from vulnerable APIs. Posture management tools continuously monitor APIs and notify teams about suspicious or unauthorized activities. This enables organizations to respond promptly to API security threats and reduce the attack surface.

These tools also perform regular vulnerability assessments that scan APIs for security flaws, allowing organizations to take measures to strengthen API security. Besides this, these tools provide API auditing capabilities and help ensure compliance with leading industry regulations such as HIPAA or GDPR, as well as internal policies, to maintain transparency and maximize overall security standards.

Implementing API threat prevention

Improving API security is an ongoing task, and threats can still emerge no matter how strong monitoring and security policies are. This raises the need to implement proactive API threat prevention measures that identify and mitigate potential API threats before they adversely impact the business.

API threat prevention includes using specialized security solutions and techniques like threat modeling, behavioral analysis, vulnerability scanning, incident response, and reporting. Also, through continuous monitoring, enforcing encryption and authentication mechanisms, and applying API rate limits, organizations can avoid data breaches and ensure uninterrupted business operations.
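As one small, hedged example of the rate-limiting measure mentioned above, here is a minimal fixed-window limiter; real deployments would typically keep this state in a shared store such as Redis and enforce it at the gateway.

```python
# Illustrative fixed-window rate limiter for API requests; in production this
# state would live in a shared store (e.g., Redis) rather than process memory.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # per client per window

_counters: dict[tuple[str, int], int] = defaultdict(int)

def allow_request(client_id: str) -> bool:
    window = int(time.time()) // WINDOW_SECONDS
    key = (client_id, window)
    _counters[key] += 1
    return _counters[key] <= MAX_REQUESTS

if __name__ == "__main__":
    for _ in range(105):
        allowed = allow_request("client-123")
    print("last request allowed?", allowed)  # False once the window budget is spent
```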

Final thoughts

With the rise in API adoption, organizations face significant challenges in securing APIs against malicious actors whose attacks can result in unauthorized access and potential data breaches. Therefore, ensuring API security is a core responsibility of every developer. This can be achieved by following practices like discovering all APIs, performing security testing, deploying a Zero Trust approach, using API posture management tools, and adopting API threat prevention measures. By following these practices, security teams can reduce the API threat surface, ensure that all APIs are secure, and stay compliant with industry standards.

The post Why is API security the next big thing in Cybersecurity? appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

The installation of Active Directory (AD) on Windows Server 2019 calls for a thorough understanding of technical nuances and a steadfast dedication to security best practices. This guide will walk you through the process of securely implementing Active Directory, ensuring the highest level of protection for the information and resources within your company.

Planning and design

Start by carefully planning and designing. Analyze your organization’s business requirements, network topology, and security needs in great detail. Establish the necessary number of organizational units (OUs), domains, and user and group structures. Create a thorough design plan that complies with your organization’s compliance standards and security guidelines.

Installing Windows Server 2019

Install Windows Server 2019 on a dedicated system that meets the minimum system requirements. Use the most recent Windows Server 2019 ISO and adhere to recommended procedures for a secure installation. Set a strong password for the Administrator account and enable Secure Boot in the BIOS/UEFI settings if the hardware supports it.

Choose the right deployment type

Select the domain controller (DC) installation as the Active Directory deployment type. By doing this, you can be confident that your server is a dedicated domain controller overseeing your domain’s directory services, authentication, and security policies.

Install Active Directory Domain Services (AD DS) role

Add the Active Directory Domain Services (AD DS) role to Windows Server 2019. For the installation, use Server Manager or PowerShell. Select the appropriate forest and domain functional levels during the procedure and specify the server as a domain controller.

Choose an appropriate Forest Functional Level (FFL)

Select the highest Forest Functional Level (FFL) compatible with your domain controllers. This enables access to the most recent AD features and security upgrades. Examine the FFL specifications and confirm that every domain controller currently in use can support the selected level.

Secure DNS configuration

AD heavily relies on DNS for name resolution and service location. Ensure that DNS is configured securely by:

a. Using Active Directory Integrated Zones for DNS storage, enabling secure updates and zone replication through AD.

b. Implementing DNSSEC to protect against DNS data tampering and for secure zone signing.

c. Restricting zone transfers to authorized servers only, preventing unauthorized access to DNS data.

d. Implementing DNS monitoring and logging for suspicious activities using tools like DNS auditing and query logging.

Use strong authentication protocols

Configure Active Directory to use strong authentication protocols such as Kerberos. To stop credential-based attacks, disable older, less secure protocols like NTLM and LM hashes. Ensure domain controllers are set up to favor robust authentication techniques over weak ones when performing authentication.

Securing administrative accounts

Safeguard administrative accounts by:

a. Creating complex, unique passwords for each administrative account, following the password policy guidelines, and rotating passwords frequently (a small password-generation sketch follows this list).

b. Adding multi-factor authentication (MFA) to all administrative accounts to improve login security and reduce the risk of credential theft.

c. Enforcing the principle of least privilege, role-based access control (RBAC), and limiting the use of administrative accounts to authorized personnel only.

d. Regularly reviewing administrative account privileges and removing excess access rights to reduce the attack surface and potential insider threats.
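A minimal sketch of the password guidance in item (a), using Python’s secrets module; the account names and character policy below are illustrative assumptions rather than your organization’s actual policy.

```python
# Minimal sketch: generate a unique, complex password per administrative
# account using Python's secrets module. Account names and the character
# policy are illustrative assumptions only.
import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"
ALPHABET = string.ascii_letters + string.digits + SYMBOLS

def generate_password(length: int = 24) -> str:
    # Re-draw until at least one character from each class is present.
    while True:
        pwd = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd) and any(c in SYMBOLS for c in pwd)):
            return pwd

if __name__ == "__main__":
    for account in ["corp\\adm-backup", "corp\\adm-dns"]:  # hypothetical accounts
        print(account, generate_password())
```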

Applying group policies

Leverage Group Policy Objects (GPOs) to enforce security settings and standards across your Active Directory domain. Implement password policies, account lockout policies, and other security-related configurations to improve the overall security posture.

Protecting domain controllers

Domain controllers are the backbone of Active Directory. Safeguard them by:

a. Isolating domain controllers in a separate network segment or VLAN to minimize the attack surface and prevent lateral movement.

b. Enabling BitLocker Drive Encryption on the system volume of the domain controller to safeguard critical data from physical theft or unauthorized access.

c. Setting up Windows Firewall rules to restrict inbound traffic to critical AD services and thwart potential dangers.

d. Performing regular domain controller backups and securely storing those backups to protect data integrity and speed up disaster recovery. Create system state backups using the Windows Server Backup feature, and for redundancy, think about using off-site storage.

Monitor and audit

Implement a robust monitoring and auditing system to detect potential security breaches and unauthorized access. Employ Security Information and Event Management (SIEM) solutions for thorough threat monitoring, set up real-time alerts for crucial security events, and use Windows Event Forwarding to centralize log data for analysis.
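As a hedged illustration of the kind of monitoring described above, the sketch below counts failed logon events (Event ID 4625) from a Security log exported to CSV; the file name, column names, timestamp format, and alert threshold are assumptions for the example, not output from a specific SIEM.

```python
# Illustrative sketch: count failed logons (Event ID 4625) per hour from a
# Security log exported to CSV (e.g., via Event Viewer "Save As"). The
# "EventID"/"TimeCreated" columns and ISO-8601 timestamps are assumptions.
import csv
from collections import Counter
from datetime import datetime

def failed_logons_per_hour(csv_path: str) -> Counter:
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == "4625":
                ts = datetime.fromisoformat(row["TimeCreated"])
                counts[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return counts

if __name__ == "__main__":
    for hour, count in sorted(failed_logons_per_hour("security_events.csv").items()):
        if count > 20:  # simple threshold; tune for your environment
            print(f"{hour:%Y-%m-%d %H:00} - {count} failed logons (possible brute force)")
```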

Perform regular backups

Create regular system state backups of Active Directory to ensure data integrity and quick recovery in case of data loss or disaster. Periodically test the restoration procedure to confirm its efficacy and guarantee that backups are safely kept off-site.

Conclusion

By following this technical guide, you can confidently and securely implement Active Directory on Windows Server 2019, ensuring your organization has a robust, dependable, highly secure Active Directory environment that safeguards valuable assets and sensitive data from the constantly changing threat landscape. Always remember that security is a continuous process, and maintaining a resilient AD infrastructure requires staying current with the latest security measures.

The post Securely implementing Active Directory on Windows Server 2019 appeared first on Cybersecurity Insiders.

As cybersecurity becomes increasingly complex, having a centralized team of experts driving continuous innovation and improvement in their Zero Trust journey is invaluable. A Zero Trust Center of Excellence (CoE) can serve as the hub of expertise, driving the organization’s strategy in its focus area, standardizing best practices, fostering innovation, and providing training. It can also help organizations adapt to changes in the cybersecurity landscape, such as new regulations or technologies, ensuring they remain resilient and secure in the face of future challenges. The Zero Trust CoE also ensures that organizations stay up to date with the latest security trends, technologies, and threats, while constantly applying and implementing the most effective security measures.

Zero Trust is a security concept that continues to evolve but is centered on the belief that organizations should not automatically trust anything inside or outside of their perimeters. Instead, organizations must verify and grant access to anything and everything trying to connect to their systems and data. This can be achieved through a unified strategy and approach by centralizing the organization’s Zero Trust initiatives into a CoE. Below are some of the benefits realized through a Zero Trust CoE.

Zero Trust - advantages of using a center of excellence

A critical aspect of managing a Zero Trust CoE effectively is the use of Key Performance Indicators (KPIs). KPIs are quantifiable measurements that reflect the performance of an organization in achieving its objectives. In the context of a Zero Trust CoE, KPIs can help measure the effectiveness of the organization’s Zero Trust initiatives, providing valuable insights that can guide decision-making and strategy.

Creating a Zero Trust CoE involves identifying the key roles and responsibilities that will drive the organization’s Zero Trust initiatives. This typically includes a leadership team, a Zero Trust architecture team, an engineering team, a policy and compliance team, an education and training team, and a research and development team. These teams will need to be organized to support the cross-functional collaboration necessary for enhancing productivity.

A Zero Trust CoE should be organized in a way that aligns with the organization’s overall strategy and goals, while also ensuring effective collaboration and communication. AT&T Cybersecurity consultants can also provide valuable leadership and deep technical guidance for each of the teams. Below is an approach to structuring the different members of the CoE team:

teams within a zero trust COE

  • Leadership team: This team is responsible for setting the strategic direction of the CoE. It typically includes senior executives and leaders from various departments, such as IT, security, and business operations.
     
  • Zero Trust architects: This individual or team is responsible for designing and implementing the Zero Trust architecture within the organization. They work closely with the leadership team to ensure that the architecture aligns with the organization’s strategic goals.
     
  • Engineering team: This team is responsible for the technical implementation of the Zero Trust strategy. This includes network engineers, security analysts, and other IT professionals.
     
  • Policy and compliance team: This team is responsible for developing and enforcing policies related to Zero Trust. They also ensure that the organization maintains compliance with relevant regulations and standards.
     
  • Education and training team: This team is responsible for educating and training staff members about Zero Trust principles and practices. They develop training materials, conduct workshops, and provide ongoing support.
     
  • Research and lab team: This team stays abreast of the latest developments in Zero Trust and explores new technologies and approaches that could enhance the organization’s Zero Trust capabilities. AT&T Cybersecurity consultants, with their finger on the pulse of the latest trends and developments, can provide valuable insights to this team.

Each of these teams should have its own set of KPIs that align with the organization’s overall business goals. For example, the KPIs for the ‘Engineering Team’ could include the number of systems that have been migrated to the Zero Trust architecture, while the KPIs for the ‘Policy and Compliance Team’ could include the percentage of staff members who comply with the organization’s Zero Trust policies.
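A minimal sketch of tracking one such KPI, the share of systems migrated to the Zero Trust architecture; the figures below are made up purely for illustration.

```python
# Illustrative KPI tracking sketch: percentage of systems migrated to the
# Zero Trust architecture per quarter. All numbers are made-up examples.
migrated_by_quarter = {"Q1": 40, "Q2": 65, "Q3": 95}
total_systems = 200

for quarter, migrated in migrated_by_quarter.items():
    pct = 100 * migrated / total_systems
    print(f"{quarter}: {migrated}/{total_systems} systems migrated ({pct:.1f}%)")
```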

Monitoring and evaluating these KPIs regularly is crucial for ensuring the effectiveness of the CoE. This should be done at least quarterly but could be done more frequently depending on the specific KPI and the dynamics of the organization and the cybersecurity landscape. The results of this monitoring and evaluation should be used to adjust the CoE’s activities and strategies as needed.

There are challenges associated with monitoring and evaluating KPIs. It can be time-consuming and require specialized skills and tools. Additionally, it can be difficult to determine the cause of changes in KPIs, and there can be a lag between changes in activities and changes in KPIs. To overcome these challenges, it’s important to have clear processes and responsibilities for monitoring and evaluating KPIs, to use appropriate tools and techniques, and to be patient and persistent.

While the CoE offers many benefits, it can also present challenges. Without leadership and oversight, it can become resource-intensive, create silos, slow down decision-making, and be resistant to change. To overcome these challenges, it’s important to ensure that the CoE is aligned with the organization’s overall strategy and goals, promotes collaboration and communication, and remains flexible and adaptable. AT&T Cybersecurity consultants, with their deep expertise and broad perspective, can provide valuable leadership in each of these areas. They can help consolidate expertise, develop and enforce standards, drive innovation, and provide education and training.

The CoE should drive Zero Trust related projects, such as developing a Zero Trust Architecture that includes components such as Zero Trust Network Access (ZTNA), a capability of Secure Access Service Edge (SASE). The CoE can provide the expertise, resources, and guidance needed to successfully implement these types of projects. Implementing ZTNA requires a structured, multi-phased project that would have a plan similar to the following:

  • Project initiation: Develop a project plan with timelines, resources, and budget. Identify the scope, objectives, and deliverables as well as the key stakeholders and project team members.
     
  • Assessment and planning: Conduct a thorough assessment of the current network infrastructure and security environment, looking for vulnerabilities and areas of improvement, then develop a detailed plan for implementing ZTNA.
     
  • Design and develop: Design the ZTNA architecture, taking into account the organization’s specific needs and constraints. Create test plans to be used in the lab, pilot sites, and during deployment.
     
  • Implementation: Deploy and monitor the ZTNA program in a phased manner, starting with less critical systems and gradually expanding to more critical ones.
     
  • Education and training: Develop and distribute user guides and other training materials. Conduct training sessions on how to use the new system.
     
  • Monitoring: Continuously monitor the performance of the platform, report on the assigned KPIs, and conduct regular audits to identify areas for improvement.
     
  • Maintenance and support: Regularly update and improve the solution based on feedback and technical innovations. Provide ongoing technical support for users of the ZTNA platform.

Throughout the ZTNA implementation, the Zero Trust CoE plays a central role in coordinating activities, providing expertise, and ensuring alignment with the organization’s overall Zero Trust strategy. The CoE is responsible for communicating with stakeholders, managing risk, and ensuring the project stays on track and achieves the stated objectives.

In conclusion, a Zero Trust Center of Excellence is a powerful tool that can help organizations enhance their cybersecurity posture, stay ahead of evolving threats, and drive continuous improvement in their Zero Trust initiatives. By centralizing expertise, standardizing practices, fostering innovation, and providing education and training, a Zero Trust CoE can provide a strategic, coordinated approach to managing Zero Trust initiatives.

As cyber threats continue to evolve, the importance and potential of a Zero Trust CoE, led by AT&T cybersecurity consultants, will only increase. Contact AT&T Cybersecurity for more information on the Zero Trust journey and how to establish a Center of Excellence.

The post Leveraging AT&T Cybersecurity Consulting for a robust Zero Trust Center of Excellence appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

The supply chain, already fragile in the USA, is at severe risk of damage from cyberattacks. According to research analyzed by Forbes, supply chain attacks now account for 62% of all commercial attacks, a clear indication of the scale of the challenge faced by the supply chain and the logistics industry as a whole. There are solutions out there, however, and the simplest of these is upskilling supply chain professionals to be aware of cybersecurity systems and threats. In an industry dominated by the need for trust, this is something that can perhaps come naturally to the supply chain.

Building trust and awareness

At the heart of a successful supply chain relationship is trust between partners. Building that trust, and securing high quality business partners, relies on a few factors. Cybersecurity experts and responsible officers will see some familiarity – due diligence, scrutiny over figures, and continuous monitoring. In simple terms, an effective framework of checking and rechecking work, monitored for compliance on all sides.

These factors are a key part of new federal cybersecurity rules, according to news agency Reuters. Among other measures are a requirement for companies to have rigorous control over system patching, and measures that would require cloud-hosted services to identify foreign customers. These are simple but important steps, and they give a hint to supply chain businesses as to what they should be doing: putting in place measures to monitor, control, and enforce compliance on cybersecurity threats. That being said, individual businesses may not have the software in place to ensure that level of control. The right tools and the right personnel are also essential.

The importance of software

Back in April, the UK’s National Cyber Security Centre released details of specific threats made by Russian actors against business infrastructure in the USA and UK. Highlighted in this were specific weaknesses in business systems, and that includes in hardware and software used by millions of businesses worldwide. The message is simple – even industry standard software and devices have their problems, and businesses have to keep track of that.

There are two arms to ensuring this is completed. Firstly, the business should have a cybersecurity officer in place whose role is to monitor current measures and ensure they are kept up to date. Secondly, budget and time must be allocated at an executive level to promote networking between the business and cybersecurity firms, and between partner businesses, to ensure that consistent cybersecurity measures are implemented across the chain.

Utilizing AI

There is something of a digital arms race when it comes to artificial intelligence. As ZDNet notes, the lack of clear regulation is providing a lot of leeway for malicious actors to innovate, but for businesses to act, too. While regulations are now coming in, it remains that there is a clear role for AI in prevention.

According to an expert interviewed by ZDNet in their profile of the current situation, digital threat hunters are already using sophisticated AI to look for patterns, patches, and unusual actions on the network, and are then using these large data sets to join the dots and provide reports to cybersecurity officers. Where the challenge arises is in that arms race; as AI models become more sophisticated and powerful, they will ‘hack’ faster than humans can. The defensive models need to keep up but will struggle with needing to act within regulatory guidelines. The key here will be proactive regulation from the government, enabling businesses to deploy these measures with assurance as to their legality and safety.

With the supply chain involving so many different partners, there are a wider number of wildcards that can potentially upset the balance of the system. However, businesses that are willing to take a proactive step forward and be an example within their own supply chain ecosystem stand to benefit. By building resilience into their own part of the process, and influencing partners to do the same, they can make serious inroads in fighting back against the overwhelming number of supply chain oriented cybersecurity threats.

The post Building Cybersecurity into the supply chain is essential as threats mount appeared first on Cybersecurity Insiders.

SC Award badge

Today, SC Media announced the winners of its annual cybersecurity awards for excellence and achievements.

At AT&T Cybersecurity we are thrilled that AT&T Alien Labs was awarded Best Threat Intelligence in this prestigious competition. The Alien Labs team works closely with the Open Threat Exchange (OTX), an open and free platform that lets security professionals easily share, research, and validate the latest threats, trends and techniques.

With more than 200,000 global security and IT professionals submitting data daily, OTX has become one of the world’s largest open threat intelligence communities. It offers context and details on threats, including threat actors, organizations and industries targeted, and related indicators of compromise.

The full list of winners is here.

The post AT&T Cybersecurity wins SC Media Award for Best Threat Intelligence appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Memory forensics plays a crucial role in digital investigations, allowing forensic analysts to extract valuable information from a computer’s volatile memory. Two popular tools in this field are Volatility Workbench and Volatility Framework. This article aims to compare and explore these tools, highlighting their features and differences to help investigators choose the right one for their needs.

Volatility Workbench, a powerful tool built on the Volatility Framework, is specifically designed to simplify and enhance the process of memory forensics. This article explores the capabilities of Volatility Workbench, highlighting its importance in uncovering critical evidence and facilitating comprehensive memory analysis.

Understanding Volatility Framework:

Volatility Framework is a robust tool used for memory analysis. It operates through a command-line interface and offers a wide range of commands and plugins. It enables investigators to extract essential data from memory dumps – including running processes, network connections, and passwords. However, it requires technical expertise to utilize effectively.

Volatility introduced people to the power of analyzing the runtime state of a system using the data found in volatile storage (RAM). It also provided a cross-platform, modular, and extensible platform to encourage further work in this exciting area of research. The Volatility Framework can be downloaded here. The Volatility Foundation provides these tools.
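For readers working at the command line, here is a minimal sketch of scripting the Volatility 3 command line from Python, assuming Volatility 3 is installed and exposes the `vol` console command; the memory image path and plugin choices are illustrative.

```python
# Hedged sketch: drive the Volatility 3 CLI from Python via subprocess.
# Assumes Volatility 3 is installed and the `vol` command is on PATH;
# "memdump.mem" is a placeholder image path.
import subprocess

def run_plugin(image_path: str, plugin: str) -> str:
    result = subprocess.run(
        ["vol", "-f", image_path, plugin],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    for plugin in ("windows.pslist", "windows.malfind"):
        print(f"=== {plugin} ===")
        print(run_plugin("memdump.mem", plugin))
```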

Introducing Volatility Workbench:

Volatility Workbench is a user-friendly graphical interface built on the Volatility Framework. It simplifies memory analysis by providing a visual interface that is more accessible, even for users with limited command-line experience. With Volatility Workbench, investigators can perform memory analysis tasks without the need for extensive command-line knowledge. Volatility Workbench can be downloaded here.

One of the key advantages of Volatility Workbench is its user-friendly interface, designed to simplify the complex process of memory forensics. With its graphical interface, investigators can navigate through various analysis options and settings effortlessly. The tool presents information in a visually appealing manner – with graphs, charts, and timelines, making it easier to interpret and draw insights from extracted data.

The initial interface when the Volatility Workbench is started looks like this:

Volatility Workbench main screen

The Volatility Workbench offers options to browse and select memory dump files in formats such as *.bin, *.raw, *.dmp, and *.mem. Once a memory dump file is chosen, the next step is to select the platform or operating system that the system being analyzed is using.

memdump screen of Volatility Workbench

Once the memory image file and platform are selected, click on Get Process List in Volatility Workbench.

It will begin scanning memory. After that, you can use the multiple options in the Command tab by selecting a valid command. The description of the command will be shown in the dialog box on the side pane.

When Get Process List is finished, the interface will look like this:

Volatility Workbench command descriptions

Now we can select the command we want to use – let’s try using the command drop down menu.

Drop down commands in Volatility Workbench

Voila, we have commands available for analyzing the Windows memory dump.

Let’s try a command which lists process memory ranges that potentially contain injected code.

Passmark popup in Volatility Workbench

As seen in the image above, you can see the command as well as its description. You also have the option to select specific process IDs from the dropdown menu for the processes associated with the findings.

Malfind command screen in Volatility Workbench

Let’s use the Malfind command to list process memory ranges that potentially contain injected code. It will take some time to process.

process ranges identified by malfind command

The analysis of the Malfind output requires a combination of technical skills, knowledge of malware behavior, and understanding of memory forensics. Continuously updating your knowledge in these areas and leveraging available resources can enhance your ability to effectively analyze the output and identify potential threats within memory dumps.

Look for process names associated with the identified memory regions. Determine if they are familiar or potentially malicious. Cross-reference them with known processes or conduct further research if necessary.
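A hedged sketch of that cross-referencing step: compare process names reported by Malfind against a baseline of processes known to be normal for the host. Both lists below are made-up examples, not output from the walkthrough above.

```python
# Illustrative triage sketch: flag process names from malfind findings that
# are not in a host baseline. Both lists are hypothetical examples.
KNOWN_GOOD = {"explorer.exe", "svchost.exe", "chrome.exe", "winlogon.exe"}
malfind_processes = ["svchost.exe", "updater32.exe", "chrome.exe"]

for name in malfind_processes:
    verdict = "known process" if name.lower() in KNOWN_GOOD else "unfamiliar - investigate"
    print(f"{name}: {verdict}")
```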

Some of the features of Volatility Workbench:

  • It streamlines memory forensics workflow by automating tasks and providing pre-configured settings.
  • It offers comprehensive analysis capabilities, including examining processes, network connections, and recovering artifacts.
  • It seamlessly integrates with plugins for additional analysis options and features.
  • It lets you generate comprehensive reports for documentation and collaboration.

Conclusion

By leveraging the capabilities of the underlying Volatility Framework, Volatility Workbench provides a streamlined workflow, comprehensive analysis options, and flexibility through plugin integration. With its user-friendly interface, investigators can efficiently extract valuable evidence from memory dumps, uncover hidden activities, and contribute to successful digital investigations. Volatility Workbench is an indispensable tool in the field of memory forensics, enabling investigators to unravel the secrets stored within a computer’s volatile memory.

The post Volatility Workbench: Empowering memory forensics investigations appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

What exactly is resilience? According to the U.S. National Institute of Standards and Technology, the goal of cyber resilience is to “enable mission or business objectives that depend on cyber resources to be achieved in a contested cyber environment.” In other words, when you’re at odds with cybercriminals and nation-state actors, can you still get your job done? If not, how quickly can you get back up and running? In this article, we outline steps to ensure that if your cloud networks fail, your business won’t fail along with them.

Take stock of what you can’t (and can) live without

Being resilient during and post-cyber-attack means being able to continue business operations either leanly or back to full throttle soon after. While resources are being pooled to respond and recover from an incident, what data must be protected and what operations must go on?

Data that must be protected include those defined by regulation (e.g., personally identifiable information), intellectual property, and financial data. Data must be protected in multiple forms: at rest, in transit, and in use. The type of business you’re in may already dictate what’s essential; critical infrastructure sectors with essential operations include telecommunications, healthcare, food, and energy. Anything that your business relies on to survive and sustain itself should be treated as the highest priority for security.

Ensure required availability from your cloud provider

An essential part of resilience is the ability to stay online despite what happens. Part of the cloud provider’s responsibility is to keep resources online, performing at the agreed level of service. Depending on the needs of your business, you will require certain levels of service to maintain operations.

Your cloud provider promises availability of resources in a service-level agreement (SLA), a legal document between the two parties. Uptime, the measure of availability, ranges from 99.9% to 99% in the top tiers of publicly available clouds from Amazon and Microsoft. A difference of 0.9% may not seem like much, but that translates from roughly 9 hours of downtime to over 3.5 days annually—which might be unacceptable for some types of businesses.
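A quick worked check of those downtime figures:

```python
# Quick check of the downtime figures quoted above.
HOURS_PER_YEAR = 365 * 24  # 8,760

for uptime in (0.999, 0.99):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.1%} uptime -> {downtime_hours:.1f} hours "
          f"(~{downtime_hours / 24:.1f} days) of downtime per year")
# 99.9% -> about 8.8 hours; 99.0% -> about 87.6 hours (~3.7 days),
# matching the rough figures quoted above.
```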

Store backups—even better, automate

As ransomware proliferates, enterprises need to protect themselves against attackers who block access to critical data or threaten to expose it to the world. One of the most fundamental ways to continue business operations during such an incident is to rely on backups of critical data. After you’ve identified which data is necessary for business operations and legal compliance, it’s time to have a backup plan.

While your cloud service provider provides options for backup, spreading the function across more than one vendor will reduce your risk—assuming they’re also secure. As Betsy Doughty, Vice President of Corporate Marketing at Spectra Logic, says, “it’s smart to adhere to the 3-2-1-1 rule: Make three copies of data, on two different mediums, with one offsite and online, and one offsite and offline.” Automated snapshots and data backup can run in the background, preparing you in the event of a worst-case scenario.
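As a hedged sketch of automating snapshots, the example below uses boto3 to snapshot AWS EBS volumes tagged Backup=true; the region, tag, and scheduling approach are assumptions, and equivalent capabilities exist in other clouds and backup products.

```python
# Hedged sketch: automated snapshots of AWS EBS volumes with boto3
# (pip install boto3). Assumes AWS credentials are configured and that
# volumes to protect carry a Backup=true tag; run on a schedule
# (cron, EventBridge, etc.) rather than manually.
import datetime
import boto3

def snapshot_tagged_volumes(region: str = "us-east-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%MZ")
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Automated backup {stamp}",
        )
        print("Started snapshot", snap["SnapshotId"], "for", vol["VolumeId"])

if __name__ == "__main__":
    snapshot_tagged_volumes()
```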

Expose and secure your blind spots

A recent report from the U.S. Securities and Exchange Commission observes that resilience strategies include “mapping the systems and process that support business services, including those which the organization may not have direct control.” Cloud networks certainly apply here; as with any outsourced service, you relinquish some control.

Relinquishing control does not have to mean a lack of visibility. To gain visibility into what data is being transferred and how people are using cloud applications, consider the services of cloud access security brokers (CASBs), which sit between a cloud user and a cloud provider. CASBs can improve your resilience by providing detail about your cloud network traffic, enabling assessment both for prevention of attack and for impact on business operations in the event of an incident. They also enforce security policies such as authentication and encryption.

Test your preparedness periodically

After all the hard work of putting components and plans into place, it’s time to put things to the test. Incident response tests can range from the theoretical to a simulated real-world attack. As processes and people change, performing these tests periodically will ensure you have an updated assessment of preparedness. You could run more cost-effective paper tests more frequently to catch obvious gaps and invest in realistic simulations at a longer interval. Spending the resources to verify and test your infrastructure will pay off when an attack happens and the public spotlight is on you.

Towards a resilient cloud

Being able to withstand a cyber-attack or quickly bring operations back online can be key to the success of a business. While some responsibility lies with the cloud provider to execute on their redundancy and contingency plans per the SLA, some of it also lies with you. By knowing what’s important, securing your vulnerabilities, and having a tested process in place, you are well on your way to a secure and resilient cloud network.

The post Securing your cloud networks: Strategies for a resilient infrastructure appeared first on Cybersecurity Insiders.

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

In the realm of information security and covert communication, image steganography serves as a powerful technique for hiding sensitive data within innocent-looking images. By embedding secret messages or files within the pixels of an image, steganography enables covert transmission without arousing suspicion. This article aims to delve into the world of image steganography, exploring its principles, techniques, and real-world applications.

Understanding image steganography

  • Image steganography is the practice of concealing information within the data of digital images without altering their visual appearance. The hidden data can include text, images, audio, or any other form of binary information.
  • Image steganography serves as a clandestine communication method, providing a means to transmit sensitive information without arousing the suspicion of adversaries or unauthorized individuals. It offers an additional layer of security and confidentiality in digital communication.
  • Steganography vs. Cryptography: While cryptography focuses on encrypting data to render it unreadable, steganography aims to hide the existence of the data itself, making it inconspicuous within an image. Steganography can be used in conjunction with encryption to further enhance the security of covert communication.

Techniques of image steganography

  • LSB substitution: The Least Significant Bit (LSB) substitution method involves replacing the least significant bits of pixel values with secret data. Because the least significant bits have minimal impact on the visual appearance of the image, this technique allows information to be hidden without noticeably altering the image (see the sketch after this list).
  • Spatial domain techniques: Various spatial domain techniques involve modifying the pixel values directly to embed secret data. These techniques include modifying pixel intensities, color values, or rearranging pixels based on a predefined pattern.
  • Transform domain techniques: Transform domain techniques, such as Discrete Cosine Transform (DCT) or Discrete Fourier Transform (DFT), manipulate the frequency domain representation of an image to embed secret data. This allows for the concealment of information within the frequency components of an image.
  • Spread spectrum techniques: Inspired by radio frequency communication, spread spectrum techniques spread the secret data across multiple pixels by slightly modifying their values. This method makes the hidden data more robust against detection and extraction attempts.
  • Adaptive steganography: Adaptive techniques dynamically adjust the embedding process based on the image content and local characteristics, making the hidden data even more resistant to detection. This approach enhances security and makes it harder for adversaries to identify stego images.
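To make the LSB technique concrete, here is a minimal Python sketch using the Pillow library; it hides a short UTF-8 message in the red-channel LSBs with a 32-bit length header. The file names are placeholders, and a lossless format such as PNG is assumed so the LSBs survive saving.

```python
# Minimal LSB steganography sketch using Pillow (pip install Pillow).
# Hides a UTF-8 message in the least significant bit of each red-channel
# value, prefixed with a 32-bit length header. File names are placeholders.
from PIL import Image

def hide_message(cover_path: str, out_path: str, message: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    payload = message.encode("utf-8")
    bits = f"{len(payload):032b}" + "".join(f"{byte:08b}" for byte in payload)
    if len(bits) > len(pixels):
        raise ValueError("Cover image too small for this message")
    stego = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])   # overwrite the red LSB with one message bit
        stego.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stego)
    out.save(out_path, "PNG")             # lossless format preserves the LSBs

def extract_message(stego_path: str) -> str:
    pixels = list(Image.open(stego_path).convert("RGB").getdata())
    bits = "".join(str(r & 1) for r, _, _ in pixels)
    length = int(bits[:32], 2)
    data = bits[32:32 + length * 8]
    return bytes(int(data[i:i + 8], 2) for i in range(0, len(data), 8)).decode("utf-8")

if __name__ == "__main__":
    hide_message("cover.png", "stego.png", "meet at dawn")
    print(extract_message("stego.png"))   # -> meet at dawn
```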

Let’s see a working example of image steganography using a free tool called OpenStego, which can be downloaded here. You will need the Java Runtime Environment for OpenStego to work on your system.

Once you’ve installed OpenStego, you will see its interface as shown below:

OpenStego tool screen capture

It has multiple options, including Hide Data and Extract Data – more about these options can be found in the official documentation of the tool.

We need two files: a message file (the data we want to hide) and a cover file (the file we will use as a cover to hide the message file).

I have downloaded two image files for the same.

message and image screenshots - both look harmless and cute

Now, let’s hide the message file, which is a quote, inside the cover file, which is the “Hello” image.

After that, you will have to provide the directory and name for the output file. The same can be seen in the snapshot below:

openstego screen where you can enter password for the message

You can also choose to encrypt the hidden data so that it is not accessible without a password. Click Hide data once you have followed all the steps.

After the process is completed, a success popup will appear on the OpenStego screen.

OpenStego working

Now we have three files, and the output file is the one that contains the hidden data.

input, message and output, where output looks just like input

If we compare the properties of the output file and the cover file, we will notice certain differences – for example, the file sizes will differ.

Now, let’s delete the cover file and message file and try to extract the data. If you open the output file you won’t notice any difference as it appears the same as any other image file. However, let’s try to extract data using OpenStego.

We have to select the path of the file we wish to extract data from and provide a destination folder for extraction. We also have to provide the password if any was chosen at the time of hiding the data.

entering password in openstego to get hidden message

Let’s select Extract Data. Once the extraction is done, a confirmation pop-up will appear on your screen.

extracting hidden message in openstego

Let us check the extracted file by going to the destination folder we assigned for the extraction of the message file.

seeing original message in openstego

As visible in the snapshot above, the message file is successfully extracted.

Real-world applications of steganography

  • Covert communication: Image steganography finds applications in covert communication where parties need to exchange sensitive information discreetly. This includes intelligence agencies, law enforcement, and whistleblowers who require secure channels for sharing classified or confidential data.
  • Digital watermarking: Steganography techniques can be employed for digital watermarking to embed copyright information, ownership details, or authentication codes within images. This allows for tracking and protecting intellectual property rights.
  • Information hiding in multimedia: Image steganography can be extended to other forms of multimedia, including audio and video, allowing for the concealment of information within these media formats. This can be used for copyright protection, digital rights management, or covert messaging.
  • Steganalysis and forensics: Image steganalysis focuses on detecting the presence of hidden information within images. Forensic investigators can employ steganalysis techniques to identify potential steganographic content, aiding in digital investigations.

Conclusion

Image steganography has emerged as a sophisticated method for covert communication and secure data transmission. By exploiting the subtle nuances of digital images, sensitive information can be hidden from prying eyes. As technology advances, the field of steganography continues to evolve, with new techniques and algorithms being developed to enhance the security and robustness of data hiding.

However, it is essential to balance the use of steganography with ethical considerations and adhere to legal frameworks to ensure its responsible and lawful application. As information security remains a critical concern in the digital age, image steganography serves as a valuable tool in safeguarding sensitive data and enabling secure communications.

The post Image steganography: Concealing secrets within pixels appeared first on Cybersecurity Insiders.

Executive summary

On April 21st, 2023, AT&T Managed Extended Detection and Response (MXDR) investigated an attempted ransomware attack on one of our clients, a home improvement business. The investigation revealed the attacker used AuKill malware on the client’s print server to disable the server’s installed EDR solution, SentinelOne, by brute forcing an administrator account and downgrading a driver to a vulnerable version.

AuKill, first identified by Sophos X-Ops researchers in June 2021, is a sophisticated malware designed to target and neutralize specific EDR solutions, including SentinelOne and Sophos. Distributed as a dropper, AuKill drops a vulnerable driver named PROCEXP.SYS (from Process Explorer release version 16.32) into the system’s C:\Windows\System32\drivers folder. This malware has been observed in the wild, utilized by ransomware groups to bypass endpoint security measures and effectively spread ransomware variants such as Medusa Locker and Lockbit on vulnerable systems.

In this case, SentinelOne managed to isolate most of the malicious files before being disabled, preventing a full-scale ransomware incident. As a result, AT&T MXDR found no evidence of data exfiltration or encryption. Despite this, the client opted to rebuild the print server as a precautionary measure. This study provides an in-depth analysis of the attack and offers recommendations to mitigate the risk of future attacks.

Investigating the first phase of the attack

Initial intrusion

The targeted asset was the print server, which we found unusual. However, upon further investigation we concluded the attacker misidentified the asset as a Domain Controller (DC), as it had recently been repurposed from a DC to a print server. The attacker needed both local administrator credentials and kernel-level access to successfully run AuKill and disable SentinelOne on the asset. To gain those local administrator credentials, the attacker successfully brute-forced an administrator account. Shortly after the compromise, this account was observed making unauthorized registry changes.
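
For defenders, one practical way to spot this kind of activity is to review failed logon events. The sketch below, written in Python, counts Windows Security Event ID 4625 (failed logon) records that have been exported to CSV and flags account/source pairs with an unusually high number of failures. The file name, column names, and threshold are illustrative assumptions, not the schema of any specific product.

# Hedged sketch: flag possible brute-force activity by counting failed logons
# (Windows Security Event ID 4625) exported to CSV. The file name and column
# names ("EventID", "TargetUserName", "IpAddress") are assumptions about how
# the events were exported, not a specific product's schema.
import csv
from collections import Counter

THRESHOLD = 20  # failed attempts per account/source pair worth reviewing

def failed_logon_summary(csv_path: str) -> list[tuple[tuple[str, str], int]]:
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if row.get("EventID") == "4625":
                counts[(row.get("TargetUserName", "?"), row.get("IpAddress", "?"))] += 1
    return [item for item in counts.most_common() if item[1] >= THRESHOLD]

if __name__ == "__main__":
    for (account, source), n in failed_logon_summary("security_events.csv"):
        print(f"{n:>5} failed logons for {account} from {source}")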

[Screenshot: USM IOCs for AuKill]

[Screenshot: AuKill IOC event metadata]

Establishing a beachhead

After compromising the local administrator account, the attackers used the “Users\Administrator\Music\aSentinel” folder as a staging area for subsequent phases of their attack. All AuKill-related binaries and scripts were executed from this path, with the innocuous “Music” folder name helping to conceal their malicious activities.
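
A quick hunting step follows from this observation: user “Music” folders should not normally contain executables. The Python sketch below lists binary-like files under each user profile’s Music directory; the paths and extension list are illustrative assumptions rather than a complete detection.

# Hedged hunting sketch: list executables staged under user "Music" folders,
# which should not normally contain binaries. Paths and extensions here are
# illustrative assumptions, not a complete detection.
from pathlib import Path

SUSPECT_EXTENSIONS = {".exe", ".dll", ".sys", ".bat", ".ps1"}

def find_staged_binaries(users_root: str = r"C:\Users"):
    for user_dir in Path(users_root).glob("*"):
        music = user_dir / "Music"
        if not music.is_dir():
            continue
        for path in music.rglob("*"):
            if path.suffix.lower() in SUSPECT_EXTENSIONS:
                yield path

if __name__ == "__main__":
    for hit in find_staged_binaries():
        print(hit)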

[Screenshot: the seemingly innocent “Music” staging folder]

AuKill malware has been found to operate using two Windows services named “aSentinel.exe” and “aSentinelX.exe” in its SentinelOne variant. In other variants, it targets different EDRs, such as Sophos, by utilizing corresponding Windows services like “aSophos.exe” and “aSophosX.exe”. 
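
Defenders can look for these service names directly. The following Python sketch (requiring the psutil package on Windows) enumerates installed services and flags names matching the AuKill patterns described above; the name patterns come from the observed variants, while everything else is illustrative.

# Hedged sketch: enumerate Windows services with psutil and flag names that
# resemble the AuKill service names seen in this incident ("aSentinel",
# "aSentinelX", "aSophos", "aSophosX"). Requires psutil on Windows.
import re
import psutil

SUSPECT_PATTERN = re.compile(r"^(aSentinelX?|aSophosX?)(\.exe)?$", re.IGNORECASE)

def suspicious_services():
    for svc in psutil.win_service_iter():
        if SUSPECT_PATTERN.match(svc.name()):
            yield svc.name(), svc.binpath(), svc.status()

if __name__ == "__main__":
    for name, binpath, status in suspicious_services():
        print(f"{name} ({status}): {binpath}")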

[Screenshot: AuKill mitigated and placed in quarantine]

Establishing persistence

We also discovered “aSentinel.exe” running from “C:\Windows\system32”, indicating that the attackers attempted to establish a foothold on the compromised server. Malware authors frequently target the system32 folder because it is a trusted location, and security software may not scrutinize files within it as closely as those in other locations. This can help malware bypass security measures and remain hidden. It is likely that the malware was initially placed in the “Users\Administrator\Music\aSentinel” directory and later copied to the system32 directory for persistence.
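
One simple way to hunt for this behavior is to review recently modified executables in System32. The Python sketch below lists top-level .exe files changed within a chosen window; the 14-day window is an arbitrary illustrative choice, and any results will need manual triage.

# Hedged hunting sketch: list top-level .exe files in System32 modified within
# the last N days, a quick way to spot recently dropped binaries such as
# "aSentinel.exe". The window length is an illustrative choice.
import time
from pathlib import Path

WINDOW_DAYS = 14
SYSTEM32 = Path(r"C:\Windows\System32")

def recently_modified(window_days: int = WINDOW_DAYS):
    cutoff = time.time() - window_days * 86400
    for path in SYSTEM32.glob("*.exe"):
        try:
            if path.stat().st_mtime >= cutoff:
                yield path
        except OSError:
            continue  # skip files we cannot stat

if __name__ == "__main__":
    for path in recently_modified():
        print(path)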

[Screenshot: how AuKill maintains persistence]

Network reconnaissance

Our investigation also revealed that PCHunter, a publicly accessible utility previously exploited in ransomware incidents like Dharma, was running from the “Users\Administrator\Music\aSentinel” directory. This suggests that the attackers used PCHunter as a reconnaissance tool to survey the client’s network before deploying the EDR killer malware. Additionally, PCHunter enables threat actors to terminate programs and interface directly with the Windows kernel, which aligns with the needs of the attacker. We observed PCHunter generating several randomly named .sys files, as illustrated below:

[Screenshot: PCHunter used for reconnaissance, generating randomly named .sys files]

Preventing data recovery

We found that the attacker deleted shadow volume copies from the print server. Windows creates these copies to restore files and folders to previous versions in case of data loss. By removing the shadow copies, the attacker was attempting to make it more challenging for our client to recover their files if they were successfully encrypted. Although no ransomware was deployed, the deletion of shadow copies reveals the attackers’ intentions. This information, together with the usage of PCHunter and the staging of the EDR killer malware, paints a more complete picture of the attacker’s objectives and tactics.
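
Shadow copy deletion is usually carried out with well-known commands such as “vssadmin delete shadows” or “wmic shadowcopy delete”, which makes it a useful detection opportunity. The Python sketch below scans process-creation events exported to CSV (for example, Sysmon Event ID 1) for those command lines; the CSV layout and column names are assumptions about the export format.

# Hedged sketch: scan exported process-creation logs (e.g., Sysmon Event ID 1
# dumped to CSV) for command lines commonly used to delete shadow copies.
# The "CommandLine", "UtcTime", and "Image" column names are assumptions.
import csv
import re

SHADOW_DELETE_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"Win32_ShadowCopy.*Delete", re.IGNORECASE),
]

def shadow_copy_deletions(csv_path: str):
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            cmd = row.get("CommandLine", "")
            if any(p.search(cmd) for p in SHADOW_DELETE_PATTERNS):
                yield row.get("UtcTime", "?"), row.get("Image", "?"), cmd

if __name__ == "__main__":
    for when, image, cmd in shadow_copy_deletions("process_creation.csv"):
        print(f"{when} {image}: {cmd}")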

Bypassing native Windows protection

With all these pieces in place, the last thing the attacker needed was kernel-level access. Despite gaining administrator rights early on, the attacker still did not have enough control over the system to kill SentinelOne. EDR solutions are treated as protected components by Windows and cannot simply be turned off, even by an attacker who has escalated privileges. To circumvent these safeguards, the attacker needed to go one level deeper into the operating system and gain kernel-level access to the machine.

Investigating the second phase of the attack

Dropping the vulnerable driver

Our team discovered that AuKill had replaced the current Process Explorer driver, PROCEXP152.sys, with an outdated and vulnerable version named PROCEXP.SYS (from Process Explorer release version 16.32), located in the C:\Windows\System32\drivers directory. The alarm screenshot below demonstrates how AuKill swapped the existing driver with this older version, making the system susceptible to further exploitation.

[Screenshot: USM alarm from the second phase of AuKill remediation]

Windows incorporates a security feature called Driver Signature Enforcement, which ensures that kernel-mode drivers are signed by a valid code signing authority before they can run. To bypass this security measure, the attackers exploited the insecure PROCEXP.SYS driver, which was produced and signed by Microsoft at an earlier date. As demonstrated in the SentinelOne screenshot below, the driver is signed and verified by Microsoft. Furthermore, the originating process was aSentinel.exe, an executable created to disable SentinelOne.
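
One way to reduce exposure to this technique is to check the drivers directory against a list of known-vulnerable driver hashes. The Python sketch below does exactly that; the KNOWN_VULNERABLE set is an empty placeholder that would need to be populated from a curated source such as vendor advisories or community-maintained vulnerable-driver lists.

# Hedged sketch: hash every .sys file in the drivers directory and compare it
# against a blocklist of known-vulnerable driver hashes. KNOWN_VULNERABLE is a
# placeholder to be filled from a curated, trusted source.
import hashlib
from pathlib import Path

KNOWN_VULNERABLE: set[str] = {
    # "sha256 of a known-vulnerable driver goes here",
}

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_drivers(driver_dir: str = r"C:\Windows\System32\drivers"):
    for path in Path(driver_dir).glob("*.sys"):
        h = sha256(path)
        if h in KNOWN_VULNERABLE:
            yield path, h

if __name__ == "__main__":
    for path, digest in scan_drivers():
        print(f"VULNERABLE DRIVER: {path} ({digest})")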

[Screenshot: SentinelOne view of the AuKill remediation]

Acquiring kernel-level access

Process Explorer, a legitimate system monitoring tool developed by Microsoft’s Sysinternals team, enables administrators to examine and manage applications’ ongoing processes, as well as their associated threads, handles, and DLLs.

Upon startup, Process Explorer loads a signed kernel-mode driver, facilitating interaction with the system’s kernel, which is responsible for managing hardware and resources. Normally, that driver is PROCEXP152.sys. The attacker replaced the PROCEXP152.sys driver on the print server with the exploitable PROCEXP.SYS, employing what is known as a BYOVD (Bring Your Own Vulnerable Driver) attack. The attacker used this method to exploit the now vulnerable kernel mode driver to gain the kernel-level access they needed to successfully kill SentinelOne.
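
Because a BYOVD attack often swaps a legitimate driver for an older one, a simple baseline comparison of the drivers directory can surface the change. The Python sketch below snapshots driver file hashes to a JSON baseline and reports anything added, removed, or modified on later runs; the baseline file name is an assumption.

# Hedged sketch: snapshot the drivers directory and diff it against a saved
# baseline to surface added, removed, or modified driver files (such as a
# swapped Process Explorer driver). The baseline path is an assumption.
import hashlib
import json
from pathlib import Path

BASELINE = Path("drivers_baseline.json")
DRIVER_DIR = Path(r"C:\Windows\System32\drivers")

def snapshot() -> dict[str, str]:
    return {
        path.name.lower(): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in DRIVER_DIR.glob("*.sys")
    }

def diff_against_baseline() -> None:
    current = snapshot()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline created; re-run later to diff.")
        return
    baseline = json.loads(BASELINE.read_text())
    for name in sorted(set(baseline) | set(current)):
        old, new = baseline.get(name), current.get(name)
        if old != new:
            change = "added" if old is None else "removed" if new is None else "modified"
            print(f"{name}: {change}")

if __name__ == "__main__":
    diff_against_baseline()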

Killing SentinelOne

The kernel-mode driver used by Process Explorer has the unique ability to terminate handles that are inaccessible even to administrators. A handle is an identifier that corresponds to a specific resource opened by a process, such as a file or a registry key. At this point, AuKill hijacked Process Explorer’s kernel driver to specifically target protected handles associated with SentinelOne processes running on the print server. The SentinelOne processes were killed when the protected process handles were closed, rendering the EDR powerless. AuKill then generated several threads to ensure that these EDR processes remained disabled and did not resume. Each thread concentrated on a certain SentinelOne component and regularly checked to see if the targeted processes were active. If they were, AuKill would terminate them. SentinelOne was out of the way and no longer an obstacle to the attacker.
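
Since AuKill’s goal is to keep EDR processes down, a lightweight watchdog that alerts when expected EDR processes disappear can shorten the time to detection. The Python sketch below (using psutil) illustrates the idea; the process names are placeholders and should be replaced with the names your EDR vendor documents.

# Hedged sketch: periodically verify that expected EDR processes are running
# and alert if they disappear. The process names are illustrative placeholders.
import time
import psutil

EXPECTED_PROCESSES = {"sentinelagent.exe", "sentinelservicehost.exe"}  # placeholders

def missing_edr_processes() -> set[str]:
    running = {(p.info["name"] or "").lower() for p in psutil.process_iter(attrs=["name"])}
    return EXPECTED_PROCESSES - running

if __name__ == "__main__":
    while True:  # simple polling loop; a real deployment would run as a service
        missing = missing_edr_processes()
        if missing:
            print(f"ALERT: EDR processes not running: {', '.join(sorted(missing))}")
        time.sleep(60)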

Response

Customer interaction

At this point, the attacker had gained privileged access to the asset, deployed their malware, and successfully killed the endpoint protection solution, SentinelOne. Based on the Cyber Kill Chain methodology developed by Lockheed Martin, we can conclude that the attacker had successfully reached the “Command and Control” stage. However, the attacker did not reach the “Actions on Objectives” stage: before it was killed, SentinelOne disrupted the ransomware deployment enough to prevent any additional damage.

Any attempts to re-deploy malware or move laterally following the disablement of the EDR were thwarted by our team, who swiftly alerted the client to the activity and advised that the asset be taken offline and isolated from the rest of the network. Our team informed the client that the shadow copies had been deleted and SentinelOne had been turned off on their print server. After having our threat hunters thoroughly review their environment, we reassured the client that no sensitive information was exfiltrated or encrypted. In response to the attack, the client moved to rebuild their print server and reinstall SentinelOne.

Recommendations

As BYOVD attacks to bypass EDR software become more widespread, we strongly advise blacklisting outdated drivers with a known history of exploitation. Furthermore, we encourage our clients to maintain an inventory of the drivers installed on their systems, ensuring they remain current and secure. Lastly, we recommend bolstering the security of administrator accounts to defend against brute force attacks, as the incident detailed in this blog post could not have transpired without the initial privileged user compromise.
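
As a starting point for that driver inventory, the built-in Windows driverquery tool can export a verbose driver list that is easy to archive and diff over time. The Python sketch below wraps it and parses the CSV output; column names such as “Module Name” and “Path” can vary by Windows version and locale, so treat them as assumptions.

# Hedged sketch: build a simple driver inventory with the built-in Windows
# "driverquery" tool and keep the CSV output for change tracking. Column names
# in the verbose output may vary by Windows version and locale.
import csv
import io
import subprocess

def driver_inventory() -> list[dict[str, str]]:
    output = subprocess.run(
        ["driverquery", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return list(csv.DictReader(io.StringIO(output)))

if __name__ == "__main__":
    drivers = driver_inventory()
    print(f"{len(drivers)} drivers installed")
    for d in drivers[:10]:
        print(d.get("Module Name", "?"), d.get("Path", "?"))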

The post Stories from the SOC – Unveiling the stealthy tactics of Aukill malware appeared first on Cybersecurity Insiders.