This blog was written by an independent guest blogger.

Despite years of industry efforts to combat insider threats, malicious behavior can still sometimes be difficult to identify. As organizations work towards building a corporate cyber security culture, many have begun looking into zero-trust architectures to cover as many attack surfaces as possible.

This action is a step in the right direction, but it also has the potential to raise fears and generate negative responses from employees. Zero-trust security could instill demotivation and resentment if taken as a sign of bad faith and mistrust, accelerating turnover rates and bringing the Great Resignation to a peak.

How can an organization effectively navigate zero-trust without creating friction between employers and employees? And how can it get there without holding trust-building exercises as part of an in-office environment?

Why trust matters in modern business environments

The security perimeter is no longer a physical location in a modern enterprise; it is a set of access points dispersed in and delivered from the cloud. In addition to identity, the authorization model should factor in the sensitivity of the data, the source location of the request, reliability of the endpoint, etc. The use of multiple cloud platforms and a growing number of endpoints can massively expand the attack surface.

The foundation of zero-trust security starts by eliminating the word trust. Criminals today don’t break into network perimeters; they log in with stolen credentials and then move laterally across the network, hunting for more valuable data. Protecting the path from identity to data is crucial – this is at the heart of an ID-centric zero-trust architecture. To do so, security teams should:

  • Validate the user
  • Verify the device
  • Limit access and privilege

The layers that connect identity to data play essential roles in sharing context and supporting policy enforcement. A zero-trust architecture is continuously aware of identity and monitors for a change in context.
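As a rough illustration, the three checks above can be combined into a single deny-by-default decision. The Python sketch below is a minimal example with invented field names and policy values, not any vendor's policy engine:

```python
# Minimal zero-trust access decision sketch. All names and policy
# values are illustrative, not from any specific product.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_authenticated: bool   # validate the user (e.g., MFA passed)
    device_compliant: bool     # verify the device (patched, managed)
    requested_privilege: str   # e.g., "read", "write", "admin"
    granted_privileges: set    # least-privilege roles assigned to the user


def evaluate(request: AccessRequest) -> bool:
    """Deny by default; allow only when every check passes."""
    if not request.user_authenticated:
        return False
    if not request.device_compliant:
        return False
    # Limit access and privilege: the request must fall within
    # the user's explicitly granted privileges.
    return request.requested_privilege in request.granted_privileges


ok = evaluate(AccessRequest(True, True, "read", {"read"}))
denied = evaluate(AccessRequest(True, False, "read", {"read"}))
print(ok, denied)  # True False
```

Note that a change in any one input (a non-compliant device, a revoked privilege) flips the decision, which mirrors the continuous context monitoring described above.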

A new memorandum from the United States Office of Management and Budget (OMB) outlines why zero-trust architecture is crucial to securing the web applications that are relied on daily. The SolarWinds attack reminds us that supply chain security is vital, and the recent Log4Shell incident highlights how crucial effective incident response is, so finding a path to an improved security posture is imperative.

However, zero-trust does not mean encouraging mistrust throughout the organization, and companies should not have to rely on technology alone for protection. Security is best applied as a team effort, and successful zero-trust depends on a culture of transparency, consistency, and communication across the whole organization. But how can organizations achieve this?

The two pillars of building (Zero) Trust

When building zero-trust in any organization, two key pillars must be considered – culture and tools.

As companies begin implementing zero-trust, they must also integrate it into their culture. Inform employees what’s going on, what the process of zero-trust entails, how it impacts and benefits them and the company, and how they can support the zero-trust process. By engaging employees and challenging them to embrace skepticism towards potential threats, businesses are planting the seeds of security across their organizational ecosystem. Once employees understand the value of zero-trust, they also feel trusted and empowered to be part of the broader cybersecurity strategy.

Once zero-trust has been placed at the core of an organization's cybersecurity culture, the next step is to apply best practices for implementation. There are several measures that organizations can take, including:

  • Use strong authentication to control access.
  • Elevate authentication where risk is higher (step-up authentication).
  • Incorporate password-less authentication.
  • (Micro)segment the corporate network.
  • Secure all devices.
  • Segment your applications.
  • Define roles and access controls.

Although zero-trust is technology agnostic, it is deeply rooted in verifying identities. One of the first steps is identifying the network's most critical and valuable data, applications, assets, and services. This helps prioritize where to start and enables zero-trust security policies to be created; once the most critical assets are identified, organizations can focus on protecting them first as part of their zero-trust journey.

The use of multi-factor authentication is crucial here. It is not a question of if to use it, but when. Phishing-resistant MFA cannot be compromised even by a sophisticated phishing attack, which means the MFA solution cannot rely on anything that an attacker who steals it can reuse as a credential. This rules out one-time passwords, security questions, and push notifications that users approve without scrutiny.
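To illustrate the distinction, the toy Python check below screens a hypothetical MFA configuration for factors an attacker could capture and replay. The factor labels are invented for the example, not any real product's API:

```python
# Illustrative sketch: screening an MFA configuration for phishable
# factors. Factor names are hypothetical labels for the example.
PHISHABLE_FACTORS = {"otp", "sms_code", "security_question", "push_approval"}
PHISHING_RESISTANT = {"fido2_security_key", "platform_authenticator"}  # origin-bound


def weakest_factors(configured: set) -> set:
    """Return any configured factors an attacker could phish and replay."""
    return configured & PHISHABLE_FACTORS


config = {"fido2_security_key", "otp"}
print(weakest_factors(config))  # {'otp'}
```

The phishing-resistant factors are origin-bound (the credential only works with the legitimate site), which is why they are absent from the phishable set.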

The challenge of implementing zero-trust

One essential problem that most enterprises are dealing with is the issue of fragmented IAM. As a result, zero-trust implementation is fraught with high complexity, risks, and costs.

The key reason behind this problem is that organizations are operating multiple identity security silos. In fact, the Thales 2021 Access Management Index report indicates that 33% of the surveyed organizations have deployed three or more IAM tools. Coordinating that many systems can, at a minimum, create operational complexity, but it can also increase the risk of fragmented security policies, siloed views of user activity, and siloed containment.

A zero-trust culture should help enterprises with IAM silos to move towards a standardized zero-trust security model, with standardized security policies and adjustments orchestrated from a central control panel across underlying silos. The process should provide insights on security policy gaps and inconsistencies and recommend security policy adjustments based on zero-trust security principles.
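As a simplified illustration of orchestrating policy across silos, the Python sketch below compares hypothetical per-silo settings against a central zero-trust baseline and reports the gaps. The silo names and policy keys are invented:

```python
# Hedged sketch: comparing per-silo IAM policies against a central
# zero-trust baseline and reporting divergences. Names are invented.
BASELINE = {"mfa_required": True, "session_timeout_min": 30}

silos = {
    "workforce_iam": {"mfa_required": True,  "session_timeout_min": 30},
    "customer_iam":  {"mfa_required": False, "session_timeout_min": 30},
    "legacy_ldap":   {"mfa_required": True,  "session_timeout_min": 480},
}


def policy_gaps(all_silos: dict, baseline: dict) -> dict:
    """Map each silo to the settings that diverge from the baseline."""
    gaps = {}
    for name, policy in all_silos.items():
        diff = {k: policy.get(k) for k, v in baseline.items() if policy.get(k) != v}
        if diff:
            gaps[name] = diff
    return gaps


print(policy_gaps(silos, BASELINE))
```

A real orchestration layer would go further and push corrected policies back to each silo, but the gap report alone already surfaces the inconsistencies described above.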

Conclusion

A zero-trust approach to security covers all attack surfaces to protect the organization, but the tools mean nothing without people using them appropriately. Aligning company success and security with employee success and security is crucial. Deploying a centralized IAM solution that covers all attack surfaces ensures optimal protection and helps build confidence in a zero-trust business and computing environment.

The post Building trust in a Zero-Trust security environment appeared first on Cybersecurity Insiders.

Stories from the SOC is a blog series that describes recent real-world security incident investigations conducted and reported by the AT&T SOC analyst team for AT&T Managed Extended Detection and Response customers.

Executive summary

Once a malicious actor has gained initial access to an internal asset, they may attempt to conduct command and control activity. The ‘Command and Control’ (C&C) tactic, as identified by the MITRE ATT&CK® Framework, consists “of techniques that adversaries may use to communicate with systems under their control within a victim network.” Cobalt Strike is an effective adversary simulation tool used in security assessments but has been abused by malicious actors for Command and Control of victim networks. In attackers' hands, it can be used to deploy malicious software, execute scripts, and more.

This investigation began when the Managed Extended Detection and Response (MXDR) analyst team received multiple alarms involving the detection of Cobalt Strike on an internal customer asset. Within ten minutes of this activity, the attacker launched a Meterpreter reverse shell and successfully installed remote access tools Atera and Splashtop Streamer on the asset. These actions allowed the attacker to establish multiple channels of command and control. In response, the MXDR team created an investigation and informed the customer of this activity. The customer determined that an endpoint detection and response (EDR) agent was not running on this asset, which could have prevented this attack from occurring. This threat was remediated by isolating the asset and scanning it with SentinelOne to remove indicators of compromise. Additionally, Cobalt Strike, Atera, and Splashtop Streamer were added to SentinelOne’s blacklist to prevent unauthorized execution of this software in the customer environment.

Investigation

Initial alarm review

Indicators of Compromise (IOC)

An initial alarm was triggered by a Windows Defender detection of Cobalt Strike on an internal customer asset. The associated log was provided to USM Anywhere using NXLog and was detected using a Windows Defender signature. Multiple processes related to Cobalt Strike were attached to this alarm.

Cobalt Strike, as mentioned previously, is a legitimate security tool that can be abused by malicious actors for Command and Control of compromised machines. In this instance, a Cobalt Strike beacon was installed on the compromised asset to communicate with the attacker’s infrastructure. Windows Defender took action to prevent these processes from running.

Immediately following the Cobalt Strike detection, an additional alarm was triggered for a Meterpreter reverse shell.

Meterpreter

A Meterpreter reverse shell is a component of the Metasploit Framework and requires the attacker to set up a remote ‘listener’ on their own infrastructure that ‘listens’ for connections. Upon successful exploitation, the victim machine connects to this remote listener, establishing a channel for the attacker to send malicious commands. A Meterpreter reverse shell can be used to allow an attacker to upload files to the victim machine, record user keystrokes, and more. In this instance, Windows Defender also took action to prevent this process from running.

Expanded investigation

Events search

During post-exploitation, an attacker may create scheduled tasks that run periodically, disable antivirus, or configure malicious applications to execute at startup. To query for this activity, specific event names, such as ‘Windows Autostart Location’, ‘New Scheduled Task’, and events containing ‘Windows Defender’, were added to a filter in USM Anywhere. An additional filter was applied to display events occurring in the last 24 hours. This expanded event search provided context into attacker activity around the time of the initial Cobalt Strike and Meterpreter alarms.
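The same kind of filter can be expressed in code. The Python sketch below selects events by name within a 24-hour window, using simplified dictionary records rather than the actual USM Anywhere log schema:

```python
# Sketch of an event filter: select watched event names within the last
# 24 hours. Records are simplified dicts, not a real log schema.
from datetime import datetime, timedelta, timezone

WATCHED = ("Windows Autostart Location", "New Scheduled Task", "Windows Defender")


def recent_suspicious(events, now=None, window_hours=24):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    return [e for e in events
            if e["timestamp"] >= cutoff
            and any(name in e["event_name"] for name in WATCHED)]


now = datetime.now(timezone.utc)
events = [
    {"event_name": "New Scheduled Task", "timestamp": now - timedelta(hours=1)},
    {"event_name": "Windows Defender Detection", "timestamp": now - timedelta(hours=2)},
    {"event_name": "User Logon", "timestamp": now - timedelta(hours=3)},
    {"event_name": "New Scheduled Task", "timestamp": now - timedelta(days=3)},
]
print(len(recent_suspicious(events, now)))  # 2
```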

[Image: event search results providing context for the Cobalt Strike activity]

Event deep dive

Just after the Cobalt Strike and Meterpreter detections, a scheduled task named “Monitoring Recovery” was created. This task creation is recorded under Windows Event ID 106:

[Image: Windows Event ID 106 log for the scheduled task]

This scheduled task was used to install two remote monitoring and management (RMM) applications: Atera and Splashtop Streamer.

Shortly after this task was created and executed, an event was received indicating “AteraAgent.exe” was added as a Windows auto-start service.

[Image: event showing AteraAgent.exe added as an auto-start service]

AteraAgent.exe is associated with Atera, a legitimate computer management application that allows for remote access, management, and monitoring of computer systems, but has been abused by attackers for command and control of compromised systems.

This change was followed by an event involving “SRService.exe” being added as a Windows auto-start service on this asset:

[Image: event showing SRService.exe added as an auto-start service]

SRService.exe is associated with the Splashtop Streamer service, a remote access application commonly used by IT support that has also been abused by attackers for C&C communications.

At this point, the attacker had attempted to create multiple channels for command and control using Cobalt Strike, Meterpreter, Atera, and Splashtop Streamer. While the Cobalt Strike and Meterpreter sessions were terminated by Windows Defender, Atera and Splashtop Streamer were successfully added as startup tasks. This allowed the attacker to establish persistence in the customer environment. Persistence, as identified by the MITRE ATT&CK framework, allows the attacker to maintain “access to systems across restarts, changed credentials, and other interruptions that could cut off their access.”

Response

Building the investigation

All alarms and events were carefully recorded in an investigation created in USM Anywhere. The customer was immediately contacted regarding this compromise, which led to an ‘all-hands-on-deck’ call to remediate the threat. The compromise was escalated to the customer’s Threat Hunter, as well as management and Tier 2 analysts.

Customer interaction

The MXDR team worked directly with the customer to contain and remediate this threat. This asset was quarantined from the customer network where it was scanned for malicious indicators using SentinelOne. The customer installed the SentinelOne EDR agent on this asset to protect it from any current threats. Additionally, the unauthorized applications Cobalt Strike, Meterpreter, Atera, and Splashtop Streamer were added to SentinelOne’s blacklist to prevent future execution of these programs in the customer environment.

Limitations and opportunities

Limitations

While this compromise was quickly detected and contained, the customer lacked the protection required to prevent the applications Atera and Splashtop Streamer from being installed and added as Windows auto-start programs.

Opportunities

To protect an enterprise network from current threats, a multi-layered approach must be taken, otherwise known as ‘Defense in Depth.’ This entails multiple layers of protection, including endpoint detection and response, implementation of a SIEM (Security Information and Event Management) system, and additional security controls. Had an EDR agent been installed on this asset, this malicious behavior could have been prevented. AT&T’s Managed Endpoint Security (MES) provides endpoint detection and response and can be utilized along with USM Anywhere to actively detect, prevent, and notify the customer of malicious activity in their environment.

The post Stories from the SOC – Command and Control appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

Businesses that allow employees to work from home are more likely to encounter a new security threat — compromised smart home devices.

Smart technology connected to an employee’s home network, like smart thermostats, appliances, and wearables, can fall victim to hackers. Workers who join their employer’s network remotely can unwittingly allow compromised devices to open the doors to those hackers.

The right IT policies, training and technology can help businesses counter smart home device breaches.

Why hackers target smart home devices

Attacks against smart home devices are rising fast. There were more than 1.5 billion attacks on smart devices in the first half of 2021, with attackers generally looking to steal data or use compromised devices for future breaches and cryptocurrency mining.

IoT devices are often not as guarded as laptops or smartphones and are easier to breach. They may not be updated as frequently, making them vulnerable to well-known exploits. Users may also not notice unusual activity from an IoT device as readily, allowing hackers to use it as part of a botnet or further attacks.

At the same time, the number of smart home devices is growing fast. Consumers have access to a growing range of IoT appliances, including smart refrigerators, lightbulbs, coffee makers and washing machines. The smart home device market is expanding quickly, making it a fast-growing target for hackers.

As a result, smart home technology is a prime target for hackers who need devices to stage an attack or want to break into otherwise secure networks.

Protecting business networks from smart home security threats

Employees are ultimately responsible for their home devices, but a wider range of people and organizations can take action to make them more secure. Employers, IT departments, managed service providers (MSPs) and communication service providers (CSPs) have the power to improve safety.

Some IoT device security stakeholders, like CSPs, can also provide risk mitigation to customers who may not receive security support from their employer or IT team. Employers and IT departments can work with CSPs to cover aspects of home device security that they may not be able to manage on their own.

The right WFH policies and employee training can help protect business networks from an attack that uses smart home devices. In most cases, a combination of approaches will be necessary.

One popular strategy for securing WFH employees' smart devices is appointing an internal team member responsible for monitoring IoT security. That person should require WFH employees with smart home devices to follow best practices, like automating updates and ensuring they are digitally signed.

Requiring home IoT devices to have a Secure Boot feature available and enabled will also be helpful. This ensures that the device’s bootloader executable is genuine and has not been tampered with, initiates basic logging and checks for available firmware updates.

This feature provides an excellent foundation for IoT device security and helps automate device updating. Secure Boot also lets IT teams verify that employee smart devices are not compromised.
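Conceptually, the integrity check resembles hashing the bootloader image and comparing it against a trusted value. Real Secure Boot verifies vendor signature chains in firmware; the Python sketch below only illustrates the idea, with a made-up image:

```python
# Toy illustration of a boot-time integrity check: hash the image and
# compare against a trusted value. Real Secure Boot verifies vendor
# signature chains in firmware; this only shows the basic idea.
import hashlib


def image_is_genuine(image_bytes: bytes, trusted_sha256: str) -> bool:
    return hashlib.sha256(image_bytes).hexdigest() == trusted_sha256


firmware = b"example bootloader image"
trusted = hashlib.sha256(firmware).hexdigest()

print(image_is_genuine(firmware, trusted))                # True
print(image_is_genuine(firmware + b"tampered", trusted))  # False
```

Any modification to the image, even a single byte, changes the hash and fails the check, which is why tampered bootloaders are refused.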

It’s also important for an organization to formally determine its IoT risks and build a security policy. Companies that don’t know what kinds of dangers they face won’t be able to create a set of rules and requirements for WFH employees that keeps devices and networks safe.

Make sure IoT devices don’t become a security threat

Smart home devices are increasingly popular, but they can create significant security risks for employers. Having the right IT policy will help companies manage these risks.

A well-documented IoT policy that remote workers can follow, Secure Boot devices and a designated IoT security manager will make it easier for businesses to protect their networks from smart device security threats.

The post How to counter smart home device breaches appeared first on Cybersecurity Insiders.

Cyberattacks are alarming, and establishments must increase protections, embrace a layered attitude, and cultivate security-conscious users to combat growing concerns.

Cybersecurity leaders are inundated with talent development resources covering hiring, recruitment, and retention of the talent pipeline. Fifty percent of hiring managers generally feel that their candidates aren’t highly qualified. Globally, the cybersecurity professional shortage is estimated at 2.72 million, based on findings in the 2021 (ISC)2 Cybersecurity Workforce Study and the ISACA State of Cybersecurity 2021 Survey.

The cybersecurity workforce demand is a standing boardroom agenda for CISOs and senior executive constituents. CISOs must work collaboratively alongside human resources to solve talent pipeline challenges.

A CyberSeek 2021 assessment indicates 597,767 national cybersecurity job openings; organizations must address this immediate disparity through consensus-building, diversity of thought, and out-of-the-box thinking. CISOs must evaluate their current hiring practices, transform ideal job descriptions into realistic ones, and scrutinize their HR/organizational culture to remove aggressive tendencies and embrace a more forward-leaning, authentic, and autonomous culture.

Talent development is considered the cornerstone to increasing diversity-infused candidates into the cybersecurity pipeline. Based on my experience, I have adopted a three-prong attack strategy to cultivate a unique palette of experience and knowledge to ascertain a solid talent-rich team.

This goes beyond the outdated mentality of leaning on third-party partnerships (certificates, degrees, professional associations, and internship/fellowship programming) to acquire unique talent. This approach, combined with interview preparation and stretch assignments, creates real-time, mutually beneficial skills for current team members.

Lastly, providing opportunities for my employees to showcase their newfound skills through conferences (internal and external), community engagements, and immersive responsibilities provides hands-on experience and shadowing opportunities. This helps level up knowledge transfer and strengthens mentorship/sponsorship programs, creating a more synergistic, follow-then-lead approach to building the talent pipeline.

As a transformational leader, it is paramount to change current hiring practices to further reach untapped talent inside and outside the organization using my three-prong attack strategy:

1. Go where the talent is located. Seek talent that has the drive, ambition, and tenacity to level themselves up through self-driven, multipronged vectors and consequently are thirsty and self-motivated.

2. Survey current hiring practices to identify the talent gaps. (Who? Where? Why? When? What?  & How?). Build a diverse talent pipeline and create new partnerships that are currently serving the population previously identified in the gap analysis.

3. “Try before you buy” mentality. Increase credibility and employee confidence through stretch assignments, mentorships/sponsorships, and leadership development tasks to align employees with exposure and insight before leaping to a new role.

My guiding principles lead me to ignite my employees' inner authenticity and emotional intelligence to provide a team-oriented, future-oriented culture. This culture relies heavily on an in-group collectivism mindset to tap into “their inner leader.” Deeply coupled partnerships operate from a customized driver/navigator paradigm to provide an inclusive, autonomous environment.

In my experience, cybersecurity job descriptions tend to be too inelastic. Panic-stricken job descriptions can turn away competent, qualified, and dedicated applicants. Plus, many individuals who would be excellent security candidates do not have college degrees, have not attended boot camps, and have not completed traditional security training.

Moreover, career changers are a large part of the untapped talent pool and possess lucrative, diverse skillsets (i.e., lawyers, teachers, and librarians). Candidates with the desire, passion, and willingness to learn or self-hone their skills should be treasured and respected. Pioneering thought leadership is vital to building an above-board Diversity, Equity, and Inclusion (DEI) focused organization that complements current best practices, interlaced with a meet-them-where-they-are mentality to cultivate good results.

The post Challenges that impact the Cybersecurity talent pipeline appeared first on Cybersecurity Insiders.

Resilience means more than bouncing back from a fall, particularly at a moment of significantly increased threats. When addressing resilience, it’s vital to focus on long-term goals instead of short-term benefits. Resilience in the cybersecurity context means the capacity to resist, absorb, recover from, and adapt to business disruptions.

Cyber resiliency can’t be accomplished overnight. For the longest time, the conversation around getting the cybersecurity message across at the board level has revolved around speaking the language of the business. Businesses cannot afford to treat cybersecurity as anything but a systemic issue. While the board tends to strategize about managing business risks, cybersecurity professionals tend to concentrate their efforts at the technical, organizational, and operational levels. The languages used to manage the business and to manage cybersecurity are different, which can obscure both the understanding of the real risk and the best approach to addressing it. Early in my career, I was told to think about how to transform geek speak into CEO speak. That advice still holds true.

Why? The argument for board-level cybersecurity understanding

The reality today is that cybersecurity is a critical business issue that must be a priority for every organization. As business operations become increasingly digitized, data has become one of the most valuable assets of any organization. This has resulted in increased expectations from customers, employees, regulators, and other stakeholders that an organization has developed appropriate resilience measures to protect against the evolving cyber threat landscape. The failure to do so presents substantial risks, including loss of consumer confidence, reputational damage, litigation, and regulatory consequences.

How? Changing the narrative away from the ‘team of no.'

The ‘how’ equation comes in two distinct yet equally important parts. One is levelling up the board’s cybersecurity knowledge. The other is ensuring that security teams get board-level support. The second of these requires those teams to help change the narrative: instead of being the ‘team of no,’ security teams need to be seen as influencers. Enablers, in other words, not enforcers.

It's time to stop repeating how things can't be done (on security grounds). Rather, we need to preach from the business transformation book and explain how they can be. We must stop operating out of silos and build out relationships with all business players, embedding 'scenario thinking' and responsiveness into organizational cyber functioning. But just as importantly, to address the first part, the board needs to proactively plan and prepare for a cyber-crisis; only by understanding the risks can the business be in the right strategic place to combat them successfully.

Cybersecurity teams should equip the board with the following as a starting point:

  • A clear articulation of the current cyber risks facing all aspects of the business (not just IT);
  • A summary of recent cyber incidents, how they were handled, and the lessons learned;
  • Short- and long-term road maps outlining how the company will continue to evolve its cyber capabilities to address new and expanded threats, including the related accountabilities in place to ensure progress; and
  • Meaningful metrics that provide essential performance and risk indicators for the top-priority cyber risks being managed today.

Business and cybersecurity success go hand in hand

As the board’s role in cyber-risk oversight evolves, the importance of having a robust dialogue with the cyber influencers within an organization cannot be overestimated. Without close communication between boards and the cyber/risk team, the organization could be at even greater risk.

If this sounds like a cybersecurity grooming exercise, that's because it is. Preparing cybersecurity practitioners with business acumen for the board to act as the voice of educated reason isn't such a bad idea, is it? The best businesses thrive because they have people at the very top who can exert control based on informed decision-making when a crisis looms. Leaving cybersecurity out of this success equation in 2022 is a very risky game.

The post Cybersecurity and resilience: board-level issues appeared first on Cybersecurity Insiders.

Stories from the SOC is a blog series that describes recent real-world security incident investigations conducted and reported by the AT&T SOC analyst team for AT&T Managed Extended Detection and Response customers.

Executive summary

One of the most prevalent threats today, facing both organizations and individuals alike, is the use of ransomware. In 2021, 37% of organizations said they were victims of some type of ransomware attack. Ransomware can render large amounts of important data inaccessible nearly instantly. This makes reacting to potential ransomware events in a timely and accurate manner extremely important. Utilizing an endpoint security tool is critical to help mitigate these threats. However, it is vital to maintain vigilance and situational awareness when addressing these threats, and not to rely solely on one piece of information when performing analysis.

The AT&T Managed Extended Detection and Response (MXDR) analyst team received an alarm stating SentinelOne had detected ransomware on a customer’s asset. The logs suggested the threat had been automatically quarantined, but further analysis suggested something more sinister was afoot. The same malicious executable had been detected on that asset twice before, both times reportedly being automatically quarantined. This type of persistent malware can be an indicator of a deeper infection such as a rootkit. After a more in-depth analysis and collaboration with the customer, the decision was made to quarantine and power off the asset and to replace it entirely due to this persistent malware.

Investigation

Initial alarm review

Indicators of Compromise (IOC)

The initial SentinelOne alarm alerted us to an executable ‘mssecsvc.exe’:

[Image: SentinelOne alarm for the mssecsvc.exe executable]

Both the name of the executable and its file path are cleverly crafted to imitate a legitimate Windows program.

Expanded investigation

Events search

Searching events for the file hash revealed it had been repeatedly detected on the same asset over the last two weeks. In each instance, the event log reports the executable being automatically quarantined by SentinelOne.

[Image: repeated detection events for the same executable]

Additionally, a search in USM Anywhere revealed two previous investigations opened for the same executable on the same asset. In both previous investigations, the customer noted SentinelOne had automatically quarantined the file but did not take any further action regarding the asset.

Event deep dive

In the new instance of this alarm, the event log reports that SentinelOne successfully killed any processes associated with the executable and quarantined the file.

[Images: event logs showing the processes killed and the file quarantined]

This may lead one to believe there is no longer a threat, but the persistent nature of this file raises more questions than the event log can answer.

Reviewing additional indicators

It is important not to rely on a single piece of information when assessing threats, and to go beyond just what is contained in the logs we are given. Utilizing open-source threat intelligence strengthens our analysis and can confirm findings. VirusTotal confirmed the file hash was deemed malicious by multiple other vendors.
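A simplified version of this corroboration step is hashing the suspect file and checking it against known-malicious hashes exported from a threat intelligence feed. In the Python sketch below, the "known bad" entry is just the SHA-256 of an empty input, used as a stand-in for a real IOC:

```python
# Sketch of corroborating a detection against a local IOC set. The
# "known bad" hash below is simply SHA-256 of empty input, used as a
# stand-in; a real set would come from a threat intelligence feed.
import hashlib

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def is_known_malicious(data: bytes) -> bool:
    return sha256_of(data) in KNOWN_BAD_SHA256


print(is_known_malicious(b""))        # True  (demo IOC matches empty input)
print(is_known_malicious(b"benign"))  # False
```

Services like VirusTotal perform the same lookup at scale across many vendors' detections; the point here is only that the hash, not the filename, is what gets matched.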

[Image: VirusTotal results for the file hash]

The executable was also analyzed in JoeSandbox. This revealed the file contained a device path for a binary string ‘FLASHPLAYERUPDATESERVICE.EXE’, which could be used for kernel mode communication, further hinting at a rootkit.

[Image: JoeSandbox analysis of the executable]

Response

Building the investigation

Despite the event log suggesting the threat had been automatically quarantined, the combination of the repeat occurrence and the findings on open-source threat intel platforms warranted raising an investigation to the customer. The customer was alerted to the additional findings, and it was recommended to remove the asset from the network.

[Image: investigation raised to the customer]

The customer agreed with the initial analysis and suspected something more serious. The analysts then searched through the Deep Visibility logs from SentinelOne to determine the source of the mssecsvc.exe. Deep Visibility logs allow us to follow associated processes in a storyline order. In this case, it appears the ‘mssecsvc.exe’ originated from the same ‘FlashPlayerUpdateService.exe’ we saw in the JoeSandbox analysis. Deep Visibility also showed us that mssecsvc.exe had a Parent Process of wininit.exe, which was likely to be the source of persistence.

[Image: Deep Visibility process storyline for mssecsvc.exe]

Customer interaction

Another notable feature of USM Anywhere is the ability to take action from one centralized portal. As a result of the investigation, the analysts used the Advanced AlienApp for SentinelOne to place the asset in network quarantine mode and then power it off. An internal ticket was submitted by the customer to have the asset replaced entirely.

Limitations and opportunities

Limitations

A limiting factor for the SOC is our visibility into the customer's environment, as well as the information presented to us in log data. The event logs associated with this alarm suggested there was no longer a threat, as it had been killed and quarantined by SentinelOne. Taking a single piece of information at face value could have led to further damage, both financial and reputational. This investigation highlighted the importance of thinking outside the log, researching historical investigations, and combining multiple sources of information to improve our analysis.

The post Stories from the SOC – Persistent malware appeared first on Cybersecurity Insiders.

In the previous article, we covered the build and test process and why it’s important to use automated scanning tools for security scanning and remediation. The build pipeline compiles the software and packages it into an artifact. The artifact is then stored in a repository (called a registry) where it can be retrieved by the release pipeline during the release process. The release process prepares the different components, including the packaged software (or artifacts), to be deployed into an environment. This article covers the contents of a release and the features within a release pipeline that prepare a release for deployment into an environment.


Artifact registry

Artifacts are stored in a registry (a separate system from the code repository) that is accessible to DevOps tools like release pipelines and to the IT systems the application will be deployed onto. Registries should be managed as an IT system and provided to the development and DevOps teams as a service. The IT systems that support the registry should be hardened in line with corporate security policies. The registry should be private and accessible only within the company unless it is intended to be a public source for artifacts. Password protection and IP whitelisting are also advised to ensure that packages can only be retrieved by approved systems. Logging information needs to be sent to a Security Operations Center (SOC) for monitoring. Encryption of the packages at rest and in transit is also required.
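The IP-whitelisting control mentioned above can be sketched as a small allowlist check, assuming the registry sits behind a proxy or gateway that can consult it. The network ranges here are illustrative placeholders, not recommendations.

```python
# Minimal sketch of an IP-allowlist gate for artifact registry requests.
# Network ranges are hypothetical examples.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),   # build agents
    ipaddress.ip_network("10.30.5.0/24"),   # release pipeline hosts
]

def is_allowed(source_ip: str) -> bool:
    """Deny by default; permit only requests from approved network ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.20.3.7"))    # build agent address -> True
print(is_allowed("203.0.113.9"))  # external address -> False
```

In practice this check would live in the registry's reverse proxy or firewall rules rather than application code, but the deny-by-default logic is the same.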

Contents of a release

A release is created by the release pipeline (Azure DevOps, Jenkins, TeamCity) and uses the artifacts created by a build pipeline. The release pipeline is triggered by the build pipeline and knows attributes like the latest software version that was created and the name and location of the artifacts. The release pipeline is highly configurable: when the release should be scheduled to deploy, which variables and secrets (passwords, certificates, and keys) should be used, which version of the compiled code needs to be deployed, and which approval processes protect each environment from unapproved releases.

Releases are capable of being automatically deployed onto IT systems when a new artifact version is built. DevSecOps best practice encourages automated builds but advises manual approval instead of automated releases to most environments. However, it may be appropriate for release pipelines to automatically deploy into a development environment that is under the development team control. Environments controlled by different teams like Quality Assurance (QA), User Acceptability Testing (UAT), Production, and Disaster Recovery (DR) typically do not have automated release pipelines after every new artifact version is built.

Variables and secrets are how running applications can be adapted to the different environments (development, QA, UAT, Production and DR). Variables are created in the pipeline tools and can be applied during the release pipeline. DevSecOps recommends storing secrets in a separate “key” vault that can be connected to the pipeline. Separate variables and secrets allow the software artifacts to remain the same no matter which environment they are deployed into. When the software is deployed, it looks for variables and secrets that were configured in the build and release processes and uses them to properly set the system up. Variables and secrets are a big reason why releases can be deployed so quickly and multiple times per day.
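The separation of artifacts from variables and secrets can be sketched as a small configuration-resolution step at deploy time. Here a dict and an environment variable stand in for pipeline variable groups and a key vault; the variable names and environments are hypothetical.

```python
# Sketch of environment-specific configuration resolved at release time,
# assuming a dict stands in for pipeline variable groups and os.environ
# stands in for a key vault. All names are illustrative.
import os

VARIABLES = {
    "dev":  {"db_host": "db.dev.internal",  "log_level": "DEBUG"},
    "prod": {"db_host": "db.prod.internal", "log_level": "WARN"},
}

def resolve_config(environment: str) -> dict:
    """Same artifact, different config: only the inputs change per environment."""
    config = dict(VARIABLES[environment])
    # Secrets come from a vault at deploy time, never from the artifact itself.
    config["db_password"] = os.environ.get("DB_PASSWORD", "<from-vault>")
    return config

print(resolve_config("prod")["db_host"])  # db.prod.internal
```

The key design point is that the artifact never changes between environments; only the resolved variables and secrets do, which is what makes repeated daily deployments practical.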


Version control is mandatory for knowing which version of software is being deployed, having a rollback plan to recover if there is a bug or issue, and keeping track of features and additions to the application as developers work on the code. Every time a build creates an artifact, a version number is applied. The version number is used by the release pipeline so that it knows which artifact to retrieve and deploy. DevSecOps recommends using the semantic versioning system for all artifacts.
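Semantic versioning (MAJOR.MINOR.PATCH) compares versions numerically per component, which matters because plain string comparison would wrongly sort "1.9" after "1.10". A minimal sketch, with illustrative version numbers:

```python
# Sketch of semantic-version handling so a release pipeline can select the
# newest artifact and keep the previous one as a rollback target.

def parse_semver(version: str) -> tuple:
    """Split MAJOR.MINOR.PATCH into an integer tuple for correct ordering."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

artifacts = ["1.4.2", "1.10.0", "1.9.3"]
ordered = sorted(artifacts, key=parse_semver)

latest = ordered[-1]    # "1.10.0" -- tuple comparison avoids "1.9" > "1.10"
rollback = ordered[-2]  # "1.9.3"
print(latest, rollback)
```

Real semver also allows pre-release and build-metadata suffixes (e.g. `1.10.0-rc.1`), which this sketch does not handle.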

Release pipelines have approval features that control the software release. At a minimum, approvals should be setup for each environment, or where separation of duties is required between the development team and other groups in the organization. For example, the QA group should be the only team who can grant approval to release and deploy into the QA environment. This is because QA teams may still be working through their test plans on an older release, and they need to finish before testing the new release.

Build and release agents

The two types of agents used for build and release activities are vendor-hosted and self-hosted. Vendor-hosted agents are managed by the vendor, so upgrades and maintenance are taken care of. Also, a new agent is provided every time the build or release pipelines run. This makes resource management easy for the company but may not be an option for unique build and deploy dependencies. While extremely secure, builds and releases performed by vendor-hosted agents are not in the company’s control.

Self-hosted agents are managed by the company, which must upgrade and maintain the systems and install any dependencies before the agents can be used in build and release activities. Self-hosted agents work well when the DevOps platforms are internally hosted and not using any cloud-based servers. For example, self-hosted Jenkins pipelines use self-hosted servers and remain completely private and in the control of IT.

Next steps

There are many moving parts and components to the release process that need to be architected and designed with security in mind. These parts and components overlap with multiple vendors, all of the different environment owners, security, and IT. This requires the different members of a DevOps team, spread across all the different organizations, to work together and deliver changes to business systems safely and securely. The next article covers how a release is deployed and operated securely.

The post DevOps release process appeared first on Cybersecurity Insiders.

This is part one of a three-part series, written by an independent guest blogger. Please keep an eye out for the next blog in this series.

Remote work is the new reality for companies of all sizes and across every industry. As the majority of employees now perform their job functions outside the technology ecosystem of their local office, the cybersecurity landscape has evolved with the adoption of terms such as Zero Trust and Security Service Edge (SSE). To accommodate this new landscape, organizations have undergone fundamental changes to allow employees to work from anywhere, using any device, often at the expense of data security. The result is a paradigm shift: employees are increasingly dependent on their smartphones and tablets, which have jointly become the new epicenter of endpoint security.

This next-level dependence on mobile devices is consistent across the remote work environment. There are countless anecdotes about the new reality of hybrid work: workers using personal tablets to access sensitive data via SaaS apps, or taking a work Zoom call while waiting in the school pickup line. The constant in each of these stories is the overwhelming preference to use whatever device is available to complete the task at hand. Given the overwhelming use of non-traditional endpoints to send email, edit spreadsheets, update CRMs, and craft presentations, it is only logical that bad actors have pivoted to mobile to launch their attacks.

  • 4.32 billion active mobile internet users
  • 56.89% of total global online traffic comes from mobile

Although the experience paradigm quickly changed with the adoption of remote work, the perception of mobile devices as a risk vector has been more gradual for most customers. In fact, Gartner estimates that only 30% of enterprise customers currently employ a mobile threat detection solution.  Many organizations still assume that their UEM solution provides security or that iOS devices are already safe enough. The most shocking feedback from customers indicates that they historically haven’t seen attacks on mobile, so they have no reason to worry about it.  Given this mindset, it’s again no surprise that hackers have trained their focus on mobile as their primary attack vector and entry point to harvest user credentials.

  • 16.1% of enterprise devices encountered one (or more) phishing or malicious links in 3Q2021 globally.
  • 51.2% of personal devices encountered one (or more) phishing or malicious links in 3Q2021 globally.

What this mindset reveals is a certain naivete among many organizations, regardless of size or industry, that believe mobile devices do not present significant risk and therefore need not be considered in their data security and compliance strategies. This oversight points to two separate tenets that must be addressed when protecting sensitive data on mobile devices:

Endpoint security is an absolute requirement to protect sensitive data and it includes laptops, desktops, and mobile devices

There isn’t a single business that would issue a laptop to an employee without some version of anti-virus or anti-malware security installed, yet most mobile devices have no such protections. The primary explanation for this is that organizations think mobile device management is the same as mobile endpoint security. While device management tools are capable of locking or wiping a device, they lack the vast majority of capabilities necessary to proactively detect threats. Without visibility into threats like mobile phishing, malicious network connections, or advanced surveillanceware like Pegasus, device management falls far short of providing the necessary capabilities for true mobile security.

Even cybersecurity thought leaders sometimes overlook the reality of cyber-attacks on mobile.  In a recent blog, “5 Endpoint Attacks Your Antivirus Won’t Catch”, the entire story was exclusive to the impact on traditional endpoints even though rootkits and ransomware are just as likely to occur on mobile. 

Traditional security tools do not inherently protect mobile devices

Given the architectural differences that exist between mobile operating systems (iOS/Android) and traditional endpoint OS (MacOS, Windows, Linux, etc.), the methods for securing them are vastly different.  These differences inhibit traditional endpoint security tools, which are not purpose-built for mobile, from providing the right level of protection. 

This is especially true of the leading EPP/EDR vendors such as Carbon Black, SentinelOne, and CrowdStrike. Their core functionality is exclusive to traditional endpoints, although they are trending toward adding mobile security elements to their solutions. We’re seeing strategic partnerships emerge, and it’s expected that the mobile security and traditional endpoint security ecosystems will continue to merge as customers look to consolidate vendors.

What’s more, many of the ways users interact with their smartphones and tablets are unique to those devices. For example, a secure email gateway solution can’t protect against phishing attacks delivered via SMS or QR codes. Can you identify all of your devices (managed and unmanaged) that are subject to the latest OS vulnerability that needs to be patched immediately? Did one of your engineers just fall victim to a man-in-the-middle attack when they connected to a malicious WiFi network at a random coffee shop? These are just some examples of the threats and vulnerabilities that can only be mitigated with a security tool dedicated to protecting mobile endpoints.

The acceleration of remote work and the “always-on” productivity that’s expected have shifted your employees’ preferences for the devices they use to get work done. Reading email on the go, sending an SMS rather than leaving a voicemail (who still uses voicemail?), and accessing work applications that now reside almost entirely in the cloud have changed how business is transacted. The pivot to mobile has already occurred. It’s well past time that companies acknowledge this fact and update their endpoint security posture to include mobile devices.

If you would like to learn more or are interested in a Mobile Security Risk Assessment to provide visibility into the threat landscape of your existing mobile fleet, please click here or contact your local AT&T sales team.           

The post Endpoint security and remote work appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

APIs are a crucial tool in today’s business environment. Allowing applications to interact and exchange data and services means that companies can provide an ever-greater range of features and functionalities to their clients quickly and easily. So, it is no wonder that a quarter of businesses report that APIs account for at least 10% of their total revenue – a number that will only increase in coming years.

But for all their benefits, APIs also create security concerns for organizations. In one survey of API users, 91% reported an API-related security incident. Unfortunately, API security efforts within many organizations are simply not sufficient, exposing the company and its clients to attack and loss of sensitive data. 

Every business that uses APIs, indeed every business even thinking about using APIs, should have a solid API security strategy in place. This article reviews API vulnerabilities and outlines steps organizations should take to secure their APIs.

The importance of APIs

APIs provide numerous benefits for both businesses and their customers. At its most basic level, an API is simply a tool that allows an application to communicate with external applications and data sources. Developers can leverage these connections to create new applications, functionalities, and analytical tools, speeding the pace of business innovation and constantly improving user experience.

APIs facilitate everything from online payment systems and banking to travel aggregator services, social media, and media streaming services. They are also an important part of the rapidly expanding cryptocurrency world. 

Crypto developers use APIs to build decentralized applications (DApps) on blockchains. APIs also interact with the smart contracts that control everything from transactions to the formation of decentralized autonomous organizations (blockchain governance structures known colloquially as DAOs).

APIs also ease data sharing among corporate applications, reducing the need for repetitive and wasteful data entry, and they are an essential part of automating many business functions. In a business environment that increasingly includes remote workers, they help businesses build effective collaboration tools to ensure that their teams continue to work well, even when virtual.

Businesses can also use APIs for advanced competitive intelligence programs. Not only can they simplify the aggregation of competitive data from a range of sources, but they are integral in building effective data analytics and display tools. 

They can even be used to continuously track changes to your competitors’ websites so you can always be on top of the latest innovations in your industry (e.g., with tools like Visualping).  

API security vulnerabilities

Because APIs are such a dominant part of the business landscape, cyber attackers have targeted them with growing frequency. Gartner predicted that API attacks would be the most common attack vector this year, and that prediction is rapidly proving true.

Some of the world’s largest and most sophisticated companies have suffered widely publicized data breaches resulting from API attacks. And as businesses have painfully learned, hackers have many different ways to attack APIs.

Targeting code vulnerabilities

As with any software, APIs are only as good as their underlying code. Poor coding of APIs creates inherent vulnerabilities that hackers are only too happy to exploit.

DDoS attacks

Distributed denial of service (DDoS) attacks, which attempt to render APIs completely unavailable to users by overwhelming them with traffic, are rapidly increasing in frequency, driven in part by the growth of e-commerce in recent years. A related technique is the denial-of-inventory attack, in which bots block legitimate purchases by adding stock to carts that they never check out.
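A common first line of defense against this kind of volumetric abuse is per-client rate limiting. A minimal token-bucket sketch (the capacity, refill rate, and explicit time parameter are simplifications for illustration; production systems use gateway or CDN rate limiting):

```python
# Token-bucket rate limiter sketch: each client gets a bucket; a request
# spends one token, and tokens refill at a fixed rate.

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now: float) -> bool:
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1)
# Four requests in the same instant: the fourth is rejected.
print([bucket.allow(now=0.0) for _ in range(4)])  # [True, True, True, False]
```

Rate limiting alone won't stop a large distributed attack, but it blunts single-source floods and inventory-hoarding bots cheaply.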

Failed authentication and access control policies

It is crucial for organizations to strictly control API access and require strong authentication. Company API security policies should include role-based access control, least privilege, and zero-trust policies to limit what hackers can do with compromised credentials, and to restrict how far a successful attacker can move within company systems, especially when wide-ranging privileges are granted sparingly.
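The role-based, least-privilege model described above amounts to a deny-by-default permission check. A minimal sketch, with hypothetical roles, permissions, and endpoint names:

```python
# Role-based access-control sketch: roles map to explicit permission sets,
# and anything not granted is denied. All names are illustrative.

ROLE_PERMISSIONS = {
    "viewer": {"orders:read"},
    "clerk":  {"orders:read", "orders:write"},
    "admin":  {"orders:read", "orders:write", "users:manage"},
}

def authorize(role: str, permission: str) -> bool:
    """Least privilege: deny unless the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(authorize("viewer", "orders:read"))     # True
print(authorize("viewer", "orders:write"))    # False
print(authorize("intruder", "users:manage"))  # unknown role -> False
```

Stolen "viewer" credentials under this model can read orders but cannot modify them or manage users, which is exactly the blast-radius limitation the policy aims for.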

Man-in-the-Middle (MitM) attacks 

Hackers can insert themselves between users and APIs by intercepting and changing the communications between them. Using MitM attacks, hackers can gain access to sensitive user accounts and information, which they can use to exfiltrate company data. The danger of MitM attacks increases when companies do not apply transport layer security (TLS) in their APIs.
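TLS with full certificate and hostname verification is the baseline defense against MitM on API traffic. Python's standard library defaults already enforce both checks, and clients should never disable them:

```python
# Sketch: Python's default TLS context enforces certificate and hostname
# verification out of the box. Disabling either re-opens the MitM window.
import ssl

context = ssl.create_default_context()

print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

Code that sets `verify_mode = ssl.CERT_NONE` or passes `verify=False` to an HTTP client (a pattern that sometimes sneaks in during testing) should be treated as a security defect in API client code.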

Securing your APIs

So what steps do businesses need to take to have the best security possible when using APIs? 

Build an API inventory

The first step is to know what APIs you have and how you use them. A complete API inventory, including whether you have multiple versions of a given API, allows you to minimize your overall attack surface by eliminating unused or outdated APIs. An API inventory also helps you prioritize your security efforts, directing resources towards your most critical systems.
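A first pass over such an inventory is flagging superseded API versions for retirement. A minimal sketch, with a hypothetical inventory format:

```python
# Sketch of an API inventory pass that flags everything except the newest
# version of each API as a retirement candidate. Data is illustrative.

inventory = [
    {"api": "payments", "version": 1},
    {"api": "payments", "version": 2},
    {"api": "orders",   "version": 3},
]

def retirement_candidates(inventory):
    """Return entries superseded by a newer version of the same API."""
    newest = {}
    for entry in inventory:
        newest[entry["api"]] = max(newest.get(entry["api"], 0), entry["version"])
    return [e for e in inventory if e["version"] < newest[e["api"]]]

print(retirement_candidates(inventory))  # [{'api': 'payments', 'version': 1}]
```

A real inventory would also track owners, authentication methods, and exposed data classes per API, but even this minimal version-tracking pass shrinks the attack surface.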

Create effective API security policies

API vulnerabilities start well before a hacker ever enters the picture. Unfortunately, many companies don’t adequately protect their API assets because they don’t have API security policies in place, or if they do, those policies are ineffective. Organizations must apply strong security policies to their API usage and routinely enforce and update those policies.

Use strong authentication methods and encryption

In addition to having policies that limit who can access your APIs, you need to verify the identity of the people and services accessing them. Authentication methods such as API key or OAuth authentication harden your APIs against attacks and reduce your attack surface.
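One common hardening step beyond a bare API key is HMAC request signing: the shared key never travels with the request, only a signature over the payload does. A minimal sketch using Python's standard library (the key and payload are illustrative):

```python
# HMAC request-signing sketch: client signs the payload with a shared secret,
# server recomputes and compares. Key and payload are illustrative only.
import hmac
import hashlib

SECRET_KEY = b"example-shared-secret"  # in practice, issued per client and kept in a vault

def sign(payload: bytes) -> str:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing side channels
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"order_id": 42}'
signature = sign(payload)
print(verify(payload, signature))              # True
print(verify(b'{"order_id": 43}', signature))  # tampered payload -> False
```

A production scheme would also sign a timestamp or nonce to prevent replaying captured requests.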

Limit data exposure

The less data transferred through an API, the less there is for an attacker to intercept or exfiltrate. Therefore, keep data sharing across an API to what is absolutely necessary. Not only do you minimize potential breach issues, but the organization will also be in a better position concerning compliance issues.
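One practical way to enforce this is an explicit field allowlist at the API boundary, so responses expose only an approved subset of each internal record. A minimal sketch with hypothetical field names:

```python
# Sketch of limiting data exposure: serialize only allowlisted fields,
# never the full internal record. Field names are illustrative.

ALLOWED_FIELDS = {"id", "name", "status"}

def to_public(record: dict) -> dict:
    """Project an internal record down to its public, shareable fields."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

internal = {
    "id": 7,
    "name": "Acme Corp",
    "status": "active",
    "ssn": "***",            # sensitive: must never leave the service
    "internal_notes": "...",
}

print(to_public(internal))  # {'id': 7, 'name': 'Acme Corp', 'status': 'active'}
```

The allowlist (rather than a blocklist) matters: new internal fields stay private by default instead of leaking until someone remembers to exclude them.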

Conclusion

APIs will only continue to grow in popularity and utility. And they will also continue to be popular attack targets. So, make sure you are taking all the necessary steps to secure your APIs against attackers. 

The post How and why you should secure APIs appeared first on Cybersecurity Insiders.

“Approximately 64% of global CISOs were hired from another company,” according to the 2021 MH Global CISO Research Report. The reasons include talent shortages, the role being new to some companies, and the lack of succession plans to support internal promotions.

To overcome these challenges, companies can look to a virtual Chief Information Security Officer (vCISO) or a vCISO-as-a-service provider. Companies should consider both the vCISO candidate and the additional “as a service” capabilities that the provider brings to support the security program. This article covers what to look for when selecting a vCISO and a vCISO-as-a-service provider.

What to look for with the candidate

Businesses will want to align their CISO requirements with the skillset and background of the candidate vCISO. For example, the business may want a vCISO with security architecture experience when deploying a managed firewall service. Alternatively, if the business needs to build a Security Operations Center (SOC), then a vCISO with SOC deployment experience might be preferred. While experience in a focused area is beneficial, a vCISO should have the following fundamental skills, which align with and preferably extend beyond the business’s security needs.

  • Provide executive-level advisory and presentations.
  • Create and track a risk register of identified cybersecurity gaps.
  • Develop, implement, and manage a cybersecurity roadmap.
  • Run tabletop exercises to identify business unit priorities and create alignment.
  • Respond to third-party due diligence requests.
  • Identify hardware, software, and data assets and analyze their risk.
  • Report on metrics and key performance indicators (KPIs).
  • Deliver and report on vulnerability and penetration testing.
  • Oversee reporting, steering, and committee meetings.
  • Review and update incident response plans.
  • Lead identification, mitigation, and remediation activities for security-related events.
  • Develop and update policies and procedures.
  • Develop budgets and plans.
  • Develop and run security awareness training.

What to look for in a vCISO as a service provider

vCISO as a service expands the vCISO from an individual contributor into a team that is engaged to lead a program or initiative. For example, instead of having a vCISO with SOC building experience, the entire team is brought in to create the program and build the SOC. Building a relationship with the Provider helps businesses quickly engage resources to support these larger types of initiatives. As the relationship grows, the business builds trust and expands into a valuable partnership. Below are items to consider when trying to find the right trusted partner.

  • Access to a team of experts on a specific topic or concern, through collaboration and sharing within the provider’s internal vCISO committee.
  • A diverse group of professionals, allowing the customer to engage a vCISO quickly within the customer’s timeline and budget.
  • Diverse experience gained from the provider’s engagements across industries and business sizes, from small business to global enterprise.
  • Strategy frameworks and resources to build a security program and help create a succession plan.
  • Different levels of retainers and engagement models to meet customer timelines and budgets.
  • Objective treatment of security topics and strategy, with unbiased recommendations for security challenges.
  • Coverage to support regional, national, and global footprints.

The vCISO role is a flexible model that helps customers manage cost, enhance the quality of their deliverables, and reduce the time it takes to deliver on security activities. Engagements can be for a specific project, to provide coverage while a permanent CISO is identified, or to take on the role full-time. These benefits strengthen the relationship between customer and service provider, which in turn creates the trusted partnership that is needed for stronger security.

The post What to look for in a vCISO as a service appeared first on Cybersecurity Insiders.