By Gal Helemski, co-founder and CTO, PlainID

The number of access rules that must be managed across directories, applications, repositories, and other platforms by today’s digitally oriented enterprises is growing at an unprecedented pace. One of the major security headaches this creates is that controlling and auditing authorisations and entitlement is becoming more complex and challenging.

Also playing a bigger role is the widespread adoption of remote and hybrid working arrangements. Taken collectively, these trends mean many organisations are now at greater risk of data breaches – unless they can consolidate and standardise access controls more effectively.

These challenges serve to highlight the value and growth in the adoption of identity and access management (IAM) technologies, which are used for regulating who has access to what information and how it is used. In particular, security teams are looking at how IAM can manage access across expanding and complex enterprise security perimeters.

While IAM has emerged from requirements focused on issues such as identity lifecycle, governance, proofing and access, today’s digital user journeys have prompted an important shift in emphasis. For instance, given significantly expanding security risk vectors and the need for more effective privacy controls and governance, the current generation of IAM solutions deliver more advanced levels of access control, with authorisation reemerging as a crucial component of IAM.

More specifically, real-time “dynamic authorisation” is becoming central to the zero-trust security strategies that aim to protect today’s dynamic technology environments. This represents an expansion of existing IAM components, which are now employed to build more robust systems that reduce the danger of compromised credentials providing unauthorised access to digital assets.

While this objective is growing in importance, one of the challenges of delivering on it is the disparate nature of access and authorisation policies used within the typical modern organisation. In many cases, for example, thousands of policies may be in use without sufficient levels of standardisation, centralised management or visibility. The result of these shortcomings can range from operational inefficiency to significantly increased risk.

Prevention is better than cure

Responding to these increasingly pressing issues, enterprise security teams are focusing on how they can standardise and consolidate access to deliver a preventative approach to today’s diverse risks. In effect, identity has become the common denominator for enforcing authentication and access control (via dynamic authorisation).

Looking ahead, the broader adoption of dynamic authorisation is likely to be driven by a range of factors, such as those organisations moving from an in-house policy engine to a proven industry solution, particularly as applications are built or refreshed. In the case of those organisations focused on the implementation of zero-trust architectures, for example, manually processing the growing number of entitlements is – for many – no longer sustainable. Instead, security teams need the capabilities that only automated solutions can provide if they are to minimise the impact of human error and more effectively control their exposure to risk.

Indeed, dynamic authorisation is increasingly viewed as a prerequisite for delivering effective zero-trust architectures. As part of this approach, implementing a fine-grained authorisation policy can put organisations in a much stronger position to meet their data privacy compliance obligations across specific data sets.
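To make "fine-grained authorisation" concrete: a dynamic decision evaluates attributes of the user, the resource, and the context at request time, rather than relying on a static role alone. The Python sketch below is a minimal, hypothetical illustration; the attribute names and policy rules are invented for this example and do not describe any particular product:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    department: str
    resource_sensitivity: str  # "public", "internal", or "restricted"
    device_trusted: bool

def authorize(req: Request) -> bool:
    """Evaluate every attribute at decision time; the default is deny."""
    if req.resource_sensitivity == "public":
        return True
    if not req.device_trusted:
        # Zero-trust stance: unmanaged devices never reach non-public data.
        return False
    if req.resource_sensitivity == "internal":
        return req.user_role in ("employee", "admin")
    if req.resource_sensitivity == "restricted":
        # Fine-grained rule: role AND department must both match.
        return req.user_role == "admin" and req.department == "finance"
    return False
```

Because the decision is computed per request, revoking a device's trusted status or changing a user's department takes effect immediately, which is the behaviour zero-trust architectures depend on.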

This kind of dynamic decision-making is central to the ability of security teams to make real-time changes in how and when users are granted access to data and resources across enterprise networks. Without an effective approach to policy management that allows users to be verified through an authentication solution, data is much more difficult to protect. When the network is controlled within a resilient architecture, however, access points to critical data are protected by more agile security measures.

In today’s dynamic business environment, companies are facing a range of crucial challenges related to access control, security, and cybercrime. To remain secure and agile, it is essential for organisations to adopt a standardised, consolidated approach to access and authorisation. This not only helps provide robust security that supports the goals and priorities of the business; it also creates a win-win situation where effective security and bottom-line success go hand in hand.

The post Why Access Control Should Be a Core Focus for Enterprise Cybersecurity appeared first on Cybersecurity Insiders.

Is your organization doing enough to protect its environment from hackers?

In 2021, U.S. companies lost nearly $7 billion to phishing scams, malware, malvertising, and other cybercrimes. Experts estimate that by 2025, such schemes will cost businesses worldwide more than $10.5 trillion annually. Given those figures, it’s clear that companies can’t afford to ignore the threat hackers pose to their bottom line.

Fortunately, vulnerability scanning has proven to be an extremely effective means of identifying and eliminating endpoint vulnerabilities that could allow a cybercriminal to access your company’s network and data.

What is Vulnerability Scanning?

Every month, hundreds of vulnerabilities are discovered in applications being used by businesses and organizations around the world. That fact alone makes it impossible for an IT or security team to keep track of all exploitable vulnerabilities that could threaten their company’s network at any given time.

That’s why vulnerability scanning exists.

A vulnerability scan is a high-level automated test that searches for known security weaknesses within a system and reports them so they can be eliminated. In addition to software vulnerabilities, a comprehensive vulnerability scanner can also detect risks such as configuration errors or authorization issues. When used with other cybersecurity measures, these scans can go a long way toward securing your company’s systems and data from hackers waiting to exploit an opening in your attack surface.
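At its core, this kind of scan is a matching exercise: compare an inventory of discovered software against a database of known weaknesses. The Python sketch below is a deliberately simplified, hypothetical version; the inventory format and the vulnerability table entries are invented for illustration (real scanners consume feeds such as the NVD rather than hard-coded tables):

```python
# Illustrative "known vulnerable versions" table; a real scanner would pull
# entries from a vulnerability feed rather than hard-code them.
KNOWN_VULNS = {
    ("openssh", "7.4"): ["EXAMPLE-0001"],
    ("nginx", "1.14.0"): ["EXAMPLE-0002", "EXAMPLE-0003"],
}

def scan(inventory):
    """Report every (host, product, version) that matches a known weakness."""
    findings = []
    for host, product, version in inventory:
        for vuln_id in KNOWN_VULNS.get((product, version), []):
            findings.append({
                "host": host,
                "product": product,
                "version": version,
                "vuln": vuln_id,
            })
    return findings
```

A comprehensive scanner layers configuration and authorization checks on top of this version-matching step, but the report-everything-that-matches structure stays the same.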

Types of Vulnerability Scanning

There are specific types of scans available for different areas of network infrastructure, and your organization’s specific needs will determine which are most appropriate. To gain a comprehensive understanding of your environment’s risk, it’s important to use a tool that is able to detect all types of vulnerabilities.

Authenticated vs. Unauthenticated

An unauthenticated scan can identify vulnerabilities a hacker could exploit without supplying system login credentials. On the other hand, an authenticated scan looks for weaknesses that could only be exploited by someone who does have access to those credentials.

External Vulnerability Scan

An external vulnerability scan tests assets outside your network and targets IT infrastructure, such as websites, ports, services, networks, systems, and applications exposed to the internet. These scans seek to expose threats along your network’s perimeter as well as any lurking within security firewalls and other defensive applications.

Endpoint Vulnerability Scan

An endpoint, or internal, vulnerability scan identifies vulnerabilities exploitable by insider threats, hackers, or malware that have already made it into your system via any remote computing device connected to your network, such as a mobile phone, tablet, desktop, or workstation.

Unfortunately, endpoint security doesn’t always receive the attention it deserves, even though the rise in remote work triggered by the COVID-19 pandemic has dramatically increased the potential for external hacks at many companies and organizations.

Environmental Vulnerability Scan

An environmental vulnerability scan is designed for the specific environment in which your company’s technology operates. These specialized scans are available for various technologies, including websites, cloud-based services, mobile devices, and more.

Vulnerability Scanning Best Practices

Every company and organization should incorporate vulnerability scanning into its threat mitigation strategy. Adhering to the following best practices will help ensure your business is well-positioned to fend off hackers looking to exploit any weaknesses within your environment:

  • Scan Often: Long gaps between scans leave your system open to new vulnerabilities, and some assets may need more frequent scans than others. Establish a vulnerability scan schedule for each device.
  • Scan All Devices That Touch Your Network: Every device connected to your network represents a potential vulnerability, so ensure that each is scanned periodically in accordance with potential risk and impact.
  • Ensure Accountability for Critical Assets: Decide who will be responsible for patching a particular asset when vulnerabilities are identified.
  • Establish Patching Priorities: Vulnerabilities discovered on internet-facing devices should take priority over those on devices already protected by settings or firewalls.
  • Document Scans and Their Results: Documenting scans performed per the established timetable will allow you to track vulnerability trends and recurring issues, helping you uncover susceptible systems.
  • Establish a Remediation Process: Lay out specific time frames for addressing discovered vulnerabilities based on the severity of the threat and the urgency to remediate.
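Two of these practices, per-device scan schedules and prioritizing internet-facing assets for patching, can be sketched in a few lines of Python. The intervals and record fields below are illustrative assumptions, not recommendations:

```python
import datetime

def next_scan_due(last_scan, internet_facing):
    """Scan often: higher-risk, internet-facing assets get a shorter interval."""
    interval = datetime.timedelta(days=7 if internet_facing else 30)
    return last_scan + interval

def patch_priority(findings):
    """Patching priorities: internet-facing findings first, then highest severity."""
    return sorted(
        findings,
        key=lambda f: (not f["internet_facing"], -f["severity"]),
    )
```

The point is that both the schedule and the patch queue come from explicit, documented rules per asset rather than ad-hoc judgment.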

Schedule Your Syxsense Demo

Syxsense combines IT management, patch management, and security vulnerability scanning in one powerful solution.

To get started, schedule your free demo today.

The post Your Guide to Vulnerability Scanning appeared first on Cybersecurity Insiders.

Cybercriminals are smarter, faster, and more relentless in their attacks than in times past. Data breaches are a serious threat to organizations, but vulnerability management automation can help reduce the number of incidents businesses face each year.

Managing vulnerabilities is difficult in an increasingly connected cyber environment. Companies have their own networks, networks connected to their supply chains, vendor access, remote workers, and other entry points, all creating security gaps. But so do outdated software, security misconfigurations, and other less-obvious vulnerabilities.

Vulnerability management automation is a necessity because it helps IT and security teams find and fix weaknesses before they become active threats. Automating a vulnerability management program is a daunting task, but it is extremely rewarding. Let’s talk about how to manage vulnerabilities with automation and why it’s so important moving into 2023.

What is vulnerability management?

Vulnerability management involves discovering, analyzing, prioritizing, and remediating cyber weaknesses within an organization’s system. The goal is to seal up any gaps that could let unauthorized users and other cyber threats into your infrastructure. When vulnerabilities are managed effectively, weaknesses can easily be found and assessed before they become attack vectors.

Common vulnerabilities

There are many types of vulnerabilities that can lead to a data breach or cyberattack. Weak user credentials, unpatched software, zero-day vulnerabilities, unsecured APIs, misconfigurations, and SQL injection are all common vulnerabilities that organizations need to watch for. If left unattended, the results could be a data breach leading to unauthorized data access, financial fraud, or even a serious attack.

4 stages of vulnerability management

Vulnerability management programs vary from company to company, but they all include four main stages:

  1. Identify
  2. Evaluate
  3. Fix
  4. Report

The first step is to scan the entire network and IT system to identify all the vulnerabilities. Next, each vulnerability needs to be evaluated and prioritized according to its risk to the system and the organization. Then, each vulnerability should be remediated with a patch or some other fix. If there isn’t a solution available, then teams may have to mitigate the risk with additional security measures.

Finally, a key part of vulnerability management is reporting. Reporting is exceptionally important in vulnerability management because it helps improve the organization’s security posture moving forward.
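The evaluate and report stages above can be sketched in Python. The CVSS threshold and record fields here are illustrative assumptions; real programs weigh business context alongside severity scores:

```python
def evaluate(vulns, threshold=7.0):
    """Stage 2: split findings into urgent fixes vs. risks to mitigate or monitor."""
    urgent = [v for v in vulns if v["cvss"] >= threshold]
    deferred = [v for v in vulns if v["cvss"] < threshold]
    return urgent, deferred

def report(urgent, deferred):
    """Stage 4: a summary that can be tracked over time to show posture trends."""
    return {
        "urgent": len(urgent),
        "deferred": len(deferred),
        "total": len(urgent) + len(deferred),
    }
```

Tracking the report output from one cycle to the next is what turns the fourth stage into a feedback loop that improves security posture.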

The importance of vulnerability management automation

Gone are the days when a few manual patch jobs would easily secure IT infrastructure.

Organizations are becoming more aware that cyber threats are all around them. Today, 50% of detected vulnerabilities don’t have a CVE for reference. Fraudsters and bad actors use sophisticated tools and expertise to gain unauthorized access to organizations’ networks. They even pose a threat to national security.

Governments have been cracking down on cyber criminals to avoid a catastrophe. In fact, 80% of governments have already established cybersecurity compliance regulations that require regular vulnerability reporting.

Beyond that, vulnerability management automation is crucial to reduce the cost, severity, and containment time. According to a recent report, automation can significantly reduce the impact of a data breach. The Ponemon Institute reports that automation cuts the cost of a data breach in half and shaves off 77 days when it comes to containing threats.

Benefits of vulnerability management automation

Automation offers cybersecurity teams several benefits when it comes to workflows, daily tasks, and managing vulnerabilities more effectively. Identifying and remediating IT risks consumes a lot of time and energy. But with automation, scanning and identifying vulnerabilities is as easy as the click of a button. Plus, automation can be used to mitigate vulnerabilities, execute containment protocols, and so much more.

Here are some of the benefits that vulnerability management automation has to offer:

Improved visibility

AI algorithms don’t have to sleep or take breaks, and some of them are great at learning to identify patterns across large amounts of data, even in real time. Vulnerability management is about detecting and securing IT and customer-facing systems throughout an organization.

As companies continually expand to the cloud, remote endpoints, IoT, and virtual machines, visibility is difficult for human eyes. But automation enables teams to have eyes on the entire ecosystem without all the tedious work behind the scenes.

Simple reporting

One of the biggest benefits to organizations that vulnerability management automation has to offer is the ease of reporting. Government agencies are increasing cybersecurity regulations around the world, which includes regular reporting. There are many reasons why reporting is important, such as spreading the word about new attack vectors, weaknesses, and security postures. But reporting can be a big hassle for organizations.

However, vulnerability management automation actually makes reporting remarkably easy. It helps teams track changes, security audits, efficacy, and other essential data that regulating bodies want to know. With automation, it’s also possible to build and send reports on a recurring schedule so that you never miss a deadline again.

Reduce manual errors

When vulnerability management is left to tools that need to be watched, there is plenty of room for error. Keeping track of updates, fixing configuration mistakes, and monitoring for anomalous activity, all on top of managing vulnerabilities, doesn’t promote the level of thoroughness that is required to keep an organization’s IT system airtight.

In the beginning stages, personnel will need to spend time learning the tools, understanding the reporting system, and making sure that everything is configured appropriately. But after that, automation enables teams to take a more hands-off approach since the automatic tools can scan, find, and prioritize risks without the need for human interaction.

Faster response

One of the best benefits that teams can expect from vulnerability management automation is faster response times. Innately, automated tools enable organizations to keep an eye on their networks 24/7 without interruptions. Not only that, but they also work much more quickly than humans. Applying patches and updates quickly is essential to keep networks secure.

The vulnerability data collected through automation is also helpful when it comes to analyzing an attack. Teams can use the information collected on an exploited weakness to understand the cyberattack on a deeper level and take the appropriate steps to prevent similar attacks from happening in the future. Vulnerability management automation also allows teams to secure the system before a threat actor exploits discovered weaknesses.

Data-driven security

Finally, the greatest benefit of vulnerability management automation is that it allows teams to make security decisions based on data. Without collecting important information about a given weakness in a system, it’s nearly impossible to treat vulnerabilities effectively. Without data, teams are essentially using trial and error to secure their organization’s most crucial assets.

Data-driven security is all about leveraging data to improve your security posture. Vulnerability management automation equips security and IT teams with data to fix weaknesses effectively, both immediately and in future instances. Plus, you can use the data to track changes over time, predict new vulnerabilities as infrastructure evolves, and support many other IT applications.

Final thoughts

Modern organizations require modern cybersecurity processes. Vulnerability management alone is an excellent place to begin, but adding automation to the mix takes cybersecurity efforts to the next level. Vulnerability management automation reduces the impact of threats and the burden on cybersecurity teams, and makes it easy to find and fix vulnerabilities.

Learn more about how vulnerability management is changing in 2023.

The post Vulnerability Management Automation: A Mandate, Not A Choice appeared first on Cybersecurity Insiders.

Traditional vulnerability management is in desperate need of change because it is no longer effective against modern cyberattacks. It’s a bold statement, but true nonetheless: it’s just not enough.

Numbers don’t lie, and the only direction the average cost of recovering from cyberattacks seems to move is up. Putting the monetary effect aside, a successful cyberattack from ineffective vulnerability management can fatally hit an organization’s reputation. This snowballs quickly into loss of business, and it’s only downhill from there.

All these arguments support the fact that traditional vulnerability management isn’t effective in the current operating environment, and they highlight the consequences of its ineffectiveness.

The importance of reinventing it cannot be overstated, because the price of recovering from a cyberattack is hefty.

What’s Lacking in Traditional Vulnerability Management?

Traditional vulnerability management, or just looking at software vulnerabilities/CVEs, is what we’ve been following for the past three decades or so. But in the modern scenario of rapid technological transformation, cyberattacks are becoming ingenious and deceptively dangerous. With newer and devious ways of breaching your network’s cyber defense, attackers exploit risks beyond software vulnerabilities, significantly reducing the effectiveness of vulnerability management.

IT asset exposures, misconfigurations, deviations in security controls, and security anomalies are the new dangerous risks that attackers are exploiting. And traditional vulnerability management has no way of combating them and preventing cyberattacks.

In traditional vulnerability management, a disconnect exists between vulnerability scanning and remediation and the teams performing them. Typically, the info-security team takes charge of assessing vulnerabilities and then hands off the task of remediating the issues to IT teams. IT teams, already understaffed, are often overwhelmed with fixing thousands of vulnerabilities.

Adding to the issue, the lack of integration and automation between vulnerability scanners and remediation tools further reduces the effectiveness of vulnerability management.

Making Vulnerability Management Effective with the Necessary Reinvention

Advanced Vulnerability Management (AVM) is the new way of effectively performing vulnerability management in a modern computing environment. It is the process of going beyond traditional vulnerability management with a broader approach to vulnerabilities by covering various other security risks. Advanced Vulnerability Management gives you a holistic view of your IT, discovering dangerous anomalies that can threaten an organization’s cyber defense.

By integrating vulnerability detection, assessment, and remediation into a unified, continuous, and automated process, Advanced Vulnerability Management increases the scope of detection and remediates dangerous risks with relevant security measures.

Advanced Vulnerability Management faces the challenge of risk beyond software vulnerabilities head-on by increasing the scope of detection. By harnessing smarter, faster, and more powerful scanners that can detect IT asset exposures, misconfigurations, and deviations in security controls, Advanced Vulnerability Management covers all possible attack vectors and ensures that no risks go under the radar.

With integration and automation as the core principles around which Advanced Vulnerability Management revolves, its effectiveness in preventing cyberattacks increases multiple-fold. It also increases the speed of responding to threats, not allowing a threat to become an attack. Manually performed vulnerability management tasks, such as correlating vulnerability data between teams, are taken out of the equation entirely, further improving productivity and efficiency.

Further, by aligning an organization with compliance policies and reducing the attack surface with preventive measures, Advanced Vulnerability Management improves an organization’s security posture.

Closing Thoughts

In the modern, ever-evolving tech space, ineffective vulnerability management can lead to only one result: a fatal cyberattack that destroys an organization. Effective vulnerability management can help prevent that, but the way we perform vulnerability management is in dire need of reinvention for the modern computing environment.

The post Why Traditional Vulnerability Management isn’t Cutting it Anymore appeared first on Cybersecurity Insiders.

By Amit Shaked, CEO and co-founder, Laminar

Out of the total reported data breaches in 2022 in the U.S., nearly half (45%) happened in the cloud and cost organizations over $9 million. While the statistics paint a bleak picture, the good news is that as adversaries have evolved, so have security technologies.

In its 2022 Hype Cycle for Data Security, Gartner announced a new category of solutions titled “Data Security Posture Management,” or DSPM. The term “DSPM” is used by Gartner to describe a product that “provides visibility into where sensitive data is, who has access to that data, how it has been utilized, and what the security posture of the data store or application is.”

The definition provides a high-level overview of what DSPM is, but in order for security teams to get the most benefits from the technology, it’s important to take a deeper look at the challenges driving the need for DSPM, the benefits it can bring data security professionals, and what the key components of a DSPM solution should be.

Why Do Data Security Professionals Need DSPM?

We all live in a digitized world. Competition between companies to deliver the best services and solutions possible to other businesses or to consumers is at an all-time high. Innovation has become a necessary part of doing business, not just a nice-to-have.

The biggest winners in our cloud-based digital era are those who generate the most value from data. The new age of data democratization (i.e. making it available to anyone who needs it) and cloud transformation is characterized by:

  1. The multi-cloud norm. The current state of cloud data security shows that over half of organizations are working with two or more cloud service providers (CSPs). This complex infrastructure can easily complicate the work of data security and governance professionals, who must deal with multiple cloud providers that are all configured differently, constantly evolving, and challenging to manage.

  2. The sheer proliferation of data. Developers and data scientists have the ability to spin up new datastores in a matter of moments. As a result, organizations are increasingly creating what is known as “shadow data”. Shadow data is any data that is not governed, not covered under the same security structure, or not reported to the security or IT team. 82% of security professionals today are concerned about it.

  3. An evolving security perimeter. We’ve reached the death of the traditional security perimeter in our cloud age. Data is accessible to anyone across the globe, with no single point of control — leading to sensitive data landing in the hands of adversaries.

  4. A faster development cycle. Developers now create in hours, weeks, or days rather than months or years — environments change with the click of a few buttons and often without security teams’ knowledge.

All of these components lead to what is now being referred to as the “innovation attack surface”: a new threat vector that most organizations unconsciously accept as the cost of doing business. In contrast to traditional attack surfaces, determined by external forces seeking to exploit vulnerabilities to gain illicit access to protected information, the innovation attack surface results from the massive, non-contiguous patchwork of accidental risk created by the smartest people in the business. In essence, it refers to the continuous unintentional risk that cloud data users, such as developers and data scientists, create when using data to drive innovation.

The advent of this new attack surface created the need for DSPM solutions.

What Value Does DSPM Bring to Organizations? 

There are many benefits to deploying a DSPM, including:

  1. Avoiding sensitive data exposure. DSPM protects cloud data by finding all known and “shadow” data. This visibility uncovers misplaced data and misconfigured data assets, as well as overly permissive access rights, thus identifying overexposure of sensitive data.

  2. Creating a more manageable attack surface. Identifying and remediating any data security violations, getting rid of outdated data and ensuring compliance with any existing legislation such as PCI, GDPR, CCPA and more, allows data growth, but reduces data risk.

  3. Lessening friction with value creators. DSPM empowers developers and data scientists because it automates the validation and enforcement of any existing security policies. Value creators can feel free to create, all while data is being protected with proper guardrails.

  4. Reducing the risk of compliance fees. CDMC, GDPR, COPPA and other compliance regulations can easily cause headaches for data security professionals. DSPM can help lighten the load by discovering data in the cloud, classifying it and then checking it against data security policies. If the data doesn’t comply, DSPM can help drive the change so that it does.

  5. Lowering cloud cost. In our current economic climate, it’s critical to look at unnecessary costs and take the appropriate action to reduce them. The right DSPM has features to help data security professionals identify redundant, obsolete and trivial (ROT) data in the cloud and reduce cloud usage fees as a result.

What Should Data Security Professionals Look for in a DSPM Solution?

A mature DSPM solution has four key elements that work together: discovery, prioritization, security and monitoring. The tool should be able to identify all data, known and unknown, as well as prioritize it based on volume, exposure and security posture. Then, it should also be able to verify the security posture with your data security policies, alert on violations and provide guidelines on remediation. DSPMs should also be able to monitor the data regardless of where it moves across the cloud.
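The prioritization element, for instance, often boils down to a risk score combining exposure, posture, and data volume. The weights and field names in this Python sketch are purely illustrative assumptions, not how any particular DSPM product scores assets:

```python
def dspm_priority(asset):
    """Score a data asset by exposure, security posture, and data volume."""
    score = 0
    if asset["publicly_exposed"]:
        score += 50   # exposure dominates the score
    if not asset["encrypted_at_rest"]:
        score += 30   # posture gap
    score += min(20, asset["sensitive_records"] // 10_000)  # volume, capped
    return score
```

Sorting discovered datastores by a score like this lets teams remediate the most exposed, least protected assets first instead of working through findings in discovery order.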

The Bottom Line?

Adversaries are constantly evolving, but new technologies such as DSPM can help data security, governance and privacy practitioners stay one step ahead. By partnering with a mature DSPM provider, organizations can reduce costs, prevent data leaks and breaches and combat the new innovation attack surface.

The post The Data Security Team’s Guide to Data Security Posture Management (DSPM) appeared first on Cybersecurity Insiders.

Most of us who keep up with the current cybersecurity landscape know that Facebook founder Mark Zuckerberg covers his laptop webcam with tape to keep any prying eyes from tracking him. It is learnt that the owner of Meta also keeps the front camera of his iPhone covered to keep his private life away from snooping eyes.

Tove Marks from VPNOverview analyzed how hackers can take control of a webcam just by implanting a small piece of malicious code, which then gives them a video stream of whatever is happening in front of the camera.

Therefore, the next time you see your webcam light blinking and your device battery draining faster than usual, you should investigate the device promptly.

A recent study by VPNOverview says that one in every three Americans does not know that their webcams can be hacked and their privacy breached. So, the online resource, which shares knowledge about online security, encourages people to learn about the current threat landscape and follow basic security hygiene when accessing online resources through devices such as computers and smartphones.

What happens if the webcam is in control of a hacker?

Let us analyze it with an example: In 2016, a woman living in a condo in New York received a call on her Skype account stating that her private activities had been recorded and that, if she didn’t obey the hackers’ demands, the videos would be posted on X-rated websites.

Shocked, she first contacted her partner, who asked her to go to the nearest police station. As she narrated the incident to the officers, her partner arrived, stood beside her, and explained that she might be a victim of sextortion.

As predicted, the police found that the woman’s laptop webcam had been hacked and that cyber criminals had captured some of her private audio and video, which they were using to blackmail her.

What if the same thing happens to the kids in our house, whose nanny cams are hijacked, putting a child’s safety in jeopardy?

Admittedly, it’s not that easy to do. But it’s not that tough either, especially with a range of technology-based hardware and software available for purchase online.

So, it’s better to keep your phone covered with a book-style pouch or your laptop webcam covered with tape. Keep a vigil on the webcam light, and always update your PC or phone with the latest security and OS updates.

 

The post How to tell if your laptop or smartphone webcam has been hacked appeared first on Cybersecurity Insiders.

School districts are constantly being targeted by cyber attacks, leading to data breaches and information misuse. So, to those who are worried about the privacy of student info, here are some tips to protect it from prying eyes.

1.) Categorization of data is important in such scenarios, and it can be achieved through data classification: private data such as Personally Identifiable Information (PII) can be protected with stronger security measures than data that does not need them.

2.) Deletion of old records is also vital, as it cuts down the attack surface and minimizes privacy risks.

3.) Keeping tabs on file-sharing practices is vital, because a wide variety of data is often shared among pupils, staff, and administrative employees. If everything can be shared with everyone, vital information can easily spill to an unauthorized person by mistake. So it is better to have sharing policies and practices in place that prohibit indiscriminate sharing.

4.) Sensitive information should always be encrypted, as this helps prevent data spillage both in storage and in transit. If the information is nevertheless shared with or accessed by an unauthorized person, they cannot read it without the decryption key.

5.) It is also important that students and staff follow basic cybersecurity hygiene when sharing data, as this helps them avoid phishing scams and other types of social engineering attacks. The same care applies to abiding by FERPA rules when implementing online education programs via the Internet or mobile apps.

6.) It is important for a school district or educational institution to commission third-party data auditing at least once every three to six months, as it helps keep sensitive data safe from digital intrusions.
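As a minimal sketch of how tips 1 and 3 above can work together, the following toy example classifies a student record and applies a role-based sharing rule. The field names and roles here are hypothetical illustrations, not a standard:

```python
# Toy sketch of data classification (tip 1) and sharing rules (tip 3).
# PII field names and the role list are hypothetical examples.

PII_FIELDS = {"student_name", "home_address", "date_of_birth", "ssn"}

def classify(record: dict) -> dict:
    """Split a student record into PII and non-sensitive fields."""
    pii = {k: v for k, v in record.items() if k in PII_FIELDS}
    public = {k: v for k, v in record.items() if k not in PII_FIELDS}
    return {"pii": pii, "public": public}

# Sharing policy: only administrators may receive PII.
ROLE_MAY_VIEW_PII = {"administrator": True, "teacher": False, "student": False}

def share(record: dict, recipient_role: str) -> dict:
    """Return only the fields the recipient's role is allowed to see."""
    parts = classify(record)
    if ROLE_MAY_VIEW_PII.get(recipient_role, False):
        return {**parts["public"], **parts["pii"]}
    return parts["public"]

record = {"student_name": "A. Pupil", "grade": "7", "home_address": "1 Example St"}
print(share(record, "teacher"))         # PII stripped
print(share(record, "administrator"))   # full record
```

A real deployment would of course back this with enforced access controls rather than application-level filtering alone, but the principle (classify first, then share by role) is the same.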

The post How to handle personal data of students appeared first on Cybersecurity Insiders.

By Scott Gordon, CISSP, Oomnitza 

Technology oversight is a common mandate across IT and security frameworks and compliance specifications, but achieving that oversight is difficult. The rise of hybrid workplaces, shadow IT/DevOps, and cloud infrastructure dynamics continues to create cybersecurity risks. SecOps, Governance Risk and Compliance (GRC) and ITOps teams use a wide variety of tools and operational data to mitigate security posture exposures and fortify business resiliency, yet audit readiness and compliance validation remain a challenge. According to a recent survey, 66% of organizations failed at least one audit over the last three years [1]. Another survey calculated that organizations spend $3.5M each year on compliance activities [2].

Why? First, technology and operational intelligence, across the myriad of users, endpoints, applications and infrastructure, is siloed and fragmented. Beyond event logging, there is no established way to aggregate, correlate, and analyze this data, which exists within different departments, divisions, and management tools. Second, the tasks required to ascertain control and policy compliance details, resolve violations and provide adherence proof are resource intensive and error prone. As audit frequency and range expand to meet multiple evolving specifications, how can organizations reduce issues, delays, and spend? Answering this question has placed CISOs on a path towards continuous audit readiness that's accomplished by automating audit processes, from scope to evidence.

Clearly, smaller enterprises tend to be more cloud-first, but the larger the company, the more distributed the environment, and the more siloed divisions and IT domains become. The pandemic accelerated cloud migration, propelled digital transformation initiatives, and drove a surge in hybrid workplace adoption. The net effect of these events has been to introduce well-known audit readiness challenges, such as:

  • Audit data is siloed and fragmented, preventing timely, efficient and accurate analysis
  • Attestation-based compliance does not replace quantitative control assessment
  • Identifying and resolving remote workforce policy deviation is difficult
  • Cloud resource monitoring and policy enforcement is more fractured
  • Less controlled use of cloud resources introduces new exposures
  • Audit delays, re-audits and unplanned audit spend are increasing

GRC and security teams often have large, disparate technology datasets that are incorrect or duplicative, hindering effective control analysis. Data discrepancies and deviation from pre-designated control frameworks are common. GRC team requests for audit support, investigations and corrective actions result in large, cross-department time and resource drains, often with incomplete or unsubstantiated outcomes. Overall, these audit challenges yield increased compliance gaps, prolonged audits, unplanned expenditures, and greater penalty and procedure refactoring costs. Beyond failing to meet audit specifications, there is the risk of attack and data leakage: upwards of 69% of cyberattacks started with exploited, mismanaged internet-facing assets [3].

One foundation for audit and compliance readiness is to identify and settle on a common security framework and, as a result, common control areas: asset intelligence, IT management, and protection mechanisms. Asset/technology intelligence incorporates endpoints, applications, and network and cloud infrastructure. IT management (which includes Identity and Access Management) encompasses ownership, access, entitlements, configuration, and lifecycle management controls. Protection mechanisms incorporate a wide variety of cyber defenses such as anti-malware, encryption, vulnerability management and firewall technologies.

To advance audit process automation, policies and their technical controls can be used to monitor, verify, report, resolve and refine adherence to specifications. For example, to satisfy PCI-DSS as well as other mandates, a policy for compliant virtual systems running in a payment-processing zone would include operating a standard configuration that has system encryption and managed detection and response (MDR) active, having an active and authorized owner, and having resource access reviewed at a specified interval. A compliance workflow would create, monitor and respond to deviations from these policy-based controls.
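As an illustrative sketch of such policy-based controls (the field names and the 30-day review interval below are hypothetical assumptions, not part of PCI-DSS or any specific platform), a deviation check might look like this:

```python
# Hypothetical policy check for virtual systems in a payment-processing zone.
# Field names and the 30-day review interval are illustrative assumptions.
from datetime import date, timedelta

POLICY = {
    "encryption_enabled": True,  # system encryption must be active
    "mdr_active": True,          # managed detection and response must be active
}
MAX_DAYS_SINCE_REVIEW = 30

def deviations(system: dict, today: date) -> list:
    """Return human-readable policy violations for one virtual system."""
    issues = []
    for control, required in POLICY.items():
        if system.get(control) != required:
            issues.append(f"{control} must be {required}")
    if not system.get("owner"):
        issues.append("no active, authorized owner assigned")
    last = system.get("last_access_review")
    if last is None or (today - last) > timedelta(days=MAX_DAYS_SINCE_REVIEW):
        issues.append("access review overdue")
    return issues

vm = {"encryption_enabled": True, "mdr_active": False,
      "owner": "payments-team", "last_access_review": date(2022, 1, 1)}
print(deviations(vm, date(2022, 3, 1)))
# → ['mdr_active must be True', 'access review overdue']
```

A compliance workflow would run checks like this continuously and open remediation tasks for each deviation rather than waiting for an audit to surface them.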

Technical control validation (beyond attestation), when used within a process automation platform, reduces audit and compliance complexity and lowers auditing expenditures. ITOps, security and GRC teams can map each set of policies based on user, ownership, location, and technology security / operational state conditions. This also facilitates working with business units to identify unique business requisites and contractual obligations. This approach does not hold water if the underlying audit data is still inaccurate, missing, or conflicting.

Data incongruity impacts evidence generation and threat resolution, and is the antithesis of progressing continuous audit readiness. Audit process automation, from scope to evidence, necessitates establishing a unified, integrated system of record for technology assets. Most enterprises have several sources that might conflict with each other or may not be regularly updated. This manifests in present-day auditing gaps: according to Cybersecurity Insiders research [4], more than half of respondents confirmed that their organization has less than 75% asset intelligence coverage.

GRC, security and IT teams need actionable insight into which resources, from endpoints and applications to network and cloud infrastructure, are associated with which owners, managers and departments, and where these resources are located. Which endpoints have inactive or outmoded defenses and are vulnerable? What software is installed, what SaaS applications are being accessed, and is that use authorized and licensed? Where are new instances of network and cloud workloads being spun up, who manages them, and are they correctly configured or exposed? It is this matrix of data that organizations use to apply policies (guidelines to be met) that drive processes (actions to be taken) and procedures (detailed steps that comprise each action); these elements serve as the basis for automation.
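The questions above can be sketched as simple queries over a unified asset inventory. The schema here (fields like "owner" and "defenses_active") is a made-up example, not a real ETM data model:

```python
# Toy asset-intelligence queries over a hypothetical unified inventory.
# The record schema is an illustrative assumption.

inventory = [
    {"id": "ep-01", "type": "endpoint", "owner": "alice",
     "dept": "finance", "defenses_active": True},
    {"id": "ep-02", "type": "endpoint", "owner": None,
     "dept": "sales", "defenses_active": False},
    {"id": "vm-07", "type": "cloud_workload", "owner": "bob",
     "dept": "it", "defenses_active": True},
]

def unowned(assets):
    """Resources with no accountable owner, a common audit finding."""
    return [a["id"] for a in assets if not a["owner"]]

def undefended_endpoints(assets):
    """Endpoints whose defenses are inactive or outmoded."""
    return [a["id"] for a in assets
            if a["type"] == "endpoint" and not a["defenses_active"]]

print(unowned(inventory))               # → ['ep-02']
print(undefended_endpoints(inventory))  # → ['ep-02']
```

The value of a unified system of record is precisely that queries like these return one trustworthy answer instead of conflicting answers from several siloed tools.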

A new class of Enterprise Technology Management (ETM) tools has emerged as an enabler for continuous audit readiness by providing the ability to automate key business processes for technology and IT management. These platforms deliver the necessary system of record and workflow flexibility to enable continuous audit readiness. ETM platforms apply multi-source data normalization and advanced correlation that better equip security and GRC staff to analyze and interpret policy compliance information. They also provide low-code workflow editing and management, leveraging this unified and accurate technology intelligence, to streamline a wide variety of compliance verification and remediation tasks. This approach makes audit reporting preparation always available, incident management more proactive, audit completion more predictable (and less costly), and audit workflows more easily manageable, across an enterprise's entire IT estate.

49% of organizations expressed room for improvement in their workflows due to periodic security and compliance issues [5]. Given ongoing operational dynamics, ever-increasing technology volume, and present-day shrinking budgets, now is the time to determine where and how to progress toward continuous audit readiness.

Scott Gordon (CISSP)
CMO at Oomnitza

[1] ESG Research: 2021 State of Data Privacy and Compliance
[2] Vanson Bourne/Telos: 2020 Survey, A Wake-Up Call: The Harsh Reality of Audit Fatigue
[3] ESG Research: 2022 Security Hygiene and Posture Management
[4] Cybersecurity Insiders: 2022 Attack Surface Management Maturity Report
[5] YouGov/Oomnitza: 2022 State of Audit Readiness Report

The post Forging the Path to Continuous Audit Readiness appeared first on Cybersecurity Insiders.