The partnership between these two market-leading vendors enables MSSPs around the world to fast-track cutting-edge MXDR services.

AT&T, the leader in network and managed security services, and SentinelOne, the leader in next-generation autonomous endpoint protection, today announced a strategic alliance to help prevent cybercrime. The partnership focuses on providing managed security service providers (MSSPs) around the world with a clear path to delivering top-tier managed extended detection and response (MXDR) capabilities for customers.

“Managed XDR is a lot different than the conventional detection and response systems in the sense that it enables members of our partner program to build solutions on the platforms their customers already use in order to make the best out of their investments,” says Rakesh Shah, Vice President of Product at AT&T Cybersecurity. “The new alliance combines AT&T USM Anywhere network threat detection capabilities with SentinelOne endpoint protection. Together, these two security platforms provide industry-leading network and endpoint threat detection and response solutions that will enable MSSPs to be successful at providing their end customers with world-class security.”

“AT&T and SentinelOne help MSSPs enter the era of XDR, protecting more surfaces at speeds and scales previously not possible with humans alone. SentinelOne’s autonomous technology coupled with AT&T’s integrated network technologies and services enables MSSPs to reduce risk and boost protection for their customers,” says Mike Petronaci, VP Product at SentinelOne.

The alliance streamlines XDR attainment for partner program members that provide managed security services for a range of organizations. An ideal customer for this MXDR solution would be an MSSP managing small-to-midsized enterprises. Those enterprises may be interested in outsourcing managed cybersecurity services because they lack the in-house resources to deliver the security results they need. Larger enterprises that do not want to outsource their security completely but are looking for some help could also use this MXDR solution managed by one of our partners.

The tight integration this alliance brings provides MSSP partners with ready access to the award-winning USM Anywhere and SentinelOne platforms. In addition, for MSSPs that acquire SentinelOne endpoint protection through the partner program, AT&T will manage hundreds of additional indicators of compromise through a unique integration within USM Anywhere that streams uniquely tailored security telemetry from the SentinelOne Deep Visibility platform.


The post AT&T Cybersecurity’s Partner Program and SentinelOne enter managed XDR market with robust alliance appeared first on Cybersecurity Insiders.

Perspective:

While there is an alphabet soup of compliance requirements and security standards frameworks, this post will focus on the two prevalent certifications frequently discussed for SaaS and B2B businesses. Security and compliance qualifications, like SOC 2 and ISO 27001, demonstrate that you apply good practices in your business. They are often classified as “security” and thought of as the technical security of your systems. However, they are broader, focusing on organizational practices supporting your security and other objectives. That includes availability (system resilience), the confidentiality of data, privacy for your users, integrity of the system processing objectives, scalable process design, and operational readiness to support significant business customers.

So, before we get into which one you would pick, how, and why, let's quickly align on the key business benefits of these certifications and attestations.

Background and benefits:

It helps establish brand trust and enable sales: Your customers, when looking to use your software, consider both your product and your capabilities as an organization. These qualifications play an essential role in demonstrating that your business is “enterprise-ready,” provides a reliable service, and keeps their data secure.

It helps demonstrate compliance and establish a baseline for risk management: These certifications often become mandates from procurement teams to demonstrate supply chain security. Or they can be used to demonstrate compliance with regulations and satisfy regulatory requirements.

It helps reduce overhead and time responding to due diligence questionnaires: A significant pain point for software companies is the relentless due diligence in serving enterprise customers. Hundreds, even thousands of “security questions” and vendor audits are common. Standards like SOC 2 and ISO 27001 are designed to have a single independent audit process that satisfies broad end-user requirements.

It helps streamline and improve business operations: You adopt “good” or “best” industry practices by going through these certifications. Investors, regulators, partners, the Board, the management team, and even employees benefit from implementing and validating your alignment to standards. It provides peace of mind that you are improving your security posture, helps address compliance requirements, and strengthens your essential operational practices.

Which standard is best for these goals? 

Each standard has different requirements, nuances in how they are applied, and perceptions in the market. This impacts which may be best for your business and how they help you achieve the goals above.

Below, we'll compare the two most common standards, SOC 2 and ISO 27001.

Often, we see that SOC 2 reports are widely adopted and acknowledged. Many procurement and security departments may require a SOC 2 report before approving a SaaS vendor for use. If your business handles any customer data, getting a SOC 2 report will help show your customers and users that you take data security and protection seriously. Healthcare, retail, financial services, SaaS, and cloud storage and computing companies are just some of the businesses that will benefit from SOC 2 compliance.

What is a SOC 2 certification?

SOC 2 is based on the five Trust Services Criteria (TSC):

Security – making sure that sensitive information and systems are protected from security risks and that all predefined security procedures are being followed

Availability – ensuring that all systems are available and minimizing downtime to protect sensitive data

Processing integrity – verifying that system processing is complete, valid, accurate, and authorized

Confidentiality – allowing information access only to those approved and authorized to receive it

Privacy – managing personal and private information with integrity and care

SOC 2 examinations were designed by the American Institute of Certified Public Accountants (AICPA) to help organizations protect their data and the privacy of their clients’ information. A SOC 2 assessment focuses on an organization’s security controls related to overall services, operations, and cybersecurity compliance. SOC 2 examinations can be completed for organizations of various sizes and across different sectors.

Businesses that handle customer data proactively perform SOC 2 audits to ensure they meet all the criteria. Once an outside auditor performs a SOC 2 audit and the business passes, the auditor will issue a SOC 2 report that shows the business complies with the requirements. There are two types of SOC 2 audits: Type 1 and Type 2. The difference between them is simple: a Type 1 audit looks at the design of a specific security process or procedure at one point in time, while a Type 2 audit assesses how effectively that process operates over a period of time.

What Is ISO/IEC 27001:2013?

The ISO/IEC 27001 is an international information security standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It is part of the ISO/IEC 27000 family of standards. It offers a framework to help organizations establish, implement, operate, monitor, review, maintain, and continually improve their information security management systems.

ISO 27001 details the specification for an Information Security Management System (ISMS) to help organizations address people, processes, and technology as they relate to data security, protecting the confidentiality, integrity, and availability of their information assets. The ISO 27001 framework is based on risk assessment and risk management: compliance involves identifying information security risks and implementing appropriate security controls to mitigate them. Organizations can also pursue ISO/IEC 27017 and 27018 to demonstrate cloud security and privacy protections, and/or ISO/IEC 27701 (a privacy information management system) as an extension to ISO 27001.

The intent of information protection – a common thread between SOC 2 and ISO 27001.

Both SOC 2 and ISO 27001 are similar in that they are designed to instill trust with clients that you are protecting their data. If you look at their principles, they each cover essential dimensions of securing information, such as confidentiality, integrity, and availability.

The good news from this comparison is that both frameworks are broadly recognized certifications that prove to clients that you take security seriously. The great news is that if you complete one certification, you are well along the path to achieving the other. These attestations and certifications are reputable and typically accepted by clients as proof that you have proper security. If you sell to organizations in the United States, they will likely accept either SOC 2 or ISO 27001 as a third-party attestation to your InfoSec program. Both are equally “horizontal” in that most industries accept them.

There are several key differences between ISO 27001 and SOC 2, but the main difference is scope. ISO 27001 provides a framework for how organizations should manage their data and requires proof of an entire working ISMS. In contrast, SOC 2 demonstrates that an organization has implemented essential data security controls.

Which one should you go with?

Whatever certification you decide to pursue first, the odds are that, as your business grows, you will eventually have to complete both to meet the requirements of your global clientele. The encouraging news is that there are accessible, fast, and cost-effective ways to leverage your work on one certification to reduce the work needed for subsequent ones. We suggest you explore compliance with a proactive mindset, as it will save you time and money in the long run.

The post Security frameworks / attestations and certifications: Which one is the right fit for your organization? appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

If you don’t think API security is that important, think again. Last year, 91% of organizations had an API security incident. The proliferation of SOAP and REST APIs makes it easy for organizations to tailor their application ecosystems. But APIs also hold the keys to all of a company’s data, and as data-centric projects become more in demand, the likelihood of a targeted API attack campaign increases.

Experts agree that organizations that keep their API ecosystem open should also take steps to prevent ransomware attacks and protect data from unauthorized users. Here is a list of 12 tips to help protect your API ecosystem and avoid unnecessary security risks. 

Encryption

The best place to start when it comes to any cybersecurity protocol is encryption. Encryption converts all of your protected information into code that can only be read by users with the appropriate credentials. Without the encryption key, unauthorized users cannot access encrypted data. This ensures that sensitive information stays far from prying eyes. 

In today’s digital business environment, everything you do should be encrypted. Using a VPN (or Tor) runs your network connection through a secured server, and encrypting connections at every stage can help prevent unwanted attacks. Customer-facing activities, vendor and third-party applications, and internal communications should all be protected with at least TLS encryption.
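To make that last point concrete, here is a minimal sketch (assuming Python's standard ssl module; the certificate file names are hypothetical placeholders) of how a service can refuse legacy protocol versions so every connection uses modern TLS:

```python
import ssl

# Build a server-side TLS context; PROTOCOL_TLS_SERVER negotiates the
# highest version both peers support.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse anything older than TLS 1.2 so legacy clients cannot downgrade.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate and key paths are hypothetical placeholders.
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
```

The same idea applies outbound: client-side contexts should set the same floor before talking to vendors or third-party APIs.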

Authentication

Authentication means validating that a user or a machine is being truthful about their identity. Identifying each user that accesses your APIs is crucial so that only authorized users can see your company’s most sensitive information. 

There are many ways to authenticate API users:

  • HTTP basic authentication
  • API authentication key configuration
  • IdP server tokens
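As a minimal illustration of the API key option above, the sketch below uses Flask and an in-memory key set; both are assumptions for brevity, and a real service would store only salted hashes of keys in a secrets manager:

```python
import hmac
from functools import wraps

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_KEYS = {"demo-key-123"}  # hypothetical; load from a secrets manager

def require_api_key(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        key = request.headers.get("X-API-Key", "")
        # hmac.compare_digest avoids leaking key contents via timing.
        if not any(hmac.compare_digest(key, k) for k in VALID_KEYS):
            abort(401)  # unidentified callers never reach the handler
        return view(*args, **kwargs)
    return wrapper

@app.route("/orders")
@require_api_key
def orders():
    return jsonify([])  # only authenticated callers get this far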

OAuth & OpenID Connect

A great API has the ability to delegate authentication protocols. Delegating authorizations and authentication of APIs to an IdP can help make better use of resources and keep your API more secure. 

OAuth 2 is what saves people from having to remember thousands of passwords for numerous accounts across the internet; it allows users to connect via trusted credentials through another provider (like when you use Facebook, Apple, or Google to log in or create an account online).

This concept is also applied to API security with IdP tokens. Instead of users inputting their credentials, they access the API with a token provided by a third-party server. Plus, you can leverage the OpenId Connect standard by adding an identity layer on top of OAuth. 
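Here is a hedged sketch of that token flow, assuming the PyJWT package and hypothetical issuer and audience values; a production service would fetch signing keys from the IdP's JWKS endpoint rather than hard-coding one:

```python
import jwt  # PyJWT

def validate_bearer_token(token: str, idp_public_key: str) -> dict:
    """Return the token's claims, or raise jwt.InvalidTokenError.

    jwt.decode verifies the signature, expiry, audience, and issuer in
    one call, so a forged or stale token never reaches business logic.
    """
    return jwt.decode(
        token,
        idp_public_key,
        algorithms=["RS256"],                # never accept "none"
        audience="https://api.example.com",  # hypothetical audience
        issuer="https://idp.example.com",    # hypothetical IdP issuer
    )
```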

Audit, log, and version

Without adequate API monitoring, there is no way organizations can stop insidious attacks. Teams should continuously monitor the API and have an organized and repeatable troubleshooting process in place. It’s also important that companies audit and log data on the server and turn it into resources in case of an incident. 

A monitoring dashboard can help track API consumption and enhance monitoring practices. And don’t forget to version all APIs and deprecate old versions when appropriate.

Stay private

Organizations should be overly cautious when it comes to vulnerabilities and privacy since data is one of the most valuable and sought-after business commodities. Ensure error messages display as little information as possible, keep IP addresses private, and use a secure email gateway for all internal and external messaging. Consider hiring a dedicated development team that has only necessary access and use an IP whitelist and blacklist to restrict access to resources. 

Consider your infrastructure

Without a good infrastructure and security network, it’s impossible to keep your API secure. Make sure that your servers and software are up to date and ensure that regular maintenance is done to consolidate resources. You should also ensure that third-party service providers use the most up-to-date versioning and encryption protocols. 

Throttling and quotas

DDoS attacks can block legitimate users from their dedicated resources, including APIs. By restricting access to the API and application, organizations can ensure that no one abuses them. Setting throttling limits and quotas is a great way to blunt cyberattacks from numerous sources, such as a DDoS attack. Plus, you can prevent overloading your system with unnecessary requests.
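One common way to implement throttling is a token bucket per client. The sketch below is a single-process illustration with assumed rate numbers; a distributed API would keep the bucket state in a shared store such as Redis:

```python
import time
from collections import defaultdict

RATE = 10   # tokens refilled per second (assumed quota)
BURST = 20  # bucket capacity, i.e., the largest allowed burst

_buckets = defaultdict(lambda: {"tokens": float(BURST), "ts": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Return True if the client may proceed, False to reject with HTTP 429."""
    bucket = _buckets[client_id]
    now = time.monotonic()
    # Refill in proportion to elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)
    bucket["ts"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False
```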

Data validation

All data must be validated according to your administrative standards to prevent malicious code from being injected into your API. Check every piece of data that comes through your servers and reject anything unexpected, significantly large, or from an unknown user. JSON and XML schema validation can help check your parameters and prevent attacks. 
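For example, a JSON Schema can express both the expected shape and the size limits, rejecting everything else before it touches business logic. This sketch assumes the jsonschema package and an invented order payload:

```python
from jsonschema import ValidationError, validate

# Hypothetical schema: exact fields, bounded sizes, nothing extra allowed.
ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "sku": {"type": "string", "maxLength": 64},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 1000},
    },
    "required": ["sku", "quantity"],
    "additionalProperties": False,  # reject unexpected fields outright
}

def parse_order(payload: dict) -> dict:
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
    except ValidationError as err:
        raise ValueError(f"rejected payload: {err.message}") from err
    return payload
```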

OWASP Top 10

Staying up on the OWASP (Open Web Application Security Project) Top 10 can help teams implement proactive measures to protect the API from known vulnerabilities. The OWASP Top 10 lists the 10 worst vulnerabilities according to their exploitability and impact. Organizations should regularly review their systems and secure all OWASP vulnerabilities. 

API firewalling

An API firewall makes it more difficult for hackers to exploit API vulnerabilities. API firewalls should be configured in two layers. The first, DMZ layer has an API firewall for basic security functions, including checking for SQL injections, message size, and other HTTP security activities. The message then gets forwarded to the second, LAN layer with more advanced security functions.

API gateway management

Using an API gateway or API management solution can help save organizations a lot of time and effort when successfully implementing an API security plan. An API gateway helps keep data secure with tools to help monitor and control your API access. 

In addition to streamlined API security implementation, an API management solution can help you make sense of API data to power future business decisions. Plus, many API management solutions and gateways offer a simple UI with easy navigation.

Call security experts

Although cybersecurity positions are popping up worldwide, many organizations are having difficulty finding talented experts with the right security credentials to fill in the security gaps. There are ways to attract cybersecurity professionals to your company, but cybersecurity can’t wait for the right candidate. 

Call the security experts at AT&T Cybersecurity to help you manage your network and API security. Plus, you can use ICAP (Internet Content Adaptation Protocol) servers to scan the payloads of your APIs.

Final thoughts

As digital tools and technologies continue to evolve, so will hackers’ attempts to exploit crucial business data. Putting some basic API security best practices in place will help prevent attacks in the future and contribute to a healthy IT policy management lifecycle. 

The best way to keep your APIs safe is to create a company-wide mindset of cyber hygiene through continuous training and DevSecOps collaboration. In the meantime, organizations can secure their digital experiences and important data by following these simple tips to enhance their API security.

The post API security: 12 essential best practices to keep your data & APIs safe appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

“Ransomware has become the enemy of the day; the threat that was first feared on Pennsylvania Avenue and subsequently detested on Wall Street is now the topic of conversation on Main Street.”

Frank Dickson, Program Vice President, Cybersecurity Products at IDC

In the first installment of this blog series (Endpoint Security and Remote Work), we highlighted the need to provide comprehensive data protections to both traditional and mobile endpoints as an enabler of remote work.  In this second chapter, we’ll expand on the importance of endpoint security as one of many key elements for defining an organization’s security posture as it relates to arguably the most relevant cybersecurity issue of the day.  

Cue the ominous music and shadowy lighting, as that is likely the mood for most cybersecurity professionals when considering the topic of ransomware. To the dismay of corporate executives, government and education leaders, and small business owners, ransomware is pervasive and evolving quickly. As evidence, a recent report indicated that roughly half of all state and local governments worldwide were victims of a ransomware attack in 2021.

However, there are important steps that can be taken along the path to digital transformation to minimize the risk associated with these attacks. As companies consider the evolution of their strategy for combating ransomware, there are five key strategies for reducing the risks inherent in an attack:

1. Prevent phishing attacks and access to malicious websites

Companies must be able to inspect all Internet-bound traffic from every endpoint, especially mobile, and block malicious connections. This challenge is significantly more complex than simply inspecting corporate email. In fact, because bad actors are highly tuned to user behavior, most threat campaigns generally include both a traditional and a mobile phishing component.

SMS and messaging apps provide considerably higher response rates, which is why attackers favor them. To quantify, SMS has a 98% open rate and an average response time of just 90 seconds. The same stats for email equate to a 20% open rate and a 1.5-hour response time, which helps explain why hackers have pivoted to mobile to initiate ransomware attacks.

As a result, Secure Web Gateways (SWG) and Mobile Endpoint Security (MES) solutions need to work in concert to secure every connection to the Internet, from any device. Both SWG and MES perform similar functions specific to inspecting web traffic, but they do it for different form factors and operating systems. The data protections for SWG are primarily available on traditional endpoints (Windows, macOS, etc.), while MES addresses the mobile ecosystem with protections for iOS and Android. Because ransomware can be initiated in many ways, including but not limited to email, SMS, QR codes, and social media, every organization must employ tools to detect and mitigate threats that target all endpoints.

2. Prevent privilege escalation and application misconfigurations

Another tell-tale sign of a possible ransomware attack is the escalation of privileges by a user within the organization. Hackers will use the compromised credentials of a user to access systems and disable security functions necessary to execute their attack. The IT organization can recognize when a user’s privileges have been altered through UEBA (User and Entity Behavior Analytics). Many times, hackers will modify or disable security functions to gain easier access and more dwell time within an organization, allowing them to identify more critical systems and data to include in their attack. Abnormal behaviors such as privilege escalation or “impossible travel” are early indicators of ransomware attacks, and identifying them is a key capability of any UEBA solution. For example, if a user logs into their SaaS app in Dallas and an hour later in Moscow, your security staff need to be aware, and you must have tools to automate the necessary response, starting with blocking the user’s access.
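A back-of-the-envelope version of the “impossible travel” check is straightforward: resolve each login to coordinates, then compare the implied speed between consecutive logins against what a human could plausibly do. The sketch below assumes geo-resolved login events and an invented cruising-speed threshold:

```python
import math

MAX_PLAUSIBLE_KMH = 900  # roughly a commercial flight; tune to your risk appetite

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def is_impossible_travel(prev_login: dict, new_login: dict) -> bool:
    """Each login is assumed to look like {"ts": epoch_secs, "lat": ..., "lon": ...}."""
    hours = max((new_login["ts"] - prev_login["ts"]) / 3600, 1e-6)
    distance = haversine_km(prev_login["lat"], prev_login["lon"],
                            new_login["lat"], new_login["lon"])
    # Dallas to Moscow in an hour implies roughly 9,000 km/h: flag and block.
    return distance / hours > MAX_PLAUSIBLE_KMH
```

Commercial UEBA products do this with far richer baselines, but the underlying signal is the same.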

3. Prevent lateral movement across applications

After the ransomware attack has been initiated, the next key aspect of the attack is to obtain access to other systems and tools with high value data that can be leveraged to increase the ransom.  Therefore, businesses should enable segmentation at the application level to prevent lateral movement.  Unfortunately, with traditional VPNs, access management can be very challenging.  If a hacker were to compromise a credential and access company resources via the VPN, every system accessible via the VPN could now be available to expand the scope of the attack. 

Current security tools such as Zero Trust Network Access prevent that lateral movement by authenticating the user and his/her privileges on an app-by-app basis. That functionality can be extended by utilizing context to manage the permissions of that user based on many factors, such as which device is being used for the request (managed vs. unmanaged), the health status of the device, time of day and location, file type, data classification such as confidential/classified, user activity such as upload/download, and many more. A real-world policy might allow view-only access to non-sensitive corporate content from a personal tablet, but require a managed device for any action such as sharing or downloading that content.

4. Minimize the risk of unauthorized access to private applications

It is essential for companies to ensure that corporate/proprietary apps and servers aren’t discoverable on the Internet.  Authorized users should only get access to corporate information using adaptive access policies that are based on users’ and devices’ context.  Whether these applications reside in private data centers or IaaS environments (AWS, Azure, GCP, etc.), the same policies for accessing data should be consistent. Ideally, they are managed by the same policy engine to simplify administration of an organization’s data protections.  One of the most difficult challenges for security teams in deploying Zero Trust is the process of creating policy.  It can take months or even years to tune false positives and negatives out of a DLP policy, so a unified platform that simplifies the management of those policies across private apps, SaaS, and the Internet is absolutely critical. 

5. Detect data exfiltration and alterations

A recent trend among ransomware attacks is the exfiltration of data in addition to the encryption of critical data. The stolen data is then used as leverage against the victim to encourage payment of the ransom. LockBit 2.0 and Conti are two ransomware gangs notorious for stealing data to monetize it and, at the same time, using it to damage the reputation of their targets.

Hence, companies must be able to leverage the context- and content-aware signals of their data to help mitigate malicious downloads or modifications. At the same time, it is just as important that these signals travel with the files throughout their lifecycle so that the data can be encrypted when accessed by an unauthorized user, preventing them from viewing the content. Enterprise Data Rights Management and DLP together provide this functionality, an important toolset for combating ransomware attacks by minimizing the value of any data that is exfiltrated.

It should also be noted that this functionality is just as important when considering the impact to compliance and collaboration.  Historically, collaboration has been thought to increase security risk, but the ability to provide data protections based on data classification can dramatically improve a company’s ability to collaborate securely while maximizing productivity.

As stated above, there is considerably more to preventing ransomware attacks than good endpoint security hygiene.  With the reality of remote work and the adoption of cloud, the task is significantly more challenging but not impossible.  The adoption of Zero Trust and a data protection platform that includes critical capabilities (UEBA, EDRM, DLP, etc.) enables companies to provide contextually aware protections and understand who is accessing data and what actions are being taken…key indicators that can be used to identify and stop ransomware attacks before they occur.  

For more information regarding how to protect your business from the perils of ransomware, please reach out to your assigned AT&T account manager or click here to learn more about how Lookout’s platform helps safeguard your data.

This is part two of a three-part series, written by an independent guest blogger. Please keep an eye out for the last blog in this series which will focus on the need to extend Endpoint Detection and Response capabilities to mobile.

The post 5 ways to prevent Ransomware attacks appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

Stop, look, listen; lock, stock, and barrel; “Friends, Romans, Countrymen…” The 3 Little Pigs; Art has 3 primary colors; photography has the rule of thirds; the bands Rush and The Police; the movie The 3 Amigos. On and on it goes – “Omne trium perfectum” – “Everything that comes in threes is perfect.”

While this article doesn’t provide perfection, we’ll focus on the top three API vulnerabilities (according to OWASP). OWASP’s international standard is important to read because it’s not only developed by professionals worldwide, but it’s also read by the threat actors who will take advantage of those vulnerabilities.

OWASP determines the risk of APIs based on the level of the API vulnerability's Exploitability, Weakness Prevalence, Weakness Detectability, and Technical Impact. Therefore, the API Top 10 are in order of OWASP's own risk methodology. Their risk method doesn't consider the chance of materialization or the impact – that's left up to each business. But these three are great places to start because they've affected large companies such as Peloton in 2021.

1. API1:2019 Broken Object Level Authorization (BOLA)

In this vulnerability, aka BOLA, APIs expose endpoints that handle object identifiers, which in turn allows visitors access to numerous resources. This attack is like Insecure Direct Object Reference (IDOR), where applications use user-supplied credentials to access objects. In the API sphere, BOLA is the more accurate term than IDOR: the problem is broken authorization over a sequence of API calls. Every call to a data source that uses user-provided input should include object-level authorization checks.

Here’s a simple example of how this works.

An API call has the following path: /customers/user/bob/profile. An attacker will attempt various names in place of “bob” to see what can be accessed, such as:

/customers/user/alice/profile

/customers/user/john/profile

Even if the name is replaced with a long mix of characters, the endpoint remains vulnerable if those character sequences are sequential or otherwise guessable.

Mitigation

  • Implement an authorization mechanism that relies on user policies and hierarchy.
  • Use an authorization mechanism to check if the logged-in user has authorization to perform the requested action on the record in every function that uses an input from the client to access a record in the database.
  • Use random and non-guessable values for record IDs.
  • Write tests to evaluate the authorization checks.
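To ground the second mitigation, here is a minimal sketch of an object-level check in a Flask route; the data store, identity loading, and role model are hypothetical stand-ins for whatever your auth middleware provides:

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)
PROFILES = {"bob": {"owner": "bob", "bio": "..."}}  # hypothetical data store

@app.before_request
def load_identity():
    # Hypothetical: a real app derives these from a validated session/token.
    g.current_user, g.roles = "alice", set()

@app.route("/customers/user/<username>/profile")
def profile(username):
    record = PROFILES.get(username)
    if record is None:
        abort(404)
    # BOLA check: authorization comes from the record and the verified
    # identity, never from the username supplied in the URL.
    if g.current_user != record["owner"] and "admin" not in g.roles:
        abort(403)
    return jsonify(record)
```

With this in place, requesting /customers/user/bob/profile as “alice” returns 403 no matter how the attacker enumerates names.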

2. API2:2019 Broken User Authentication

When authentication mechanisms are implemented improperly, attackers can compromise authentication tokens or exploit implementation flaws by assuming other users’ identities.

A prominent example of this vulnerability is the 2021 Parler breach. Other factors came into play in the whole breach, but at least one endpoint did not require authentication, giving anyone who found it (and someone did) unhindered access to images.

Mitigation

  • Use industry-standard authentication and token generation mechanisms (and read the accompanying documentation).
  • Be aware of all the possible API authentication flows in the product or service (mobile/ web/deep links that implement one-click authentication/etc.).
  • Treat “forgot password” endpoints as login endpoints in terms of brute force, rate limiting, and lockout protection (see the sketch after this list).
  • Use the OWASP Authentication Cheat Sheet.
  • Implement multi-factor authentication wherever and whenever possible.
  • Check for weak passwords.
  • API keys should be used for client app authentication, but not user authentication.
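As a sketch of the lockout idea from the list above, shared by both login and “forgot password” endpoints (single-process and in-memory for brevity; production systems would keep counters in a shared store with expiry):

```python
import time

MAX_ATTEMPTS = 5      # assumed policy values
WINDOW_SECONDS = 900  # 15-minute lockout window

_failures: dict[str, tuple[int, float]] = {}  # identifier -> (count, window start)

def record_failure_and_check_lockout(identifier: str) -> bool:
    """Call on every failed login or reset attempt; True means locked out."""
    count, started = _failures.get(identifier, (0, time.time()))
    if time.time() - started > WINDOW_SECONDS:
        count, started = 0, time.time()  # window expired, start fresh
    count += 1
    _failures[identifier] = (count, started)
    return count > MAX_ATTEMPTS
```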

3. API3:2019 Excessive Data Exposure

Developers, designers, and/or engineers may not take data sensitivity into consideration. They may favor using client-side filtering, which means that data is not filtered before reaching the user.

When testing, ask “What should the user know?” and display the minimum amount of data necessary.

Mitigation

  • Test or capture the API calls (using, e.g., Postman or OWASP ZAP) and look for “token” or “key” to see what it reveals.
  • Threat model the data to review the flow and data filtering.
  • Never depend on client-side filtering of sensitive data.
  • Review the API responses. Do they return more data than the client needs?
  • Determine what data type is crossing the wire. Is it sensitive, confidential, PII, etc.? If it is, then it poses both security and privacy threats.
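A simple discipline that addresses several of these points is an explicit allow-list of response fields, so filtering happens on the server no matter what the client asks for. The record layout here is an invented example:

```python
PUBLIC_FIELDS = ("id", "display_name", "avatar_url")  # hypothetical allow-list

def to_public_view(user_record: dict) -> dict:
    """Serialize a user for API responses.

    Anything not allow-listed (password hash, email, SSN) is dropped here,
    so a newly added sensitive column never leaks by default.
    """
    return {k: user_record[k] for k in PUBLIC_FIELDS if k in user_record}
```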

An important aspect of security and risk management is acknowledging that nothing is 100% secure or risk-free. There's always a risk. One concept in self-defense is appearing hard to get. Someone who walks tall and confidently, has no visible jewelry or purse, and is not distracted is considered a much harder target for being accosted than someone who slumps, lazes along, has visible necklaces and bracelets, and is on the phone (distracted). The former doesn't eliminate risk but presents a greatly reduced risk.

Securing APIs needs to move toward a confident posture and reduced risk model. Attackers are looking at the OWASP API Top 10 and other lists of common attack mechanisms, then applying those to their targets. An organization that has missed any of these is at much greater risk than one that has addressed them, even if there are other security issues (and there are always security issues). But if attackers have a difficult time making headway on a target, it's more likely that they'll move on. A major challenge for organizations is that one never knows when or what attackers are doing, so staying on top of security maintenance is another challenge (think of it as job security). One way to become better acquainted with API security is to examine its fundamental aspects.

Focusing efforts on a few high-risk items won’t solve all the vulnerabilities, but that focus provides immediate guidance for engineering, developers, security, and privacy teams. In turn, this provides a roadmap for projects and tasks and prevents any appearance of negligence. These active and engaged responses to known vulnerabilities increase service security and customer trust.

The post API attack types and mitigations appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

It’s well known that there’s a pervasive cybersecurity skills shortage. The problem has multiple ramifications. Current cybersecurity teams often deal with consistently heavy workloads and don’t have time to deal with all issues appropriately. The skills shortage also means people who need cybersecurity talent may find it takes much longer than expected to find qualified candidates.

Most people agree there’s no single way to address the issue and no fast fix. However, some individuals wonder if global recruitment could be an option, particularly after human resources managers establish that there aren’t enough suitable candidates locally.

Current cybersecurity professionals planning career changes

A June 2022 study from Trellix revealed that 30% of current cybersecurity professionals are thinking about changing their careers. Gathering from a wider candidate pool by recruiting people on a global level could increase the number of overall options a company has when trying to fill open positions.

However, it’s essential to learn what’s causing cybersecurity professionals to want to leave the field. Otherwise, newly hired candidates may not stick around for as long as their employers hope. It’s also important to note that the Trellix poll surveyed people from numerous countries, including the United States, Canada, India, France, and Japan.

Another takeaway from the study was that 91% of people believed there should be more efforts to increase diversity in the cybersecurity sector. The study showed that most employees in the industry now are straight, white, and male. If more people from minority groups feel welcomed and accepted while working in cybersecurity roles, they’ll be more likely to enter the field and stay in it for the long term.

Appealing perks help attract workers

Some companies have already invested in global recruitment efforts to help close cybersecurity skills gaps.

For example, Microsoft recently expanded its cybersecurity skills campaign to an additional 23 countries – including Ireland, Israel, Norway, Poland, and South Africa. All the places were identified as under high threat of cybersecurity attacks. Microsoft representatives have numerous plans to get people the knowledge they need to enter the workforce confidently and fill cybersecurity roles.

The hiring initiative also includes some Asia-Pacific (APAC) countries. That’s significant since statistics suggest the region will face a labor shortage of 47 million people across all job types by 2030.

Something human resources leaders must keep in mind before hiring cybersecurity professionals is that the open positions should include attractive benefits packages that are better than or on par with what other companies in the sector provide.

Since cybersecurity experts are in such high demand, they enjoy the luxury of being picky about which jobs they consider and how long they stay in them. Even though cultural differences exist, there are some similarities in what most people look for in their job prospects. Competitive salaries and generous paid time off are among the many examples.

Shortfalls persist despite 700,000 workforce entrants

Global research published in 2021 by (ISC)² found that 700,000 new people had joined the cybersecurity workforce since 2020. However, the study also showed that the worldwide pool of professionals must grow by 65% to keep pace with demand.

The study’s results also suggested that one possibility is to recruit people who don’t have cybersecurity backgrounds. The data indicated that 17% of respondents came into the field from unrelated sectors.

Some experts suggest tapping into specific population groups as a practical way to address the shortage. For example, people with autism and ADHD often have skills that make them well suited for the cybersecurity industry.

Global recruitment is not an all-encompassing solution

Hiring people from around the world could close skill gaps in situations where it’s evident there’s a lack of talent wherever a company primarily operates. However, as the details above highlight, the skills shortage is a widespread issue.

Accepting applications from a global talent pool could also increase administrative tasks when a company is ready to hire. That’s partially due to the higher number of applications to evaluate. Additionally, there are other necessities associated with aspects like visa applications or time zone specifics if an international new hire will work remotely.

People in the IT sector should ideally see global recruitment as one of many possibilities for reducing the cybersecurity skills gap severity. It’s worth consideration, but not at the expense of ignoring other strategies.

The post Can global recruitment solve the cybersecurity hiring problem? appeared first on Cybersecurity Insiders.

In the previous article, we covered the release process and how to secure the parts and components of the process. The deploy and operate processes are where developers, IT, and security meet in a coordinated handoff for sending an application into production.

The traditional handoff of an application is siloed: developers send installation instructions to IT, IT provisions the physical hardware and installs the application, and security scans the application after it is up and running. A missed instruction could cause inconsistency between environments, and a system might not be scanned by security, leaving the application vulnerable to attack. The DevSecOps focus is to incorporate security practices by leveraging the security capabilities within infrastructure as code (IaC), blue/green deployments, and application security scanning before end-users are transitioned to the system.

Infrastructure as Code

IaC starts with a platform like Ansible, Chef, or Terraform that can connect to the cloud service provider’s (AWS, Azure, Google Cloud) Application Programming Interface (API) and programmatically tell it exactly what infrastructure to provision for the application. DevOps teams consult with developers, IT, and security to build configuration files describing what the cloud service provider needs to provision for the application. Below are some of the more critical areas that DevSecOps covers using IaC.

IaC diagram

Capacity planning – This includes rules around autoscaling laterally (automatically adding servers to handle additional demand, elastically) and scaling up (increasing the performance of the infrastructure, like adding more RAM or CPU). Elasticity from autoscaling helps prevent non-malicious or malicious Denial of Service incidents (see the sketch following these items).

Separation of duty – While IaC helps break down silos, developers, IT, and security still have direct responsibility for certain tasks even when they are automated. Accidentally deploying the application is avoided by making specific steps of the deploy process responsible to a specific team and cannot be bypassed.

Principle of least privilege – Applications have the minimum set of permissions required to operate, and IaC ensures consistency even during the automated scaling up and down of resources to match demand. The fewer the privileges, the more protection systems have from application vulnerabilities and malicious attacks.

Network segmentation – Applications and infrastructure are organized and separated based on the business system security requirements. Segmentation protects business systems from malicious software that can hop from one system to the next, otherwise known as lateral movement in an environment.

Encryption (at rest and in transit) – Hardware, cloud service providers, and operating systems have encryption capabilities built into their systems and platforms. Using the built-in capabilities or obtaining third-party encryption software protects the data where it is stored. Using TLS certificates for secured web communication between the client and business system protects data in transit. Encryption is a requirement for adhering to industry-related compliance and standards criteria.

Secured (hardened) image templates – Security and IT develop the baseline operating system configuration and then create image templates that can be reused as part of autoscaling. As requirements change and patches are released, the baseline image is updated and redeployed.

Antivirus and vulnerability management tools – These tools are updated frequently to keep up with the dynamic security landscape. Instead of installing these tools in the baseline image, consider installing the tools through IaC.

Log collection – The baseline image should be configured to send all logs created by the system to a log collector outside of the system for distribution to the Network Operations Center (NOC) or Security Operations Center (SOC) where additional inspection and analysis for malicious activity can be performed. Consider using DNS instead of IP addresses for the log collector destination.
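As one concrete illustration of capacity planning as code (referenced in the first item above), the sketch below uses boto3 against a hypothetical AWS Auto Scaling group; the group name and limits are assumptions, and the same intent can be expressed in Ansible, Chef, or Terraform:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Bound the group so demand spikes (malicious or not) can neither take
# the service down nor scale costs without limit.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    MinSize=2,
    MaxSize=10,
)

# Elastic lateral scaling: add or remove instances to hold average CPU
# near the target instead of provisioning for peak by hand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```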

Blue green deployment

Blue/green deployment strategies increase application availability during upgrades. If there is a problem, the system can be quickly reverted to a known secure and working state. A blue/green deployment is a system architecture that seamlessly replaces an old version of the application with a new version.


Deployment validation should happen as the application is promoted through each environment because the configuration items (variables and secrets) differ between environments. Typically, validation happens during non-business hours and is extremely taxing on the different groups supporting the application. With a blue/green deployment, the new version of an application can be deployed and validated during business hours. Even if there are concerns and end-users are switched over during non-business hours, fewer employees are needed to participate.
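A hedged sketch of the cutover itself, assuming boto3, an AWS application load balancer, and placeholder ARNs; pointing the listener back at the blue target group is the instant rollback:

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example"   # placeholder
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/abc"  # placeholder

def cut_over_to_green():
    """Send all production traffic to the validated green environment.

    Because blue keeps running untouched, re-running this with the blue
    target group ARN reverts to the known good state in seconds.
    """
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TG_ARN}],
    )
```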

Automate security tools installation and scanning

Internet-facing application attacks continue to increase because of the ease of access to malicious tools, the speed at which some vulnerabilities can be exploited, and the value of the data extracted. Dynamic Application Security Testing (DAST) tools are a great way to identify vulnerabilities and fix them before the application is moved into production and released for end-users to access.

DAST tools provide visibility into real-world attacks because they mimic how hackers would attempt to break an application. Automating and scheduling application scans on a regular cadence helps find and resolve vulnerabilities quickly. Company policy may require vulnerability scanning for compliance with regulations and standards like PCI, HIPAA, or SOC.

DAST for web applications focuses on the OWASP Top 10 vulnerabilities like SQL injection and cross-site scripting. Manual penetration (pen) testing is still required to cover other vulnerabilities like logic errors, race conditions, customized attack payloads, and zero-day vulnerabilities. Also, not all applications are web-based, so it is important to select and use the right scanning tools for the job. Manual and automatic scanning can also help spot configuration issues that lead to errors in how the application behaves.
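For a sense of what “automated and scheduled” can look like, here is a sketch that wraps the OWASP ZAP baseline scan (shipped in the official ZAP Docker image) so a CI job or cron entry can run it on a cadence; the staging URL is a placeholder:

```python
import os
import subprocess

def run_baseline_scan(target: str = "https://staging.example.com") -> int:
    """Run ZAP's passive baseline scan and return its exit code.

    zap-baseline.py spiders the target, applies passive checks, and exits
    non-zero when alerts are found, which fails the pipeline stage.
    """
    result = subprocess.run([
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}:/zap/wrk:rw",  # report lands in the workdir
        "ghcr.io/zaproxy/zaproxy:stable",
        "zap-baseline.py", "-t", target, "-r", "report.html",
    ])
    return result.returncode
```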

Next Steps

Traditional deployments of applications are a laborious process for the development, IT, and security teams. But that has all changed with the introduction of Infrastructure as Code, blue-green deployments, and the Continuous Delivery (CD) methodology. Tasks performed in the middle of the night can be moved to normal business hours. Projects that take weeks of time can be reduced to hours through automation. Automated security scanning can be performed regularly without user interaction. With the application deployed, the focus switches to monitoring and eventually decommissioning it as the final steps in the lifecycle.

The post DevSecOps deploy and operate processes appeared first on Cybersecurity Insiders.

If your organization is having trouble creating policies, I hope this blog post will help you set a clear path. We’ll discuss setting your organization up for success by ensuring that you do not treat your policies as a “do once and forget” project. Many organizations I have worked with have done that, only to realize later that a good policy lifecycle is required and is a pillar of good governance.

Organizations often feel that developing and enforcing policies is bureaucratic and tedious, but the importance of policies is often felt when your organization does not have them. Not only are they a cost of doing business, but they are also used to establish the foundation and norms of acquiring, operating, and securing technology and information assets.

The lifecycle, as it implies, should be iterative and continuous, and policies should be revisited at a regular cadence to ensure they remain relevant and deliver value to your business.

IT policy process

 Assess

The first step is to find out where your organization stands; it should shine a light on where gaps exist and what they are.

First, determine how you will be assessing your policies; here is a checklist, whether you are building new ones or bringing current ones up to date:

  • Is it current and up to date
  • Does it have a clear purpose or goal
  • Does it have a clear scope (inclusions /exclusions)
  • Does it have a clear ownership
  • Does it have a clear list of affected people
  • Does it have language that is easy to understand
  • Is it detailed enough to avoid misinterpretations
  • Does it follow the laws/regulations/ethical standards
  • Does it reflect the organizational goals/values and culture
  • Are key terms and acronyms defined
  • Have related policies and procedures been identified
  • Are there clear consequences for non-compliance
  • Is it approved and supported by management
  • Is it enforceable

Next, inventory your organization’s policies by listing them and then assessing the quality using the previous list. Based on the quality, identify if your organization needs new policies or if the existing ones need improvement, then determine the amount of work that will be required.

Best practices suggest that you may want to prioritize your efforts on the most significant improvements, those that focus on the most serious business vulnerabilities.

Understand that policy improvement does not end with a new policy document. You will need to plan for communications, training, process changes, and any technology improvements needed to make the policy fair and enforceable.

Develop

After the assessment is done, you should plan on developing your policies or revamping the old ones. Although there is no consensus on what makes a good policy, the referenced material [1] [2] [3] [4] suggests the following best practices: policies should have a clear purpose and a precise presentation that drives compliance by eliminating misinterpretations.

All policies should include and describe the following:

  • Purpose
  • Expectations
  • Consequences
  • Glossary of terms

For maximum effect, policies should be written:

  • With everyday language
  • With direct and active voice
  • Precisely to avoid misinterpretation
  • Realistically
  • Consistently in keeping with standards

Consider that policies need to be actively sold to the people who are supposed to follow them. You can achieve that by using a communication plan that includes:

  • Goals and objectives
  • Key messages
  • Potential barriers
  • Suggested actions
  • Budget considerations
  • Timelines

Enforcement

A lack of enforcement creates ethical, financial, and legal risks for any organization. Among the risks are loss of productivity due to abuse of privileges, wasted resources, and loss of reputation if an employee engages in illegal activities enabled by poor policy enforcement, which can lead to litigation. Make sure that you have clear rules of engagement.

Your organization should establish the proper support framework around leadership, process, and monitoring, and measure policies against standards. Policies don't always fail due to bad behavior; they fail because:

  • They are poorly written
  • There is no enforcement
  • They are illegal or unethical
  • They are poorly communicated
  • They go against company culture

If your company feels overwhelmed thinking about all the moving pieces that make up an IT policy management lifecycle, let AT&T Cybersecurity Consulting help, whether you need to amend existing policies, implement one or more brand-new policies, or completely overhaul the entire policy portfolio.

References

1) F. H. Alqahtani, “Developing an Information Security Policy: A Case Study Approach,” Science Direct, vol. 124, pp. 691-697, 2017.

2) S. Diver, “SANS White Papers,” SANS, 02 03 2004. [Online]. Available: https://www.sans.org/white-papers/1331/. [Accessed 15

3) S. V. Flowerday and T. Tuyikeze, “Information security policy development and implementation: The what, how, and who,” Science Direct, vol. 61, pp. 169-183, 2016.

4) K. J. Knapp, R. F. Morris, T. E. Marshall and T. A. Byrd, “Information security policy: An Organizational level process model,” Science Direct, vol. 28, no. 7, pp. 493-508, 2007.

The post How to create a continuous lifecycle for your IT Policy Management appeared first on Cybersecurity Insiders.

Introduction

Since my previous blog CMMC Readiness was published in September 2021, the Department of Defense (DoD) has made modifications to the program structure and requirements of the Cybersecurity Maturity Model Certification (CMMC) interim rule first published in September 2020.  CMMC 2.0 was officially introduced in November 2021 with the goal of streamlining and improving CMMC implementation.

In this blog, I will identify the key changes occurring with CMMC 2.0 and discuss an implementation roadmap to CMMC readiness.

Key changes

Key changes in CMMC 2.0 include:

  • Maturity Model reduced from 5 compliance levels to 3
    • Level 3 – Expert
    • Level 2 – Advanced (old Level 3)
    • Level 1 – Foundational
  • Improved alignment with National Institute of Standards and Technology (NIST)
    • NIST SP 800-171
    • NIST SP 800-172
  • Practices reduced from 130 to 110 for Level 2 Certification
  • Independent assessment by C3PAO at Level 2 – Advanced
  • Self-assessment at Level 1 – Foundational, limited at Level 2 – Advanced
  • Removed processes (ML.2.999 Policy, ML.2.998 Practices, and ML.3.997 Resource Plan)

Figure 1. CMMC Model


Source: Acquisition & Sustainment – Office of the Under Secretary of Defense

CMMC requirements at Level 1 and Level 2 now align with National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171 – Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.  This alignment should be beneficial to most DIB organizations since they have been subject to FAR 52.204-21 or DFARS 252.204-7012 and should have been self-attesting to NIST SP 800-171 practices, whether the 17 NIST practices required for those handling only FCI or the 110 NIST practices for those handling FCI and CUI.  Those organizations that took self-attestation seriously over the years should be able to leverage the work they have previously performed to place themselves in a strong position for CMMC certification.

CMMC 2.0 may have dropped the three Processes (ML.2.999 Policy, ML.2.998 Practices, and ML.3.997 Resource Plan), but that does not eliminate the requirement for formal security policies and control implementation procedures.  CUI security requirements were derived in part from NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations (NIST SP 800-53).  The tailoring actions addressed in Appendix E of NIST SP 800-171R2 specify that the first control of each NIST SP 800-53 family (e.g., AC-1, AT-1, PE-1, etc.), which prescribes written and managed policies and procedures, is designated as NFO or “expected to be routinely satisfied by nonfederal organizations without specification”.  This means that they are required as part of the organization’s information security management plan and are applicable to the CUI environment.  Refer to Appendix E for other NIST SP 800-53 controls that are designated as NFO and include them in your program.

Implementation roadmap

Although there have been welcome changes to the structure of CMMC, my recommended approach to implementation, first presented last September, has changed little.  The following presents a four-step approach to get started down the road to CMMC Level 2 certification.

CMMC implementation

Education

I cannot stress enough the importance of educating yourself and your organization on the CMMC 2.0 requirements.  A clear and complete understanding of the statute, including the practice requirements and the certification process, is critical to achieving and maintaining CMMC certification.  This understanding will be integral to crafting a logical, cost-effective approach to certification and will also provide the information necessary to effectively communicate with your executive leadership team.

Start your education process by reading the CMMC 2.0 documents relevant to your certification level found at OUSD A&S – Cybersecurity Maturity Model Certification (CMMC) (osd.mil).

  • Cybersecurity Maturity Model Certification (CMMC) Model Overview Version 2.0/December 2021 – presents the CMMC model and each of its elements
  • CMMC Model V2 Mapping Version 2 December 2021 – Excel spreadsheet that presents the CMMC model in spreadsheet format.
  • CMMC Self-Assessment Scope – Level 2 Version 2 December 2021 – Guidance on how to identify and document the scope of your CMMC environment.
  • CMMC Assessment Guide – Level 2 Version 2.0 December 2021 – Assessment guidance for CMMC Level 2 and the protection of Controlled Unclassified Information (CUI).

Define

The CMMC environment that will be subject to the certification assessment must be formally defined and documented.    The first thing that the CMMC Third-Party Assessor Organization (C3PAO) engaged to perform the Level 2 certification must do is review and agree with the CMMC scope presented by the DIB organization.  If there is no agreement on the scope, the C3PAO cannot proceed with the certification assessment. 

Scope

The CMMC environment includes all CUI-associated assets found in the organization’s enterprise, external systems and services, and any network transport solutions.  You should identify all of the CUI data elements present in your environment and associate them with one or more business processes.  This includes CUI data elements provided by the Government or a Prime Contractor, as well as any CUI created by you as part of the contract execution.  Formally document the CUI data flow through each business process to visualize the physical and logical boundaries of the CMMC environment.  The information gleaned during this process will be valuable input for completing your System Security Plans (SSPs).

Not sure which data elements are CUI?  Work directly with your legal counsel and DoD business partner(s) to reach a consensus on what data elements will be classified as CUI.   Visit the NARA website at (Controlled Unclassified Information (CUI) | National Archives) for more information concerning the various categories of CUI.   Ensure that the classification discussions held by the team and any decisions that are made are documented for posterity. Do not forget to include CUI data elements that are anticipated to be present under any new agreements.

Figure 2. High-Level CMMC Assessment Scope


Based on image from CMMC Assessment Scope – Level 2 Version 2.0 | December 2021

During the scoping exercise, you should look for ways to optimize your CMMC footprint by enclaving CUI business processes away from non-CUI business processes through physical or logical segmentation.  File and database consolidation may be helpful in reducing the overall CMMC footprint, as is avoiding the handling of CUI that serves no business purpose.

GCC vs. GCC High

Heads up to those DIB organizations that utilize or plan to utilize cloud-based services to process, store, or transmit CUI: the use of cloud services for CUI introduces the GCC vs. GCC High considerations.  The GCC environment is acceptable in those instances where only Basic CUI data elements are present.  GCC High is required if CUI-Specified or ITAR/EAR-designated data elements are present.  In some instances, prime contractors that utilize GCC High may require their subcontractors to do the same.

Asset Inventory

Asset inventory is mandatory and is an important part of scoping.  The five categories of CUI assets defined by CMMC 2.0 are described below.

CUI – Assets that process, store, or transmit CUI.

Security Protection – Assets that provide security functions or services to the contractor’s CMMC scope.

Contractor Risk Managed – Assets that can, but are not intended to, process, store, or transmit CUI due to security controls (policies, standards, and practices) put in place by the contractor.

Specialized – A special group of assets (government property, Internet of Things (IoT), Operational Technology (OT), Restricted Information Systems, and Test Equipment) that may or may not process, store, or transmit CUI.

Out-of-Scope – Assets that cannot process, store, or transmit CUI because they are physically or logically separated from CUI assets.

DIB contractors are required to formally document all CUI assets in an asset inventory as well as in their SSPs.  There are no requirements expressed for what information is to be captured in the inventory, but in addition to capturing basic information (e.g., serial numbers, make, model, manufacturer, asset tag ID, and location), I would recommend mapping the assets to their relevant business processes and identifying asset ownership.  Owners should be given the responsibility for overseeing the appropriate use and handling of the CUI-associated systems and data throughout their useful lifecycles.  An asset management system is recommended for this activity, but Microsoft Excel should be adequate for capturing and maintaining the CUI inventory for small to midsize organizations.

Figure 3. Asset Inventory


Assess

Once you have your asset inventory completed and your CMMC scope defined, it’s time to perform a gap analysis to determine how your security posture aligns with CMMC requirements.  If you have been performing your annual self-attestation against NIST SP 800-171, you can leverage this work, but be sure to assess with greater rigor.  Consider having a CMMC Registered Practitioner from a third-party provider perform the assessment, since this will provide an unbiased opinion of your posture.  The results of the gap assessment should be placed into a Plan of Action and Milestones (POAM), where you will assign priorities, responsibilities, solutions, and due dates for each gap requiring corrective action.

Remediate

Finally, use the POAM to drive the organization’s remediation efforts in preparation for CMMC certification.  Remember that if you contract third-party services as part of remediation (e.g., managed security services, cloud services, etc.), those services become part of your CMMC scope.  Consider performing a second posture assessment after remediation efforts are complete to ensure you are ready for the certification assessment by the C3PAO.  CMMC certification is good for three years, so be sure to implement a governance structure to ensure your program is positioned for recertification when the time comes.

Conclusion

I hope this implementation roadmap provides a benefit to you on your CMMC Level 2 certification journey.  Keep in mind, there are no surprising or unusual safeguards involved in the process as CMMC requirements align with industry best practices for cybersecurity.  As with any strong information security program, it is critical that you fully understand the IT environment, relevant business processes, and data assets involved.  As we like to say in cybersecurity, “you can’t protect an asset if you don’t know what it is or where it’s at”.  Completing the upfront administrative work such as education, scope, and inventory will pay dividends as you progress toward independent certification.

The post CMMC 2.0: key changes appeared first on Cybersecurity Insiders.

AT&T Business’ most recent #BizTalks Twitter Chat—What’s New in Cybersecurity—Insights, Threat Trends, & RSA Learnings—explored many emerging concepts in the cybersecurity industry. Our very own Tawnya Lancaster, AT&T Cybersecurity’s threat intelligence and trends research lead, did a takeover of the @ATTBusiness Twitter handle to provide her point of view. Head to the @ATTBusiness Twitter page—go.att.com/twchat—to see the full chat and learn more.

It was an interesting conversation with diverse opinions. Here are some of the highlights.

Adversary tactics

The question on adversary tactics drew the most engagement and lots of interesting perspectives.

Edge computing was a hot question

Organized cybercrime is clearly top of mind as well

Don’t forget to follow @ATTBusiness on Twitter and stay tuned for our monthly #BizTalks Twitter Chats which cover a range of topics, including cybersecurity, 5G, manufacturing and supply chain, and healthcare.

The post New in Cybersecurity – Insights, threat trends, & RSA learnings appeared first on Cybersecurity Insiders.