Executive summary

AT&T Alien Labs™ has been tracking a new IoT botnet dubbed “EnemyBot”, which is believed to be distributed by the threat actor Keksec. During our investigations, Alien Labs has discovered that EnemyBot is expanding its capabilities, exploiting recently identified vulnerabilities (2022) and now targeting IoT devices, web servers, Android devices, and content management system (CMS) servers. In addition, the malware’s base source code can now be found online on GitHub, making it widely accessible.

Key takeaways:

  • EnemyBot’s base source code can be found on GitHub, making it available to anyone who wants to leverage the malware in their attacks.
  • The malware is rapidly adopting one-day vulnerabilities as part of its exploitation capabilities.
  • Services such as VMware Workspace ONE, Adobe ColdFusion, WordPress, PHP Scriptcase, and more are being targeted, as well as IoT and Android devices.
  • The threat group behind EnemyBot, Keksec, is well resourced and has the ability to update and add new capabilities to its arsenal of malware on a daily basis (see below for more detail on Keksec).

Background

First discovered by Securonix in March 2022 and later detailed in an in-depth analysis by Fortinet, EnemyBot is new malware distributed by the threat actor “Keksec” that targets Linux machines and IoT devices.

According to the malware’s GitHub repository, EnemyBot combines source code from multiple botnets into a more powerful and adjustable piece of malware. The original botnet code EnemyBot draws on includes Mirai, Qbot, and Zbot. In addition, the malware includes custom development (see figure 1).


Figure 1. EnemyBot page on GitHub.

The Keksec threat group is reported to have been formed back in 2016 by a number of experienced botnet actors. In November 2021, researchers from Qihoo 360 described the threat actor’s activity in detail in a presentation, attributing to Keksec the development of botnets for different platforms, including Windows and Linux:

  • Linux based botnets: Tsunami and Gafgyt
  • Windows based botnets: DarkIRC, DarkHTTP
  • Dual systems: Necro (developed in Python)

Source code analysis

The developer behind the EnemyBot GitHub page self-describes as a “full time malware dev” who is also available for contract work. The individual lists their workplace as “Kek security,” implying a potential relationship with the broader Keksec group (see figure 2).


Figure 2. EnemyBot developer description.

The malware repository on GitHub contains four main components:

cc7.py

This module is a Python script that downloads all dependencies and compiles the malware source code for different architectures and operating systems, including x86, ARM, PowerPC, MIPS, macOS, OpenBSD, and more (see figure 3).


Figure 3. Compiling malware source code to macOS executable.

Once compilation is complete, the script creates a shell script, ‘update.sh’, which the bot uses as a downloader; it is delivered to any identified vulnerable targets to spread the malware.


Figure 4. Generated `update.sh` file to spread EnemyBot on different architectures.

enemy.c

This is the main bot source code. Though it is missing the main exploitation function, it includes all of the malware’s other functionality and the attacks the bot supports, mixing the botnet source codes mentioned above (mainly Mirai and Qbot; see figure 5).


Figure 5. EnemyBot source code.

hide.c

This module is compiled and manually executed by the attacker to encode and decode the malware’s strings, hiding them in the binary. For this, the malware uses a simple swap table, in which each character is replaced with a corresponding character from the table (see figure 6).


Figure 6. String decode.
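To illustrate how such a scheme works, here is a minimal Python sketch of swap-table string encoding and decoding; the table below is hypothetical, not EnemyBot’s actual table.

```python
# Minimal illustration of a swap-table obfuscation scheme: each character
# is swapped for its counterpart in a fixed substitution table. The table
# here is hypothetical, not the one used by EnemyBot.
PLAIN   = "abcdefghijklmnopqrstuvwxyz/."
SWAPPED = "mnbvcxzasdfghjklpoiuytrewq.!"

ENCODE = str.maketrans(PLAIN, SWAPPED)
DECODE = str.maketrans(SWAPPED, PLAIN)

def encode(s: str) -> str:
    return s.translate(ENCODE)

def decode(s: str) -> str:
    return s.translate(DECODE)

secret = encode("/proc/self/environ")   # stored obfuscated in the binary
assert decode(secret) == "/proc/self/environ"
```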

servertor.c

This module is the command-and-control (C&C) botnet controller. It is executed on a dedicated machine controlled by the attacker and can control and send commands to infected machines (see figure 7).


Figure 7. C&C component.

New variant analysis

Most of EnemyBot functionality relates to the malware’s spreading capabilities, as well as its ability to scan public-facing assets and look for vulnerable devices. However, the malware also has DDoS capabilities and can receive commands to download and execute new code (modules) from its operators that give the malware more functionality.

In new variants of EnemyBot, the authors added a webscan function containing a total of 24 exploits to attack vulnerabilities in different devices and web servers (see figure 8).


Figure 8. EnemyBot calls for a new function “webscan_xywz”.

To perform these functions, the malware randomly scans IP addresses, and when it receives a SYN/ACK response, it scans the remote server for vulnerabilities by executing multiple exploits.

The first exploit is for the Log4j vulnerability disclosed in December 2021 as CVE-2021-44228 and CVE-2021-45046:


Figure 9. Exploiting the Log4j vulnerability.

The malware can also adopt new vulnerabilities within days of their discovery. Some examples are Razer Sila (April 2022), which was published without a CVE (see figure 10), and a remote code execution (RCE) vulnerability impacting VMware Workspace ONE, CVE-2022-22954, published the same month (see figure 11).


Figure 10. Exploiting a vulnerability in Razer Sila.


Figure 11. Exploiting a vulnerability in VMware Workspace ONE.

EnemyBot has also begun targeting content management systems (e.g. WordPress) by searching for vulnerabilities in various plugins, such as “Video Synchro PDF” (see figure 12).


Figure 12. EnemyBot targeting WordPress servers.

In the example shown in figure 12, notice that the malware elevates a local file inclusion (LFI) vulnerability into an RCE by injecting malicious code into ‘/proc/self/environ’. This method is not new; it was described as early as 2009. The malware uses the LFI to include ‘environ’ and passes the shell command in the User-Agent HTTP header.

Another example of how the malware uses this method is shown in figure 13. In this example the malware is exploiting a vulnerability in DBLTek GoIP.


Figure 13. Executing a shell command through an LFI vulnerability in DBLTek.

If an Android device is connected through USB, or an Android emulator is running on the machine, EnemyBot will try to infect it by executing shell commands (see figure 14).


Figure 14. EnemyBot “adb_infect” function to attack Android devices.

After infection, EnemyBot will wait for further commands from its C&C. However, in parallel it will also continue to propagate by scanning for additional vulnerable devices. Alien Labs has listed below the commands the bot can receive from its C&C (accurate as of the publishing of this article):

| Command | Action |
| --- | --- |
| SH | Execute a shell command |
| PING | Ping the server and wait for a command |
| LDSERVER | Change the loader server for the payload |
| TCPON | Turn on the sniffer |
| RSHELL | Create a reverse shell on the infected machine |
| TCPOFF | Turn off the sniffer |
| UDP | Start a UDP flood attack |
| TCP | Start a TCP flood attack |
| HTTP | Start an HTTP flood attack |
| HOLD | Start a TCP connection flooder |
| TLS | Start a TLS attack: begin a handshake without closing the socket |
| STD | Start a non-spoofed UDP flooder |
| DNS | Start a DNS flooder |
| SCANNER ON \| OFF | Start/stop the scanner that scans for and infects vulnerable devices |
| OVH | Start a DDoS attack against OVH |
| BLACKNURSE | Start an ICMP flooder |
| STOP | Stop ongoing attacks and kill child processes |
| ARK | Start a targeted attack on ARK: Survival Evolved video game servers |
| ADNS | Receive a target list from the C&C and start a DNS attack |
| ASSDP | Start an SSDP flood attack |

We have also listed the vulnerabilities EnemyBot currently uses. As mentioned, some of them have not yet been assigned a CVE (as of the publishing of this article):

| CVE number | Affected product / vulnerability |
| --- | --- |
| CVE-2021-44228, CVE-2021-45046 | Log4j RCE |
| CVE-2022-1388 | F5 BIG-IP RCE |
| No CVE (vulnerability published on 2022-02) | Adobe ColdFusion 11 RCE |
| CVE-2020-7961 | Liferay Portal – Java unmarshalling via JSONWS RCE |
| No CVE (vulnerability published on 2022-04) | PHP Scriptcase 9.7 RCE |
| CVE-2021-4039 | Zyxel NWA-1100-NH command injection |
| No CVE (vulnerability published on 2022-04) | Razer Sila command injection |
| CVE-2022-22947 | Spring Cloud Gateway code injection |
| CVE-2022-22954 | VMware Workspace ONE RCE |
| CVE-2021-36356, CVE-2021-35064 | Kramer VIAware RCE |
| No CVE (vulnerability published on 2022-03) | WordPress Video Synchro PDF plugin LFI |
| No CVE (vulnerability published on 2022-02) | DBLTek GoIP LFI |
| No CVE (vulnerability published on 2022-03) | WordPress Cab Fare Calculator plugin LFI |
| No CVE (vulnerability published on 2022-03) | Archeevo 5.0 LFI |
| CVE-2018-16763 | Fuel CMS 1.4.1 RCE |
| CVE-2020-5902 | F5 BIG-IP RCE |
| No CVE (vulnerability published in 2019) | ThinkPHP 5.x RCE |
| No CVE (vulnerability published in 2017) | Netgear DGN1000 1.1.00.48 ‘Setup.cgi’ RCE |
| CVE-2022-25075 | TOTOLink A3000RU command injection |
| CVE-2015-2051 | D-Link devices – HNAP SOAPAction header command injection |
| CVE-2014-9118 | ZHOME < S3.0.501 RCE |
| CVE-2017-18368 | Zyxel P660HN unauthenticated command injection |
| CVE-2020-17456 | Seowon SLR-120 router RCE |
| CVE-2018-10823 | D-Link DWR command injection (various models) |

Recommended actions

  1. Maintain minimal exposure to the Internet on Linux servers and IoT devices and use a properly configured firewall.
  2. Enable automatic updates to ensure your software has the latest security updates.
  3. Monitor network traffic, outbound port scans, and unreasonable bandwidth usage.

Conclusion

Keksec’s EnemyBot appears to be just starting to spread; however, due to the authors’ rapid updates, this botnet has the potential to become a major threat to IoT devices and web servers. The malware can quickly adopt one-day vulnerabilities (within days of a published proof of concept). This indicates that the Keksec group is well resourced and has developed the malware to take advantage of vulnerabilities before they are patched, increasing the speed and scale at which it can spread.

Detection methods

The following associated detection methods are in use by Alien Labs. They can be used by readers to tune or deploy detections in their own environments or for aiding additional research.

SURICATA IDS SIGNATURES

Log4j sids: 2018202, 2018203, 2034647, 2034648, 2034649, 2034650, 2034651, 2034652, 2034653, 2034654, 2034655, 2034656, 2034657, 2034658, 2034659, 2034660, 2034661, 2034662, 2034663, 2034664, 2034665, 2034666, 2034667, 2034668, 2034671, 2034672, 2034673, 2034674, 2034676, 2034699, 2034700, 2034701, 2034702, 2034703, 2034706, 2034707, 2034708, 2034709, 2034710, 2034711, 2034712, 2034713, 2034714, 2034715, 2034716, 2034717, 2034723, 2034743, 2034744, 2034747, 2034748, 2034749, 2034750, 2034751, 2034755, 2034757, 2034758, 2034759, 2034760, 2034761, 2034762, 2034763, 2034764, 2034765, 2034766, 2034767, 2034768, 2034781, 2034782, 2034783, 2034784, 2034785, 2034786, 2034787, 2034788, 2034789, 2034790, 2034791, 2034792, 2034793, 2034794, 2034795, 2034796, 2034797, 2034798, 2034799, 2034800, 2034801, 2034802, 2034803, 2034804, 2034805, 2034806, 2034807, 2034808, 2034809, 2034810, 2034811, 2034819, 2034820, 2034831, 2034834, 2034835, 2034836, 2034839, 2034886, 2034887, 2034888, 2034889, 2034890, 2838340, 2847596, 4002714, 4002715

4001913: AV EXPLOIT LifeRay RCE (CVE-2020-7961)

4001943: AV EXPLOIT Liferay Portal Java Unmarshalling RCE (CVE-2020-7961)

4002589: AV EXPLOIT LifeRay Remote Code Execution – update-column (CVE-2020-7961)

2031318: ET CURRENT_EVENTS 401TRG Liferay RCE (CVE-2020-7961)

2031592: ET WEB_SPECIFIC_APPS Liferay Unauthenticated RCE via JSONWS Inbound (CVE-2020-7961)

2035955: ET EXPLOIT Razer Sila Router – Command Injection Attempt Inbound (No CVE)

2035956: ET EXPLOIT Razer Sila Router – LFI Attempt Inbound (No CVE)

2035380: ET EXPLOIT VMware Spring Cloud Gateway Code Injection (CVE-2022-22947) (set)

2035381: ET EXPLOIT VMware Spring Cloud Gateway Code Injection (CVE-2022-22947)

2035876: ET EXPLOIT VMWare Server-side Template Injection RCE (CVE-2022-22954)

2035875: ET EXPLOIT VMWare Server-side Template Injection RCE (CVE-2022-22954)

2035874: ET EXPLOIT VMWare Server-side Template Injection RCE (CVE-2022-22954)

2036416: ET EXPLOIT Possible VMware Workspace ONE Access RCE via Server-Side Template Injection Inbound (CVE-2022-22954)

4002364: AV EXPLOIT Fuel CMS RCE (CVE-2018-16763)

2030469: ET EXPLOIT F5 TMUI RCE vulnerability CVE-2020-5902 Attempt M1

2030483: ET EXPLOIT F5 TMUI RCE vulnerability CVE-2020-5902 Attempt M2

2836503: ETPRO EXPLOIT Attempted THINKPHP < 5.2.x RCE Inbound

2836504: ETPRO EXPLOIT Attempted THINKPHP < 5.2.x RCE Outbound

2836633: ETPRO EXPLOIT BlackSquid Failed ThinkPHP Payload Inbound

2026731: ET WEB_SERVER ThinkPHP RCE Exploitation Attempt

2024916: ET EXPLOIT Netgear DGN Remote Command Execution

2029215: ET EXPLOIT Netgear DGN1000/DGN2200 Unauthenticated Command Execution Outbound

2034576: ET EXPLOIT Netgear DGN Remote Code Execution

2035746: ET EXPLOIT Totolink – Command Injection Attempt Inbound (CVE-2022-25075)

4001488: AV TROJAN Mirai Outbound Exploit Scan, D-Link HNAP RCE (CVE-2015-2051)

2034491: ET EXPLOIT D-Link HNAP SOAPAction Command Injection (CVE-2015-2051)

4000095: AV EXPLOIT Unauthenticated Command Injection (ZyXEL P660HN-T v1)

4002327: AV TROJAN Mirai faulty Zyxel exploit attempt

2027092: ET EXPLOIT Possible ZyXEL P660HN-T v1 RCE

4002226: AV EXPLOIT Seowon Router RCE (CVE-2020-17456)

2035950: ET EXPLOIT SEOWON INTECH SLC-130/SLR-120S RCE Inbound M1 (CVE-2020-17456)

2035951: ET EXPLOIT SEOWON INTECH SLC-130/SLR-120S RCE Inbound M2 (CVE-2020-17456)

2035953: ET EXPLOIT D-Link DWR Command Injection Inbound (CVE-2018-10823)

 

AGENT SIGNATURES

Java Process Spawning Scripting Process

 

Java Process Spawning WMIC

Java Process Spawning Scripting Process via Commandline (For Jenkins servers)

Suspicious process executed by Jenkins Groovy scripts (For Jenkins servers)

Suspicious command executed by a Java listening process (For Linux servers)

Associated indicators (IOCs)

The following technical indicators are associated with the reported intelligence. A list of indicators is also available in the OTX Pulse. Please note, the pulse may include other activities related to but outside the scope of this report.

| TYPE | INDICATOR | DESCRIPTION |
| --- | --- | --- |
| IP ADDRESS | 80.94.92[.]38 | Malware C&C |
| SHA256 | 7c0fe3841af72d55b55bc248167665da5a9036c972acb9a9ac0a7a21db016cc6 | Malware hash |
| SHA256 | 2abf6060c8a61d7379adfb8218b56003765c1a1e701b346556ca5d53068892a5 | Malware hash |
| SHA256 | 7785efeeb495ab10414e1f7e4850d248eddce6be91738d515e8b90d344ed820d | Malware hash |
| SHA256 | 8e711f38a80a396bd4dacef1dc9ff6c8e32b9b6d37075cea2bbef6973deb9e68 | Malware hash |
| SHA256 | 31a9c513a5292912720a4bcc6bd4918fc7afcd4a0b60ef9822f5c7bd861c19b8 | Malware hash |
| SHA256 | 139e1b14d3062881849eb2dcfe10b96ee3acdbd1387de82e73da7d3d921ed806 | Malware hash |
| SHA256 | 4bd6e530db1c7ed7610398efa249f9c236d7863b40606d779519ac4ccb89767f | Malware hash |
| SHA256 | 7a2a5da50e87bb413375ecf12b0be71aea4e21120c0c2447d678ef73c88b3ba0 | Malware hash |
| SHA256 | ab203b50226f252c6b3ce2dd57b16c3a22033cd62a42076d09c9b104f67a3bc9 | Malware hash |
| SHA256 | 70674c30ed3cf8fc1f8a2b9ecc2e15022f55ab9634d70ea3ba5e2e96cc1e00a0 | Malware hash |
| SHA256 | f4f9252eac23bbadcbd3cf1d1cada375cb839020ccb0a4e1c49c86a07ce40e1e | Malware hash |
| SHA256 | 6a7242683122a3d4507bb0f0b6e7abf8acef4b5ab8ecf11c4b0ebdbded83e7aa | Malware hash |
| SHA256 | b63e841ded736bca23097e91f1f04d44a3f3fdd98878e9ef2a015a09950775c8 | Malware hash |
| SHA256 | 4869c3d443bae76b20758f297eb3110e316396e17d95511483b99df5e7689fa0 | Malware hash |
| SHA256 | cdf2c0c68b5f8f20af448142fd89f5980c9570033fe2e9793a15fdfdadac1281 | Malware hash |

 


Mapped to MITRE ATT&CK

The findings of this report are mapped to the following MITRE ATT&CK Matrix techniques:

  • TA0001: Initial Access:
    • T1190: Exploit Public-Facing Application
  • TA0008: Lateral Movement:
    • T1210: Exploitation of Remote Services
    • T1021: Remote Services
  • TA0011: Command and Control
    • T1132: Data Encoding
    • T1001: Data Obfuscation
    • T1090: Proxy:
      • T1090.003: Multi-hop Proxy

The post Rapidly evolving IoT malware EnemyBot now targeting Content Management System servers and Android devices appeared first on Cybersecurity Insiders.

Stories from the SOC is a blog series that describes recent real-world security incident investigations conducted and reported by the AT&T SOC analyst team for AT&T Managed Extended Detection and Response customers.

Executive summary

AT&T Alien Labs does a tremendous job of developing and maintaining, through the Open Threat Exchange (OTX), a database of Indicators of Compromise (IOCs) that have been observed in at least one customer environment. Containing over 70 million reference points that cover an array of attack types, techniques, and industries, OTX provides an additional resource for AT&T Security Operations Center (SOC) analysts to utilize in the event that an unrecognized event takes place on a customer’s network. Not only can an analyst browse external Open Source Intelligence (OSINT), but there is also a repository of previously identified IOCs that can be referenced to point out any sort of pattern or commonality. SOC analysts can also add newly observed IOCs or remove 'out of date' indicators that are no longer a threat to the customers we serve.

The AT&T Managed Threat Detection and Response (MTDR) SOC detected a successful connection between a customer asset and an IOC with a known bad reputation in both OSINT and OTX. Signatures provided by OTX associated the IOC with the 'Cobalt Strike' malware family, suggesting possible C2 beaconing activity involving a customer asset. Upon further investigation, it was determined that the activity was indeed suspicious; however, given the location of the subnet involved, it proved to be benign in this specific case.

Investigation

Initial alarm review

Indicators of Compromise (IOC)

From the initial breakdown of the alarm, the analysts knew that a connection was 'Allowed' from a customer-owned IP to the domain 'tomatoreach[.]com' and external IP '192.243.59[.]12'. The known OTX reputation of the URL and IP is what triggered the alarm. External OSINT on the two observed IOCs confirmed their suspicious reputation.


Expanded investigation

Events search

Event logs of the actual alarm do not reveal any additional IOCs or supporting information as they pertain to the activity.


Event deep dive

Upon further investigation into the involved user around the time of the event, it was determined that the user had browsed to an additional 20+ suspicious IOCs. The subject matter of these newly identified domains varies from content streaming to blog posts. Each new IOC was presented with the investigation in hopes of correlating any unrecognized activity.


Response

Building the investigation

Because the observed IOCs have a reputation both on OTX and externally, this alarm looked to be a legitimate concern for the customer. It was originally received with a 'High' severity. After additional review, the investigation was opened with a 'Medium' severity because there were no obvious malicious actions taking place with the involved user beyond the browsing of suspicious websites, which may not be authorized under company policy. All supporting evidence was included in the investigation, and a recommendation for remediation was also provided.


Customer interaction

Per the customer's Incident Response Plan (IRP), a phone call was not required when this investigation was opened. Once the investigation was addressed, the customer confirmed that the activity was not within the scope of normal business. However, after identifying the user and host involved, the customer established that the subnet was a “Guest” network authorized for personal use. MTDR's full breakdown of the user's web traffic was valuable and aided in the effortless closing of this investigation.


The post Suspicious behavior: OTX Indicator of Compromise – Detection & response appeared first on Cybersecurity Insiders.

The partnership between these two market-leading vendors enables MSSPs around the world to fast-track cutting-edge MXDR services.

AT&T, the leader in network and managed security services, and SentinelOne, the leader in next generation, autonomous endpoint protection, today announced a strategic alliance to help prevent cybercrime. The partnership focuses on providing managed security service providers (MSSPs) around the world with a clear path to providing top-tier managed extended detection and response (MXDR) capabilities for customers.

“Managed XDR is a lot different than the conventional detection and response systems in the sense that it enables members of our partner program to build solutions on the platforms their customers already use in order to make the best out of their investments,” says Rakesh Shah, Vice President of Product at AT&T Cybersecurity. “The new alliance combines AT&T USM Anywhere network threat detection capabilities with SentinelOne endpoint protection. Together, these two security platforms provide industry-leading network and endpoint threat detection and response solutions that will enable MSSPs to be successful at providing their end customers with world-class security.”

“AT&T and SentinelOne help MSSPs enter the era of XDR, protecting more surfaces at speeds and scales previously not possible with humans alone. SentinelOne’s autonomous technology coupled with AT&T’s integrated network technologies and services enables MSSPs to reduce risk and boost protection for their customers,” says Mike Petronaci, VP Product at SentinelOne.

The alliance streamlines XDR attainment for partner program members that provide managed security services for a range of organizations. An ideal customer for this MXDR solution would be an MSSP managing small-to-midsized enterprises. Those enterprises may be interested in outsourcing managed cybersecurity services because they do not have the in-house resources to deliver the security results they need. Larger enterprises that do not want to outsource their security completely but are looking for some help could also use this MXDR solution managed by one of our partners.

The tight integration this alliance brings provides MSSP partners with ready access to the award-winning USM Anywhere and SentinelOne platforms. In addition, for MSSPs that acquire SentinelOne endpoint protection through the partner program, AT&T will manage hundreds of additional indicators of compromise through a unique integration within USM Anywhere that streams uniquely tailored security telemetry from the SentinelOne Deep Visibility platform.


The post AT&T Cybersecurity’s Partner Program and SentinelOne enter managed XDR market with robust alliance appeared first on Cybersecurity Insiders.

Perspective:

While there is an alphabet soup of compliance requirements and security standards frameworks, this post will focus on the two prevalent certifications frequently discussed for SaaS and B2B businesses. Security and compliance qualifications, like SOC 2 and ISO 27001, demonstrate that you apply good practices in your business. They are often classified as “security” and thought of as the technical security of your systems. However, they are broader, focusing on organizational practices supporting your security and other objectives. That includes availability (system resilience), the confidentiality of data, privacy for your users, integrity of the system processing objectives, scalable process design, and operational readiness to support significant business customers.

So, before we get into which one you would pick, how, and why, let's quickly get aligned on why these certifications and attestations are relevant from a business standpoint.

Background and benefits:

It helps establish brand trust and enable sales: Your customers, in looking to use your software, consider both your product and your capabilities as an organization. These qualifications play an essential role in demonstrating your business is “enterprise-ready,” providing a reliable service and keeping their data secure.

It helps demonstrate compliance and establish a baseline for risk management: These certifications often become mandates from procurement teams to demonstrate supply chain security. Or they can be used to demonstrate compliance with regulations and satisfy regulatory requirements.

It helps reduce overhead and time responding to due diligence questionnaires: A significant pain point for software companies is the relentless due diligence in serving enterprise customers. Hundreds, even thousands of “security questions” and vendor audits are common. Standards like SOC 2 and ISO 27001 are designed to have a single independent audit process that satisfies broad end-user requirements.

It helps streamline and improve business operations: You adopt “good” or “best” industry practices by going through these certifications. Investors, regulators, partners, Board, the management team, and even employees benefit from implementing and validating your alignment to standards. It provides peace of mind that you are improving your security posture, helps address compliance requirements, and strengthens your essential operational practices.

Which standard is best for these goals? 

Each standard has different requirements, nuances in how they are applied, and perceptions in the market. This impacts which may be best for your business and how they help you achieve the goals above.

Below, we'll compare the two most common standards, SOC and ISO.

Often, we see that SOC 2 reports are widely adopted and acknowledged. Many procurement and security departments may require a SOC 2 report before approving a SaaS vendor for use. If your business handles any customer data, getting a SOC 2 report will help show your customers and users that you take data security and protection seriously. Healthcare, retail, financial services, SaaS, cloud storage, and computing companies are just some of the businesses that will benefit from SOC 2 compliance.

What is a SOC 2 certification?

SOC 2 is based on five Trust Services Criteria (TSC):

  • Security – making sure that sensitive information and systems are protected from security risks and that all predefined security procedures are being followed
  • Availability – ensuring that all systems are available and minimizing downtime to protect sensitive data
  • Processing integrity – verifying data integrity during processing and before authorization
  • Confidentiality – allowing information access only to those approved and authorized to receive it
  • Privacy – managing personal and private information with integrity and care

SOC 2 examinations were designed by the American Institute of Certified Public Accountants (AICPA) to help organizations protect their data and the privacy of their clients' information. A SOC 2 assessment focuses on an organization's security controls related to overall services, operations, and cybersecurity compliance. SOC 2 examinations can be completed for organizations of various sizes and across different sectors.

Businesses that handle customer data proactively perform SOC 2 audits to ensure they meet all the criteria. If the business passes the audit, the outside auditor issues a SOC 2 report showing that the business complies with the requirements. There are two types of SOC 2 audits: Type 1 and Type 2. The difference between them is simple: a Type 1 audit looks at the design of a specific security process or procedure at one point in time, while a Type 2 audit assesses how effectively that process operates over a period of time.

What Is ISO/IEC 27001:2013?

The ISO/IEC 27001 is an international information security standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It is part of the ISO/IEC 27000 family of standards. It offers a framework to help organizations establish, implement, operate, monitor, review, maintain, and continually improve their information security management systems.

ISO 27001 details the specification for an Information Security Management System (ISMS) to help organizations address people, processes, and technology with respect to data security, protecting the confidentiality, integrity, and availability of their information assets. The ISO 27001 framework is based on risk assessment and risk management: compliance involves identifying information security risks and implementing appropriate security controls to mitigate them. The family also includes ISO 27017 and ISO 27018, which demonstrate cloud security and privacy protections, and ISO 27701 (a privacy information management system), which can be pursued as an extension to ISO 27001.

The intent of information protection – a common thread between both SOC and ISO 27001.

Both SOC 2 and ISO 27001 are similar in that they are designed to instill trust with clients that you are protecting their data. If you look at their principles, they each cover essential dimensions of securing information, such as confidentiality, integrity, and availability.

The good news from this comparison is that both frameworks are broadly recognized certifications that prove to clients that you take security seriously. The great news is that if you complete one certification, you are well along the path to achieving the other. These attestations and certifications are reputable and typically accepted by clients as proof that you have proper security. If you sell to organizations in the United States, they will likely accept either SOC 2 or ISO 27001 as a third-party attestation to your InfoSec program. Both are equally “horizontal” in that most industries accept them.

There are several key differences between ISO 27001 and SOC 2, but the main difference is scope. ISO 27001's purpose is to provide a framework for how organizations should manage their data and to prove they have a complete, working ISMS in place. In contrast, SOC 2 demonstrates that an organization has implemented essential data security controls.

Which one should you go with?

Whichever certification you decide to do first, the odds are that as your business grows, you will eventually have to complete both to meet the requirements of your global clientele. The encouraging news is that there are more accessible, faster, and more cost-effective methods to leverage your work on one certification to reduce the work needed for subsequent certifications. We suggest that you approach compliance with a proactive mindset, as it will save you time and money in the long run.

The post Security frameworks / attestations and certifications: Which one is the right fit for your organization? appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

If you don’t think API security is that important, think again. Last year, 91% of organizations had an API security incident. The proliferation of SOAP and REST APIs makes it easy for organizations to tailor their application ecosystems. But APIs also hold the keys to all of a company’s data, and as data-centric projects become more in demand, the likelihood of a targeted API attack campaign increases.

Experts agree that organizations that keep their API ecosystem open should also take steps to prevent ransomware attacks and protect data from unauthorized users. Here is a list of 12 tips to help protect your API ecosystem and avoid unnecessary security risks. 

Encryption

The best place to start when it comes to any cybersecurity protocol is encryption. Encryption converts your protected information into ciphertext that can only be read by users holding the appropriate key. Without the encryption key, unauthorized users cannot access encrypted data. This ensures that sensitive information stays far from prying eyes.

In today’s digital business environment, everything you do should be encrypted. Routing your network connection through a VPN or Tor adds an encrypted layer between you and the services you reach, and encrypting connections at every stage helps prevent unwanted attacks. Customer-facing activities, vendor and third-party applications, and internal communications should all be protected with TLS encryption.
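As a small, hedged illustration of both layers of protection, the sketch below makes a TLS-verified API call and then encrypts the response at rest using the cryptography library's Fernet primitive; the URL is a placeholder and key handling is deliberately simplified.

```python
# Sketch: TLS for data in transit, symmetric encryption for data at rest.
# The endpoint is a placeholder; store the key in a secrets manager.
import requests
from cryptography.fernet import Fernet

# requests verifies TLS certificates by default (verify=True).
resp = requests.get("https://api.example.com/data", timeout=10)
resp.raise_for_status()

key = Fernet.generate_key()                      # keep out of source control
ciphertext = Fernet(key).encrypt(resp.content)   # safe to persist to disk
plaintext = Fernet(key).decrypt(ciphertext)      # only readable with the key
```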

Authentication

Authentication means validating that a user or a machine is being truthful about their identity. Identifying each user that accesses your APIs is crucial so that only authorized users can see your company’s most sensitive information. 

There are many ways to authenticate API users (a minimal key-check sketch follows this list):

  • HTTP basic authentication
  • API authentication key configuration
  • IdP server tokens
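As a hedged sketch of the first approach, the Flask example below rejects any request that lacks a valid key in an X-API-Key header; the header name, route, and in-memory key store are illustrative rather than a standard.

```python
# Minimal API-key gate for a Flask app. Keys would normally come from a
# vault or database, not a hard-coded set.
import hmac
from flask import Flask, abort, request

app = Flask(__name__)
VALID_KEYS = {"demo-key-1"}  # illustrative only

@app.before_request
def require_api_key():
    supplied = request.headers.get("X-API-Key", "")
    # hmac.compare_digest gives a constant-time comparison, which avoids
    # leaking key material through response-timing differences.
    if not any(hmac.compare_digest(supplied, key) for key in VALID_KEYS):
        abort(401)

@app.route("/data")
def data():
    return {"ok": True}
```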

OAuth & OpenID Connect

A great API has the ability to delegate authentication protocols. Delegating authorizations and authentication of APIs to an IdP can help make better use of resources and keep your API more secure. 

OAuth 2 is what saves people from having to remember thousands of passwords for numerous accounts across the internet; it allows users to connect via trusted credentials through another provider (as when you use Facebook, Apple, or Google to log in or create an account online).

This concept is also applied to API security with IdP tokens. Instead of users inputting their credentials, they access the API with a token provided by a third-party server. Plus, you can leverage the OpenId Connect standard by adding an identity layer on top of OAuth. 
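A hedged sketch of that token flow, using the OAuth 2 client-credentials grant: the client first obtains a token from the IdP and then presents it to the API instead of user credentials. All endpoints and client identifiers are placeholders.

```python
# Token-based API access: no user credentials ever reach the API itself.
import requests

# Step 1: exchange client credentials for an access token at the IdP.
token_resp = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-client",
        "client_secret": "...",  # from a secrets manager in practice
    },
    timeout=10,
)
access_token = token_resp.json()["access_token"]

# Step 2: call the API with the bearer token.
api_resp = requests.get(
    "https://api.example.com/v1/reports",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
```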

Audit, log, and version

Without adequate API monitoring, there is no way organizations can stop insidious attacks. Teams should continuously monitor the API and have an organized and repeatable troubleshooting process in place. It’s also important that companies audit and log data on the server and retain it as a resource in case of an incident.

A monitoring dashboard can help track API consumption and enhance monitoring practices. And don’t forget to version all APIs and deprecate old versions when appropriate.

Stay private

Organizations should be overly cautious when it comes to vulnerabilities and privacy since data is one of the most valuable and sought-after business commodities. Ensure error messages display as little information as possible, keep IP addresses private, and use a secure email gateway for all internal and external messaging. Consider hiring a dedicated development team that has only necessary access and use an IP whitelist and blacklist to restrict access to resources. 

Consider your infrastructure

Without a good infrastructure and security network, it’s impossible to keep your API secure. Make sure that your servers and software are up to date and ensure that regular maintenance is done to consolidate resources. You should also ensure that third-party service providers use the most up-to-date versioning and encryption protocols. 

Throttling and quotas

DDoS attacks can block legitimate users from their dedicated resources, including APIs. By restricting access to the API and application, organizations can ensure that no one abuses them. Setting throttling limits and quotas is a great way to blunt attacks from numerous sources, such as a DDoS attack, and to prevent overloading your system with unnecessary requests.
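To illustrate the idea, here is a minimal in-process token bucket; the rate and burst values are examples, and production systems usually enforce limits at the API gateway rather than in application code.

```python
# Token-bucket throttling: tokens refill at a fixed rate, each request
# spends one, and an empty bucket means the request should be rejected
# (typically with HTTP 429 Too Many Requests).
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate           # tokens added per second
        self.capacity = capacity   # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 requests/second, burst of 10
```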

Data validation

All data must be validated according to your administrative standards to prevent malicious code from being injected into your API. Check every piece of data that comes through your servers and reject anything unexpected, unexpectedly large, or from an unknown user. JSON and XML schema validation can help check your parameters and prevent attacks.
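For example, here is a hedged sketch of JSON Schema validation at the API boundary; the schema is illustrative and would be tailored to your own payloads.

```python
# Reject any payload that doesn't match the declared schema before it
# reaches business logic. Requires the jsonschema package.
from jsonschema import ValidationError, validate

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "item_id": {"type": "integer", "minimum": 1},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
    },
    "required": ["item_id", "quantity"],
    "additionalProperties": False,  # anything unexpected is rejected
}

def handle_order(payload: dict) -> None:
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
    except ValidationError as exc:
        raise ValueError(f"rejected: {exc.message}")
```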

OWASP Top 10

Staying up on the OWASP (Open Web Application Security Project) Top 10 can help teams implement proactive measures to protect the API from known vulnerabilities. The OWASP Top 10 lists the 10 worst vulnerabilities according to their exploitability and impact. Organizations should regularly review their systems and secure all OWASP vulnerabilities. 

API firewalling

An API firewall makes it more difficult for hackers to exploit API vulnerabilities. API firewalls should be configured into two layers. The first DMZ layer has an API firewall for basic security functions, including checking for SQL injections, message size, and other HTTP security activities. Then the message gets forwarded to the second LAN layer with more advanced security functions. 
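As a hedged illustration of one first-layer check, the WSGI middleware below rejects oversized request bodies before they reach the application; the 1 MB limit is an example value, and a real API firewall layers many more checks (SQL injection patterns, protocol conformance, and so on) on top.

```python
# One basic firewall-style check: cap the request body size.
MAX_BODY = 1_000_000  # 1 MB, example value

class MaxBodySizeMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = MAX_BODY + 1  # malformed header: treat as oversized
        if length > MAX_BODY:
            start_response("413 Payload Too Large",
                           [("Content-Type", "text/plain")])
            return [b"request body too large"]
        return self.app(environ, start_response)
```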

API gateway management

Using an API gateway or API management solution can help save organizations a lot of time and effort when successfully implementing an API security plan. An API gateway helps keep data secure with tools to help monitor and control your API access. 

In addition to streamlined API security implementation, an API management solution can help you make sense of API data to power future business decisions. Plus, with the help of creative graphic design, many API management solutions and gateways offer a simple UI with easy navigation. 

Call security experts

Although cybersecurity positions are popping up worldwide, many organizations are having difficulty finding talented experts with the right security credentials to fill in the security gaps. There are ways to attract cybersecurity professionals to your company, but cybersecurity can’t wait for the right candidate. 

Call the security experts at AT&T Cybersecurity to help you manage your network and API security. Plus, you can use ICAP (Internet Content Adaptation Protocol) servers to scan the payload of your APIs.

Final thoughts

As digital tools and technologies continue to evolve, so will hackers’ attempts to exploit crucial business data. Putting some basic API security best practices in place will help prevent attacks in the future and contribute to a healthy IT policy management lifecycle. 

The best way to ensure that your APIs are safe is to create a company-wide mindset of cyber hygiene through continuous training and encouraging DevSecOps collaborative projects. However, organizations can secure their digital experiences and important data by following these simple tips to enhance their API security. 

The post API security: 12 essential best practices to keep your data & APIs safe appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

“Ransomware has become the enemy of the day; the threat that was first feared on Pennsylvania Avenue and subsequently detested on Wall Street is now the topic of conversation on Main Street.”

Frank Dickson, Program Vice President, Cybersecurity Products at IDC

In the first installment of this blog series (Endpoint Security and Remote Work), we highlighted the need to provide comprehensive data protections to both traditional and mobile endpoints as an enabler of remote work.  In this second chapter, we’ll expand on the importance of endpoint security as one of many key elements for defining an organization’s security posture as it relates to arguably the most relevant cybersecurity issue of the day.  

Cue the ominous music and shadowy lighting, as that is likely the mood for most cybersecurity professionals when considering the topic of ransomware. To the dismay of corporate executives, government and education leaders, and small business owners, ransomware is pervasive and evolving quickly. As evidence, a recent report indicated that roughly half of all state and local governments worldwide were victims of a ransomware attack in 2021.

However, there are important steps that can be taken along the path to digital transformation to minimize the risk associated to these attacks.  As companies consider the evolution of their strategy for combating ransomware, there are five key strategies to help with reducing the risks inherent to an attack:

1. Prevent phishing attacks and access to malicious websites

Companies must be able to inspect all Internet-bound traffic from every endpoint, especially mobile, and block malicious connections. This challenge is significantly more complex than simply inspecting corporate email. In fact, because bad actors are highly tuned to user behavior, most threat campaigns generally include both a traditional and a mobile phishing component.

Bad actors are highly tuned to user behavior as they look to perpetuate their attacks, and SMS/messaging apps provide considerably higher response rates. To quantify: SMS has a 98% open rate and an average response time of just 90 seconds, while email has a 20% open rate and a 1.5-hour response time, which helps explain why hackers have pivoted to mobile to initiate ransomware attacks.

As a result, Secure Web Gateway (SWG) and Mobile Endpoint Security (MES) solutions need to work in concert to secure every connection to the Internet from any device. Both SWG and MES perform similar functions specific to inspecting web traffic, but they do it for different form factors and operating systems: the data protections of SWG are primarily available on traditional endpoints (Windows, macOS, etc.), whereas MES addresses the mobile ecosystem with protections for iOS and Android. Because ransomware can be initiated in many ways, including but not limited to email, SMS, QR codes, and social media, every organization must employ tools to detect and mitigate threats that target all endpoints.

2. Prevent privilege escalation and application misconfigurations

Another tell-tale sign of a possible ransomware attack is the escalation of privileges by a user within the organization. Hackers will use the compromised credentials of a user to access systems and disable security functions necessary to execute their attack. The IT organization's ability to recognize when a user's privileges have been altered is made possible through UEBA (User and Entity Behavior Analytics). Many times, hackers will modify or disable security functions to gain easier access and more dwell time within an organization to identify more critical systems and data to include in their attack. Abnormal behaviors such as privilege escalation or “impossible travel” are early indicators of ransomware attacks, and identifying them is a key aspect of any UEBA solution. For example, if a user logs into their SaaS app in Dallas and an hour later in Moscow, your security staff need to be aware, and you must have tools to automate the necessary response, starting with blocking access for the user.

3. Prevent lateral movement across applications

After the ransomware attack has been initiated, the next key aspect of the attack is to obtain access to other systems and tools with high value data that can be leveraged to increase the ransom.  Therefore, businesses should enable segmentation at the application level to prevent lateral movement.  Unfortunately, with traditional VPNs, access management can be very challenging.  If a hacker were to compromise a credential and access company resources via the VPN, every system accessible via the VPN could now be available to expand the scope of the attack. 

Current security tools such as Zero Trust Network Access prevent that lateral movement by authenticating the user and his/her privileges on an app-by-app basis. That functionality can be extended by utilizing context to manage the permissions of the user based on many factors, such as which device is being used for the request (managed vs. unmanaged), the health status of the device, time of day and location, file type, data classification such as confidential/classified, user activity such as upload/download, and more. A real-world example would allow a user view-only access to non-sensitive corporate content from a personal tablet to perform their job, but would require the data be accessed via a managed device for any action such as sharing or downloading that content.

4. Minimize the risk of unauthorized access to private applications

It is essential for companies to ensure that corporate/proprietary apps and servers aren’t discoverable on the Internet.  Authorized users should only get access to corporate information using adaptive access policies that are based on users’ and devices’ context.  Whether these applications reside in private data centers or IaaS environments (AWS, Azure, GCP, etc.), the same policies for accessing data should be consistent. Ideally, they are managed by the same policy engine to simplify administration of an organization’s data protections.  One of the most difficult challenges for security teams in deploying Zero Trust is the process of creating policy.  It can take months or even years to tune false positives and negatives out of a DLP policy, so a unified platform that simplifies the management of those policies across private apps, SaaS, and the Internet is absolutely critical. 

5. Detect data exfiltration and alterations

A recent trend among ransomware attacks has been the exfiltration of data in addition to the encryption of critical data. In these cases, the stolen data was used as leverage against the victim to encourage payment of the ransom. LockBit 2.0 and Conti are two ransomware gangs notorious for stealing data in order to monetize it and, at the same time, use it to damage the reputation of their targets.

Hence, companies must be able to leverage the context and content-aware signals of their data to help mitigate malicious downloads or modifications of their data.  At the same time, it is just as important that these signals travel with the files throughout their lifecycle so that the data can be encrypted when accessed via an unauthorized user, thereby preventing them from being able to view the content.  Enterprise Data Rights Management and DLP together can provide this functionality that serves as an important toolset to combat ransomware attacks by minimizing the value of the data that is exfiltrated. 

It should also be noted that this functionality is just as important when considering the impact to compliance and collaboration.  Historically, collaboration has been thought to increase security risk, but the ability to provide data protections based on data classification can dramatically improve a company’s ability to collaborate securely while maximizing productivity.

As stated above, there is considerably more to preventing ransomware attacks than good endpoint security hygiene. With the reality of remote work and the adoption of cloud, the task is significantly more challenging, but not impossible. The adoption of Zero Trust and a data protection platform that includes critical capabilities (UEBA, EDRM, DLP, etc.) enables companies to provide contextually aware protections and understand who is accessing data and what actions are being taken: key indicators that can be used to identify and stop ransomware attacks before they occur.

For more information regarding how to protect your business from the perils of ransomware, please reach out to your assigned AT&T account manager or click here to learn more about how Lookout’s platform helps safeguard your data.

This is part two of a three-part series, written by an independent guest blogger. Please keep an eye out for the last blog in this series which will focus on the need to extend Endpoint Detection and Response capabilities to mobile.

The post 5 ways to prevent Ransomware attacks appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

It’s well known that there’s a pervasive cybersecurity skills shortage. The problem has multiple ramifications. Current cybersecurity teams often deal with consistently heavy workloads and don’t have time to deal with all issues appropriately. The skills shortage also means people who need cybersecurity talent may find it takes much longer than expected to find qualified candidates.

Most people agree there’s no single way to address the issue and no fast fix. However, some individuals wonder if global recruitment could be an option, particularly after human resources managers establish that there aren’t enough suitable candidates locally.

Current cybersecurity professionals planning career changes

A June 2022 study from Trellix revealed that 30% of current cybersecurity professionals are thinking about changing their careers. Gathering from a wider candidate pool by recruiting people on a global level could increase the number of overall options a company has when trying to fill open positions.

However, it’s essential to learn what’s causing cybersecurity professionals to want to leave the field. Otherwise, newly hired candidates may not stick around for as long as their employers hope. It’s also important to note that the Trellix poll surveyed people from numerous countries, including the United States, Canada, India, France, and Japan.

Another takeaway from the study was that 91% of people believed there should be more efforts to increase diversity in the cybersecurity sector. The study showed that most employees in the industry now are straight, white, and male. If more people from minority groups feel welcomed and accepted while working in cybersecurity roles, they’ll be more likely to enter the field and stay in it for the long term.

Appealing perks help attract workers

Some companies have already invested in global recruitment efforts to help close cybersecurity skills gaps.

For example, Microsoft recently expanded its cybersecurity skills campaign to an additional 23 countries – including Ireland, Israel, Norway, Poland, and South Africa. All the places were identified as under high threat of cybersecurity attacks. Microsoft representatives have numerous plans to get people the knowledge they need to enter the workforce confidently and fill cybersecurity roles.

The hiring initiative also includes some Asia-Pacific (APAC) countries. That’s significant since statistics suggest it will face a labor shortage of 47 million people across all job types by 2030.

Something human resources leaders must keep in mind before hiring cybersecurity professionals is that the open positions should include attractive benefits packages that are better than or on par with what other companies in the sector provide.

Since cybersecurity experts are in such high demand, they enjoy the luxury of being picky about which jobs they consider and how long they stay in them. Even though cultural differences exist, there are some similarities in what most people look for in their job prospects. Competitive salaries and generous paid time off are among the many examples.

Shortfalls persist despite 700,000 workforce entrants

Global research published in 2021 by (ISC)² found that 700,000 new people had joined the cybersecurity workforce since 2020. However, the study also showed that the worldwide pool of professionals must grow by 65% to keep pace with demand.

The study’s results also suggested that one possibility is to recruit people who don’t have cybersecurity backgrounds. The data indicated that 17% of respondents came into the field from unrelated sectors.

Some experts suggest tapping into specific population groups as a practical way to address the shortage. For example, people with autism and ADHD often have skills that make them well suited for the cybersecurity industry.

Global recruitment is not an all-encompassing solution

Hiring people from around the world could close skill gaps in situations where it’s evident there’s a lack of talent wherever a company primarily operates. However, as the details above highlight, the skills shortage is a widespread issue.

Accepting applications from a global talent pool could also increase administrative tasks when a company is ready to hire. That’s partially due to the higher number of applications to evaluate. Additionally, there are other necessities associated with aspects like visa applications or time zone specifics if an international new hire will work remotely.

People in the IT sector should ideally see global recruitment as one of many possibilities for reducing the cybersecurity skills gap severity. It’s worth consideration, but not at the expense of ignoring other strategies.

The post Can global recruitment solve the cybersecurity hiring problem? appeared first on Cybersecurity Insiders.

In the previous article, we covered the release process and how to secure the parts and components of the process. The deploy and operate processes are where developers, IT, and security meet in a coordinated handoff for sending an application into production.

The traditional handoff of an application is siloed: developers send installation instructions to IT, IT provisions the physical hardware and installs the application, and security scans the application after it is up and running. A missed instruction could cause inconsistency between environments, and a system might go unscanned by security, leaving the application vulnerable to attack. The DevSecOps focus is to incorporate security practices by leveraging the security capabilities within infrastructure as code (IaC), blue/green deployments, and application security scanning before end-users are transitioned to the system.

Infrastructure as Code

IaC starts with a platform like Ansible, Chef, or Terraform that can connect to the cloud service provider’s (AWS, Azure, Google Cloud) Application Programming Interface (API) and programmatically tell it exactly what infrastructure to provision for the application. DevOps teams consult with developers, IT, and security to build configuration files with all of the requirements that describe what the cloud service provider needs to provision for the application. Below are some of the more critical areas that DevSecOps covers using IaC.
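Before walking through those areas, here is a minimal, hedged illustration of the underlying idea: code calling the provider's API to provision a server from a hardened image. It uses boto3 (AWS's Python SDK), all resource IDs are placeholders, and production IaC would normally live in declarative Terraform, Ansible, or CloudFormation definitions rather than imperative calls.

```python
# Sketch only: provision one server from a hardened image template via
# the cloud provider's API. Every resource ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # secured (hardened) image template
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",        # segmented application subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # least-privilege network rules
)
```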


Capacity planning – This includes rules around autoscaling laterally (automatically and elastically adding servers to handle additional demand) and scaling up (increasing the performance of the infrastructure, like adding more RAM or CPU). Elasticity from autoscaling helps prevent both non-malicious and malicious denial-of-service incidents.

Separation of duty – While IaC helps break down silos, developers, IT, and security still have direct responsibility for certain tasks even when they are automated. Accidental deployment of the application is avoided by making specific steps of the deploy process the responsibility of a specific team so that they cannot be bypassed.

Principle of least privilege – Applications have the minimum set of permissions required to operate, and IaC ensures consistency even during the automated scaling of resources up and down to match demand. The fewer the privileges, the more protection systems have from application vulnerabilities and malicious attacks.

Network segmentation – Applications and infrastructure are organized and separated based on the business system security requirements. Segmentation protects business systems from malicious software that can hop from one system to the next, otherwise known as lateral movement in an environment.

Encryption (at rest and in transit) – Hardware, cloud service providers and operating systems have encryption capabilities built into their systems and platforms. Using the built-in capabilities or obtaining 3rd party encryption software protects the data where it is stored. Using TLS certificates for secured web communication between the client and business system protects data in transit. Encryption is a requirement for adhering with industry related compliance and standards criteria.

Secured (hardened) image templates – Security and IT develop the baseline operating system configuration and then create image templates that can be reused as part of autoscaling. As requirements change and patches are released, the baseline image is updated and redeployed.

Antivirus and vulnerability management tools – These tools are updated frequently to keep up with the dynamic security landscape. Instead of installing these tools in the baseline image, consider installing the tools through IaC.

Log collection – The baseline image should be configured to send all logs created by the system to a log collector outside of the system for distribution to the Network Operations Center (NOC) or Security Operations Center (SOC) where additional inspection and analysis for malicious activity can be performed. Consider using DNS instead of IP addresses for the log collector destination.
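A minimal sketch of that pattern using Python's standard library, shipping application logs to an external syslog collector addressed by DNS name as suggested above; the hostname and port are placeholders.

```python
# Send logs off-box to a collector for NOC/SOC analysis. Using a DNS
# name (not an IP) lets the collector move without reconfiguring hosts.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=("logs.internal.example.com", 514))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user login succeeded account_id=42 src_ip=10.0.0.5")
```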

Blue/green deployment

A blue/green deployment is a system architecture that seamlessly replaces an old version of the application (blue) with a new version (green). The strategy increases application availability during upgrades: if there is a problem, the system can be quickly reverted to a known secured and good working state.

Blue green deployment

Deployment validation should happen as the application is promoted through each environment because configuration items (variables and secrets) differ between environments. Traditionally, validation happens during non-business hours and is extremely taxing on the groups supporting the application. With a blue/green deployment, the new version of an application can be deployed and validated during business hours. Even if concerns dictate that end users be switched over during non-business hours, fewer employees are needed to participate.
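One common cutover mechanism (among several) is weighted DNS. The hypothetical sketch below re-weights Route 53 record sets so traffic shifts from blue to green in a single call, and reverting is just the opposite call; the hosted zone ID, record names, and targets are placeholders.

```python
import boto3

route53 = boto3.client("route53")

def shift_traffic(zone_id: str, record: str, blue_weight: int, green_weight: int):
    """Re-weight the blue and green record sets; (0, 100) completes the cutover."""
    changes = [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": record, "Type": "CNAME", "SetIdentifier": color,
            "Weight": weight, "TTL": 60,
            "ResourceRecords": [{"Value": f"{color}.example.internal"}],
        }}
        for color, weight in (("blue", blue_weight), ("green", green_weight))
    ]
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )

# Validate green during business hours, then flip traffic in one call:
shift_traffic("Z0123456789", "app.example.com", blue_weight=0, green_weight=100)
```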

Automate security tools installation and scanning

Internet-facing application attacks continue to increase because of the ease of access to malicious tools, the speed at which some vulnerabilities can be exploited, and the value of the data extracted. Dynamic application security testing (DAST) tools are a great way to identify vulnerabilities and fix them before the application is moved into production and released for end users to access.

DAST tools provide visibility into real-world attacks because they mimic how attackers would attempt to break an application. Automating and scheduling application scans on a regular cadence helps find and resolve vulnerabilities quickly. Company policy may also require vulnerability scanning for compliance with regulations and standards like PCI DSS, HIPAA, or SOC 2.

DAST for web applications focuses on the OWASP Top 10 vulnerabilities, such as SQL injection and cross-site scripting. Manual penetration (pen) testing is still required to cover other weaknesses like logic errors, race conditions, customized attack payloads, and zero-day vulnerabilities. Also, not all applications are web based, so it is important to select and use the right scanning tools for the job. Manual and automated scanning can also help spot configuration issues that lead to errors in how the application behaves.
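As a sketch of what an automated DAST pass might look like, the example below drives OWASP ZAP through its Python API (e.g., the python-owasp-zap-v2.4 package). It assumes a ZAP daemon is already running locally, and the target URL and API key are placeholders for a staging copy of the application.

```python
# Sketch of an automated DAST pass using the OWASP ZAP Python API.
# Assumes a local ZAP daemon; target URL and API key are placeholders.
import time
from zapv2 import ZAPv2

target = "https://staging.example.com"  # hypothetical staging URL
zap = ZAPv2(apikey="changeme")          # placeholder API key

scan_id = zap.spider.scan(target)       # crawl the application first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(5)

scan_id = zap.ascan.scan(target)        # then run the active attack scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Report findings for triage before end users are switched over.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])
```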

Next Steps

Traditional application deployments are a laborious process for the development, IT, and security teams. But that has all changed with the introduction of Infrastructure as Code, blue/green deployments, and the continuous delivery (CD) methodology. Tasks once performed in the middle of the night can be moved to normal business hours. Projects that took weeks can be reduced to hours through automation. Automated security scanning can be performed regularly without user interaction. With the application deployed, the focus switches to monitoring and, eventually, decommissioning it as the final steps in the lifecycle.

The post DevSecOps deploy and operate processes appeared first on Cybersecurity Insiders.

If your organization is having trouble creating policies, I hope that this blog post will help you set a clear path. We’ll discuss setting your organization up for success by ensuring that you do not treat your policies as a “do once and forget” project. Many organizations I have worked with have done that, only to realize later that a good policy lifecycle is required and is a pillar of good governance.

Organizations often feel that developing and enforcing policies is bureaucratic and tedious, but the importance of policies is often felt when your organization does not have them. Not only are they a cost of doing business, but they are also used to establish the foundation and norms of acquiring, operating, and securing technology and information assets.

The lifecycle, as it implies, should be iterative and continuous, and policies should be revisited at a regular cadence to ensure they remain relevant and deliver value to your business.

IT policy process

Assess

The first step is to find out where your organization stands; this step should shine a light on where gaps exist and what they are.

First, determine how you will assess your policies. Here is a checklist you can use, whether you are building new policies or bringing current ones up to date:

  • Is it current and up to date?
  • Does it have a clear purpose or goal?
  • Does it have a clear scope (inclusions/exclusions)?
  • Does it have clear ownership?
  • Does it have a clear list of affected people?
  • Does it use language that is easy to understand?
  • Is it detailed enough to avoid misinterpretation?
  • Does it follow applicable laws, regulations, and ethical standards?
  • Does it reflect the organization's goals, values, and culture?
  • Are key terms and acronyms defined?
  • Have related policies and procedures been identified?
  • Are there clear consequences for non-compliance?
  • Is it approved and supported by management?
  • Is it enforceable?

Next, inventory your organization’s policies by listing them and then assessing the quality using the previous list. Based on the quality, identify if your organization needs new policies or if the existing ones need improvement, then determine the amount of work that will be required.

Best practices suggest that you may want to prioritize your efforts on the most significant improvements, those that focus on the most serious business vulnerabilities.

Understand that policy improvement does not end with a new policy document. You will need to plan for communications, training, process changes, and any technology improvements needed to make the policy fair and enforceable.

Develop

After the assessment is done, you should plan to develop new policies or revamp the old ones. Although there is no consensus on what makes a good policy, the referenced material [1] [2] [3] [4] suggests the following best practices: policies should have a clear purpose and a precise presentation that drives compliance by eliminating misinterpretation.

All policies should include and describe the following:

  • Purpose
  • Expectations
  • Consequences
  • Glossary of terms

For maximum effect, policies should be written:

  • With everyday language
  • With direct and active voice
  • Precisely to avoid misinterpretation
  • Realistically
  • Consistently in keeping with standards

Consider that policies need to be actively sold to the people who are supposed to follow them. You can achieve that by using a communication plan that includes:

  • Goals and objectives
  • Key messages
  • Potential barriers
  • Suggested actions
  • Budget considerations
  • Timelines

Enforcement

A lack of enforcement will create ethical, financial, and legal risks for any organization. Among these risks are loss of productivity due to abuse of privileges, wasted resources, and, if an employee engages in illegal activity under poor policy enforcement, reputational damage and potential litigation. Make sure that you have clear rules of engagement.

Your organization should establish the proper support framework around Leadership, Process, and Monitoring, and policy performance should be measured against standards. Policies don't always fail due to bad behavior; they fail because:

  • They are poorly written
  • There is no enforcement
  • They are illegal or unethical
  • They are poorly communicated
  • They go against company culture

If your company feels overwhelmed thinking about all the moving pieces that make up an IT policy management lifecycle, let AT&T Cybersecurity Consulting help, whether you need to amend existing policies, implement one or more brand-new policies, or completely overhaul the entire policy portfolio.

References

1) F. H. Alqahtani, “Developing an Information Security Policy: A Case Study Approach,” ScienceDirect, vol. 124, pp. 691-697, 2017.

2) S. Diver, “SANS White Papers,” SANS, 02 03 2004. [Online]. Available: https://www.sans.org/white-papers/1331/. [Accessed 15

3) S. V. Flowerday and T. Tuyikeze, “Information security policy development and implementation: The what, how, and who,” ScienceDirect, vol. 61, pp. 169-183, 2016.

4) K. J. Knapp, R. F. Morris, T. E. Marshall and T. A. Byrd, “Information security policy: An organizational-level process model,” ScienceDirect, vol. 28, no. 7, pp. 493-508, 2007.

The post How to create a continuous lifecycle for your IT Policy Management appeared first on Cybersecurity Insiders.

Introduction

Since my previous blog CMMC Readiness was published in September 2021, the Department of Defense (DoD) has made modifications to the program structure and requirements of the Cybersecurity Maturity Model Certification (CMMC) interim rule first published in September 2020.  CMMC 2.0 was officially introduced in November 2021 with the goal of streamlining and improving CMMC implementation.

In this blog, I will identify the key changes occurring with CMMC 2.0 and discuss an implementation roadmap to CMMC readiness.

Key changes

Key changes in CMMC 2.0 include:

  • Maturity Model reduced from 5 compliance levels to 3
    • Level 3 – Expert
    • Level 2 – Advanced (old Level 3)
    • Level 1 – Foundational
  • Improved alignment with National Institute of Standards and Technology (NIST)
    • NIST SP 800-171
    • NIST SP 800-172
  • Practices reduced from 130 to 110 for Level 2 Certification
  • Independent assessment by C3PAO at Level 2 – Advanced
  • Self-assessment at Level 1 – Foundational, limited at Level 2 – Advanced
  • Removed processes (ML.2.999 Policy, ML.2.998 Practices, and ML.3.997 Resource Plan)

Figure 1. CMMC Model

CMMC model

Source: Acquisition & Sustainment – Office of the Under Secretary of Defense

CMMC requirements at Level 1 and Level 2 now align with National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171 – Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations. This alignment should be beneficial to most DIB organizations since they have been subject to FAR 52.204-21 or DFARS 252.204-7012 and should have been self-attesting to NIST SP 800-171 practices, whether that be the 17 NIST practices required for those handling only FCI or the 110 NIST practices for those handling FCI and CUI. Organizations that took self-attestation seriously over the years should be able to leverage that previous work to place themselves in a strong position for CMMC certification.

CMMC 2.0 may have dropped the three Processes (ML.2.999 Policy, ML.2.998 Practices, and ML.3.997 Resource Plan), but that does not eliminate the requirement for formal security policies and control implementation procedures. CUI security requirements were derived in part from NIST Special Publication 800-53 Security and Privacy Controls for Federal Information Systems and Organizations (NIST SP 800-53). The tailoring actions addressed in Appendix E of NIST SP 800-171R2 specify that the first control of each NIST SP 800-53 family (e.g., AC-1, AT-1, PE-1, etc.), which prescribe written and managed policies and procedures, is designated as NFO, or “expected to be routinely satisfied by nonfederal organizations without specification”. This means they are required as part of the organization’s information security management plan and are applicable to the CUI environment. Refer to Appendix E for other NIST SP 800-53 controls that are designated as NFO and include them in your program.

Implementation roadmap

Although there have been welcome changes to the structure of CMMC, my recommended approach to implementation, first presented last September, has changed little. The following presents a four-step approach to get started down the road to CMMC Level 2 certification.

CMMC implementation

Education

I cannot stress enough the importance of educating yourself and your organization on the CMMC 2.0 requirements. A clear and complete understanding of the statute, including the practice requirements and the certification process, is critical to achieving and maintaining CMMC certification. This understanding will be integral to crafting a logical, cost-effective approach to certification and will also provide the information necessary to effectively communicate with your executive leadership team.

Start your education process by reading the CMMC 2.0 documents relevant to your certification level found at OUSD A&S – Cybersecurity Maturity Model Certification (CMMC) (osd.mil).

  • Cybersecurity Maturity Model Certification (CMMC) Model Overview Version 2.0/December 2021 – presents the CMMC model and each of its elements
  • CMMC Model V2 Mapping Version 2 December 2021 – Excel spreadsheet that presents the CMMC model in spreadsheet format.
  • CMMC Self-Assessment Scope – Level 2 Version 2 December 2021 – Guidance on how to identify and document the scope of your CMMC environment.
  • CMMC Assessment Guide – Level 2 Version 2.0 December 2021 – Assessment guidance for CMMC Level 2 and the protection of Controlled Unclassified Information (CUI).

Define

The CMMC environment that will be subject to the certification assessment must be formally defined and documented.    The first thing that the CMMC Third-Party Assessor Organization (C3PAO) engaged to perform the Level 2 certification must do is review and agree with the CMMC scope presented by the DIB organization.  If there is no agreement on the scope, the C3PAO cannot proceed with the certification assessment. 

Scope

The CMMC environment includes all CUI-associated assets found in the organization’s enterprise, external systems and services, and any network transport solutions. You should identify all of the CUI data elements present in your environment and associate them with one or more business processes. This includes CUI data elements provided by the Government or a prime contractor, as well as any CUI created by you as part of contract execution. Formally document the CUI data flow through each business process to visualize the physical and logical boundaries of the CMMC environment. The information gleaned during this process will be valuable input for completing your System Security Plans (SSPs).

Not sure which data elements are CUI?  Work directly with your legal counsel and DoD business partner(s) to reach a consensus on what data elements will be classified as CUI.   Visit the NARA website at (Controlled Unclassified Information (CUI) | National Archives) for more information concerning the various categories of CUI.   Ensure that the classification discussions held by the team and any decisions that are made are documented for posterity. Do not forget to include CUI data elements that are anticipated to be present under any new agreements.

Figure 2. High-Level CMMC Assessment Scope

CMMC assessment

Based on image from CMMC Assessment Scope – Level 2 Version 2.0 | December 2021

During the scoping exercise, you should look for ways to optimize your CMMC footprint by separating CUI business processes from non-CUI business processes through physical or logical segmentation. File and database consolidation may help reduce the overall CMMC footprint, as can avoiding the handling of CUI that serves no business purpose.

GCC v GCC High

Heads up to those DIB organizations that utilize or plan to utilize cloud-based services to process, store, or transmit CUI: the use of cloud services for CUI introduces GCC vs. GCC High considerations. The GCC environment is acceptable in those instances where only Basic CUI data elements are present. GCC High is required if CUI-Specified or ITAR/EAR-designated data elements are present. In some instances, prime contractors that utilize GCC High may require their subcontractors to do the same.

Asset Inventory

An asset inventory is mandatory and is an important part of scoping. The table below describes the five categories of CUI assets defined by CMMC 2.0.

| Asset | Description |
| --- | --- |
| CUI | Assets that process, store, or transmit CUI. |
| Security Protection | Assets that provide security functions or services to the contractor’s CMMC scope. |
| Contractor Risk Managed | Assets that can, but are not intended to, process, store, or transmit CUI due to security controls (policies, standards, and practices) put in place by the contractor. |
| Specialized | A special group of assets (government property, Internet of Things (IoT), Operational Technology (OT), Restricted Information Systems, and Test Equipment) that may or may not process, store, or transmit CUI. |
| Out-of-Scope | Assets that cannot process, store, or transmit CUI because they are physically or logically separated from CUI assets. |

DIB contractors are required to formally document all CUI assets in an asset inventory as well as in their SSPs. There are no stated requirements for what information is to be captured in the inventory, but in addition to basic information (e.g., serial number, make, model, manufacturer, asset tag ID, and location), I recommend mapping each asset to its relevant business processes and identifying asset ownership. Owners should be given responsibility for overseeing the appropriate use and handling of the CUI-associated systems and data throughout their useful lifecycles. An asset management system is recommended for this activity, but Microsoft Excel should be adequate for capturing and maintaining the CUI inventory in small to midsize organizations.

Figure 3. Asset Inventory

CMMC asset inventory
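A minimal sketch of such an inventory, with hypothetical field names and a sample record reflecting the basic information, business-process mapping, and ownership suggested above:

```python
import csv

# Hypothetical minimal CUI asset inventory schema and one sample record.
FIELDS = ["asset_tag", "serial_number", "make", "model", "location",
          "cmmc_category", "business_process", "owner"]

assets = [{
    "asset_tag": "IT-00042", "serial_number": "SN-98765",
    "make": "Dell", "model": "R740", "location": "HQ data center",
    "cmmc_category": "CUI", "business_process": "Contract deliverable prep",
    "owner": "j.smith",
}]

with open("cui_asset_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(assets)
```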

Assess

Once you have your asset inventory completed and your CMMC scope defined, it’s time to perform a gap analysis to determine how your security posture aligns with CMMC requirements. If you have been performing your annual self-attestation against NIST SP 800-171, you can leverage this work, but be sure to assess with greater rigor. Consider having a CMMC Registered Practitioner from a third-party provider perform the assessment, since this will provide an unbiased opinion of your posture. The results of the gap assessment should be placed into a Plan of Action and Milestones (POAM), where you will assign priorities, responsibilities, solutions, and due dates for each gap requiring corrective action.
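To illustrate, a POAM entry can be as simple as one structured record per gap carrying those four assignments; the practice identifier, gap description, owner, and date below are hypothetical examples.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical POAM record: each gap gets a priority, an owner, a planned
# solution, and a due date, as described above.
@dataclass
class PoamItem:
    control: str      # e.g., a NIST SP 800-171 practice identifier
    gap: str
    priority: str
    owner: str
    solution: str
    due: date

poam = [
    PoamItem(control="3.1.1", gap="No formal account provisioning process",
             priority="High", owner="IT Ops",
             solution="Document and enforce a provisioning workflow",
             due=date(2022, 9, 30)),
]

# Review open items in due-date order to drive remediation.
for item in sorted(poam, key=lambda i: i.due):
    print(item.due, item.priority, item.control, item.solution)
```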

Remediate

Finally, use the POAM to drive the organization’s remediation efforts in preparation for CMMC certification. Remember that if you contract third-party services as part of remediation (e.g., managed security services, cloud services, etc.), those services become part of your CMMC scope. Consider performing a second posture assessment after remediation efforts are complete to ensure you are ready for the certification assessment by the C3PAO. CMMC certification is good for three years, so be sure to implement a governance structure to ensure your program is positioned for recertification when the time comes.

Conclusion

I hope this implementation roadmap provides a benefit to you on your CMMC Level 2 certification journey.  Keep in mind, there are no surprising or unusual safeguards involved in the process as CMMC requirements align with industry best practices for cybersecurity.  As with any strong information security program, it is critical that you fully understand the IT environment, relevant business processes, and data assets involved.  As we like to say in cybersecurity, “you can’t protect an asset if you don’t know what it is or where it’s at”.  Completing the upfront administrative work such as education, scope, and inventory will pay dividends as you progress toward independent certification.

The post CMMC 2.0: key changes appeared first on Cybersecurity Insiders.