DevSecOps build and test process

In the previous article about the coding process, we covered developers using secure coding practices and how to secure the central code repository that represents the single source of truth. After coding is complete, developers move to the build and test processes of the Continuous Integration (CI) phase. These processes use automation to compile code and test it for errors, vulnerabilities, license conformity, unexpected behavior, and of course bugs in the application.

The focus of DevSecOps is to help developers follow the secure-coding best practices and open-source licensing policies identified in the planning process. In addition, DevSecOps helps testers by providing automated scanning and testing capabilities within the build pipeline.

What is in a build pipeline?

Build pipelines run on highly customizable platforms like Microsoft Azure DevOps, Jenkins, and GitLab. The build pipeline pulls source code from a repository and packages the software into an artifact. The artifact is then stored in a different repository (called a registry) where it can be retrieved by the release pipeline. Jobs in the build pipeline perform the step-by-step tasks to create an application build. The jobs can be grouped into stages and run sequentially every time the build process is run. Jobs need a build server, or a pool of build servers, to run the pipeline and return a built application for testing.
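The stage-and-job structure is easiest to see in code. The following is a minimal TypeScript sketch of the idea, not any particular CI platform's syntax; the stage names and commands are hypothetical stand-ins.

```typescript
// pipeline.ts - a minimal sketch of sequential stages, each containing jobs.
import { execSync } from "node:child_process";

type Job = { name: string; command: string };
type Stage = { name: string; jobs: Job[] };

const stages: Stage[] = [
  { name: "build", jobs: [{ name: "compile", command: "npm run build" }] },
  { name: "scan", jobs: [{ name: "dependency-audit", command: "npm audit" }] },
  { name: "package", jobs: [{ name: "image", command: "docker build -t app:ci ." }] },
];

for (const stage of stages) {
  for (const job of stage.jobs) {
    console.log(`[stage: ${stage.name}] running job: ${job.name}`);
    // execSync throws on a non-zero exit code, so a failing job stops the pipeline.
    execSync(job.command, { stdio: "inherit" });
  }
}
```

Real platforms express the same structure declaratively (usually in YAML) and add features like parallel jobs and cached build servers, but the sequential stage-of-jobs model is the core.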

Pipeline DevSecOps

DevSecOps partners with developers by inserting additional source code scanning tools as jobs into the build pipeline. The tools used depend on what is being built and are usually determined through DevSecOps collaboration with the development team to understand the architecture and design of the code. For most projects, DevSecOps should implement, at a minimum, the scanning tools that look for vulnerabilities, poor coding practices, and license violations.

Source code scanners

Pipelines allow automated application security (AppSec) scans to be run every time a new build is created. This capability allows DevSecOps to integrate static analysis (lint) tools like source code scanners that can run early in the software development lifecycle. Security scanners come in two forms: static application security testing (SAST) and dynamic application security testing (DAST).

SAST is run early in the development lifecycle because it scans source code before it is compiled. DAST runs after the development cycle and is focused on finding the same types of vulnerabilities hackers look for while the application is running.

SAST can look for supply chain attacks, source code errors, vulnerabilities, poor coding practices, and free and open-source software (FOSS) license violations. SAST speeds up code reviews and delivers valuable information early in the project so developers can incorporate better secure coding practices. Picking the right SAST tool is important because different tools can scan different coding languages. By automating scanning and providing feedback early in the development process, DevSecOps empowers developers to be proactive in making security-related code changes before the code becomes an application.

Container image scanners

Application builds that create containers for microservice platforms like Docker are stored in a registry as an image artifact. These images contain application code, additional software packages, and dependencies that are needed to run the application. Sometimes the images are built by the developers, and other times they are pulled from a public repository like GitHub.

Where source code scanners review the source code, image scanners review the built application, packages, and dependencies. Image scanners look for container vulnerabilities and exploits like supply chain attacks and cryptojacking software.

Image scanners should be run during the build process so that vulnerabilities are identified and remediated by the development team quickly. Keeping an image small (fewest needed packages and dependencies) is a great (and easy) way for developers to reduce the attack surface of the image, speed up security scanning, and simplify vulnerability remediation.

In addition to image scanning, DevSecOps recommends the following criteria to protect the application. Images should be configured to not run on the host system using the admin (root) account. This protects the host from privilege escalation if the application is compromised.

Images should be signed by a trusted certificate authority so they have a trusted signature that can be verified when the image is deployed to an environment. Images should be stored in a dedicated image repository so that all internal microservices platforms (Docker and Kubernetes) only pull “approved” images.
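The first of these recommendations, not running as root, is easy to check automatically in the pipeline. Below is a minimal TypeScript sketch assuming Docker is installed on the build server; the image name is hypothetical.

```typescript
// check-image-user.ts - flag images configured to run as root by default.
import { execSync } from "node:child_process";

const image = "registry.example.com/myapp:1.0"; // hypothetical internal image

// {{.Config.User}} is empty when the Dockerfile sets no USER instruction,
// which means the container will run as root.
const user = execSync(`docker inspect --format '{{.Config.User}}' ${image}`, {
  encoding: "utf8",
}).trim();

if (user === "" || user === "root" || user === "0") {
  console.error(`FAIL: ${image} runs as root; add a non-root USER to the Dockerfile`);
  process.exit(1); // non-zero exit fails the pipeline job
}
console.log(`OK: ${image} runs as user "${user}"`);
```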

Test process

Testing is one of the first environments that an application build is deployed into. Testing teams use tools like Selenium and Cucumber to help automate as much of the testing as possible. Automated test plans can benefit from iterative improvements that increase test plan quality every time a build is created. DevSecOps has open-source tools like ZAP that support proxying and can sit between the testing tools and the application to perform security scanning while the tests exercise the application. Bringing DevSecOps and the testing teams together helps build trust and collaboration while speeding up testing and reducing the number of scripts and tools necessary to complete the testing process.

Bending the rules

Outages, quality issues, and common mistakes can happen when there is pressure to deliver in a compressed timeframe. Building and testing is where bending the rules may be accepted or even the current norm within the teams. Security scanners are designed to stop the build process if audits and compliance checks fail. If the development and testing teams are unaware of this risk, it will appear as builds and tests breaking. They will complain to their leaders, who will come to the DevSecOps team and demand the tools get out of the way of the success of DevOps.

DevSecOps overcomes these concerns by being an integral part of the team with developers and testers. Coordination between DevSecOps and developers is also promoted by adding the findings from these tools into the same bug tracking tools used by testers. DevSecOps integrates by communicating changes, listening to feedback, creating inclusiveness, and collaborating to help everyone understand what the tools are doing, how they work, and why they are important.

Next steps

Security scanners help developers follow secure-coding and license compliance practices. Scanners and feedback work best when performed as early as possible in the build pipeline so adjustments can be made quickly and with minimal development impact. Using automation encourages developers and testers not to bend the rules. With the application built and tests complete, the software is ready to be packaged as a release.

What is data fabric and how does it impact Cybersecurity?

This blog was written by an independent guest blogger.

Amidst sweeping digital transformation across the globe, numerous organizations have been seeking a better way to manage data. Still in the beginning stages of adoption, data fabric provides many possibilities as an integrated layer that unifies data from across endpoints. 

A combination of factors has created a digital environment where data is stored in several places at once, leaving cracks in security for fraudsters to take advantage of. Cybercrime has reached historic highs, with vulnerabilities that affect crucial industries such as healthcare, eCommerce, and manufacturing. 

Data fabric works like a needle and thread, stitching each business resource together into an interconnected data system that feeds into one unified layer. When each application is connected to every other part of your system, data silos are broken, allowing for complete transparency in the cloud or a hybrid approach. 

What is data fabric? How does it work? And how does data fabric impact cybersecurity? Let’s dive in. 

What is data fabric?

Data fabric is a modern data design concept to better integrate and connect processes across various resources and endpoints. Data fabric can continuously analyze assets to support the design and deployment of reusable data across all environments. 

By utilizing both human and machine capabilities, data fabric identifies and connects disparate data. This supports faster decision-making, re-engineering optimization, and enhanced data management practices.

You could think of data fabric as a passive data observer that acts only when it encounters assets that need to be managed. Depending on its specific implementation, a data fabric can automatically govern data and make suggestions for data alternatives. Humans and machines work together to unify data and improve efficiency overall. 

How does it work?

Data fabric architecture provides strategic security and business advantages for companies and organizations. To better understand how data fabric works, let’s go over the six different layers of data fabric:

  • Data management — This layer is responsible for data security and governance. 
  • Data ingestion — This layer finds connections between structured and unstructured data. 
  • Data processing — This layer refines data for accurate and relevant extraction.
  • Data orchestration — This layer makes data usable for teams by transforming, integrating, and cleansing the data. 
  • Data discovery — This layer surfaces new opportunities to integrate and develop insights from disparate data sources. 
  • Data access — Finally, this layer ensures that permissions and compliance conditions are met and allows access through virtual dashboards (see the sketch after this list). 
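To make the access layer concrete, here is a toy TypeScript sketch of a permission-and-compliance check. It is purely illustrative; the role and region rules are hypothetical and not part of any real data fabric product.

```typescript
// access-layer.ts - grant data only when role and compliance conditions hold.
type User = { id: string; roles: string[]; region: string };
type Dataset = { name: string; allowedRoles: string[]; allowedRegions: string[] };

function canAccess(user: User, ds: Dataset): boolean {
  const hasRole = user.roles.some((r) => ds.allowedRoles.includes(r));
  const compliant = ds.allowedRegions.includes(user.region); // e.g., data residency rule
  return hasRole && compliant;
}

const analyst: User = { id: "u1", roles: ["analyst"], region: "EU" };
const sales: Dataset = { name: "sales", allowedRoles: ["analyst"], allowedRegions: ["EU", "US"] };
console.log(canAccess(analyst, sales)); // true
```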

This integrative and layered approach to data management helps protect organizations against the most prevalent attack types such as client-side, supply chain, business app, and even automated attacks. 

Who can benefit from data fabrics?

Because data fabric use cases are still developing, there are potentially many unknown instances where data fabric can provide a security advantage for organizations. The possibilities are broad, given data fabric's ability to eliminate silos and integrate data across various sources. Data fabric can be implemented for everything from identity theft prevention to performance improvement.

Here are just a few specific use cases for data fabric architecture:

  • Customer profiles
  • Preventative maintenance
  • Business analysis
  • Risk models
  • Fraud detection

Advantages of data fabric architectures

Even in its early stages, data fabric has been shown to significantly improve efficiency across workflows and product life cycles. In addition to increasing business productivity, here are some other examples of how adopters can benefit from a data fabric architecture:

  1. Intelligent data integration

    Data fabric architectures use AI-powered tools to unify data across numerous endpoints and data types. With the help of metadata management, knowledge graphs, and machine learning, data fabric makes data management easier than ever before. By automating data workloads, a data fabric architecture not only improves efficiency but also eliminates data silos, centralizes data governance, and improves the quality of your business data. 

  2. Better data accessibility

    The centralized nature of data fabric systems makes accessing data from various endpoints fast and simple. Data bottlenecks are reduced since data permissions can be controlled from a centralized location regardless of users’ physical locations. And data access can easily be granted when necessary for use by engineers, developers, and analysts. Data fabric enables workers to make business decisions faster and allows teams to prioritize tasks from a holistic business perspective. 

  3. Improved data protection

    Possibly the most crucial aspect of implementing data fabric is that it improves your data security posture. You get the best of both worlds: broad data access and improved data privacy. With more data governance and security guardrails in place through a unified data fabric, technical and data security teams can streamline encryption and data masking procedures while still controlling access based on user permissions. 

Data fabric and cybersecurity

As a part of a robust cybersecurity ecosystem, data fabric acts as the foundation upon which the entirety of your business data sits. When used correctly, data fabric makes business processes more efficient and improves data protection with the right defensive strategies built in. 

Because data fabric acts as a single source for all business data, many wonder about its cybersecurity implications. Most open source security vulnerabilities have a validated fix that must be patched, but many attackers take advantage of these entry points before organizations have time to update their software. 

Organizations using data fabric can also benefit from cybersecurity mesh to combine automation with a strategic security approach. Cybersecurity mesh relies on the organizational structure to define data security needs so that the data fabric can more efficiently align with those needs. 

Gartner predicts that organizations that adopt a data fabric and cybersecurity mesh architecture will reduce the financial impact of data breaches by 90% by 2024. No other cybersecurity posture comes close to the security implications of data fabric across business applications. 

Data fabric is also essential to cybersecurity infrastructure because it requires that teams adopt a security-by-design outlook. With centralized data fabric built into your environment, organizations can greatly reduce their vulnerabilities and attack vectors from the inside out. 

Putting it all together

Data fabric provides organizations with a way to integrate data sources across platforms, users, and locations so that business data is available to those who need it, when it is needed. While this does reduce data management issues, it raises important cybersecurity questions related to its centralized nature. 

However, data fabric and cybersecurity mesh work together to build integrated security controls that include encryption, compliance, virtual perimeter, and even real-time automatic vulnerability mitigation.

Now, stand-alone security solutions protecting numerous data sources can work together to improve security efforts overall. Data fabric is an essential aspect of a business-driven cyber strategy, especially for industries utilizing hybrid cloud setups, businesses struggling with disparate data, and organizations facing an evolving cybersecurity landscape.

What is tokenization, what are the types of tokenization, and what are its benefits for eCommerce businesses?

This blog was written by an independent guest blogger.

As eCommerce grows, so do issues concerning payments and security. Customers still don’t enjoy a smooth user experience or fraud-free transactions, and declined transactions remain common.

Online shopping still lacks a seamless experience due to the risks of storing and handling sensitive account data.

The payment system uses basic details like CVV2, 3-digit security codes, expiration dates, and primary account numbers. If these details are compromised, a lot of things can go wrong. The industry is adopting a technology called “tokenization” to deal with these issues. 

Today, we will discuss this technology and help you understand how it can help.

What is tokenization?

Tokenization might sound like something complex, but the basic principle behind it is simple. It’s a process of replacing sensitive pieces of data with tokens. These tokens are random data strings that don’t hold any meaning or value to third parties.

These tokens are unique identifiers that can still hold a portion of the essential sensitive data while protecting its security. The original data is linked to the new tokens, but without giving any information that lets people reveal, trace, or decipher the data.


The data piece is stored outside the internal system used by the business. Tokens are irreversible, so if they’re exposed, they cannot be returned to their original form.

Since the data is moved elsewhere, it’s almost impossible for someone to compromise this data.

How tokenization works

Tokenization has a wide range of applications. In eCommerce, payment processing is one of the most popular areas of tokenization, and companies use tokens to replace account or card numbers, most commonly the primary account number (PAN) associated with a credit card.

The PAN is replaced with a random placeholder token, and the original sensitive data is stored externally. When the original data needs to be used to complete a transaction, it can be exchanged for the token and then transmitted to payment gateways, processors, and other endpoints using various network systems.
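The core idea fits in a few lines of code. The TypeScript sketch below is deliberately simplified: a production tokenization service keeps the vault in a hardened, externally hosted store, never an in-memory map, and the card number used is a well-known test value.

```typescript
// token-vault.ts - a toy illustration of vault tokenization.
import { randomUUID } from "node:crypto";

const vault = new Map<string, string>(); // token -> original PAN

function tokenize(pan: string): string {
  const token = randomUUID(); // random value with no mathematical link to the PAN
  vault.set(token, pan);
  return token; // safe to store and pass around internally
}

function detokenize(token: string): string | undefined {
  return vault.get(token); // only the vault can map a token back to the PAN
}

const token = tokenize("4111111111111111"); // standard test card number
console.log(token);              // what the merchant stores
console.log(detokenize(token));  // performed only by the tokenization provider
```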

Example of tokenization

TokenEx is a typical tokenization platform used for eCommerce payments. The platform first intercepts the sensitive data from whichever channel it is being collected through (mobile, desktop, PIN pad, etc.). This data is tokenized and stored securely, and then the token is returned to the client for internal use. In the end, the sensitive data is detokenized and sent to payment-processing providers for executing and verifying transactions.

In the image below you can see how data travels on the TokenEx platform.

  1. First, you have the channels through which the data is coming (“Secure Data Collection”).
  2. In the bottom-middle section, you have the TokenEx platform, where data is tokenized and stored (“Secure Data Storage”) before being returned to a client environment in the top-middle section (“Compliance Safe Harbor”) for safe, compliant internal use.
  3. And then finally, on the right, you have the data being sent to a third party for processing (“Secure Data Transmission”), likely a payment service provider to authorize a digital transaction.

This combination of security and flexibility enables customers to positively impact revenue by improving payment acceptance rates, reducing latency, and minimizing their PCI footprint.

Image: How tokenization works on the TokenEx platform (source: TokenEx)

Types of tokenization

Tokenization is becoming popular in many different industries and not just eCommerce. Payments are just one of the uses of tokenization, and there are many more applications out there. Not all tokenization processes are the same, as they have different setups depending on the application.

Tokenization outside of the blockchain

Tokenization outside of the blockchain means that digital assets are traded outside of the blockchain and have nothing to do with NFTs or smart contracts. There are a variety of tokens and tokenization types outside the blockchain.

Vaultless tokenization

Vaultless tokenization is typically used in payment processing. It uses secure cryptographic devices with algorithms built on conversion standards that allow the safe transformation of sensitive data into non-sensitive assets. Vaultless tokens don’t require a tokenization vault database for storage.

Vault tokenization

Vault tokenization is used in traditional payment processing to maintain a secure database. This secure database, called the tokenization vault, stores both non-sensitive and sensitive data. Users within the network decrypt tokenized information using both data tables.

NLP tokenization types

The natural language processing (NLP) domain includes tokenization as one of its most basic functions. In this context, tokenization involves dividing a text into smaller pieces called tokens, allowing machines to understand natural text better. The three categories of NLP tokenization, each sketched in code after this list, are:

  1. Subword tokenization
  2. Character tokenization
  3. Word tokenization
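Here is a toy TypeScript sketch of the three categories. Note that real subword tokenizers (byte-pair encoding, for example) learn their vocabulary from data; the fixed 4-character chunking below is only a naive stand-in.

```typescript
// nlp-tokenize.ts - toy illustrations of the three NLP tokenization categories.
const text = "tokenization protects data";

// 1. Word tokenization: split on whitespace.
const words = text.split(/\s+/); // ["tokenization", "protects", "data"]

// 2. Character tokenization: one token per character.
const chars = Array.from(text.replace(/\s+/g, ""));

// 3. Subword tokenization: naively chop each word into 4-character chunks
//    as a stand-in for learned subword vocabularies.
const subwords = words.flatMap((w) => w.match(/.{1,4}/g) ?? []);

console.log({ words, chars, subwords });
```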

Blockchain tokenization types

Blockchain tokenization divides asset ownership into multiple tokens. Tokenization on the blockchain is similar to NFTs in that the tokens behave as “shares.” However, tokenization also uses fungible tokens, which have a value directly tied to an asset.

Blockchain tokenization allows decentralized app development. This concept is also known as platform tokenization, where the blockchain network is used as the foundation that provides transactional support and security.

NFT tokenization

One of the most popular tokenizations today is blockchain NFTs. Non-fungible tokens are digital data representing unique assets.

These assets don’t have a predetermined value (that is where the name non-fungible comes from) and can be used as proof of ownership, letting people trade various items or authenticate transactions. NFTs are used for digital art, games, real estate, etc.

Governance tokenization

This kind of tokenization is directed toward voting systems on the blockchain. Governance tokenization allows a better decision-making process with decentralized protocols as all stakeholders can vote, debate, and collaborate fairly on-chain.

Utility tokenization 

Utility tokens are created using a certain protocol and grant access to various services within that protocol. Utility tokens are not created as direct investments; they drive platform activity that improves the system's economy.

Where tokenization and eCommerce meet

Ecommerce payments have been growing for a long time, even before the global pandemic. We’re seeing a massive shift to online shopping with an exponential growth in sales. Even though the shift towards the digital world is definitive, this trend has introduced new challenges concerning security.

There’s a growing number of hackers and fraudsters looking to steal personal data. According to Risk Based Security research, in 2019 alone there were over 15 million data breaches in eCommerce. Tokenization is quickly being introduced as a way to combat fraud and convert account numbers into digital assets to prevent their theft and abuse.

Payment service providers that specialize in fraud detection can help verify transactions and devices, making it far more difficult for hackers to abuse someone’s information. Credit card and account information tokenization boosts security and protects data from external influences and internal issues.

Benefits of tokenization in eCommerce

Ecommerce companies can use tokenization to improve privacy and security by safeguarding payment information. Data breaches, cyber-attacks, and fraud can seriously affect the success of a business. Here’s how tokenization helps with all these threats. 

  •  No need for extensive data control because tokens aren’t sensitive

Ecommerce businesses need to implement extensive data control protocols for handling sensitive data and ensuring there are no liabilities. It can be a really tiresome and expensive process. Tokenization removes this issue because none of the confidential data is stored internally.

  •  No exposure if someone gets access to tokens

Data breaches are often fatal to businesses. They can lead to destroyed reputations, damaged business operations, loss of customers, and even legal issues. There’s no exposure of sensitive data when hackers access a database with tokenized payment records.

All payment data and personal information are safe since they aren’t stored within your systems. It’s true that this doesn’t prevent hacks, but it prevents the consequences of such events.

  •  Frictionless transactions and convenience

Modern customers love simplicity. Having saved payment information and the option to press one button to make a purchase is crucial for business success. However, providing this kind of experience carries risk as companies must save payment information so that customers can reuse it.

Having multiple cards linked to an account with saved information creates liability. Tokenization can enable seamless payment options for end customers without requiring routing numbers or credit cards to be stored internally.

  •  Companies can more easily comply with the PCI DSS

Companies that accept payment information and store it need to be compliant with various regulations, specifically the Payment Card Industry Data Security Standard. However, meeting these security requirements takes a lot of time and money. Payment tokenization service providers usually already have the required compliance certifications, so you’re outsourcing the majority of this responsibility to someone else.

Conclusion

We hope this post has helped you understand the basics of tokenization and how you can use it in eCommerce. The global tokenization market is estimated to grow at 21.5% CAGR, indicating that tokenization is here to stay. 

Keep in mind that we’re only scratching the surface here.

Stories from the SOC – Lateral movement using default accounts

Stories from the SOC is a blog series that describes recent real-world security incident investigations conducted and reported by the AT&T SOC analyst team for AT&T Managed Threat Detection and Response customers.

Executive summary

The Windows ‘Administrator’ account is a highly privileged account that is created during a Windows installation by default. If this account is not properly secured, attackers may leverage it to conduct privilege escalation and lateral movement. When this account is used for administrative purposes, it can be difficult to distinguish between legitimate and malicious activity. Security best practice is to create and implement user accounts with limited privileges and disable the default ‘Administrator’ account on all machines.

The Managed Threat Detection and Response (MTDR) analyst team received 82 alarms involving the default ‘Administrator’ account successfully logging into multiple assets in the customer environment. The source asset attempting these logons was internal, successfully logging into multiple other internal assets within a short timeframe. Further investigation revealed the use of PowerShell scripts used for network share enumeration, account enumeration, and asset discovery.

Investigation

Initial alarm review

Indicators of Compromise (IOC)

An initial alarm was triggered by a built-in USM Anywhere rule named “Successful Logon to Default Account.” This rule was developed by the Alien Labs team to trigger on successful login attempts to default Windows accounts, captured by Windows Event Log. This alarm was the first indicator of compromise in this environment, and it prompted this investigation.


Expanded investigation

Events search

The customer confirmed in prior investigations that the default Administrator account is widely used for legitimate administrative purposes in this environment. How does one distinguish between administrative activity and malicious activity? Additional event searching must be conducted to provide more context into this login and the actions surrounding it. To do this, filters were utilized in USM Anywhere to query for events associated with the Administrator account on the affected asset.

Event deep dive

First, the account Security Identifier (SID) was used to confirm which account was being used for this login. The SID is a globally unique identifier assigned to each account on a Windows system. The default Administrator SID typically ends with the Relative Identifier (RID) of 500 on Windows systems.

A review of the event attached to this alarm confirms that the default Administrator account was used to sign in, with a SID ending with the RID of 500.
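This check is simple to express in code. The sketch below uses a made-up SID purely for illustration; in practice the value comes from the logon event's TargetUserSid field.

```typescript
// rid-check.ts - flag logon events whose account SID ends in RID 500.
const sid = "S-1-5-21-1004336348-1177238915-682003330-500"; // hypothetical event SID

// The RID is the final hyphen-separated component of the SID.
const rid = sid.split("-").pop();
if (rid === "500") {
  console.log("Logon used the default Administrator account (RID 500)");
}
```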


To provide more context, events originating from the source asset were queried within the last 24 hours. 40 successful logins using the Administrator account were seen from this source to other internal assets in less than 10 minutes.

These events were captured by the AlienVault Agent, which was installed directly on the source asset to forward events to USM Anywhere.

Reviewing for additional indicators

Further review of the activity originating from the source asset revealed the use of an encoded and compressed PowerShell script. Encoding and compression allow the attacker to obfuscate the scripts being executed, evading detection.

Using open-source tools, we were able to decode and decompress the underlying PowerShell script.
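The decode step itself is straightforward. This TypeScript sketch assumes the common pattern of a Base64-encoded, raw-Deflate-compressed payload (exact encodings vary by sample), and builds its own stand-in payload so it runs end to end; the embedded command is a hypothetical example.

```typescript
// decode-ps.ts - recovering an encoded, compressed PowerShell payload.
import { deflateRawSync, inflateRawSync } from "node:zlib";

// Build a stand-in payload the way a dropper would: compress, then Base64-encode.
const sample = "Invoke-ShareFinder -CheckShareAccess"; // hypothetical captured command
const encoded = deflateRawSync(Buffer.from(sample, "utf8")).toString("base64");

// Analyst side: Base64-decode, then decompress to recover the script text.
const script = inflateRawSync(Buffer.from(encoded, "base64")).toString("utf8");
console.log(script);
```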

Image: the decoded PowerShell script

The decoded ‘Invoke-ShareFinder’ script seen above is a function used to query for exposed network shares in a Windows domain. This tool can also be used to determine which users have access to each network share. Exposed and insecure network shares could allow an attacker to obtain sensitive information or conduct lateral movement.

An additional event was found for the PowerShell script “Discovery.psm1” being executed on this asset. This script is used for internal network discovery using various scanning techniques.


Response

Building the investigation

With all events gathered and analysis completed, an investigation was created and submitted to the customer for review. Due to the severity of this incident and for situational awareness, a call was made to the customer to inform them of this activity.

Customer interaction

The customer took quick action to isolate the source asset, preventing further lateral movement attempts. Additionally, all affected assets were scanned using SentinelOne to ensure they were not infected with malware. Lastly, the default ‘Administrator’ account was disabled on all assets in this environment, effectively preventing future abuse of this account.

Limitations and opportunities

Limitations

The MTDR team lacked visibility into the customer’s SentinelOne EDR environment, which would have allowed for additional context and quicker response action.

Opportunities

AT&T offers Managed Endpoint Security (MES), a tool that provides comprehensive endpoint protection against malware, ransomware, and fileless attacks. MES utilizes behavioral analysis, which would have alerted analysts of malicious activity and prevented the “Discovery” and “Invoke-ShareFinder” scripts from executing on the asset. MES can also be used to conduct response actions such as isolating and scanning affected assets. 

Why use a managed services provider for your SASE implementation

Recently the architecture model known as Secure Access Service Edge (SASE) has been gaining momentum. Not surprising, when the model provides benefits including reduced management complexity, improved network performance and resiliency, security policy implemented consistently across office and remote users, and lower operational expense. In fact, according to a recent ESG survey, 70% of businesses are using or considering a SASE solution. But if SASE is supposed to simplify network and security management, one may wonder, “what value does a managed services provider (MSP) offer?”

Why an MSP for SASE deployment?


There are a great number of answers to that question, but a good place to start is understanding that the journey to SASE is going to be a little different for every enterprise. There are many approaches and models in the market and many vendors to choose from.

First of all, one major reason that businesses are utilizing an MSP for SASE is that it is difficult and expensive to hire and retain technicians with the specialized skillset SASE requires, particularly when 24/7 monitoring is needed. In fact, according to a recent study, 57% of organizations have been negatively impacted by the cybersecurity skills shortage. Sometimes it just makes more financial sense, and can improve an organization’s risk posture, to outsource this to a trusted third party.

In addition, while many technology providers claim to offer a complete SASE portfolio, it is important to note that SASE is not an off-the-shelf solution and can include many different components. There has been a lot of consolidation in the market over the past several years, with vendors acquiring other companies to build a more well-rounded suite, which has resulted in multiple management platforms. Most vendors are working to consolidate these to offer management through a single pane of glass, but few have achieved that quite yet.

And then finally, SASE is not a “one and done” or plug-and-play solution. The vast majority of businesses are not going to rip out and replace their entire infrastructure at one time. Rather, it will be a gradual rollout of capabilities as components come up for refresh or as budgets for new initiatives are approved. Most large or well-established companies will be on a hybrid environment for the foreseeable future, with assets hosted in both the data center as well as in the cloud.

Benefits of working with an MSP

Sometimes it is difficult to know where to start with a multi-faceted solution such as SASE, and that is why it is so important to have a trusted advisor you can count on. Here are some of the key benefits you can expect to realize when working with industry-leading managed service providers:

  • Accelerated time to value and scale: A qualified MSP for SASE implementation will offer consulting services that can determine your organization’s readiness for SASE, identify the best solutions for your unique needs, and help chart a roadmap for rollouts. Should your business acquire other companies, add or reduce locations, or change workplace designations, it is often as simple as contacting your MSP, providing the required information, and signing a contract addendum.
  • Security and networking expertise: Because SASE is a convergence of software-defined wide-area networking and security, you will need someone with knowledge and experience in both disciplines. MSPs can meet this requirement and have the ability to integrate these components to deliver resilient, high-performance connectivity and protection.
  • Solution development experience: With so many vendors and solutions on the market, it may be difficult to know which offer the best mix of capabilities, protection, and performance. Conducting multiple proofs of concept (POCs) can be costly and time consuming. MSPs can remove this burden from your technology teams by evaluating offers and conducting comprehensive interoperability testing, technical validation, and solution certification to deliver the industry’s best technology elements that seamlessly work together.
  • Solution integration acumen: As mentioned above, it is unlikely that your organization will replace every component of its networking and security at the same time, which means that you will have legacy infrastructure that still needs to be supported alongside the new technology components, which may even be from different vendors. Managed service providers have the ability to integrate and manage a vast ecosystem of technology providers and capabilities in order to secure your entire environment.

Conclusion

With the rapid adoption of cloud delivered applications and services, the heightened expectations of customers when it comes to digital experience, and the pressing need to support work from anywhere, it is less a question of whether your business will adopt SASE, but rather when. In fact, you may have already started without knowing it. Regardless of where you are on your journey, an MSP can help ensure you avoid unnecessary detours and that you reach your desired outcomes.

Microsoft releases open-source tool for securing MikroTik routers

This blog was written by an independent guest blogger.

In mid-March, Microsoft released a free, open-source tool that can be used to secure MikroTik routers. The tool, RouterOS Scanner, has its source code available on GitHub. It is designed to analyze routers for Indicators of Compromise (IoCs) associated with Trickbot. This article will introduce some background on the MikroTik vulnerability, the Trickbot malware, and some ways you can protect yourself.

Trickbot emerges from the darknet

Trickbot was first discovered in 2016 and, despite efforts by Microsoft to stamp it out, has continued to remain a threat online. One of the main reasons for Trickbot’s persistence is that it has continued to change and evolve over the years. As a result, Trickbot has proven to be an adaptable, sophisticated trojan of modular nature, molding itself for different networks, environments, and devices.

As Trickbot has evolved, it began to reach Internet of Things (IoT) devices like routers. Since Trickbot continuously improves its persistence capabilities by dodging researchers and their reverse engineering attempts, it has been able to maintain the stability of its command-and-control (C2) framework.

Why is the MikroTik security flaw important?

This malware is particularly dangerous because it can deliver ransomware, a special type of malware that takes control of your computer or devices. As it has grown and evolved, Trickbot now includes a plug-in providing backdoor access for Ryuk, a piece of ransomware with crypto-mining capabilities. 

Once it had expanded its reach to networking devices, Trickbot began infecting MikroTik routers and modules and using them as proxy servers for its C2 servers and redirecting router traffic through alternative non-standard ports.

What makes the infection of MikroTik routers so significant is that they are used by millions of homes and organizations worldwide. The broad distribution of MikroTik routers gave Trickbot extensive infrastructure. Security flaws like the MikroTik one can be particularly important for web design because coders who work on the back end have to ensure that web pages are secure.

How does Trickbot work?

Researchers at Microsoft on the Microsoft Defender for IoT team discovered the exact mechanism that Trickbot’s C2 system used to exploit MikroTik devices. Hopefully, now that its inner workings have been discovered, Trickbot will be stamped out for good.

The reason hackers use Trickbot is that it allows compromised IoT devices to relay communications between the C2 server and other compromised devices. Hackers then breach target routers, typically using a combination of brute force and exploits.

One of the key ways malware uses brute force techniques to infect MikroTik devices is by trying default MikroTik passwords. Attackers also run brute force attacks using passwords harvested from other MikroTik devices. Finally, they exploit the CVE-2018-14847 vulnerability affecting RouterOS versions older than 6.42. This exploit allows hackers to read files from the device, like user.dat, which often contains passwords.

Once they’ve gotten access, they start issuing commands that redirect traffic between two ports on the router. Redirecting traffic creates the communication line between impacted devices and the C2.

In the end, catching on to how Trickbot worked involved sniffing out commands specific to RouterOS, the unique operating system used by MikroTik RouterBOARD devices.

All IoT devices are vulnerable

The important takeaway for professionals and end-users is that all IoT devices are vulnerable. In fact, many journalists have recently brought attention to the dangers of networked security cameras in your home.

A professionally-installed ADT security system was exploited by a technician who used his access to watch people’s deeply personal private lives. All of these cameras were IoT devices.

Although your smart fridge probably isn’t spying on you, it’s important to remember that the security landscape continues to expand as more and more devices become connected to the Internet. Devices that perform limited functionality, like routers and cameras, can often become prime targets for hackers because they are not regularly updated like smartphones and computers.

How do you protect yourself?

Utilizing special software tools can be a great way to protect yourself from cybersecurity threats. Microsoft’s RouterOS Scanner is the go-to way to check MikroTik routers for this vulnerability and related indicators of compromise. As you can see, exploiting one MikroTik device opens up the possibility of exploiting many more.

Microsoft did the tech community a huge favor by giving away their security tool for free, but this may not be the end for Trickbot. Unfortunately, as long as MikroTik devices continue to operate without having their firmware updated and their devices monitored, Trickbot will probably stay around.

Starting a cybersecurity audit can be a good way to find other ways your company might be at risk. Understanding your digital security needs is the first step in securing your network and enterprise. AT&T offers several enterprise-level cybersecurity network solutions that are worth examining.

Another thing all Internet users should do is change their default passwords to more secure unique passwords. Much of the damage done by Trickbot and the MikroTik exploits was because of default passwords shipped with the devices. Changing your default passwords will ensure that brute-forcing your network will be much harder.

Generating hard-to-guess unique passwords is actually the number one cybersecurity tip. Whether you’re starting a blog for your small business or running a large company with hundreds of staff, creating a strong password is the best way to decrease your vulnerability to cyberattacks and loss of data privacy and security.
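As a simple illustration, a cryptographically secure random password can be generated in a few lines of TypeScript. The alphabet and length below are arbitrary choices, and a good password manager achieves the same result with less effort.

```typescript
// gen-password.ts - generate a hard-to-guess unique password.
import { randomInt } from "node:crypto";

const alphabet =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";

function generatePassword(length = 20): string {
  let out = "";
  for (let i = 0; i < length; i++) {
    out += alphabet[randomInt(alphabet.length)]; // cryptographically secure pick
  }
  return out;
}

console.log(generatePassword());
```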

Staying educated is another way to ensure you stay on top of cyber security threats. Many large organizations offer training to employees to help them understand the terminology surrounding IT. It’s important to continue to educate yourself, too, as threats can change, vulnerabilities can be patched, and new technologies can make how we approach security shift overnight.

Finally, enable multi-factor authentication or MFA whenever it’s available. MFA can help cut down on unauthorized device access by requiring you to authenticate your identity every time you try to log on. MFA is a critical component of building a zero-trust cybersecurity model, which is the preferred way of securing your business today.

Conclusion

From Russia hacking Ukrainian government websites to the Okta hack that demonstrated even digital security firms are vulnerable to hackers, hacks and exploits have been all over the news lately. The release of Microsoft’s MikroTik router tool marks a turn in digital security and demonstrates that companies and teams are working hard to ensure that digital security can be maintained.

Cyber threats increasingly target video games – The metaverse is next

This blog was written by an independent guest blogger.

The technical infrastructure of video games requires a significant level of access to private data, whether through client-server interactions or financial transactions. This has led to what Computer Weekly describes as a ‘relentless’ attack on the video game industry, with attacks against game hosts and customer credentials rising 224% in 2021. There are several techniques for managing a personal online presence in a way that deters cyber attacks, but the ever-broadening range of games and communication tools used to support gaming communities means these threats are only increasing, and are starting to affect games played in single-player.

Gaming exploits

Gaming hacks and exploits are nothing new. There has long been an industry around compromising game code integrity and releasing games for free, and, within those games, distributing malicious software to breach private user details and exploit them for the hacker's gain. These have become less common in recent years due to awareness of online data hygiene, but the risks do remain.

In July, NintendoLife highlighted one particularly notorious hack of the Legend of Zelda series that was sold, unlawfully, and earned the creator over $87,000 in revenue. This exploit showed a common route towards tricking customers – deception. Zelda has a notably strong community where fans help each other out, both in learning the game and defending against common exploits; this is why the malicious actor in question was discovered, and why no further harm was done, but it remains a risk. Awareness is often key in avoiding attempted cyber attacks.

Web services to apps

Video games have become increasingly merged with web services and this, too, is raising the risk of attack. According to CISO mag, a majority of the attacks targeting video game services were conducted via SQL injection, a popular form of web service attack that attempts to breach databases. This, in turn, can result in the extraction of private customer details and financial information.

Games have previously sought to use their own platforms for registration and payments. However, in recent years, and especially with the growth of gaming platforms – such as Battle.net, Steam and EA Origin – user account details are made more vulnerable through their hosting via web services. This is a worrying development when considering the ultimate interface of video gaming, web services, and virtual reality – the up-and-coming Metaverse.

The Metaverse

The Metaverse is a descriptor for an interlinked series of digital worlds that will come together into one VR-powered reality. Pioneered most recently by Mark Zuckerberg and his Meta company, it is considered the future of communication and casual video gaming. According to Hacker Noon, the Metaverse is at unique risk of being subjected to serious cyber attacks.

The Metaverse is unique in that it will require digital currencies to operate. It is envisioned as a world within a world – not simply a service you pay for and then access, but an area where you will actively live and play. That means persistent financial data and constant access to privileged private information. Furthermore, individuals play themselves in the Metaverse, not a created character. One successful attack could claim a significant amount of data from any single user of the Metaverse, making it the ideal target for a new generation of cyber attacks.

In short, the protections that will come up for the Metaverse need to be absolutely world-class. Collaboration is required, and a strong culture of individual diligence and digital hygiene, too. Putting these principles in place today will help to protect the Metaverse before it really gets big, and protect video gamers too.

JavaScript supply chain issues

JavaScript code, used in 98% of all global websites, is a notable contributor to the ongoing software supply chain attack problems. In fact, vulnerable or malicious JavaScript is likely responsible for a sizable portion of the increase in attacks during 2021. With much of the JavaScript code that drives websites originating from open-source libraries, organizations need to understand how open source contributes to the JavaScript supply chain issues, and what they need to do to protect their business and their customers.

What’s going on with the supply chain?

Often called one of the most insidious and dangerous forms of hacking, software supply chain attacks can devastate businesses. In addition to the immediate effects of an attack, such as operational delays, system infiltration, and the theft of sensitive credentials or customer data, the long-term consequences can be significant. Regulatory fines, compliance concerns, reputation damage, attacks on connected businesses, and lost customers are often the consequences of a supply chain attack.

Software supply chain attacks currently dominate the headlines, with recent industry research reporting a 300% increase in 2021. The research found that threat actors tend to focus on open source vulnerabilities and code integrity issues to deliver attacks. In addition, researchers discovered a low level of security across software development environments, with every company examined having vulnerabilities and misconfigurations.

Another industry report discovered a 650% increase over the course of one year in supply chain attacks aimed at upstream public repositories, the objective being to implant malware directly into open source projects to infiltrate the commercial supply chain.

A common thread in all of this is JavaScript—an inherently insecure coding language and one that is often found in open source projects.

Open-source JavaScript: A focal point for attack

Why the software supply chain has become a focal point for threat actors is a question on quite a few minds. The answer comes down to what is easy and most profitable. Software is imperfect. Vulnerabilities and flaws make it ripe for attack. And a connected supply chain ensures a large attack surface. Add to this the prevalent use of the JavaScript programming language and open-source JavaScript libraries, which often contain flawed, and sometimes malicious, third- or fourth-party code, and businesses become ground zero for JavaScript supply chain attacks.

JavaScript serves as one of the core technologies used to build web applications and websites. As mentioned previously, over 98% of websites use it for client-side web page behavioral elements. Additionally, 80% of websites use open-source or a third-party JavaScript library as part of their web application. The Stack Overflow 2021 Developer Survey found that JavaScript remains the most popular programming language with 65% of professional developers using it. Unfortunately, JavaScript wasn’t built with security in mind, making it extremely vulnerable to attack. JavaScript allows threat actors to deliver malicious scripts to run on both the front end and the back end of a system, affecting businesses, employees, and customers alike.

In addition, because there is little to no oversight in open-source libraries, vulnerabilities and malicious scripts can often lie unnoticed for months or even years.

The security firm WhiteSource recently highlighted the problems with open-source libraries and JavaScript, identifying more than 1,300 malicious packages in the most commonly downloaded JavaScript package repository.

What do JavaScript supply chain attacks look like?

As we mentioned earlier, JavaScript was not built with security in mind. Since there are no security permissions built into the JS framework, it is difficult to keep JavaScript code safe from attack. The most common JavaScript security vulnerabilities include:

  • Source code vulnerabilities
  • Input validation (see the sketch after this list)
  • Reliance on client-side validation
  • Unintended script execution
  • Session data exposure
  • Unintentional user activity
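As a small illustration of the validation items above, here is a TypeScript sketch of server-side allow-list validation; the rules are hypothetical. Client-side checks improve user experience, but only server-side re-validation stops a malicious client that bypasses the front end entirely.

```typescript
// validate-input.ts - never trust request data; re-validate on the server.
function validateUsername(input: unknown): string {
  if (typeof input !== "string") throw new Error("username must be a string");
  // Allow-list validation: letters, digits, underscore, 3-20 characters.
  if (!/^[A-Za-z0-9_]{3,20}$/.test(input)) {
    throw new Error("username contains invalid characters");
  }
  return input;
}

console.log(validateUsername("alice_01")); // accepted
try {
  validateUsername("alice'; DROP TABLE users;--"); // rejected server-side
} catch (e) {
  console.log((e as Error).message);
}
```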

JavaScript supply chain attacks can take on one of three different forms: attacking the developer directly; embedding the attack on the back end; or embedding the attack on the front end.

Let’s play attack the developer!

The recent news of malicious activity found in npm, the most popular JavaScript package manager, made international headlines. These JavaScript package managers are designed to help developers automate any dependencies associated with a development project. Package managers enable automatic installation of new packages or an update to existing packages with a single command. They’re hugely popular, with new package managers released often, making them difficult to monitor. Since npm is open source, anyone can submit JavaScript packages, even bad actors who intentionally include malicious scripts in their packages.

The thing about package managers is that they install files directly on the developer’s machine, which means threat actors can get almost instant access to a developer’s device and possibly the entire system and network. According to WhiteSource, the organization that discovered the malicious npm packages, much of the malicious activity involved embedding malicious files on developer machines to engage in the ‘reconnaissance’ phase of an attack (based on the MITRE ATT&CK Framework), that is, active or passive information gathering. Researchers also discovered that 14% of the packages were designed to steal sensitive information, such as credentials.

Researchers found that threat actors designed these malicious packages to make them look legitimate, by using the names of well-known and popular npm packages, and then emulating source code. Malicious files observed in the study included obfuscated JavaScript and binaries hidden as JavaScript components.

The researchers in the WhiteSource study also found that in some instances, once the developers downloaded the files, one of the binaries would download a third-party file designed to interact with the Windows Registry to collect system and configuration information. The malicious file would also try to establish a connection with a remote host to enable remote code execution (RCE). The fake JavaScript files also included examples of Cobalt Strike, a tool used by red teams and penetration testers to facilitate an attack. According to researchers, the ultimate goal of this malicious software appeared to be cryptocurrency mining directly on the developer machine.

While threat actors did not appear to target specific industries or companies, they did design malicious packages to target certain systems. Researchers also discovered malicious JavaScript packages that used typosquatting, dependency confusion, code obfuscation, and other types of attack techniques.

Back-end JavaScript threats

JavaScript frameworks commonly used for server-side development, or the back end of a web application, are also highly susceptible to attack. As we mentioned earlier, JavaScript wasn’t built with security in mind, making it an easy target. Any vulnerability in back-end JavaScript source code means threat actors have an easy way to infiltrate systems, install malicious files, and execute attacks. Risks include security misconfigurations, which can enable access to the back end; insecure back-end application design; vulnerable or outdated components; and server-side request forgery (SSRF). Examples of threats include SQL injection for back-end database manipulation, SSRF, and cross-site scripting (XSS).
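The standard mitigation for SQL injection on a JavaScript back end is parameterized queries. The sketch below assumes the node-postgres ("pg") client and a connected client instance; the table and column names are hypothetical.

```typescript
// safe-query.ts - parameterized queries keep user input out of the SQL text.
import { Client } from "pg";

async function findUser(client: Client, email: string) {
  // The driver sends the value separately from the statement, so the input
  // is never interpreted as SQL, even if it contains quotes or keywords.
  const result = await client.query(
    "SELECT id, email FROM users WHERE email = $1",
    [email]
  );
  return result.rows;
}
```

The same pattern, placeholders in the statement plus a bound value array, applies to most JavaScript database clients.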

Front-end JavaScript threats

The front end, or “client side,” is code executed within the browser, which often lacks the rigorous security controls that protect the back end of web applications. This makes vulnerable code on the client side even more dangerous, given the limited visibility and control most organizations have in place today. Since many web applications are written in JavaScript and operate on the client side, web users—such as bank or e-commerce customers—become vulnerable to attacks like Magecart, e-skimming, side loading, cross-site scripting (XSS), and formjacking.

Protect the complete attack surface with these three key steps

To fully manage the risk against breaches and attacks, companies must simultaneously protect both the client side and back end of their application portfolios. This includes anything accessed by the developer, anything written in JavaScript on the back end, and web assets that the customer can see (e.g., text and images) and interact with.

One of the most important actions any business can take is protecting their customers from JavaScript threats. Unfortunately, because of the sophisticated and subtle nature of these attacks, they can be hard to detect until it’s too late. To ensure that businesses are offering a safe and secure digital experience, they must be diligent about securing their website and web applications from dangerous client-side JavaScript attacks.

To protect the client-side attack surface, businesses should apply these three best practices:

  • Review third-party code during software development: Third-party JavaScript is a great way to avoid the time and money associated with developing your own code, but third-party scripts can also contain vulnerabilities or intentional malicious content. Always inspect third- and fourth-party additions for vulnerabilities.
  • Perform automated client-side attack surface monitoring: Inspection activities are critical, but also time consuming if you don’t have an automated solution to review JavaScript code. A purpose-built solution that automates the process, like Feroot’s Inspector offered in the AT&T Managed Vulnerability Program’s Client-side Security Solution, can be a fast and easy way to identify malicious script activity.
  • Identify software supply chain risks: Assess and know what third-party code is being used across your web application’s client side.

Improve your JavaScript supply chain security

JavaScript carries risk. The only way to keep your business and your customers from becoming victims of a JavaScript attack is to apply JavaScript security best practices to your website and web application development process.

The services offered by AT&T’s Managed Vulnerability Program (MVP) allow the MVP team to inspect and monitor customer web applications for malicious JavaScript code that could jeopardize customer and organization security.

AT&T is helping customers strengthen their cybersecurity posture and increase their cyber resiliency by enabling organizations to align cyber risks to business goals, meet compliance and regulatory demands, achieve business outcomes, and be prepared to protect an ever-evolving IT ecosystem.

To learn more about JavaScript security, check out Feroot’s comprehensive Education Center to read more on terminologies, technologies, and threats.

The post JavaScript supply chain issues appeared first on Cybersecurity Insiders.

This blog was written jointly with Eduardo Ocete.

Executive summary

Several vulnerabilities in the Java Spring framework have been disclosed in the last few hours and compared to the vulnerability that caused the Log4Shell incident at the end of 2021. However, as of the publishing of this report, the still ongoing disclosures and events around these vulnerabilities suggest they are not as severe as their predecessor.

Key takeaways:

  • A vulnerability in Spring Cloud Function (CVE-2022-22963) allows adversaries to perform remote code execution (RCE) with only an HTTP request, and it affects the majority of unpatched systems. Spring Cloud Function is a project that provides developers with cloud-agnostic tools for microservice-based architectures, cloud-native development, and more.
  • A vulnerability in Spring Core (CVE-2022-22965) also allows adversaries to perform RCE with a single HTTP request. For the leaked proof of concept (PoC) to work, the application must run on Tomcat as a WAR deployment, a configuration that is not the default and that lowers the number of vulnerable systems. However, the nature of the vulnerability is more general, so there could be other exploitable scenarios.

In accordance with the Cybersecurity Information Sharing Act of 2015, AT&T is sharing the cyber threat indicator information provided herein exclusively for a cybersecurity purpose to combat cybersecurity threats.

Analysis

At the end of March 2022, several members of the cybersecurity community began spreading news about a potential new vulnerability in Java Spring systems: one that is easily exploitable, affects millions of systems, and could set off a new Log4Shell-scale incident.

First, it is important to clarify that the comparisons made so far appear to aim at sensationalism and spreading panic rather than at providing actionable information. Additionally, two similar vulnerabilities in the Spring framework were disclosed around the same time, adding confusion to the mix. What the AT&T Alien Labs™ threat intelligence team has observed as of the publishing of this article is included below.

Spring Cloud Function (CVE-2022-22963)

A vulnerability in Spring Cloud Function, identified as CVE-2022-22963, can lead to remote code execution (RCE). The following Spring Cloud Function versions are impacted:

  • 3.1.6
  • 3.2.2
  • Older unsupported versions are also affected

In addition to running a vulnerable version, the application must use JDK 9 or later to be vulnerable.

The vulnerability is triggered when the routing functionality is used. By providing a specially crafted Spring Expression Language (SpEL) payload as a routing expression, an attacker can access local resources and execute commands on the host. Specifically, an HTTP request header named spring.cloud.function.routing-expression carrying a SpEL expression is evaluated through a StandardEvaluationContext, leading to arbitrary RCE.
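To illustrate why this matters, the minimal sketch below (a standalone illustration, not the actual Spring Cloud Function code path) evaluates an attacker-style expression through a StandardEvaluationContext, the same kind of unrestricted evaluation the vulnerable routing functionality performed on header input:

import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

public class SpelRceSketch {
    public static void main(String[] args) {
        // Stand-in for attacker-controlled input, e.g. the value of the
        // spring.cloud.function.routing-expression HTTP header.
        String routingExpression = "T(java.lang.Runtime).getRuntime().exec('id')";

        ExpressionParser parser = new SpelExpressionParser();
        Expression expression = parser.parseExpression(routingExpression);

        // A StandardEvaluationContext places no restrictions on type references,
        // so the T(...) operator can reach java.lang.Runtime and spawn a process.
        expression.getValue(new StandardEvaluationContext());
    }
}

The patched releases reportedly stop evaluating header-supplied routing expressions in this permissive context, which is why updating closes the hole.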

Figure 1. Exploitation attempt.

The vulnerability has been assigned a CVSS score of 9.0, which denotes critical severity. Exploitation of the vulnerability may lead to a total compromise of the host or the container, so patching is highly advised. To mitigate the vulnerability, developers should update Spring Cloud Function to the newest versions, 3.1.7 and 3.2.3, where the issue has already been patched.

AT&T Alien Labs has identified several exploitation attempts, which we believe come from researchers trying to determine how prevalent the vulnerability actually is, since the attempts carried canary tokens as their only payload. Nevertheless, the team will continue to closely monitor the activity as new scanning appears.

Spring Core (CVE-2022-22965)

A vulnerability in Spring Core was tweeted by one of the researchers who first disclosed the Log4Shell vulnerability; the researcher then rapidly deleted the tweet. The vulnerability was originally published without an associated CVE and is being publicly referred to as “Spring4Shell.” One of the first observed proofs of concept (PoC) was shared by vx-underground on March 30, 2022. It works against Spring’s sample code “Handling Form Submission.” The PoC consists of a single POST request carrying in its payload a JSP webshell that is dropped on the vulnerable system.

Figure 2. Exploitation attempt following the PoC.

Spring has confirmed the vulnerability and has stated that the leak occurred ahead of the CVE publication. The vulnerability has been assigned CVE-2022-22965. As per Spring:

“…The vulnerability impacts Spring MVC and Spring WebFlux applications running on JDK 9+. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.”

From the statement above, the specific scenario for the leaked PoC to work would have to match the following conditions:

  • JDK >=9
  • Apache Tomcat as the Servlet container
  • Packaged as WAR
  • spring-webmvc or spring-webflux dependency

However, the scope of the vulnerability is wider, and there could be other exploitable scenarios.
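To make these preconditions more concrete, the minimal sketch below (built around a hypothetical form-backing bean; it is an illustration, not the exploit) shows the getter chain Spring’s data binder can be made to walk, and why JDK 9 is the cut-off:

public class BindingPathSketch {

    // A hypothetical, ordinary form-backing bean of the kind used in Spring's
    // "Handling Form Submission" sample that the PoC targets.
    static class Greeting {
        private String content;
        public String getContent() { return content; }
        public void setContent(String content) { this.content = content; }
    }

    public static void main(String[] args) {
        Object bean = new Greeting();

        // Every object exposes getClass(); since JDK 9, Class exposes getModule()
        // and Module exposes getClassLoader(). Spring MVC's data binder resolves
        // nested request parameter names through getter chains like this one, so
        // a crafted parameter name such as
        //   class.module.classLoader.resources.context.parent.pipeline.first.pattern
        // can reach Tomcat's AccessLogValve and rewrite its log pattern, the step
        // the leaked PoC uses to write a JSP webshell to disk.
        ClassLoader loader = bean.getClass().getModule().getClassLoader();
        System.out.println(loader);
    }
}

This chain is also why the body-content match in the Suricata signatures in Appendix A keys on the class.module.classLoader prefix.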

Spring has released new versions of the Spring Framework addressing the vulnerability, so updating to versions 5.3.18 and 5.2.20 (already available in Maven Central) should be a priority in order to mitigate the RCE. The new versions of Spring Boot with the patch for CVE-2022-22965 are still under development.

As an alternative mitigation, the suggested workaround is to extend RequestMappingHandlerAdapter to update the WebDataBinder at the end, after all other initialization. To do so, a Spring Boot application can declare a WebMvcRegistrations bean (Spring MVC) or a WebFluxRegistrations bean (Spring WebFlux). The “Suggested Workarounds” section of the Spring statement contains an implementation example, and a condensed sketch is shown below.
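The following Spring MVC sketch mirrors the approach described in Spring’s statement; class names such as HardenedRequestMappingHandlerAdapter are illustrative, and the official example should be preferred verbatim:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.springframework.boot.autoconfigure.web.servlet.WebMvcRegistrations;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.ServletRequestDataBinder;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.method.support.InvocableHandlerMethod;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;
import org.springframework.web.servlet.mvc.method.annotation.ServletRequestDataBinderFactory;

@Configuration
public class BinderWorkaroundConfig {

    @Bean
    public WebMvcRegistrations mvcRegistrations() {
        // Replace the auto-configured adapter with the hardened one below.
        return new WebMvcRegistrations() {
            @Override
            public RequestMappingHandlerAdapter getRequestMappingHandlerAdapter() {
                return new HardenedRequestMappingHandlerAdapter();
            }
        };
    }

    // Extends the adapter so every WebDataBinder it creates rejects
    // "class"-traversal field names, closing the binding path the PoC abuses.
    static class HardenedRequestMappingHandlerAdapter extends RequestMappingHandlerAdapter {
        @Override
        protected ServletRequestDataBinderFactory createDataBinderFactory(
                List<InvocableHandlerMethod> binderMethods) {
            return new ServletRequestDataBinderFactory(binderMethods, getWebBindingInitializer()) {
                @Override
                protected ServletRequestDataBinder createBinderInstance(
                        Object target, String objectName, NativeWebRequest request) throws Exception {
                    ServletRequestDataBinder binder =
                            super.createBinderInstance(target, objectName, request);
                    // Merge the deny patterns with any disallowed fields already set.
                    String[] existing = binder.getDisallowedFields();
                    List<String> fields = new ArrayList<>(
                            existing != null ? Arrays.asList(existing) : Collections.emptyList());
                    fields.addAll(Arrays.asList("class.*", "Class.*", "*.class.*", "*.Class.*"));
                    binder.setDisallowedFields(fields.toArray(new String[0]));
                    return binder;
                }
            };
        }
    }
}

Both upper- and lower-case patterns are included because disallowed-field matching is case sensitive and must cover variations of the class property.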

According to a publication by Peking University, this vulnerability has been observed being exploited in the wild. However, AT&T Alien Labs has not identified heavy scanning activity for this vulnerability on our honeypots, nor any exploitation attempts.

Finally, and just to provide a graphical representation of these vulnerabilities, below is a diagram shared by a CTI researcher from Sophos.

Figure 3. Java Spring vulnerability diagram.

Conclusion

Log4Shell was very impactful at the end of 2021 because of the number of exposed vulnerable devices and the ease of its exploitation. These recently disclosed Java Spring vulnerabilities remind the cyber community of lessons learned during the Log4Shell incident, and they have received a quick response from the entire cybersecurity community, which is collaborating and sharing available information as soon as possible.

Alien Labs will keep monitoring the situation and will update the corresponding OTX Pulses to keep our customers protected.

Appendix A. Detection methods

The following associated detection methods are in use by Alien Labs. Readers can use them to tune or deploy detections in their own environments or to aid additional research.

SURICATA IDS SIGNATURES

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV EXPLOIT Spring Cloud RCE (CVE-2022-22963)"; flow:established,to_server; content:"POST"; http_method; content:"spring.cloud.function.routing-expression"; http_header; pcre:"/(getRuntime|getByName|InetAddress|exec)/HR"; reference:url,sysdig.com/blog/cve-2022-22963-spring-cloud; classtype:attempted-admin; sid:4002725; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV INFO Spring Core RCE Scanning Activity (March 2022)"; flow:established,to_server; content:"POST"; http_method; content:"class.module.classLoader.resources.context.parent.pipeline.first.pattern"; http_client_body; startswith; reference:url,github.com/TheGejr/SpringShell; classtype:attempted-admin; sid:4002726; rev:1;)


AGENT SIGNATURES

  • Java Process Spawning Scripting Process
  • Java Process Spawning WMIC
  • Java Process Spawning Scripting Process via Commandline (for Jenkins servers)
  • Suspicious process executed by Jenkins Groovy scripts (for Jenkins servers)
  • Suspicious command executed by a Java listening process (for Linux servers)

Appendix C. Mapped to MITRE ATT&CK

The findings of this report are mapped to the following MITRE ATT&CK Matrix techniques:

  • TA0001: Initial Access
    • T1190: Exploit Public-Facing Application

Appendix D. Reporting context

The following source was used by the report author(s) during the collection and analysis process associated with this intelligence report.

1. AT&T Alien Labs Intelligence and Telemetry

Alien Labs rates sources based on the Intelligence source and information reliability rating system to assess the reliability of the source and the assessed level of confidence we place on the information distributed. The following chart contains the range of possibilities, and the selection applied to this report is A1.

Source reliability

  • A – Reliable: No doubt about the source's authenticity, trustworthiness, or competency. History of complete reliability.
  • B – Usually Reliable: Minor doubts. History of mostly valid information.
  • C – Fairly Reliable: Doubts. Provided valid information in the past.
  • D – Not Usually Reliable: Significant doubts. Provided valid information in the past.
  • E – Unreliable: Lacks authenticity, trustworthiness, and competency. History of invalid information.
  • F – Reliability Unknown: Insufficient information to evaluate reliability. May or may not be reliable.

Information reliability

  • 1 – Confirmed: Logical, consistent with other relevant information, confirmed by independent sources.
  • 2 – Probably True: Logical, consistent with other relevant information, not confirmed.
  • 3 – Possibly True: Reasonably logical, agrees with some relevant information, not confirmed.
  • 4 – Doubtfully True: Not logical but possible, no other information on the subject, not confirmed.
  • 5 – Improbable: Not logical, contradicted by other relevant information.
  • 6 – Cannot Be Judged: The validity of the information cannot be determined.

Feedback

AT&T Alien Labs welcomes feedback about the reported intelligence and delivery process. Please contact the Alien Labs report author or email labs@alienvault.com.

The post Java Spring vulnerabilities appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

When assessing the corporate governance of modern companies, one cannot help but note the obvious problems with information security. Solving these problems requires initiatives that, on the one hand, are complex, multifaceted, and nonobvious and, on the other, involve all employees of the company, including the heads of key departments.

Information security is impossible without help from within the organization

Let us analyze the roles and possible points of interaction of several different management positions (skipping the CISO) responsible for operational resiliency, secure infrastructure, proper resource allocation, reputational risks, incident response, and other aspects of information security.

Chief Executive Officer (CEO)

The company’s management ensures the creation and maintenance of an internal environment that allows employees to participate fully in achieving strategic goals. Information security starts with the CEO and cascades down through all staff. The CEO is responsible for creating a strong culture of safe behavior and must personally set an example of the correct attitude towards information security requirements. This attitude from the company leader will stimulate communication between departments, allowing them to fight ransomware and other serious threats more effectively.

Companies today need leaders who combine a high level of technology awareness with an open mind. These leaders must create an open environment that encourages sharing information not only about successes but also about any negative developments. Creating an atmosphere of transparency is an important task for top management when developing a ransomware protection strategy.

Chief Human Resources/People Officer (CHRO/CPO)

Information security largely depends on the organizational structure and corporate culture of the company, and the HR leader plays one of the key roles in ensuring it.

How is this expressed? First of all, such a leader must take responsibility for all employees hired by the company. These days, many information security incidents happen due to malicious insiders or employee incompetence. Understanding the day-to-day interests and motivations of employees is an important part of the work of the HR department.

Organizations can treat their employees on a “hire and fire” basis, but in that case they should not expect high personnel loyalty or a good reputation in the labor market. Managing the recruitment and departure of employees while accounting for emerging risks, such as data breaches, is one of the HR leader’s most important contributions to company security.

Another significant HR contribution is the application of advanced information security training programs.

The role of the HR department is also crucial in ensuring the ethics of security measures adopted by the company and in aligning these measures with the tasks and goals of employees. Effective corporate governance cannot rely on employees who are forced to act against their own interests and habits. Monitoring employee actions often raises questions about trust in the staff. The HR director is best placed to understand the ethical underpinnings of these issues and can advise the CEO and the information security department on whether adopted security policies will be effective and whether they align with the corporate culture.

Chief Information Officer (CIO)

It is essential for the CIO that information security increases the stability and reliability of IT systems, affecting the operational resiliency of all business processes.

On the technical side, the company’s top managers are primarily concerned about outages of IT systems or employee dissatisfaction with the use of IT infrastructure.

Throughout a company’s life cycle, it often happens that the information security team comes in and leaves after a short time, while the IT team remains for a long time. This is a consequence of the business’s strategic priorities, which were formed around the development and implementation of IT technologies. Indeed, a mature company may have lived with its IT service for some 40 years and is used to following and trusting everything IT people say.

The business has been familiar with information security for the last 10-15 years at best. And it is the information security team that informs the CEO about all the problems of the IT team, such as employees’ bad password habits, careless link clicking, technical accounts left in Active Directory, lax update management, and so on. For instance, employees might be advised to use VPN services for security reasons whenever they work remotely.

Although the IT team is formally on the side of information security, in the real world there is often misunderstanding, rivalry, and explicit or hidden resistance from IT engineers (the “IT gurus”) who are accustomed to setting their own rules. The CIO should help IT employees realize the importance of information security for the company’s sustainability.

Chief Risk Officer (CRO)

Continuous development and improvement should be constant strategic objectives of any company. Identifying risks in the context of business priorities is one of the company’s key information security goals. Therefore, the CRO’s participation in ensuring the company’s information security follows directly from the role’s duties.

Risk prioritization is not a technical task; it is a matter of managing the company. The Chief Risk Officer should play an important role in developing the information security program and overseeing how identified risks are documented and addressed.

At the same time, tech people need to get rid of the illusion that only they are able to understand information security risks. IT and security departments should share more information about various infosec subtleties so that company executives and risk management staff understand them better.

Chief Audit Executive (CAE)

The activities of the internal audit department are vital both for the information security and IT services and for the company’s executives. For information security and IT services, internal audit provides a third-party view of cybersecurity problems, focused on the most critical areas of the company’s business. For top managers, it saves significant time and eliminates routine supervision procedures.

There are, however, some pitfalls in the way the internal audit department works. For this unit, complying with information security requirements may be less of a priority than complying with industry standards and regulations. Top managers should not think that compliance with standards will protect the company from all trouble. It is important here not to neglect other preventive measures proposed by all company stakeholders.

Chief Legal Officer (CLO)

If the specialists of the legal department are well versed in legislation related to the protection of personal data, understand the basics of technology, and know reliable legal practices in the field of information security compliance, the company likely has deep legal expertise in security technologies.

Legal specialists play a key role in determining the company’s policy on exchanging information with government agencies, and they participate in court proceedings. The legal department also plays a significant information security role when responding to data breaches.

Chief Security Officer (CSO)

In modern companies, physical security is usually outsourced, and the security department primarily deals with internal, strategic, operational, financial, and reputational risks. When incidents are investigated, the security service traditionally comes to the fore: the information security team provides evidence such as logs and emails, and the security department brings the investigation to its logical conclusion.

Conclusion

The business divisions and leaders mentioned above often look at information security issues differently. Still, under the strong leadership of the CEO, they can come to a mutual understanding of emerging problems and effectively determine the cybersecurity strategy.

One of the key conditions for a large number of participants to cooperate successfully is recognizing the role each group should play in the company. Top managers play the leading roles in these processes: they have the authority to determine what is vital to the company and what is not.

There are peculiarities and differences in how each department helps ensure the company’s strong cybersecurity posture, but there is one area where all efforts converge: cybersecurity incident response. Developing and implementing sound, consistent incident response plans is a formidable task that is absolutely essential to a company’s success in dealing with negative events, and it is a multidisciplinary project in which each of the key leaders must play a role.

The solution to many information security problems is impossible without a compromise between the participants. Top managers are not used to acting on someone else’s orders, and rules introduced by a newly arrived technology leader (the CISO) often limit their freedom and wound their pride. Today’s business leaders should understand hidden technological risks and rely on a wide range of opinions within the company when developing a security strategy.

The post Corporate structure and roles in InfoSec appeared first on Cybersecurity Insiders.