This blog was written by an independent guest blogger.

Amidst sweeping digital transformation across the globe, numerous organizations have been seeking a better way to manage data. Still in the beginning stages of adoption, data fabric provides many possibilities as an integrated layer that unifies data from across endpoints. 

A combination of factors has created a digital environment where data is stored in several places at once, leaving cracks in security for fraudsters to take advantage of. Cybercrime has reached historic highs, with vulnerabilities that affect crucial industries such as healthcare, eCommerce, and manufacturing. 

Data fabric works like a needle and thread, stitching business resources together into an interconnected data system that feeds a single, unified layer. When each application is connected to every other part of your system, data silos are broken down, allowing for complete visibility whether your data lives in the cloud or in a hybrid environment. 

What is data fabric? How does it work? And how does data fabric impact cybersecurity? Let’s dive in. 

What is data fabric?

Data fabric is a modern data design concept to better integrate and connect processes across various resources and endpoints. Data fabric can continuously analyze assets to support the design and deployment of reusable data across all environments. 

By utilizing both human and machine capabilities, data fabric identifies and connects disparate data. This supports faster decision-making, re-engineering optimization, and enhanced data management practices.

You could think of data fabric as a passive data observer that acts only when it encounters assets that need to be managed. Depending on its specific implementation, a data fabric can automatically govern data and make suggestions for data alternatives. Humans and machines work together to unify data and improve overall efficiency. 

How does it work?

Data fabric architecture provides strategic security and business advantages for companies and organizations. To better understand how data fabric works, let’s go over the six different layers of data fabric:

  • Data management — This layer is responsible for data security and governance. 
  • Data ingestion — This layer finds connections between structured and unstructured data. 
  • Data processing — This layer refines data for accurate and relevant extraction.
  • Data orchestration — This layer makes data usable for teams by transforming, integrating, and cleansing the data. 
  • Data discovery — This layer precipitates new opportunities to integrate and develop insights from disparate data sources. 
  • Data access — Finally, this layer ensures that permissions and compliance conditions are met and allows access through virtual dashboards. 

This integrative and layered approach to data management helps protect organizations against the most prevalent attack types such as client-side, supply chain, business app, and even automated attacks. 

Who can benefit from data fabrics?

Because data fabric use cases are still developing, there are potentially many undiscovered instances where data fabric can provide a security advantage for organizations. The possibilities are broad, given data fabric's ability to eliminate silos and integrate data across various sources: it can be implemented for anything from identity theft prevention to performance improvement.

Here are just a few specific use cases for data fabric architecture:

  • Customer profiles
  • Preventative maintenance
  • Business analysis
  • Risk models
  • Fraud detection

Advantages of data fabric architectures

Even in its early stages, data fabric has been shown to significantly improve efficiency, from workflows to product life cycles. In addition to increasing business productivity, here are some other examples of how adopters can benefit from a data fabric architecture:

  1. Intelligent data integration

    Data fabric architectures use AI-powered tools to unify data across numerous endpoints and data types. With the help of metadata management, knowledge graphs, and machine learning, data fabric makes data management easier than ever before. By automating data workloads, a data fabric architecture not only improves efficiency but also eliminates siloed data, centralizes data governance, and improves the quality of your business data. 

  2. Better data accessibility

    The centralized nature of data fabric systems makes accessing data from various endpoints fast and simple. Data bottlenecks are reduced since data permissions can be controlled from a central location regardless of users' physical locations, and data access can easily be granted when necessary for use by engineers, developers, and analysts. Data fabric enables workers to make business decisions faster and allows teams to prioritize tasks from a holistic business perspective. 

  3. Improved data protection

    Possibly the most crucial aspect of implementing data fabric is that it improves your data security posture. You get the best of both worlds: broad data access and improved data privacy. With more data governance and security guardrails in place through a unified data fabric, technical and data security teams can streamline encryption and data masking procedures while still having the ability to access data based on user permissions. 

Data fabric and cybersecurity

As part of a robust cybersecurity ecosystem, data fabric acts as the foundation upon which the entirety of your business data sits. When used correctly, data fabric makes business processes more efficient and improves data protection with the right defensive strategies built in. 

Because data fabric acts as a single source for all business data, many wonder about its cybersecurity implications. Most open-source security vulnerabilities have a validated fix that must be patched, but many attackers take advantage of these entry points before organizations have time to update their software, which makes timely patching all the more important for a centralized fabric. 

Organizations using data fabric can also benefit from cybersecurity mesh to combine automation with a strategic security approach. The mesh relies on the organizational structure to define data security needs so that the data fabric can more efficiently align with those needs. 

Gartner predicts that organizations that adopt a data fabric and cybersecurity mesh architecture will reduce the financial impact of data breaches by 90% by 2024. No other cybersecurity posture comes close to the security implications of data fabric across business applications. 

Data fabric is also essential to cybersecurity infrastructure because it requires that teams adopt a security-by-design outlook. With centralized data fabric built into your environment, organizations can greatly reduce their vulnerabilities and attack vectors from the inside out. 

Putting it all together

Data fabric provides organizations with a way to integrate data sources across platforms, users, and locations so that business data is available to those that need it when it is needed. While this does reduce data management issues, it raises important cybersecurity questions related to its centralized nature. 

However, data fabric and cybersecurity mesh work together to build integrated security controls that include encryption, compliance, virtual perimeter, and even real-time automatic vulnerability mitigation.

Now, stand-alone security solutions protecting numerous data sources can work together to improve security efforts overall. Data fabric is an essential aspect of a business-driven cyber strategy, especially for industries utilizing hybrid cloud setups, businesses struggling with disparate data, and organizations facing an evolving cybersecurity landscape.

The post What is data fabric and how does it impact Cybersecurity? appeared first on Cybersecurity Insiders.


This blog was written by an independent guest blogger.

As eCommerce grows, so do issues concerning payments and security. Customers still don't enjoy a smooth user experience, transactions are not free of fraud, and many payments are still declined.

Online shopping still lacks a seamless experience due to the risks of storing and handling sensitive account data.

The payment system relies on basic details like three-digit CVV2 security codes, expiration dates, and primary account numbers. If these details are compromised, a lot of things can go wrong. The industry is adopting a technology called "tokenization" to deal with these issues. 

Today, we will discuss this technology and help you understand how it can help.

What is tokenization?

Tokenization might sound like something complex, but the basic principle behind it is simple. It’s a process of replacing sensitive pieces of data with tokens. These tokens are random data strings that don’t hold any meaning or value to third parties.

These tokens are unique identifiers that may retain a portion of the essential data (such as the last four digits of a card number) while protecting its security. The original data is linked to the new tokens, but the tokens give away no information that would let anyone reveal, trace, or decipher that data.


The data itself is stored outside the internal system used by the business. Tokens on their own are irreversible: if they are exposed, they cannot be returned to their original form without the tokenization system.

Since the data is moved elsewhere, it’s almost impossible for someone to compromise this data.

How tokenization works

Tokenization has a wide range of applications. In eCommerce, payment processing is one of the most popular areas of tokenization, and companies use tokens to replace account or card numbers, most commonly the primary account number (PAN) associated with a credit card.

The PAN is replaced with a random placeholder token, and the original sensitive data is stored externally. When the original data is needed to complete a transaction, it can be exchanged for the token and then transmitted to payment gateways, processors, and other endpoints using various network systems.
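
To make that flow concrete, below is a minimal Python sketch of a vault-style tokenizer. Everything here is hypothetical illustration: the class, method names, and in-memory dictionary stand in for a hardened, access-controlled vault service run outside the merchant's systems.

```python
import secrets

class TokenVault:
    """Toy token vault mapping random tokens back to PANs.
    A real vault is an encrypted, audited, PCI-compliant service."""

    def __init__(self):
        self._vault = {}  # token -> original PAN, held outside the app

    def tokenize(self, pan: str) -> str:
        # The token is random, with no mathematical link to the PAN.
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token  # safe to store and pass around internally

    def detokenize(self, token: str) -> str:
        # Only the vault can map a token back to the original PAN.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # e.g. 'f3a9c2...' -- meaningless if stolen
print(vault.detokenize(token))  # original PAN, recoverable only via the vault
```

Note how the application only ever stores the token; a breach of the application's own database exposes nothing a fraudster can charge against.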

Example of tokenization

TokenEx is a typical tokenization platform used for eCommerce payments. The platform first intercepts the sensitive data from whichever channel it is being collected through (mobile, desktop, PIN pad, etc.). This data is tokenized and stored securely, and the token is returned to the client for internal use. In the end, the sensitive data is detokenized and sent to payment-processing providers for executing and verifying transactions.

In the image below you can see how data travels on the TokenEx platform.

  1. First, you have the channels through which the data is coming (“Secure Data Collection”).
  2. In the bottom-middle section, you have the TokenEx platform, where data is tokenized and stored ("Secure Data Storage") before being returned to a client environment in the top-middle section ("Compliance Safe Harbor") for safe, compliant internal use.
  3. And then finally, on the right, you have the data being sent to a third party for processing (“Secure Data Transmission”), likely a payment service provider to authorize a digital transaction.

This combination of security and flexibility enables customers to positively impact revenue by improving payment acceptance rates, reducing latency, and minimizing their PCI footprint.

Image: How tokenization works on the TokenEx platform (source: TokenEx)

Types of tokenization

Tokenization is becoming popular in many different industries and not just eCommerce. Payments are just one of the uses of tokenization, and there are many more applications out there. Not all tokenization processes are the same, as they have different setups depending on the application.

Tokenization outside of the blockchain

Tokenization outside of the blockchain means that digital assets are exchanged without a blockchain, so it has nothing to do with NFTs or smart contracts. There are a variety of tokens and tokenization types outside the blockchain.

Vaultless tokenization

Vaultless tokenization is typically used in payment processing. It relies on secure cryptographic devices running standards-based conversion algorithms that safely transform sensitive data into non-sensitive assets. Vaultless tokens don't require a tokenization vault database for storage.

Vault tokenization

Vault tokenization is used in traditional payment processing to maintain a secure database. This secure database is called a tokenization vault, and its role is to store both non-sensitive and sensitive data. Users within the network decrypt tokenized information using both data tables.

NLP tokenization types

The natural language processing (NLP) domain includes tokenization as one of its most basic functions. In this context, tokenization involves dividing a text into smaller pieces called tokens, allowing machines to better understand natural text. The three categories of NLP tokenization are listed below (see the sketch after the list):

  1. Subword tokenization
  2. Character tokenization
  3. Word tokenization
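
Here is a minimal Python sketch of all three categories. The subword split is a naive fixed-width stand-in; real systems learn subword units with algorithms such as BPE or WordPiece.

```python
text = "Tokenization helps machines read text."

# Word tokenization: split on whitespace (real NLP libraries also handle punctuation).
words = text.split()

# Character tokenization: every character becomes a token.
chars = list(text)

# Subword tokenization (sketch): chop long words into fixed-size chunks;
# real tokenizers learn frequent subword units from a corpus instead.
subwords = [w[i:i + 4] for w in words for i in range(0, len(w), 4)]

print(words)      # ['Tokenization', 'helps', 'machines', 'read', 'text.']
print(chars[:5])  # ['T', 'o', 'k', 'e', 'n']
print(subwords)   # ['Toke', 'niza', 'tion', 'help', 's', ...]
```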

Blockchain tokenization types

Blockchain tokenization divides asset ownership into multiple tokens. Such tokens can behave like "shares," which makes them similar to NFTs; however, blockchain tokenization also uses fungible tokens, whose value is directly tied to an asset.

Blockchain tokenization allows decentralized app development. This concept is also known as platform tokenization, where the blockchain network is used as the foundation that provides transactional support and security.

NFT tokenization

One of the most popular tokenizations today is blockchain NFTs. Non-fungible tokens are digital data representing unique assets.

These tokens are not interchangeable with one another (that is where the name non-fungible comes from) and can be used as proof of ownership, letting people trade various items or authenticate transactions. NFTs are used for digital art, games, real estate, etc.

Governance tokenization

This kind of tokenization is directed toward voting systems on the blockchain. Governance tokenization allows a better decision-making process with decentralized protocols as all stakeholders can vote, debate, and collaborate fairly on-chain.

Utility tokenization 

Utility tokens are created using a certain protocol and grant access to various services within that protocol. Utility tokens are not created as direct investments; instead, they drive platform activity that supports the system's economy.

Where tokenization and eCommerce meet

Ecommerce payments have been growing for a long time, even before the global pandemic. We’re seeing a massive shift to online shopping with an exponential growth in sales. Even though the shift towards the digital world is definitive, this trend has introduced new challenges concerning security.

There’s a growing number of hackers and fraudsters looking to steal personal data. According to Risk Based Security research, in 2019 alone there were over 15 million data breaches in eCommerce. Tokenization is quickly being introduced as a way to combat fraud and convert account numbers into digital assets to prevent their theft and abuse.

Payment service providers that specialize in fraud detection can help verify transactions and devices, making it far more difficult for hackers to abuse someone’s information. Credit card and account information tokenization boosts security and protects data from external influences and internal issues.

Benefits of tokenization in eCommerce

Ecommerce companies can use tokenization to improve privacy and security by safeguarding payment information. Data breaches, cyber-attacks, and fraud can seriously affect the success of a business. Here’s how tokenization helps with all these threats. 

  •  No need for extensive data control because tokens aren’t sensitive

Ecommerce businesses need to implement extensive data control protocols for handling sensitive data and ensuring there are no liabilities. It can be a really tiresome and expensive process. Tokenization removes this issue because none of the confidential data is stored internally.

  •  No exposure if someone gets access to tokens

Data breaches are often fatal to businesses. They can lead to destroyed reputations, damaged business operations, loss of customers, and even legal issues. There’s no exposure of sensitive data when hackers access a database with tokenized payment records.

All payment data and personal information are safe since they aren’t stored within your systems. It’s true that this doesn’t prevent hacks, but it prevents the consequences of such events.

  •  Frictionless transactions and convenience

Modern customers love simplicity. Having saved payment information and the option to press one button to make a purchase is crucial for business success. However, providing this kind of experience carries risk as companies must save payment information so that customers can reuse it.

Having multiple cards linked to an account with saved information creates liability. Tokenization can enable seamless payment options for end customers without requiring routing numbers or credit cards to be stored internally.

  •  Companies can more easily comply with the PCI DSS

Companies that accept payment information and store it need to be compliant with various regulations, specifically the Payment Card Industry Data Security Standard. However, meeting these security requirements takes a lot of time and money. Payment tokenization service providers usually already have the required compliance certifications, so you’re outsourcing the majority of this responsibility to someone else.

Conclusion

We hope this post has helped you understand the basics of tokenization and how you can use it in eCommerce. The global tokenization market is estimated to grow at 21.5% CAGR, indicating that tokenization is here to stay. 

Keep in mind that we’re only scratching the surface here.

The post What is tokenization, what are the types of tokenization, and what are its benefits for eCommerce businesses? appeared first on Cybersecurity Insiders.

Stories from the SOC is a blog series that describes recent real-world security incident investigations conducted and reported by the AT&T SOC analyst team for AT&T Managed Threat Detection and Response customers.

Executive summary

The Windows ‘Administrator’ account is a highly privileged account that is created during a Windows installation by default. If this account is not properly secured, attackers may leverage it to conduct privilege escalation and lateral movement. When this account is used for administrative purposes, it can be difficult to distinguish between legitimate and malicious activity. Security best practice is to create and implement user accounts with limited privileges and disable the default ‘Administrator’ account on all machines.

The Managed Threat Detection and Response (MTDR) analyst team received 82 alarms involving the default ‘Administrator’ account successfully logging into multiple assets in the customer environment. The source asset attempting these logons was internal, successfully logging into multiple other internal assets within a short timeframe. Further investigation revealed the use of PowerShell scripts used for network share enumeration, account enumeration, and asset discovery.

Investigation

Initial alarm review

Indicators of Compromise (IOC)

An initial alarm was triggered by a built-in USM Anywhere rule named “Successful Logon to Default Account.” This rule was developed by the Alien Labs team to trigger based on successful login attempts to default Windows accounts, captured by Windows Event Log. This alarm was the first indicator of compromise in this environment which prompted this investigation.


Expanded investigation

Events search

The customer confirmed in prior investigations that the default Administrator account is widely used for legitimate administrative purposes in this environment. How does one distinguish between administrative activity and malicious activity? Additional event searching must be conducted to provide more context into this login and the actions surrounding it. To do this, filters were utilized in USM Anywhere to query for events associated with the Administrator account on the affected asset.

Event deep dive

First, the account Security Identifier (SID) was used to confirm which account was being used for this login. The SID is an identifier that is unique to each account on a Windows system, and the default Administrator SID typically ends with the Relative Identifier (RID) of 500.

A review of the event attached to this alarm confirms that the default Administrator account was used to sign in, with a SID ending with the RID of 500.
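
As a rough illustration, a check like the one below could flag logons by the default Administrator account. The event structure and field names are hypothetical (modeled on parsed Windows Event ID 4624 data), so adapt them to your SIEM's schema.

```python
def is_default_administrator(sid: str) -> bool:
    # A domain/local account SID looks like S-1-5-21-<machine or domain>-<RID>;
    # the default Administrator carries the well-known RID 500.
    return sid.startswith("S-1-5-21-") and sid.rsplit("-", 1)[-1] == "500"

events = [
    {"TargetUserName": "Administrator",
     "TargetUserSid": "S-1-5-21-1004336348-1177238915-682003330-500"},
    {"TargetUserName": "jsmith",
     "TargetUserSid": "S-1-5-21-1004336348-1177238915-682003330-1106"},
]

for event in events:
    if is_default_administrator(event["TargetUserSid"]):
        print(f"ALERT: default Administrator logon as {event['TargetUserName']}")
```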


To provide more context, events originating from the source asset were queried within the last 24 hours. 40 successful logins using the Administrator account were seen from this source to other internal assets in less than 10 minutes.

These events were captured by the AlienVault Agent, which was installed directly on the source asset to forward events to USM Anywhere.

Reviewing for additional indicators

Further review into the activity originating from the source asset reveals the use of an encoded and compressed PowerShell script. Encoding and compression effectively allow the attacker to obfuscate scripts being executed, evading detection.

Using open-source tools, we were able to decode and decompress the underlying PowerShell script:
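
For readers who want to reproduce this kind of analysis, the sketch below shows two decoding steps commonly needed for obfuscated PowerShell: Base64-decoding an -EncodedCommand value (which is UTF-16LE text) and inflating a Deflate-compressed blob. Encodings vary by sample, so treat this as a starting point rather than the exact tooling used in this investigation.

```python
import base64
import zlib

def decode_encoded_command(b64: str) -> str:
    """Decode a PowerShell -EncodedCommand value (Base64 over UTF-16LE)."""
    return base64.b64decode(b64).decode("utf-16-le")

def inflate_deflate_blob(b64: str) -> str:
    """Decompress a Base64 blob packed with IO.Compression.DeflateStream,
    a pattern often seen in obfuscated PowerShell loaders."""
    raw = base64.b64decode(b64)
    return zlib.decompress(raw, -zlib.MAX_WBITS).decode("utf-8", errors="replace")

# Round-trip demonstration of the -EncodedCommand decoding:
encoded = base64.b64encode("Write-Host 'hello'".encode("utf-16-le")).decode()
print(decode_encoded_command(encoded))  # Write-Host 'hello'
```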


The decoded ‘Invoke-ShareFinder’ script seen above is a function used to query for exposed network shares in a Windows domain. This tool can also be used to determine which users have access to each network share.  Exposed and insecure network shares could allow an attacker to obtain sensitive information or conduct lateral movement.

An additional event was found for the PowerShell script “Discovery.psm1” being executed on this asset. This script is used for internal network discovery using various scanning techniques.


Response

Building the investigation

With all events gathered and analysis completed, an investigation was created and submitted to the customer for review. Due to the severity of this incident and for situational awareness, a call was made to the customer to inform them of this activity.

Customer interaction

The customer took quick action to isolate the source asset, preventing further lateral movement attempts. Additionally, all affected assets were scanned using SentinelOne to ensure they were not infected with malware. Lastly, the default ‘Administrator’ account was disabled on all assets in this environment, effectively preventing future abuse of this account.

Limitations and opportunities

Limitations

The MTDR team lacked visibility into the customer’s SentinelOne EDR environment, which would have allowed for additional context and quicker response action.

Opportunities

AT&T offers Managed Endpoint Security (MES), a tool that provides comprehensive endpoint protection against malware, ransomware, and fileless attacks. MES utilizes behavioral analysis, which would have alerted analysts of malicious activity and prevented the “Discovery” and “Invoke-ShareFinder” scripts from executing on the asset. MES can also be used to conduct response actions such as isolating and scanning affected assets. 

The post Stories from the SOC – Lateral movement using default accounts appeared first on Cybersecurity Insiders.


Recently the architecture model known as Secure Access Service Edge (SASE) has been gaining momentum. That is not surprising, given the model's benefits: reduced management complexity, improved network performance and resiliency, security policy implemented consistently across office and remote users, and lower operational expense. In fact, according to a recent ESG survey, 70% of businesses are using or considering a SASE solution. But if SASE is supposed to simplify network and security management, then one may wonder, "what value does a managed services provider (MSP) offer?"

Why an MSP for SASE deployment?


There are a great number of answers to that question, but a good place to start is understanding that the journey to SASE is going to be a little different for every enterprise. There are many approaches and models in the market and many vendors to choose from.

First of all, one major reason that businesses are utilizing an MSP for SASE is that it's difficult and expensive to hire and retain technicians with the specialized skillset required, particularly if 24/7 monitoring is needed. In fact, according to a recent study, 57% of organizations have been negatively impacted by the cybersecurity skills shortage. Sometimes it just makes more financial sense, and can improve an organization's risk posture, to outsource this to a trusted third party.

In addition, while many technology providers claim to offer a complete SASE portfolio, it is important to note that it is not an off-the-shelf solution and can include many different components. There has been a lot of consolidation in the market over the past several years, with vendors acquiring other companies to build a more well-rounded suite, which has resulted in multiple management platforms. Most vendors are working to consolidate these to offer management through a single pane of glass but few have achieved that quite yet.

And then finally, SASE is not a “one and done” or plug-and-play solution. The vast majority of businesses are not going to rip out and replace their entire infrastructure at one time. Rather, it will be a gradual roll out of capabilities as they come upon their refresh cycle or as budgets for new initiatives are approved. Most large or well-established companies will be on a hybrid environment for the foreseeable future, with assets hosted in both the data center as well as in the cloud.

Benefits of working with an MSP

Sometimes it is difficult to know where to start with a multi-faceted solution such as SASE, and that is why it is so important to have a trusted advisor you can count on. Here are some of the key benefits you can expect to realize when working with industry-leading managed service providers:

  • Accelerated time to value and scale: A qualified MSP for SASE implementation will offer consulting services that can determine your organization’s readiness for SASE, identify the best solutions for your unique needs, and help chart a roadmap for rollouts. Should your business acquire other companies, add or reduce locations, or change workplace designations, it is often as simple as contacting your MSP, providing the required information, and signing a contract addendum.
  • Security and networking expertise: Because SASE is a convergence of software-defined wide-area networking and security, you will need someone with knowledge and experience in both disciplines. MSPs can meet this requirement and have the ability to integrate these components to deliver resilient, high-performance connectivity and protection.
  • Solution development experience: With so many vendors and solutions on the market, it may be difficult to know which offer the best mix of capabilities, protection, and performance. Conducting multiple proof of concepts (POCs) can be costly and time consuming. MSPs can remove this burden from your technology teams by evaluating offers, conducting comprehensive interoperability testing, technical validation, and solution certification to deliver the industry’s best technology elements that seamlessly work together.
  • Solution integration acumen: As mentioned above, it is unlikely that your organization will replace every component of its networking and security at the same time, which means you will have legacy infrastructure that still needs to be supported alongside the new technology components, which may even be from different vendors. Managed service providers have the ability to integrate and manage a vast ecosystem of technology providers and capabilities in order to secure your entire environment.

Conclusion

With the rapid adoption of cloud delivered applications and services, the heightened expectations of customers when it comes to digital experience, and the pressing need to support work from anywhere, it is less a question of whether your business will adopt SASE, but rather when. In fact, you may have already started without knowing it. Regardless of where you are on your journey, an MSP can help ensure you avoid unnecessary detours and that you reach your desired outcomes.

The post Why use a managed services provider for your SASE implementation appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

In mid-March, Microsoft released a free, open-source tool that can be used to secure MikroTik routers. The tool, RouterOS Scanner, has its source code available on GitHub. It is designed to analyze routers for Indicators of Compromise (IoCs) associated with Trickbot. This article will introduce some background on the MikroTik vulnerability, the Trickbot malware, and some ways you can protect yourself.

Trickbot emerges from the darknet

Trickbot was first discovered in 2016 and, despite efforts by Microsoft to stamp it out, has continued to remain a threat online. One of the main reasons for Trickbot’s persistence is that it has continued to change and evolve over the years. As a result, Trickbot has proven to be an adaptable, sophisticated trojan of modular nature, molding itself for different networks, environments, and devices.

As Trickbot has evolved, it began to reach Internet of Things (IoT) devices like routers. Since Trickbot continuously improves its persistence capabilities by dodging researchers and their reverse engineering attempts, it has been able to maintain the stability of its command-and-control (C2) framework.

Why is the MikroTik security flaw important?

This kind of malware is particularly dangerous because it can deliver ransomware, a special type of malware that takes control of your computer or devices. Trickbot, as it has grown and evolved, now includes a plug-in that provides backdoor access for Ryuk, a piece of ransomware with crypto-mining capabilities. 

Once it had expanded its reach to networking devices, Trickbot began infecting MikroTik routers and modules and using them as proxy servers for its C2 servers and redirecting router traffic through alternative non-standard ports.

What makes the infection of MikroTik routers so significant is that they are used by millions of homes and organizations worldwide. The broad distribution of MikroTik routers gave Trickbot extensive infrastructure. Security flaws, like the MikroTik one, can be particularly important for web design because coders that work on the back end have to ensure that web pages are secure.

How does Trickbot work?

Researchers on the Microsoft Defender for IoT team discovered the exact mechanism that Trickbot's C2 system used to exploit MikroTik devices. Hopefully, now that its inner workings have been uncovered, Trickbot can be stamped out for good.

The reason hackers use Trickbot is that it allows compromised IoT devices to communicate between the C2 server and other compromised devices. Hackers then breach target routers, typically using a combination of brute force and exploits.

One of the key brute force techniques malware uses to infect MikroTik devices is trying default MikroTik passwords. Attackers also run brute force attacks using passwords harvested from other MikroTik devices. Finally, they exploit the CVE-2018-14847 vulnerability, which affects RouterOS versions older than 6.42 and allows hackers to read files from the device, such as user.dat, which often contains passwords.
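
As a quick illustration of that version condition, a tuple comparison is enough to triage whether a RouterOS build predates 6.42. The helper below is hypothetical and based only on the version boundary cited above.

```python
def parse_version(version: str) -> tuple:
    # RouterOS versions look like "6.42" or "6.40.5".
    return tuple(int(part) for part in version.split("."))

def vulnerable_to_cve_2018_14847(routeros_version: str) -> bool:
    # Versions older than 6.42 are affected, per the advisory cited above.
    return parse_version(routeros_version) < (6, 42)

for v in ["6.40.5", "6.42", "6.48.1"]:
    print(v, "vulnerable" if vulnerable_to_cve_2018_14847(v) else "not flagged")
```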

Once they’ve gotten access, they start issuing commands that redirect traffic between two ports on the router. Redirecting traffic creates the communication line between impacted devices and the C2.

In the end, catching on to how Trickbot worked involved sniffing out commands that were specific to the unique operating system, RouterOS and RouterBOARD, used by MikroTik IoT devices.

All IoT devices are vulnerable

The important takeaway for professionals and end-users is that all IoT devices are vulnerable. In fact, many journalists have recently brought attention to the dangers of networked security cameras in your home.

A professionally-installed ADT security system was exploited by a technician who used his access to watch people’s deeply personal private lives. All of these cameras were IoT devices.

Although your smart fridge probably isn’t spying on you, it’s important to remember that the security landscape continues to expand as more and more devices become connected to the Internet. Devices that perform limited functionality, like routers and cameras, can often become prime targets for hackers because they are not regularly updated like smartphones and computers.

How do you protect yourself?

Utilizing special software tools can be a great way to protect yourself from cybersecurity threats. Microsoft’s RouterOS Scanner is the go-to way to resolve the MikroTik router vulnerability. As you can see, exploiting one MikroTik device opens up the possibility for exploiting many more.

Microsoft did the tech community a huge favor by giving away their security tool for free, but this may not be the end for Trickbot. Unfortunately, as long as MikroTik devices continue to operate without having their firmware updated and their devices monitored, Trickbot will probably stay around.

Starting a cybersecurity audit can be a good way to find other ways your company might be at risk. Understanding your digital security needs is the first step in securing your network and enterprise. AT&T offers several enterprise-level cybersecurity network solutions that are worth examining.

Another thing all Internet users should do is change their default passwords to more secure unique passwords. Much of the damage done by Trickbot and the MikroTik exploits was because of default passwords shipped with the devices. Changing your default passwords will ensure that brute-forcing your network will be much harder.

Generating hard-to-guess unique passwords is actually the number one cybersecurity tip. Whether you’re starting a blog for your small business or running a large company with hundreds of staff, creating a strong password is the best way to decrease your vulnerability to cyberattacks and loss of data privacy and security.
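
For example, Python's secrets module can generate such a password from a cryptographically secure random source. This is a minimal sketch, not a substitute for a proper password manager.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a hard-to-guess password from a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # generate a unique password per device or account
```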

Staying educated is another way to ensure you stay on top of cyber security threats. Many large organizations offer training to employees to help them understand the terminology surrounding IT. It’s important to continue to educate yourself, too, as threats can change, vulnerabilities can be patched, and new technologies can make how we approach security shift overnight.

Finally, enable multi-factor authentication or MFA whenever it’s available. MFA can help cut down on unauthorized device access by requiring you to authenticate your identity every time you try to log on. MFA is a critical component of building a zero-trust cybersecurity model, which is the preferred way of securing your business today.

Conclusion

From Russia hacking Ukrainian government websites to the Okta hack that demonstrated even digital security firms are vulnerable to hackers, hacks and exploits have been all over the news lately. The release of Microsoft’s MikroTik router tool marks a turn in digital security and demonstrates that companies and teams are working hard to ensure that digital security can be maintained.

The post Microsoft releases open-source tool for securing MikroTik routers appeared first on Cybersecurity Insiders.

This is part one of a three-part series, written by an independent guest blogger. Please keep an eye out for the next blog in this series.

Remote work is the new reality for companies of all sizes and across every industry. As the majority of employees now perform their job functions outside the technology ecosystem of their local office, the cybersecurity landscape has evolved with the adoption of terms such as Zero Trust and Security Service Edge (SSE). To accommodate this new landscape, organizations have undergone fundamental changes to allow employees to work from anywhere, using any device, many times at the expense of data security. As a result, a paradigm shift has occurred: employees are increasingly dependent on their smartphones and tablets, which have jointly become the new epicenter of endpoint security.

This next-level dependence on mobile devices is consistent across the remote work environment.  There are countless anecdotes about the new reality of hybrid work.  For example, workers using personal tablets to access sensitive data via SaaS apps, or taking a work Zoom call while waiting in the school pickup line.   The constant for each of these stories has been the overwhelming preference to use whatever device is available to complete the task at hand. Therefore, it is extremely logical that bad actors have pivoted to mobile to launch their attacks given the overwhelming use of non-traditional endpoints to send email, edit spreadsheets, update CRMs and craft presentations.  

  • 4.32 billion active mobile internet users
  • 56.89% of total global online traffic comes from mobile

Although the experience paradigm quickly changed with the adoption of remote work, the perception of mobile devices as a risk vector has been more gradual for most customers. In fact, Gartner estimates that only 30% of enterprise customers currently employ a mobile threat detection solution.  Many organizations still assume that their UEM solution provides security or that iOS devices are already safe enough. The most shocking feedback from customers indicates that they historically haven’t seen attacks on mobile, so they have no reason to worry about it.  Given this mindset, it’s again no surprise that hackers have trained their focus on mobile as their primary attack vector and entry point to harvest user credentials.

  • 16.1 % of Enterprise Devices Encountered one (or more) Phishing or Malicious links in 3Q2021 globally
  • 51.2% of Personal Devices Encountered one (or more) Phishing or Malicious links in 3Q2021 globally.

What this mindset reveals is a certain naivete from many organizations, regardless of size or industry, that believe mobile devices do not present significant risk and therefore don't need to be considered in their data security and compliance strategies. This oversight points to two separate tenets that must be addressed when protecting sensitive data via mobile devices:

Endpoint security is an absolute requirement to protect sensitive data and it includes laptops, desktops, and mobile devices

There isn't a single business that would issue a laptop to an employee without some version of anti-virus or anti-malware security installed, yet most mobile devices have no such protections. The primary explanation for this is that organizations think mobile device management is the same as mobile endpoint security. While device management tools are capable of locking or wiping a device, they lack the vast majority of capabilities necessary to proactively detect threats. Without visibility into threats like mobile phishing, malicious network connections, or advanced surveillanceware like Pegasus, device management falls far short of providing true mobile security.

Even cybersecurity thought leaders sometimes overlook the reality of cyber-attacks on mobile.  In a recent blog, “5 Endpoint Attacks Your Antivirus Won’t Catch”, the entire story was exclusive to the impact on traditional endpoints even though rootkits and ransomware are just as likely to occur on mobile. 

Traditional security tools do not inherently protect mobile devices

Given the architectural differences that exist between mobile operating systems (iOS/Android) and traditional endpoint OS (MacOS, Windows, Linux, etc.), the methods for securing them are vastly different.  These differences inhibit traditional endpoint security tools, which are not purpose-built for mobile, from providing the right level of protection. 

This is especially true when talking about the leading EPP/EDR vendors such as Carbon Black, SentinelOne and Crowdstrike.  Their core functionality is exclusive to traditional endpoints, although the inclusion of mobile security elements to their solutions is trending.  We’re seeing strategic partnerships emerge and it’s expected that the mobile security and traditional endpoint security ecosystems will continue to merge as customers look to consolidate vendors. 

What's more, there are many ways users interact with their smartphones and tablets that are unique to these devices. For example, a secure email gateway solution can't protect against phishing attacks delivered via SMS or QR codes. Can you identify all of your devices (managed and unmanaged) that are subject to the latest OS vulnerability that needs to be patched immediately? Did one of your engineers just fall victim to a man-in-the-middle attack when they connected to a malicious WiFi network at a random coffee shop? These are just some examples of the threats and vulnerabilities that can only be mitigated with a security tool dedicated to protecting mobile endpoints.

The acceleration of remote work and the “always-on” productivity that's expected has shifted your employees’ preferences for the devices they use to get work done.   Reading email, sending an SMS rather than leaving a voicemail (who still uses voicemail?), and the fact that just about every work-related application now resides in the cloud has changed how business is transacted.  This pivot to mobile has already occurred. It’s well past time that companies acknowledge this fact and update their endpoint security posture to include mobile devices.  

If you would like to learn more or are interested in a Mobile Security Risk Assessment to provide visibility into the threat landscape of your existing mobile fleet, please click here or contact your local AT&T sales team.           

The post Endpoint security and remote work appeared first on Cybersecurity Insiders.

This blog was written jointly with Eduardo Ocete.

Executive summary

Several vulnerabilities in the Java Spring framework have been disclosed in the last few hours and classified as similar to the vulnerability that caused the Log4Shell incident at the end of 2021. However, as of the publishing of this report, the still-ongoing disclosures and events around these vulnerabilities suggest they are not as severe as their predecessor.

Key takeaways:

  • A vulnerability in Spring Cloud Function (CVE-2022-22963) allows adversaries to perform remote code execution (RCE) with only an HTTP request, and the vulnerability affects the majority of unpatched systems. Spring Cloud Function is a project that provides developers cloud-agnostic tools for microservice-based architecture, cloud-based native development, and more.
  • A vulnerability in Spring Core (CVE-2022-22965) also allows adversaries to perform RCE with a single HTTP request. For the leaked proof of concept (PoC) to work, the vulnerability requires the application to run on Tomcat as a WAR deployment which is not present in a default installation and lowers the number of vulnerable systems. However, the nature of the vulnerability is more general, so there could be other potential exploitable scenarios.

In accordance with the Cybersecurity Information Sharing Act of 2015, AT&T is sharing the cyber threat indicator information provided herein exclusively for a cybersecurity purpose to combat cybersecurity threats.

Analysis

At the end of March 2022, several members of the cybersecurity community began spreading news about a potential new vulnerability in Java Spring systems that is easily exploitable, affects millions of systems, and has the potential to trigger a new Log4Shell-style incident.

First, it is important to clarify that the comparisons at this point appear to be searching for sensationalism and spreading panic, instead of providing actionable information. Additionally, two similar vulnerabilities in the Spring framework were disclosed around the same time, adding confusion to the mix. What has been observed by the AT&T Alien Labs™ threat intelligence team as of the publishing of this article is included below.

Spring Cloud Function (CVE-2022-22963)

A vulnerability in Spring Cloud Function has been identified as CVE-2022-22963, and this vulnerability can lead to remote code execution (RCE). The following Spring Cloud Function versions are impacted:

  • 3.1.6
  • 3.2.2
  • Older unsupported versions are also affected

In addition to the vulnerable version, JDK >= 9 must be in use in order for the application to be vulnerable.

The vulnerability is triggered when the routing functionality is used. By providing a specially crafted Spring Expression Language (SpEL) expression as a routing expression, an attacker can access local resources and execute commands on the host. This CVE therefore allows an HTTP request header carrying a spring.cloud.function.routing-expression object with a SpEL expression to be evaluated through the StandardEvaluationContext, leading to arbitrary RCE.

Figure 1. Exploitation attempt.
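
A simple way to triage your own HTTP logs for this pattern is to look for the routing-expression header combined with the call names seen in exploit payloads, mirroring the Suricata logic in Appendix A. The request structure below is illustrative only.

```python
SUSPICIOUS_HEADER = "spring.cloud.function.routing-expression"
SUSPICIOUS_TOKENS = ("getRuntime", "getByName", "InetAddress", "exec")

def looks_like_cve_2022_22963(headers: dict) -> bool:
    # Case-insensitive lookup of the abused header, then keyword matching.
    value = next((v for k, v in headers.items()
                  if k.lower() == SUSPICIOUS_HEADER), None)
    return value is not None and any(t in value for t in SUSPICIOUS_TOKENS)

request_headers = {
    "Host": "victim.example",
    "spring.cloud.function.routing-expression":
        'T(java.lang.Runtime).getRuntime().exec("touch /tmp/poc")',
}
print(looks_like_cve_2022_22963(request_headers))  # True
```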

The vulnerability has been assigned a CVSS score of 9.0, which is critical severity. Exploitation of the vulnerability may lead to a total compromise of the host or the container, so patching is highly advised. In order to mitigate the vulnerability, developers should update Spring Cloud Function to the newest versions, 3.1.7 and 3.2.3, where the issue has already been patched.

AT&T Alien Labs has identified several exploitation attempts, which we believe come from researchers trying to identify how prevalent the vulnerability actually is, since the attempts carried canarytokens as their payload. Nevertheless, the team will continue to closely monitor the activity as new scanning activity appears.

Spring Core (CVE-2022-22965)

A vulnerability in Spring Core was tweeted by one of the researchers who first disclosed the Log4Shell vulnerability. The researcher then rapidly deleted the tweet. This vulnerability was originally published without a CVE associated with it, and it is being publicly referred to as "Spring4Shell." One of the first observed proof of concepts (PoC) was shared by vx-underground on March 30, 2022. It works against Spring's sample code "Handling Form Submission." The PoC consists of a single POST request carrying in its payload a JSP webshell that is dropped onto the vulnerable system.

Figure 2. Exploitation attempt following PoC.

Spring has confirmed the vulnerability and has stated that the leak occurred ahead of the CVE publication. The vulnerability has been assigned CVE-2022-22965. As per Spring:

“…The vulnerability impacts Spring MVC and Spring WebFlux applications running on JDK 9+. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.”

From the statement above, the specific scenario for the leaked PoC to work would have to match the following conditions:

  • JDK >=9
  • Apache Tomcat as the Servlet container
  • Packaged as WAR
  • spring-webmvc or spring-webflux dependency

However, the scope of the vulnerability is wider, and there could be other exploitable scenarios.
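
One quick but limited triage step is to check a project's Maven pom.xml against the leaked-PoC conditions above (WAR packaging plus a spring-webmvc or spring-webflux dependency). The sketch assumes a standard Maven POM and, as Spring notes, a negative result does not prove the application is safe.

```python
import xml.etree.ElementTree as ET

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def matches_poc_conditions(pom_path: str) -> bool:
    """Return True if the POM declares WAR packaging and a Spring MVC/WebFlux
    dependency -- the two project-level conditions of the leaked PoC."""
    root = ET.parse(pom_path).getroot()
    packaging = root.findtext("m:packaging", default="jar", namespaces=NS)
    artifacts = {dep.text for dep in root.findall(
        "m:dependencies/m:dependency/m:artifactId", NS)}
    return packaging == "war" and bool(
        artifacts & {"spring-webmvc", "spring-webflux"})

print(matches_poc_conditions("pom.xml"))  # path is an example
```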

Spring has released new versions for Spring Framework addressing the vulnerability, so updating to versions 5.3.18 and 5.2.20 (already available in Maven Central) should be a priority in order to mitigate the RCE. The new versions for Spring Boot with the patch for CVE-2022-22965 are still under development.

As an alternative mitigation, the suggested workaround is to extend RequestMappingHandlerAdapter to update the WebDataBinder at the end, after all other initialization. To do so, a Spring Boot application can declare a WebMvcRegistrations bean (Spring MVC) or a WebFluxRegistrations bean (Spring WebFlux). An implementation example of this workaround can be found in the "Suggested Workarounds" section of the Spring statement.

According to a publication by Peking University, this vulnerability has been observed being exploited in the wild. However, AT&T Alien Labs has not identified heavy scanning activity on our honeypots for this vulnerability, nor exploitation attempts.

Finally, and just to provide a graphical representation of these vulnerabilities, below is a diagram shared by a CTI researcher from Sophos.

Figure 3. Java Spring vulnerability diagram.

Conclusion

Log4Shell was very impactful at the end of 2021 because of the number of exposed vulnerable devices and the ease of its exploitation. These recently disclosed Java Spring vulnerabilities remind the cyber community of lessons learned during the Log4Shell incident, and they have received a quick response from the entire cybersecurity community, which is collaborating and sharing available information as soon as possible.

Alien Labs will keep monitoring the situation and will update the corresponding OTX Pulses to keep our customers protected.

Appendix A. Detection methods

The following associated detection methods are in use by Alien Labs. They can be used by readers to tune or deploy detections in their own environments or for aiding additional research.

SURICATA IDS SIGNATURES

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV EXPLOIT Spring Cloud RCE (CVE-2022-22963)"; flow:established,to_server; content:"POST"; http_method; content:"spring.cloud.function.routing-expression"; http_header; pcre:"/(getRuntime|getByName|InetAddress|exec)/HR"; reference:url,sysdig.com/blog/cve-2022-22963-spring-cloud; classtype:attempted-admin; sid:4002725; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV INFO Spring Core RCE Scanning Activity (March 2022)"; flow:established,to_server; content:"POST"; http_method; content:"class.module.classLoader.resources.context.parent.pipeline.first.pattern"; http_client_body; startswith; reference:url,github.com/TheGejr/SpringShell; classtype:attempted-admin; sid:4002726; rev:1;)

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"AV EXPLOIT Spring Cloud RCE (CVE-2022-22963)"; flow:established,to_server; content:"POST"; http_method; content:"spring.cloud.function.routing-expression"; http_header; pcre:"/(getRuntime|getByName|InetAddress|exec)/HR"; reference:url,sysdig.com/blog/cve-2022-22963-spring-cloud; classtype:attempted-admin; sid:4002727; rev:1;)

 

AGENT SIGNATURES

Java Process Spawning Scripting Process

Java Process Spawning WMIC

Java Process Spawning Scripting Process via Commandline (For Jenkins servers)

Suspicious process executed by Jenkins Groovy scripts (For Jenkins servers)

Suspicious command executed by a Java listening process (For Linux servers)

 

Appendix C. Mapped to MITRE ATT&CK

The findings of this report are mapped to the following MITRE ATT&CK Matrix techniques:

  • TA0001: Initial Access
    • T1190: Exploit Public-Facing Application

Appendix D. Reporting context

The following source was used by the report author(s) during the collection and analysis process associated with this intelligence report.

  1. AT&T Alien Labs Intelligence and Telemetry

Alien Labs rates sources based on the Intelligence source and information reliability rating system to assess the reliability of the source and the assessed level of confidence we place on the information distributed. The following chart contains the range of possibilities, and the selection applied to this report is A1.

Source reliability

  • A – Reliable: No doubt about the source's authenticity, trustworthiness, or competency. History of complete reliability.
  • B – Usually Reliable: Minor doubts. History of mostly valid information.
  • C – Fairly Reliable: Doubts. Provided valid information in the past.
  • D – Not Usually Reliable: Significant doubts. Provided valid information in the past.
  • E – Unreliable: Lacks authenticity, trustworthiness, and competency. History of invalid information.
  • F – Reliability Unknown: Insufficient information to evaluate reliability. May or may not be reliable.

Information reliability

  • 1 – Confirmed: Logical, consistent with other relevant information, confirmed by independent sources.
  • 2 – Probably True: Logical, consistent with other relevant information, not confirmed.
  • 3 – Possibly True: Reasonably logical, agrees with some relevant information, not confirmed.
  • 4 – Doubtfully True: Not logical but possible, no other information on the subject, not confirmed.
  • 5 – Improbable: Not logical, contradicted by other relevant information.
  • 6 – Cannot be judged: The validity of the information cannot be determined.

Feedback

AT&T Alien Labs welcomes feedback about the reported intelligence and delivery process. Please contact the Alien Labs report author or contact labs@alienvault.com.

The post Java Spring vulnerabilities appeared first on Cybersecurity Insiders.

In the previous article, we covered the build and test process and why it’s important to use automated scanning tools for security scanning and remediation. The build pipeline compiles the software and packages into an artifact. The artifact is then stored in a repository (called a registry) where it can be retrieved by the release pipeline during the release process. The release process prepares the different components, including the packaged software (or artifacts) to be deployed into an environment. This article will cover the contents of a release and features within a release pipeline that prepare a release for deployment into an environment.


Artifact registry

Artifacts are stored in a registry (separate system from the code repository) and accessible by DevOps tools like release pipelines and the IT systems that the application will be deployed on to. Registries should be managed as an IT system and provided to the Development and DevOps teams as a service. The IT systems that support the registry should be hardened using the corporate security policies. The registry should be private and only accessible within the company if it is not intended to be a public source for artifacts. Password protection and IP whitelisting are also advised to ensure that packages can only be retrieved by approved systems. Logging information needs to be sent to a Security Operations Center (SOC) for monitoring. Encryption of the packages at rest and in transit is also required.

Contents of a release

A release is created by the release pipeline (Azure DevOps, Jenkins, Team City) and uses the artifacts created by a build pipeline. The release pipeline is triggered by the build pipeline and it knows attributes like the latest software version that was created and the name and location of the artifacts. The release pipeline is highly configurable depending on when the release should be scheduled to deploy, what variables and secrets (passwords, certificates, and keys) should be used, which version of the compiled code needs to be deployed, and approval processes to protect environments from having a release replaced without approvals.

Releases are capable of being automatically deployed onto IT systems when a new artifact version is built. DevSecOps best practice encourages automated builds but advises manual approval instead of automated releases to most environments. However, it may be appropriate for release pipelines to automatically deploy into a development environment that is under the development team control. Environments controlled by different teams like Quality Assurance (QA), User Acceptability Testing (UAT), Production, and Disaster Recovery (DR) typically do not have automated release pipelines after every new artifact version is built.

Variables and secrets are how running applications can be adapted to the different environments (development, QA, UAT, Production and DR). Variables are created in the pipeline tools and can be applied during the release pipeline. DevSecOps recommends storing secrets in a separate “key” vault that can be connected to the pipeline. Separate variables and secrets allow the software artifacts to remain the same no matter which environment they are deployed into. When the software is deployed, it looks for variables and secrets that were configured in the build and release processes and uses them to properly set the system up. Variables and secrets are a big reason why releases can be deployed so quickly and multiple times per day.
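
As a simplified sketch, the deployed application might resolve its configuration like this at startup. The variable names and the secret lookup are hypothetical; in practice the lookup would call your key vault's SDK (Azure Key Vault, HashiCorp Vault, etc.).

```python
import os

# Pipeline-injected variables: the same artifact runs in every environment,
# and only these values change between dev, QA, UAT, production, and DR.
DB_HOST = os.environ["DB_HOST"]
API_BASE = os.environ.get("API_BASE", "https://api.example.internal")

def get_secret(name: str) -> str:
    """Fetch a secret at startup. Stubbed as an environment lookup here;
    a real implementation would call the key vault connected to the pipeline."""
    return os.environ[f"SECRET_{name}"]

db_password = get_secret("DB_PASSWORD")  # never baked into the artifact
```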

[Figure: DevOps release]

Version control is mandatory for knowing which version of the software is being deployed, having a rollback plan to recover if there is a bug or issue, and keeping track of features and additions to the application as developers work on the code. Every time a build creates an artifact, a version number is applied. The version number is used by the release pipeline so that it knows which artifact to retrieve and deploy. DevSecOps recommends using the semantic versioning system for all artifacts.
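As a small illustration of why the release pipeline must compare versions numerically rather than as plain strings, here is a minimal sketch that assumes simple MAJOR.MINOR.PATCH versions (pre-release tags and build metadata are omitted):

```python
# Minimal sketch of semantic-version handling when selecting an artifact.
# Assumes plain MAJOR.MINOR.PATCH versions per semver.org conventions.
from typing import List, Tuple

def parse_semver(version: str) -> Tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def latest(versions: List[str]) -> str:
    # Comparing integer tuples orders versions correctly: 1.10.0 > 1.9.2,
    # whereas a plain string comparison would get this wrong.
    return max(versions, key=parse_semver)

print(latest(["1.9.2", "1.10.0", "0.4.7"]))  # -> 1.10.0
```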

Release pipelines have approval features that control the software release. At a minimum, approvals should be set up for each environment, or wherever separation of duties is required between the development team and other groups in the organization. For example, the QA group should be the only team that can grant approval to release and deploy into the QA environment, because QA teams may still be working through their test plans on an older release and need to finish before testing the new one.
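Pipeline tools such as Azure DevOps and Jenkins implement approvals natively; purely to illustrate the policy, here is a minimal sketch of per-environment gates in which the environment names and approver groups are assumptions:

```python
# Minimal sketch of per-environment release gates, illustrating the
# approval policy described above. Environment names and approver groups
# are assumptions, not any specific pipeline tool's configuration.
from typing import Optional

REQUIRED_APPROVERS = {
    "development": None,          # dev-controlled: automated deploys allowed
    "qa": "qa-team",              # only QA can approve a release into QA
    "uat": "uat-owners",
    "production": "change-board",
    "dr": "change-board",
}

def can_deploy(environment: str, approved_by: Optional[str]) -> bool:
    required = REQUIRED_APPROVERS[environment.lower()]
    if required is None:
        return True                   # auto-deploy on every new artifact is fine
    return approved_by == required    # separation of duties enforced

assert can_deploy("development", None)
assert not can_deploy("qa", None)     # a new artifact alone is not enough
assert can_deploy("qa", "qa-team")
```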

Build and release agents

The two types of servers (agents) used for build and release activities are vendor-hosted and self-hosted. Vendor-hosted agents are managed by the vendor, so upgrades and maintenance are taken care of, and a fresh agent is provided every time the build or release pipelines run. This makes resource management easy for the company but may not be an option for unique build and deploy dependencies. While extremely secure, builds and releases performed by vendor-hosted servers are not in the company’s control.

Self-hosted agents are managed by the company, which must upgrade and maintain the systems and install any dependencies before the agents can be used in build and release activities. Self-hosted agents work well when the DevOps platforms are internally hosted and not using any cloud-based servers. For example, self-hosted Jenkins pipelines use self-hosted servers and remain completely private and in the control of IT.

Next steps

There are many moving parts and components in the release process that need to be architected and designed with security in mind. These parts and components span multiple vendors, all of the different environment owners, security, and IT. This requires the members of a DevOps team, spread across all of these organizations, to work together and deliver changes to business systems safely and securely. The next article covers how a release is deployed and operated securely.

The post DevOps release process appeared first on Cybersecurity Insiders.

This blog was written by an independent guest blogger.

When assessing the corporate governance of modern companies, one cannot help but note the obvious problems with information security. To solve these problems, it is crucial to carry out initiatives that, on the one hand, are complex, multifaceted, and nonobvious, and on the other, require the involvement of all employees of the company, including the heads of key departments.

Information security is impossible without help from within the organization

Let us analyze the roles and possible points of interaction of several different management positions (skipping the CISO) responsible for operational resiliency, secure infrastructure, proper resource allocation, reputational risks, incident response, and other aspects of information security.

Chief Executive Officer (CEO)

The company’s management ensures the creation and maintenance of an internal environment that allows employees to participate fully in achieving strategic goals. Information security starts with the CEO and extends downward to all staff. The CEO is responsible for creating a strong culture of safe behavior and must personally set an example of the correct attitude towards information security requirements. This attitude and position of the company leader will stimulate communication between departments, allowing them to fight ransomware and other serious threats more effectively.

Companies today need leaders who combine a high level of technology awareness with an open mind. These leaders must create an open environment in which not only information about successes but also information about any negative developments is encouraged. Creating an atmosphere of transparency is an important task of top management when developing a ransomware protection strategy.

Chief Human Resources/People Officer (CHRO/CPO)

Information security largely depends on the organizational structure and corporate culture of the company, and the HR leader plays one of the key roles in ensuring it.

How is this expressed? First of all, such a leader must take responsibility for all employees hired by the company. These days, many information security incidents happen due to malicious insiders or employee incompetence. Understanding the day-to-day interests and motivations of employees is an important part of the work of the HR department.

Organizations can treat their employees on a “hire and fire” basis. But in that case, you should not expect high levels of personnel loyalty or a good reputation in the labor market. Managing the recruitment and departure of employees while taking into account emerging risks, such as data breaches, is one of the most important contributions of the HR leader to the security of the company.

Another significant part of the HR role is running advanced information security training programs.

The role of the HR department is also crucial in ensuring the ethics of security measures adopted by the company and in aligning these measures with the tasks and goals of employees. Effective corporate governance cannot rely on employees who are forced to act against their own interests and habits. Monitoring employee actions often raises questions about trust in the staff. The HR director understands the ethical underpinnings of these issues best and can advise the CEO and the information security department on whether the adopted security policies will be effective and whether they are in line with the corporate culture.

Chief Information Officer (CIO)

It is essential for the CIO that information security increases the stability and reliability of IT systems, affecting the operational resiliency of all business processes.

On the technical side, the company’s top managers are primarily concerned about outages of IT systems or employee dissatisfaction with the use of IT infrastructure.

Throughout the life cycle of company development, it often happens that the information security team comes in and leaves after a short time, while the IT team remains for a long time. This is a consequence of the business’s strategic priorities, which were formed around the development and implementation of IT technologies. Indeed, a mature company may have been living with its IT service for some 40 years and is used to following and trusting everything IT people say.

The business has been familiar with information security for the last 10-15 years at best. And it is the information security team that informs the CEO about all the problems of the IT team: employees’ bad habits with passwords and link-clicking, the presence of technical accounts in Active Directory, update management, and so on. For instance, employees might be advised to use VPN services for security reasons whenever they work remotely.

Formally, the IT team is on the side of information security in the fight against infosec issues. Still, in the real world there are misunderstandings, rivalry, and explicit or hidden resistance on the part of IT engineers (the “IT gurus”) who are accustomed to setting the rules themselves. The CIO should make these employees realize the importance of information security for the company’s sustainability.

Chief Risk Officer (CRO)

Continuous development and improvement should be obligatory and constant strategic objectives of any company. Identifying risks in the context of business priorities is one of the company’s key goals in the field of information security. Therefore, the participation of the CRO in ensuring the information security of the company flows directly from the CRO’s duties.

Risk prioritization is not a technical task; it is a matter of managing the company. The Chief Risk Officer should play an important role in developing the information security program and overseeing how identified risks are documented and eliminated.

At the same time, tech people need to get rid of the illusion that only they are able to understand information security risks. IT and security departments should share more information about various infosec subtleties so that company executives and risk management staff understand them better.

Chief Audit Executive (CAE)

The activities of the internal audit department are vital both for the information security and IT services and for the company’s executives. For information security and IT services, this is a third-party view of cybersecurity problems, focused on the most critical areas of the company’s business activities. For top managers, the internal audit department significantly saves time and eliminates routine supervision procedures.

There are, however, some pitfalls in the way the internal audit department works. For this unit, complying with information security requirements may be less of a priority than complying with industry standards and regulations. Top managers should not think that compliance with standards will protect the company from all trouble. It is important here not to neglect other preventive measures proposed by all company stakeholders.

Chief Legal Officer (CLO)

If the specialists of the legal department are well versed in legislation related to the protection of personal data, understand the basics of technology, and know reliable legal practices in the field of information security compliance, then this may indicate that the company has deep legal expertise in security technologies.

Legal specialists play a key role in determining the company’s policy on exchanging information with government agencies, and they participate in court proceedings. The legal department also plays a significant role, from the information security point of view, in responding to data breaches.

Chief Security Officer (CSO)

In modern companies, the organization of physical security is usually outsourced, and the security department primarily deals with internal, strategic, operational, financial, and reputational risks. When investigating incidents, the security service traditionally comes to the fore. The information security team provides all evidence like logs or emails, and the security department brings the investigation to its logical conclusion.

Conclusion

The above-mentioned business divisions and their leaders often look at information security issues differently. Still, under the strong leadership of the CEO, they may come to a mutual understanding of emerging problems and effectively determine the cybersecurity strategy.

One of the key conditions for a large number of participants to cooperate successfully is to recognize the roles that each group should play in the company. Top managers play the leading roles in these processes. They have the authority to determine what is vital to the company and what is not.

There are peculiarities and differences in how each department ensures the strong cybersecurity posture of the company. But there is one area where all efforts converge. It is the cybersecurity incident response. Developing and implementing sound, consistent incident response plans is a formidable task that is absolutely essential to a company's success in dealing with negative events. Developing such plans is a multidisciplinary project in which each of the key leaders must play a role.

The solution to many information security problems is impossible without finding a compromise between the participants. Top managers are not used to acting on someone else’s orders. Rules introduced by a technology leader who has appeared unexpectedly in the company (the CISO) often limit their freedom and bruise their pride. Today’s business leaders should understand the hidden technological risks and rely on a wide range of opinions in the company when developing a security strategy.

The post Corporate structure and roles in InfoSec appeared first on Cybersecurity Insiders.

Cyber insurance coverage? The cost is through the roof these days, and coverage is not that easy to get. The many breaches and the dollar judgements handed down make cyber insurance another costly operating investment. A mid-sized client of mine, as an example, pays $1 million in annual cyber insurance costs just to do business with its commercial and government customers.

The issue adds another twist to the topic of third-party risk. Typically, a corporation’s top tier of vendors has some form of cyber insurance. Such vendor coverage generally protects their customers from financial liability involving the breach of sensitive customer data such as Personally Identifiable Information (PII).

Breach incidents can also include disruptions, intellectual property exfiltration, and website defacements. Lately, ransom threats, where the hacker demands payment for not releasing data onto dark web sites, have escalated. For those vendor corporations handling customer data, ranging from sales histories to financial transactions, such coverage is a must, not an option.

Yet there are smaller supplier companies that eschew cyber insurance, either by choice or through lack of awareness. Estimates vary, but industry reports put the share of smaller companies without coverage at 28 to 41%. Rising costs, coupled with the rigors of insurance requirements, ratchet coverage down as a priority.

This is the crux of an escalating vendor issue facing CISOs today: which ones pose uninsured risks? Is it simply the smaller boutique vendor? Or does the scope include second- and third-tier suppliers to main vendors as well? What precautions can be taken in advance to pre-empt a lack of vendor coverage across tiers? These problems have been echoed by the CISO community, now faced with increasing attacks channeled through third parties.

Here are three immediate mitigation steps CISOs can take:

  • Know vendors to the nth degree. Besides the standard inventory of cyber and IT suppliers, identify who supplies them. Do these secondary vendors have adequate coverage, and how about their subcontractors? This is not an easy task, but AT&T Cybersecurity offers vendor discovery tools, along with percentage risk levels, from partners such as NetSkope and BitSight. These tools help spare inter-vendor finger-pointing and the “shock and surprise” in the event of a breach.
  • Lock down contracts. Any number of cyber insurance requirement clauses can be added to new contracts in progress and to those up for renewal. Here’s where the CISO finds Finance and Legal resources to be invaluable partners. Together they can determine whether adequate vendor coverage exists for legal fees, breach recovery, and cyber vandalism.
  • Cyber hygiene vigilance. Despite the best of plans, third parties still pose the greatest threat of breach. No one wants to be in a position where they must execute on cyber insurance in the first place. CISOs can keep cyber fences “horse high” with basic defense mechanisms such as those below (a brief password-policy sketch follows the list):
    • Complex passwords
    • VPN use
    • Encryption
    • Multi-factor Authentication (MFA)
    • Sound firewall rules
    • Strong anti-virus
    • User security awareness
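As one small illustration of the first item above, here is a minimal sketch of a password-complexity check. The 12-character minimum and the four character classes are illustrative assumptions, not a standard; real rules should follow your organization’s password policy:

```python
# Minimal sketch of a password-complexity check. Thresholds are
# illustrative assumptions, not a standard.
import string

def is_complex_enough(password: str, min_length: int = 12) -> bool:
    classes = [
        any(c.islower() for c in password),             # lowercase letter
        any(c.isupper() for c in password),             # uppercase letter
        any(c.isdigit() for c in password),             # digit
        any(c in string.punctuation for c in password), # special character
    ]
    return len(password) >= min_length and all(classes)

print(is_complex_enough("correct-Horse-7-battery"))  # True
print(is_complex_enough("password123"))              # False
```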

Within any of these intertwined areas of defense, AT&T Cybersecurity can be of assistance.

To summarize, the complete evaluation of third-party risk must now include cyber insurance readiness as a factor. No CISO is an island here, and it becomes a protective opportunity rather than a headache once the right internal business partners are engaged.

The post Next CISO headache: Vendor cyber insurance appeared first on Cybersecurity Insiders.