Google, the tech giant known for its extensive digital presence, is integrating cutting-edge confidential computing technology to strengthen the data security measures employed in its digital advertising campaigns. This technology relies on hardware-backed secure enclaves known as Trusted Execution Environments (TEEs) to protect sensitive business data from unauthorized access.

Alphabet Inc.’s subsidiary has announced that these TEEs will be isolated even from Google’s own engineers, providing a high level of security and trust for advertisers running business ad campaigns. Previously, protection was offered through safeguards on Customer Match data.

This implementation of confidential computing technology marks a groundbreaking move, potentially setting a global precedent. It promises enhanced privacy while allowing businesses to analyze and quantify audience profiles with greater efficiency.

Despite the advancements, some tech experts have expressed concerns. They argue that integrating this technology into cloud data processing—both at rest and in transit—could have further elevated the security and utility of data storage solutions.

Confidential computing technology, akin to what is used in the financial industry for securing sensitive information like credit card details and passwords, is now being applied in a novel way within the realm of digital advertising.

In parallel, the Irish Data Protection Commission has opened an investigation into how Google uses data to train its AI models, including the Pathways Language Model 2 (PaLM2). The Dublin-based regulator, which acts as Google’s lead supervisory authority in the European Union, will also review Google’s compliance with the General Data Protection Regulation (GDPR), focusing on whether user data is processed with users’ consent.

For those following the latest developments, PaLM2 is trained on vast amounts of data to power AI capabilities and has recently begun providing email summarization features for Gmail users.

The post Google Enhances Data Security with Confidential Computing Technology appeared first on Cybersecurity Insiders.

Innovation, and the subsequent shift away from on-premises applications and infrastructure, has completely altered the role of IT across the business landscape. While the cloud is undoubtedly a key enabler for any business wanting to succeed on a global scale, organizations now spend a major part of their IT budget every year on a multitude of security solutions to avoid data loss. The challenge is that data is scattered across a variety of locations, in structured and unstructured formats and in different file types. Organizations have to control every channel through which data can be lost, accidentally or on purpose: email, endpoints, cloud services, and even USB sticks.

However, IT teams face a number of hurdles in controlling these channels, such as overreliance on manual operations, alert overload, and security gaps in multi-cloud environments. Different security point products operating in silos, without correlation and triggering random alerts, are the core of the dilemma when it comes to monitoring and preventing data loss. When those hurdles are combined with legacy infrastructure, complex tech stacks, and the lack of a single source of visibility, organizations are operating in a house of cards, set up to fail at preventing accidental or malicious data loss.

While it’s true that there has been a 288% increase in threat actors targeting the cloud, recent research from Stanford University finds that approximately 88% of all data breaches are caused by employee mistakes. What’s clear is that whether at home, on the move using a mobile device, or in the office, the task for organizations now is to secure an increasingly mobile workforce. By 2025, global data volumes are projected to grow from 33 zettabytes (ZB) to 175 ZB, and 95% of workloads will run in the cloud. Prioritizing the security of data in the cloud is becoming non-negotiable for organizations.

With that in mind, here are five of the most common ways that a company may lose sensitive data and how to mitigate these issues as efficiently as possible.

1. Accidental & negligent data loss

It is easy for employees to adopt negligent practices in handling data. More often than not, data loss is not born of malicious behavior; it is usually the result of convenience, an attempt to be more productive, laziness, or even a lack of awareness of the corporate data security program. For instance, an employee might upload data to an unsanctioned cloud application, or copy data to personal cloud storage, instead of following approved corporate processes. Training employees to recognize what counts as sensitive data adds a first layer of security.

The key to solving this issue is a practice that essentially recruits the end user into the data protection program through workflow automation tools. These tools allow the security team to have an ongoing conversation with the end user, and where they really drive value is in coaching end users. A Data Loss Prevention (DLP) program allows IT teams to spot patterns and train people to learn from their mistakes, helping users make smarter decisions. Additionally, it is key for organizations to classify their data by criticality and sensitivity and to set up prevention strategies against these forms of accidental data loss.

2. BYOD: Bring Your Own Device

Across the modern working landscape, it’s becoming increasingly common for contractors and third parties to use their own devices to access corporate data. Whereas in the past, IT might have shipped a corporate device to a temporary team member so that IT had full control of what that person could access, it’s now a lot simpler and more streamlined to allow people to use their own devices to conduct their work. However, this modern way of working comes with added risk. For instance, a contractor might find a document particularly interesting and decide to save it for later, or even access a risky website. So, the question now is how to enable productivity without allowing corporate data to land or remain on a contractor’s laptop once their contract is over.

Here, browser isolation can be a valuable solution. If a contractor needs to access Salesforce, for example, the IT team can grant access via a cloud browser rather than direct entry to the application. The contractor can work with the data they need on their personal device, but IT retains overall control of what data they can use and move while in the browser. IT might prevent the contractor from downloading any data from the browser onto a hard drive, or block them from printing hard copies. Essentially, your organization can enable the use of somewhat risky applications, and let third-party team members retain their most productive ways of working, without incurring risk through or to the user.

3. Gen AI Apps: Enabling productivity, safely

Since Gen AI catapulted to the forefront of the business conversation in late 2022, organizations and their employees alike have been experimenting with the resulting applications. Given their obvious productivity-enhancing abilities, users are gravitating toward them so that they can do their jobs more efficiently. For IT departments, however, it’s essentially another example of a shadow IT application that might lead to loss of sensitive data.

Generative AI can be thought of as a big brain in the sky: it retains the information it is fed without understanding its level of confidentiality. Say, for example, the communications team is preparing a press release about an acquisition your organization has made and uses a GenAI tool to help. They are risking the tool passing that information on to someone outside the company. If anyone then asked the tool to share the latest news on your company, it could easily reveal embargoed and sensitive information without recognizing it as sensitive. At the moment, we’re seeing varying levels of response from IT teams. Some organizations are blocking GenAI tools outright, some are creating their own applications, and some simply don’t know how to respond to the issue.

The truth is that end users can’t be stopped from seeking out the benefits of these applications, so organizations need a structured plan in place to prevent data loss. Once more, browser isolation can be the security approach of choice, as previously described for BYOD: users can work with the application while corporate IT controls the Gen AI session, preventing data classified as sensitive from being stored by the application for public use. Additionally, DLP inspection tools can drive real value through their automated policies. They can identify when something such as source code is headed to a Gen AI app, and block it accordingly. When choosing a DLP tool to tackle Gen AI apps, look for one with a large set of predefined dictionaries. Such tools ship a wide variety of rules covering the breadth of potential data loss across GenAI apps, such as source code, payment information, customer information, health information, or any personal information. These rules can then be assigned to the various channels, such as web, email, and apps like ChatGPT, to prevent data loss.
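To make the dictionary idea concrete, the inspection step can be sketched as a handful of regular-expression rules applied to outbound text. The categories and patterns below are illustrative assumptions, not any vendor’s actual rule set, and real DLP products ship far richer dictionaries:

```python
import re

# Illustrative DLP dictionary: each category maps to a pattern that
# suggests sensitive content (real rule sets are far more extensive).
DLP_RULES = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "source_code": re.compile(r"\b(def |class |#include|import )\b"),
}

def inspect(text: str) -> list[str]:
    """Return the rule categories that match the outbound text."""
    return [name for name, pattern in DLP_RULES.items() if pattern.search(text)]

def allow_upload(text: str) -> bool:
    """Block the transfer (e.g. a paste into a GenAI app) if any rule fires."""
    return not inspect(text)

print(inspect("My card is 4111 1111 1111 1111"))  # ['payment_card']
print(allow_upload("Summarise this press release"))  # True
```

In practice the same rule set would be attached to each monitored channel (web, email, sanctioned apps), with the block-or-coach decision driven by the data’s classification.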

4. USB and mobile storage

There are plenty of circumstances in which end users might see value in copying information to a USB drive. They might be moving to another company, trying to share information, or may simply have picked up a USB stick at a trade show. Here, it’s important that an organization can control the channels on devices, whether that’s USB, network sharing, or file-sharing applications such as Dropbox or Google Drive. While there is certainly efficiency to be gained from these channels, they can leave a gaping hole in the network, and hackers are enjoying plenty of success by targeting these tier-1 applications.

This is where an endpoint DLP tool can really show its value, by putting physical controls around data loss prevention. By enabling control over common endpoint channels like USB, network shares, or personal storage, sensitive data can be secured against loss.

5. Protecting against malicious internal actors

While corporate espionage and external attacks in which data is leaked or held for ransom are one side of the coin, more common are instances where someone moving to a new job attempts to take data with them. It could be CVs for job applications, marketing databases, or even just documents containing best practices and processes they don’t want to lose. All the same, this is data that is being stolen.

To prevent this, security teams must be able to understand and flag the suspicious internal behaviors that act as indicators, and stop them. A User and Entity Behavior Analytics (UEBA) tool is an excellent means of tracking risky or abnormal user behavior, as it uses AI to flag unusual activity and inform the team. For instance, if a user downloads a 6GB file at 3am and that is not typical behavior for that employee, UEBA will inform the IT department that the incident requires further investigation. This early warning system gives the IT team greater forewarning of potential risks and allows them to cut threats off before they turn into data losses, rather than having to mitigate the issues after the loss has already occurred.
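The core of such behavioral flagging can be illustrated as a simple baseline-deviation check. The feature (transfer size), the z-score method, and the threshold below are all illustrative assumptions; real UEBA products model many signals at once:

```python
import statistics

def is_anomalous(history_mb: list[float], current_mb: float,
                 threshold: float = 3.0) -> bool:
    """Flag a transfer whose size deviates strongly from the user's baseline.

    history_mb: past transfer sizes for this user (the learned baseline).
    threshold:  number of standard deviations considered abnormal.
    """
    mean = statistics.fmean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0  # avoid divide-by-zero
    z = abs(current_mb - mean) / stdev
    return z > threshold

# Typical workday transfers are tens of MB; a ~6 GB pull stands out.
baseline = [12.0, 8.5, 20.0, 15.0, 9.0, 11.0]
print(is_anomalous(baseline, 6144.0))  # True
print(is_anomalous(baseline, 18.0))   # False
```

A hit would not block the user outright; it would raise an alert so the IT team can investigate before the transfer becomes a loss.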

How to scale up data loss prevention?

A complex security setup with a multitude of point products that don’t work in harmony will cause plenty of organizations data loss issues in the long term if they don’t make a change. IT teams will be consistently hamstrung unless they alter their approach, simplify their data loss prevention tooling, and find a way to see all aspects of the organizational environment through a single pane of glass. Opting for a cloud security platform enables organizations to easily inspect structured and unstructured data across all channels, ultimately allowing them to bring greater security to their enterprise and reduce data loss.


The post Five ways to lose your data appeared first on Cybersecurity Insiders.

In the realm of digital security, managing access credentials effectively is crucial. Two popular approaches to safeguarding online accounts are traditional password management and the emerging use of passkeys. While both aim to enhance security, they operate differently and offer distinct advantages and limitations. This article delves into the nuances of password management and passkeys to help you understand their differences and choose the best solution for your needs.

1. Password Management

Definition: Password management involves the use of software tools designed to store, organize, and secure passwords for various online accounts. These tools simplify the process of managing numerous passwords by securely storing them and enabling users to access their accounts through a single master password.

How It Works:

Password Storage: A password manager securely stores passwords using strong encryption algorithms. Users need to remember only one master password to access all their stored credentials.   

Autofill and Generation: Most password managers offer autofill capabilities, automatically entering login details on websites. They also provide password generation features, creating strong and unique passwords for each account. 

Synchronization: Many password managers offer cloud synchronization, allowing users to access their passwords across multiple devices seamlessly.

Additional Features: Password managers often include features like secure note storage, digital vaults for sensitive information, and breach monitoring.
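The storage and generation mechanics above can be sketched with Python’s standard library. The parameters here, such as the password length and the PBKDF2 iteration count, are illustrative assumptions rather than any particular product’s settings:

```python
import hashlib
import os
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Create a strong, unique password for one account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Derive the vault's encryption key from the master password.

    A deliberately slow KDF (here PBKDF2-HMAC-SHA256) makes brute-forcing
    the master password expensive; real managers use similar or stronger KDFs.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)

salt = os.urandom(16)            # stored alongside the vault; not secret
key = derive_vault_key("correct horse battery staple", salt)
print(len(generate_password()))  # 20
print(len(key))                  # 32 (a 256-bit key)
```

Note that the derived key, not the master password itself, encrypts the stored credentials, which is why everything hinges on the master password’s strength.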

Advantages:

Enhanced Security: Generates and stores strong, unique passwords for each account, reducing the risk of password reuse and breaches.

Convenience: Autofill and password generation save time and reduce the likelihood of using weak or repeated passwords.

Cross-Device Access: Synchronization across devices ensures users can access their passwords from anywhere.

Limitations:

Master Password Vulnerability: The security of the entire system hinges on the strength of the master password. If compromised, it could jeopardize all stored credentials.

Dependency on Software: Password managers rely on software, which can be a target for cyberattacks. Users need to keep the software updated to mitigate risks.

2. Passkeys

Definition: Passkeys are a modern authentication method that leverages cryptographic keys to provide a secure and passwordless way of accessing online accounts. They are a part of the broader shift towards passwordless authentication, aiming to enhance security and user experience.

How It Works:

Public and Private Keys: Passkeys consist of a pair of cryptographic keys: a public key stored on the server and a private key kept securely on the user’s device. Authentication occurs when the server uses the public key to verify a signature produced with the private key.

Authentication Process: When logging in, the user’s device proves its identity to the server using the private key. The server validates the authentication request without needing to store or transmit passwords.

Biometric and PIN Integration: Many passkey systems integrate with biometric authentication (like fingerprint or facial recognition) or device PINs to ensure secure access.
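The login flow above is a challenge–response exchange. Real passkeys (FIDO2/WebAuthn) use asymmetric signatures, so the server holds only the public key; the HMAC below is a symmetric stand-in, used purely to keep this sketch runnable with the standard library, and is not how actual passkeys are implemented:

```python
import hashlib
import hmac
import secrets

# --- Registration: the device creates a credential for this site ---
device_secret = secrets.token_bytes(32)   # stays on the device
# In real WebAuthn the server would store only the matching PUBLIC key;
# sharing the secret here is a simplification for this illustration.
server_credential = device_secret

# --- Login: the server issues a fresh, random challenge ---
challenge = secrets.token_bytes(16)

# The device proves possession of its key by "signing" the challenge.
device_response = hmac.new(device_secret, challenge, hashlib.sha256).digest()

# The server verifies the response; no password is stored or transmitted,
# and replaying an old response fails because each challenge is fresh.
expected = hmac.new(server_credential, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(device_response, expected))  # True
```

Because the secret never leaves the device and each challenge is single-use, a phishing site that captures one response cannot replay it elsewhere.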

Advantages:

Increased Security: Passkeys eliminate the need for passwords, reducing the risk of password-related attacks such as phishing and credential stuffing.

Enhanced User Experience: Users can authenticate quickly and easily using biometric methods or device PINs, streamlining the login process.

Resistance to Phishing: Since passkeys are bound to the legitimate site and do not involve passwords, they are highly resistant to phishing attacks that target login credentials.

Limitations:

Adoption and Compatibility: Passkeys are relatively new and may not be supported by all websites and services. Users may encounter compatibility issues or limitations in their use.

Device Dependence: The private key is stored on the user’s device, so access is tied to that device. If the device is lost or damaged, recovery options might be needed.

Comparison Summary

Security: Passkeys generally offer higher security compared to traditional passwords due to their resistance to phishing and credential theft. Password managers provide strong security if used correctly but rely on the master password’s strength.

User Experience: Passkeys streamline authentication with biometric and PIN options, while password managers simplify password management but require remembering and entering a master password.

Implementation: Password managers are widely used and compatible with many services, while passkeys are still in the process of broader adoption and may have compatibility constraints.

Conclusion

Both password management and passkeys represent significant advancements in digital security, each with its own strengths and limitations. Password managers offer a practical solution for managing multiple passwords securely, while passkeys provide a promising approach to passwordless authentication with enhanced security and user convenience. Understanding these differences can help users make informed decisions about their digital security practices and adopt the solution that best fits their needs.

The post Understanding the Differences Between Password Management and Passkeys appeared first on Cybersecurity Insiders.

In the realm of cybersecurity, where data has become an invaluable asset, precise understanding of technical terms is essential for professionals. Yet, many in the tech field find key data security terms perplexing. 

To address this gap, Kiteworks has analyzed search data to reveal the most frequently misunderstood data security concepts in the U.S. As cyber threats become increasingly sophisticated, mastering these terms is crucial for effective risk management. Kiteworks provides expert insights to clarify these critical concepts and underscores the need for comprehensive data protection strategies in 2024.

The Most Misunderstood Data Security Terms:

Please see the full dataset here.

VPN is the Most Misunderstood Data Security Term in the U.S.

The most misunderstood data security term in the U.S. is “Virtual Private Network (VPN),” which sees an average of 57,840 searches per month or 694,080 annually. Despite its significance in securing online connections and protecting sensitive data, many are unclear about the full scope of VPNs. 

Tim Freestone, Chief Strategy and Marketing Officer at Kiteworks, comments: “A Virtual Private Network (VPN) is essential for ensuring secure and private connections over the internet. A VPN is designed to encrypt your online activities, making it harder for cybercriminals, and even your internet provider, to intercept your data. Nevertheless, VPNs have their limitations: there remains an underlying risk when you open a VPN tunnel into your employer’s network from an untrusted home or public Wi-Fi network.

Understanding VPNs is crucial not just for protecting personal privacy but also for securing sensitive business information, particularly in remote work environments. Many organizations use VPNs as a fundamental layer of their cybersecurity strategy, highlighting their importance in safeguarding against potential breaches and unauthorized access.”

HIPAA is the Second Most Misunderstood Data Security Term

Following closely is the “Health Insurance Portability and Accountability Act (HIPAA),” with 13,700 searches each month or 164,400 annually. Despite its significance in safeguarding sensitive health information, many are unclear about the definition of HIPAA.

“In 2023, healthcare organizations experienced the most data breaches since 2009, with the industry paying the highest average data breach cost compared to other industries since 2010. The HIPAA Privacy Rule is a key federal law which establishes national standards for protecting individuals’ medical records and other personal health information. 

Understanding HIPAA is not just essential for compliance but also for protecting patients from potential data breaches and protected health information (PHI) loss, which could have severe consequences. Some organizations that don’t work in the healthcare sector still use HIPAA as a measure for the maturity of their data security, signifying its importance.”

Malware Ranks Third Among the Most Misunderstood Data Security Terms

The third most misunderstood data security term is “Malware,” with 13,200 monthly searches and 158,400 annually. Although widely used, the term still causes confusion, making it a critical point of concern.

Freestone clarifies: “Malware, or malicious software, is designed to infiltrate, damage, or disable computers and systems. It encompasses various types, including viruses, ransomware, and spyware. Given the rising sophistication of cyberattacks, understanding malware and its potential impact on an organization’s infrastructure and sensitive data is vital. Failure to recognize the threats posed by malware can lead to devastating breaches and significant financial losses. By protecting their infrastructure against malware, organizations can ensure the systems and data they rely on to function and grow are secured.”

Digital Rights Management (DRM) and Secure File Transfer Protocol (SFTP) rank in the Top 10 Most Misunderstood Data Security Terms

In the top 10, “Digital Rights Management (DRM)” ranks eighth with 5,770 monthly searches or 69,240 annually. DRM, which refers to technologies used to control access to and the use of digital content, is often misunderstood despite its widespread application in protecting intellectual property and other sensitive content. “Secure File Transfer Protocol (SFTP)” also makes the list, with 4,950 monthly searches and 59,400 annually. SFTP is a crucial tool for securely transferring files over a network, yet its functionality and benefits are frequently unclear to many users.

“Digital Rights Management (DRM) is a critical tool for safeguarding intellectual property like eBooks, software, and videos, but also, increasingly, other sensitive, proprietary content that needs to be shared with select partners for short time periods. This can include contracts, proposals, and customer records. DRM works by encrypting the digital content so that only authorized users can access it, restricting how it can be used and distributed. The primary function of DRM is to prohibit content copying or limit the number of people or devices that can access a piece of content.

Secure File Transfer Protocol (SFTP), by contrast, is vital for transferring files securely, reducing the risk of interception and unauthorized access. SFTP is the file transfer tool of choice in many organizations, encrypting both the credentials and the content into an unreadable format. This encryption ensures that sensitive information remains protected even if the data is intercepted during transmission.”

Why Understanding Data Security Terms is Crucial for Organizations

As cyber threats become increasingly frequent and sophisticated, it is crucial for organizations to have a comprehensive understanding of key data security terms to safeguard sensitive information. Knowledge of concepts such as VPNs, HIPAA regulations, and malware empowers companies to protect personal data, ensure compliance, and fortify their defenses against potential breaches.


The post The Most Misunderstood Data Security Terms in The U.S. appeared first on Cybersecurity Insiders.

In the realm of cybersecurity, where data has become an invaluable asset, precise understanding of technical terms is essential for professionals. Yet, many in the tech field find key data security terms perplexing. 

To address this gap, Kiteworks has analyzed search data to reveal the most frequently misunderstood data security concepts in the U.S. As cyber threats become increasingly sophisticated, mastering these terms is crucial for effective risk management. Kiteworks provides expert insights to clarify these critical concepts and underscores the need for comprehensive data protection strategies in 2024.

The Most Misunderstood Data Security Terms:

Please see the full dataset here.

VPN is the Most Misunderstood Data Security Term in the U.S.

The most misunderstood data security term in the U.S. is “Virtual Private Network (VPN),” which sees an average of 57,840 searches per month or 694,080 annually. Despite its significance in securing online connections and protecting sensitive data, many are unclear about the full scope of VPNs. 

Tim Freestone, Chief Strategy and Marketing Officer at Kiteworks, comments: “A Virtual Private Network (VPN) is essential for ensuring secure and private connections over the internet. A VPN is designed to encrypt your online activities, making it harder for cybercriminals, and even your internet provider, to intercept your data. Nevertheless, VPNs have their limitations: there remains an underlying risk when you open a VPN tunnel into your employer’s network from an untrusted home or public Wi-Fi network.

Understanding VPNs is crucial not just for protecting personal privacy but also for securing sensitive business information, particularly in remote work environments. Many organizations use VPNs as a fundamental layer of their cybersecurity strategy, highlighting their importance in safeguarding against potential breaches and unauthorized access.”

HIPAA is the Second Most Misunderstood Data Security Term

Following closely is the “Health Insurance Portability and Accountability Act (HIPAA),” with 13,700 searches each month or 164,400 annually. Despite its significance in safeguarding sensitive health information, many are unclear about the definition of HIPAA

“In 2023, healthcare organizations experienced the most data breaches since 2009, with the industry paying the highest average data breach cost compared to other industries since 2010. The HIPAA Privacy Rule is a key federal law which establishes national standards for protecting individuals’ medical records and other personal health information. 

Understanding HIPAA is not just essential for compliance but also for protecting patients from potential data breaches and protected health information (PHI) loss, which could have severe consequences. Some organizations that don’t work in the healthcare sector still use HIPAA as a measure for the maturity of their data security, signifying its importance.”

Malware Ranks Third Among the Most Misunderstood Data Security Terms

The third most misunderstood data security term is “Malware,” with 13,200 monthly searches and 158,400 annually. Although widely used, the term still causes confusion, making it a critical point of concern.

Freestone clarifies: “Malware, or malicious software, is designed to infiltrate, damage, or disable computers and systems. It encompasses various types, including viruses, ransomware, and spyware. Given the rising sophistication of cyberattacks, understanding malware and its potential impact on an organization’s infrastructure and sensitive data is vital. Failure to recognize the threats posed by malware can lead to devastating breaches and significant financial losses. By protecting their infrastructure against malware, organizations can ensure the systems and data they rely on to function and grow is secured.”

Digital Rights Management (DRM) and Secure File Transfer Protocol (SFTP) rank in the Top 10 Most Misunderstood Data Security Terms

In the top 10, “Digital Rights Management (DRM)” ranks eighth with 5,770 monthly searches or 69,240 annually. DRM, which refers to technologies used to control the access to and use of digital content, is often misunderstood despite its widespread application in protecting intellectual property and other sensitive content. “Secure File Transfer Protocol (SFTP)” also makes the list, with 4,950 monthly searches and 59,400 annually. SFTP is a crucial tool for securely transferring files over a network, yet its functionality and benefits are frequently unclear to many users.

“Digital Rights Management (DRM) is a critical tool for safeguarding intellectual property like eBooks, software, and videos, but also increasingly other sensitive, proprietary content that needs to be shared with select partners for short time periods. This can include contracts, proposals, and customer records . DRM works by encrypting the digital content so that only authorized users can access it, restricting how it can be used and distributed. The primary function of DRM is to prohibit content copying or limit the number of people or devices that can access a piece of content. 

Secure File Transfer Protocol (SFTP), by contrast, is vital for transferring files securely, reducing the risk of interception and unauthorized access. SFTP is the file transfer tool of choice in many organizations, encrypting the credentials and the content  to unreadable format. This encryption ensures that sensitive information remains protected even if the data is intercepted during transmission.”

Why Understanding Data Security Terms is Crucial for Organisations

As cyber threats become increasingly frequent and sophisticated, it is crucial for organizations to have a comprehensive understanding of key data security terms to safeguard sensitive information. Knowledge of concepts such as VPNs, HIPAA regulations, and malware empowers companies to protect personal data, ensure compliance, and fortify their defenses against potential breaches.

 

The post The Most Misunderstood Data Security Terms in The U.S. appeared first on Cybersecurity Insiders.

In the realm of cybersecurity, where data has become an invaluable asset, precise understanding of technical terms is essential for professionals. Yet, many in the tech field find key data security terms perplexing. 

To address this gap, Kiteworks has analyzed search data to reveal the most frequently misunderstood data security concepts in the U.S. As cyber threats become increasingly sophisticated, mastering these terms is crucial for effective risk management. Kiteworks provides expert insights to clarify these critical concepts and underscores the need for comprehensive data protection strategies in 2024.

The Most Misunderstood Data Security Terms:

Please see the full dataset here.

VPN is the Most Misunderstood Data Security Term in the U.S.

The most misunderstood data security term in the U.S. is “Virtual Private Network (VPN),” which sees an average of 57,840 searches per month or 694,080 annually. Despite its significance in securing online connections and protecting sensitive data, many are unclear about the full scope of VPNs. 

Tim Freestone, Chief Strategy and Marketing Officer at Kiteworks, comments: “A Virtual Private Network (VPN) is essential for ensuring secure and private connections over the internet. A VPN is designed to encrypt your online activities, making it harder for cybercriminals, and even your internet provider, to intercept your data. Nevertheless, VPNs have their limitations: there remains an underlying risk when you open a VPN tunnel into your employer’s network from an untrusted home or public Wi-Fi network.

Understanding VPNs is crucial not just for protecting personal privacy but also for securing sensitive business information, particularly in remote work environments. Many organizations use VPNs as a fundamental layer of their cybersecurity strategy, highlighting their importance in safeguarding against potential breaches and unauthorized access.”

HIPAA is the Second Most Misunderstood Data Security Term

Following closely is the “Health Insurance Portability and Accountability Act (HIPAA),” with 13,700 searches each month or 164,400 annually. Despite its significance in safeguarding sensitive health information, many are unclear about the definition of HIPAA.

Freestone continues: “In 2023, healthcare organizations experienced the most data breaches since 2009, with the industry paying the highest average data breach cost of any industry since 2010. The HIPAA Privacy Rule is a key federal law that establishes national standards for protecting individuals’ medical records and other personal health information. 

Understanding HIPAA is not just essential for compliance but also for protecting patients from potential data breaches and protected health information (PHI) loss, which could have severe consequences. Some organizations that don’t work in the healthcare sector still use HIPAA as a measure for the maturity of their data security, signifying its importance.”

Malware Ranks Third Among the Most Misunderstood Data Security Terms

The third most misunderstood data security term is “Malware,” with 13,200 monthly searches and 158,400 annually. Although widely used, the term still causes confusion, making it a critical point of concern.

Freestone clarifies: “Malware, or malicious software, is designed to infiltrate, damage, or disable computers and systems. It encompasses various types, including viruses, ransomware, and spyware. Given the rising sophistication of cyberattacks, understanding malware and its potential impact on an organization’s infrastructure and sensitive data is vital. Failure to recognize the threats posed by malware can lead to devastating breaches and significant financial losses. By protecting their infrastructure against malware, organizations can ensure the systems and data they rely on to function and grow are secured.”

Digital Rights Management (DRM) and Secure File Transfer Protocol (SFTP) rank in the Top 10 Most Misunderstood Data Security Terms

In the top 10, “Digital Rights Management (DRM)” ranks eighth with 5,770 monthly searches or 69,240 annually. DRM, which refers to technologies used to control the access to and use of digital content, is often misunderstood despite its widespread application in protecting intellectual property and other sensitive content. “Secure File Transfer Protocol (SFTP)” also makes the list, with 4,950 monthly searches and 59,400 annually. SFTP is a crucial tool for securely transferring files over a network, yet its functionality and benefits are frequently unclear to many users.

“Digital Rights Management (DRM) is a critical tool for safeguarding intellectual property like eBooks, software, and videos, but also increasingly other sensitive, proprietary content that needs to be shared with select partners for short time periods. This can include contracts, proposals, and customer records. DRM works by encrypting the digital content so that only authorized users can access it, restricting how it can be used and distributed. The primary function of DRM is to prohibit content copying or limit the number of people or devices that can access a piece of content. 

Secure File Transfer Protocol (SFTP), by contrast, is vital for transferring files securely, reducing the risk of interception and unauthorized access. SFTP is the file transfer tool of choice in many organizations, encrypting both the credentials and the content into an unreadable format. This encryption ensures that sensitive information remains protected even if the data is intercepted during transmission.”
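The encrypt-then-license model described for DRM above can be sketched in a few lines. Everything here is hypothetical and simplified — real DRM relies on hardened key management and device attestation, not an in-memory dictionary — but the shape is the same: content is stored encrypted, and the key is released only to users holding a license.

```python
import os

class ToyDrmVault:
    """Toy DRM sketch: content is stored XOR-'encrypted' under a random
    one-time key, and decrypted only for explicitly licensed users."""

    def __init__(self):
        self._content = {}   # title -> (key, ciphertext)
        self._licenses = {}  # title -> set of authorized user ids

    def publish(self, title: str, data: bytes, authorized: set) -> None:
        key = os.urandom(len(data))
        ciphertext = bytes(d ^ k for d, k in zip(data, key))
        self._content[title] = (key, ciphertext)
        self._licenses[title] = set(authorized)

    def open(self, title: str, user: str) -> bytes:
        if user not in self._licenses.get(title, set()):
            raise PermissionError(f"{user} holds no license for {title}")
        key, ciphertext = self._content[title]
        return bytes(c ^ k for c, k in zip(ciphertext, key))

vault = ToyDrmVault()
vault.publish("q3-proposal.pdf", b"confidential terms", authorized={"alice"})
print(vault.open("q3-proposal.pdf", "alice"))  # licensed partner reads it
```

An unlicensed user hitting `open` gets a refusal rather than ciphertext, mirroring how DRM limits who — and how many devices — can access a piece of content.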

Why Understanding Data Security Terms is Crucial for Organizations

As cyber threats become increasingly frequent and sophisticated, it is crucial for organizations to have a comprehensive understanding of key data security terms to safeguard sensitive information. Knowledge of concepts such as VPNs, HIPAA regulations, and malware empowers companies to protect personal data, ensure compliance, and fortify their defenses against potential breaches.



Research from Universidad de la República (Udelar) in Uruguay has unveiled a new security vulnerability involving HDMI cables, which are commonly used to connect computers to TVs and large screens. The study reveals that hackers can exploit these cables to steal passwords and conduct espionage.

According to the research paper, artificial intelligence (AI) can be used to decode electromagnetic emissions from HDMI connectors, allowing attackers to reconstruct the content previously displayed on a computer screen with up to 60% accuracy. Federico Larroca, a researcher involved in the study, explained that while executing such operations is technically challenging, the combination of software-defined radio equipment and AI makes it feasible to capture and reconstruct text and images.

Larroca noted that the accuracy of this data reconstruction is expected to improve as AI algorithms advance, leading to reduced error rates in character recognition.

Despite efforts to design modern HDMI cables with reduced interference, AI advancements are still capable of overcoming these safeguards, potentially jeopardizing the security of numerous large-screen setups.

Cybersecurity experts recommend using encrypted connections when displaying sensitive information on external screens and avoiding the display of passwords via HDMI-connected monitors. Some experts also suggest utilizing wireless screencasting options, though this method may lead to data loss.

In terms of market impact, a survey by Technavio predicts that the HDMI cable market will reach $900 million between 2024 and 2028. This growth is expected to be driven by increased usage of smart devices and rising demand for ultra-high-definition 8K video and audio transmission.

The post Now espionage through HDMI Cables say experts appeared first on Cybersecurity Insiders.

Proxmox VE is mainly suitable for small and medium-sized organizations that require advanced virtualization capabilities but have limited budgets. It is an open-source solution with particular advantages and disadvantages. On the one hand, it offers flexibility and adaptability that allow you to build an efficient environment according to your needs. On the other, the advanced configuration and maintenance requirements can make it challenging to achieve the desired performance, compatibility, and security. 

The data that organizations process and store on Proxmox VMs can be critical to production and revenue. Additionally, that data can fall under compliance and legal protection requirements. Organizations can face financial fines and reputational damage in case of an IT incident leading to the loss of such data. Implementing a Proxmox backup solution and ensuring reliable VM data protection is key to avoiding such disasters, supporting production continuity, and generating stable revenue. 

NAKIVO, a leader in data protection and disaster recovery solutions, has announced the recent release of NAKIVO Backup & Replication v10.11.2, featuring an advanced backup solution for Proxmox environments. You can try the free version and benefit from the Proxmox agent-based backup solution at no additional cost until the end of 2024. 

Read on to explore the main challenges to consider when integrating Proxmox backups into your environment. 

Proxmox Backup Challenges

Proxmox Backup Server, the native backup and recovery solution for Proxmox VMs, can perform management, data deduplication, and encryption via the web-based interface and CLI to provide data protection, replication, and recovery. However, the tool has some limitations that push users to consider alternative solutions. 

Backup tiering 

The IT industry standard for backup data reliability is the 3-2-1 rule: keep at least three (3) copies of your data, stored on two (2) different types of repository, with one (1) copy offsite or in the cloud. Proxmox Backup Server allows users to configure cloud backup synchronization, but the process involves manual setup. This process is prone to human error even before the first workflow is initiated. 
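The 3-2-1 rule lends itself to a mechanical check. Below is a minimal sketch of such a check; the field names are illustrative, not part of any Proxmox or vendor API:

```python
def satisfies_3_2_1(copies):
    """Check a backup plan against the 3-2-1 rule:
    >= 3 copies, on >= 2 distinct media/repository types, >= 1 offsite."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    has_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and has_offsite

plan = [
    {"media": "local-disk", "offsite": False},  # production data
    {"media": "nas",        "offsite": False},  # onsite backup
    {"media": "s3",         "offsite": True},   # cloud copy
]
print(satisfies_3_2_1(plan))  # True
```

Dropping the cloud copy from the plan above makes the check fail on both the copy count and the offsite requirement, which is exactly the gap manual synchronization setups tend to leave open.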

Additionally, the overall level of native Proxmox backup automation can be insufficient for organizations with large data assets. In some cases, you can successfully tier backups after spending some time studying Proxmox’s extensive knowledge base. However, you may want your in-house IT specialists to focus on production instead.

Ransomware resilience 

Nowadays, hackers target backups along with production data when planning cyberattacks, which makes anti-ransomware protection of backup copies critical. Although Proxmox Backup Server provides some room to set up data security, configuring immutability for PBS to protect backups can require advanced knowledge and third-party integrations. This extends the supply chain, may lead to compatibility issues, and can further complicate your environment.
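The immutability in question is essentially write-once-read-many (WORM) storage with a retention lock: until the lock expires, nothing — including ransomware running with stolen credentials — can overwrite or purge the backup. A simplified sketch of that behavior (not PBS’s or any vendor’s actual mechanism):

```python
import time

class ImmutableBackupStore:
    """Toy WORM-style store: once written, an object cannot be
    modified or deleted until its retention lock expires."""

    def __init__(self):
        self._objects = {}  # name -> (data, locked_until_epoch)

    def write(self, name: str, data: bytes, retention_seconds: int) -> None:
        if name in self._objects and time.time() < self._objects[name][1]:
            raise PermissionError(f"{name} is locked and cannot be overwritten")
        self._objects[name] = (data, time.time() + retention_seconds)

    def delete(self, name: str) -> None:
        if time.time() < self._objects[name][1]:
            raise PermissionError(f"{name} is under retention lock")
        del self._objects[name]

store = ImmutableBackupStore()
store.write("vm-backup-2024-07-01", b"...", retention_seconds=30 * 24 * 3600)
# A ransomware attempt to purge or encrypt-in-place the backup is refused
# until the 30-day lock expires:
# store.delete("vm-backup-2024-07-01")  -> raises PermissionError
```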

Multi-platform support

Proxmox Backup Server is a native solution designed to enhance data protection in Proxmox VE infrastructures and Linux-based machines in general. If you build a homogeneous Proxmox-based virtualization system, this can work well. But when your production environment spans multiple platforms, numerous issues might arise. 

If the native VM backup solution from Proxmox doesn’t suit you due to limited backup tiering flexibility, platform limitations, or security concerns, finding an efficient and user-friendly alternative can be the best option.

The Proxmox VE Backup Solution by NAKIVO

With the backup solution from NAKIVO, you can create fast and efficient backups to protect Proxmox VM data and implement one of the essential points of a disaster recovery plan. The Proxmox agentless backup is currently in development. 

Integrating NAKIVO’s Proxmox backup solution into your infrastructure provides the following benefits: 

  • Fast, automated, incremental, and app-aware Proxmox backups that you can run by schedule and on demand.
  • Centralized web-based interface to maintain and monitor data protection workflows across your infrastructure. 
  • Onsite, offsite, cloud, and NAS storage options for backup tiering. 
  • Backup immutability and encryption for better security and ransomware resilience. 
  • Flexible recovery options to achieve tight RPO and RTO.

Fast operation, reliability, and qualified support are the main reasons customers choose NAKIVO. In addition, the solution is affordable: subscription licenses start at $2.50 per workload/month; perpetual licenses start at $58 per VM.

Benefits 

With the NAKIVO solution, you can ensure high-level automation of your data protection processes. NAKIVO Backup & Replication is designed with deployment and configuration simplicity in mind. You can easily install the solution and run the first Proxmox backup. 

You can also use the advanced set of features to optimize storage space and boost performance. Schedule and complete incremental Proxmox backups, dynamically balance the available network resources, and cut backup windows using deduplication. By managing the available hardware resources and efficiently utilizing storage space, you can further reduce the total cost of your Proxmox backup system.

Initial Configuration

NAKIVO Backup & Replication uses agent-based backup and recovery. The agentless backup functionality is in development, with the release scheduled for later in 2024. To start integrating advanced data protection workflows into your Proxmox environment, you can deploy the solution on a Proxmox VM running Ubuntu Linux and set up the onboard backup repository. 

Check NAKIVO’s user guide for more installation instructions. You have local, shared, and cloud datastore options that you can use to tier backup repositories and enhance the system’s resilience. 

After that, add Proxmox virtual machines to the inventory in NAKIVO Backup & Replication. Note that you need to add Proxmox VMs as physical machines. Now you can create a backup job. Check this guide for additional Proxmox backup and recovery instructions.

Conclusion

NAKIVO Backup & Replication provides agent-based, incremental, and app-aware Proxmox backup. You can simplify both backup and recovery configuration and processes and configure the set of security features for optimal system performance. Lastly, you should apply the virtual machine backup best practices to enhance the resilience of your data and ensure the availability of your Proxmox environment.


The post Proxmox Backup by NAKIVO: Powerful VM Data Protection appeared first on Cybersecurity Insiders.

The technologies that will enable optimised data security already exist, but businesses are resting on their laurels.

Data gathered by Governing indicates that in 2023 over 353 million individuals were affected by data compromises, including data breaches, leakage, and exposure. Figures this high call for systemic and industry-wide change and suggest that a revamp in how data security is approached is long overdue.

Simon Bain, OmniIndex’s CEO, argues that there are three key areas to consider when building and deploying competent and modern security defenses. Luckily, all three already exist and are available to use:

1. Adopting a zero trust model

Zero trust is a framework for securing infrastructure and data for today’s modern digital transformation. It addresses the modern challenges of business, including securing remote workers, hybrid cloud environments, and ransomware threats.

Bain: “With the amount of data that is created today, managing it, storing it and making sure that only authorized people have access to it is becoming one of the hardest jobs within data management. To help mitigate this problem, a zero trust model will enable an organization to put in place best practice for data access and storage. However, this alone does not stop data leakages or ransomware attacks.”

“While many vendors have tried to create their own definitions of zero trust, there are a number of standards from recognized organizations that can help you align zero trust with your organization. By adding zero trust to the data in addition to the other areas within an organization, you’ll create a more durable foundation for the data.”
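The core of the model Bain describes is deny-by-default evaluation: every request must prove identity, device health, and an explicit grant, regardless of where it comes from. A minimal sketch of that decision (field names and policy shape are illustrative, not any vendor’s API; standards such as NIST SP 800-207 describe the full architecture):

```python
def authorize(request, policy):
    """Zero-trust sketch: deny by default. A request is allowed only with
    a verified identity, a compliant device, and an explicit least-privilege
    grant for the resource -- network location conveys no trust."""
    rules = policy.get(request["user"], {})
    return (
        request["identity_verified"]            # e.g. MFA passed
        and request["device_compliant"]         # device posture check
        and request["resource"] in rules        # explicit grant exists
        and request["action"] in rules[request["resource"]]
    )

policy = {"alice": {"payroll-db": {"read"}}}
req = {"user": "alice", "identity_verified": True, "device_compliant": True,
       "resource": "payroll-db", "action": "read"}
print(authorize(req, policy))  # True
```

Note that any missing condition — a failed posture check, an unlisted resource, an action outside the grant — yields a denial, which is the “durable foundation” the quote refers to.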

2. Leveraging the advancements of an open source database

PostgreSQL is a powerful, free, and open source database system built for reliability and handling massive amounts of data. This makes it an attractive choice for companies to store everything from user accounts and customer data through to website logs.

Bain: “As PostgreSQL is open source, it is constantly being improved by a large community of developers that use it. It is a direct result of this thorough and collaborative approach that PostgreSQL has high levels of resilience, integrity, and correctness.

“PostgreSQL is already used as the primary data store or data warehouse for many web, mobile, geospatial, and analytics applications including Apple, Netflix, Spotify and Uber as well as tens of thousands of other companies of all sizes.”

Stack Overflow named Postgres the most used database in its annual developer survey, with 45% of respondents reporting they use it, compared to 41% for MySQL. As such, developers already know how to use it: it is an industry standard, fully embedded in people’s workflows.

Bain continued: “In truth, it is actually one of the most advanced database engines in the world with support for many different technical services including AI, ML and SQL. What truly makes Postgres stand out and keeps it at the top of the field is that it is actively maintained with a lot of developers maintaining and extending it to keep it not just up to date with the latest advancements, but also ahead of the trends.”

3. Web3: That’s all blockchain and crypto, isn’t it?

Web3 is the third generation of the world wide web, which involves direct immersion into the digital world. It encompasses individual control of personal data as well as the use of cryptocurrencies and blockchain.

Much more than cryptocurrency, web3 remains in constant development and acts as a vision of a decentralized and open web with greater utility for its users.

Bain: “Blockchain certainly uses web3, and the concepts of web3 are built onto the foundations of blockchain. However, web3 is actually far more than this. It adds layers of security that mean data is better protected and there is clearer oversight of data use, without compromising the data.

“And the data is more resilient to cyberattacks because only those people who have been granted permission by the data owner are able to access the data. By using a web3 database, your data is not only more resilient to attacks, but it becomes a core part of your zero trust model.”
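The tamper evidence Bain attributes to blockchain-style storage comes from hash chaining: each record carries the hash of its predecessor, so altering any earlier entry breaks every later link. A minimal illustration (a real web3 database layers distribution and consensus on top of this):

```python
import hashlib

def chain(records):
    """Build a toy append-only ledger: each entry embeds the hash of the
    previous one, so earlier records cannot be changed undetected."""
    entries, prev = [], "0" * 64
    for data in records:
        digest = hashlib.sha256((prev + data).encode()).hexdigest()
        entries.append({"data": data, "prev": prev, "hash": digest})
        prev = digest
    return entries

def verify(entries):
    """Recompute every link; any tampering surfaces as a mismatch."""
    prev = "0" * 64
    for e in entries:
        expected = hashlib.sha256((prev + e["data"]).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = chain(["grant:alice:read", "revoke:bob:write"])
print(verify(ledger))   # True
ledger[0]["data"] = "grant:mallory:admin"
print(verify(ledger))   # False -- the tampering is evident
```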

Out with the old, in with the new

Bain: “There are many areas where data security is hampered by the outdated attitudes and belief systems of the 2000s and 2010s. For example, modern security features shouldn’t compromise data use, data should be analyzed at insertion, data must be encrypted at all times and the data owner should decide who has access.

“Addressing the challenges of modern data security starts with the three key areas mentioned, and relies on progressive attitudes to security that embrace modern technology. Relying on outdated, frequently breached methods and models will only see the figures on data theft and breach rise year-on-year.”

The post The three pillars of the next generation in data security: PostgreSQL, zero trust and web3 appeared first on Cybersecurity Insiders.

A recent study revealed that employees typically download around 30GB of data monthly from SaaS applications to their devices, including mobile phones, laptops, and desktops. This high volume illustrates the large amounts of unsecured data flowing across networks and devices, underscoring the critical need for advanced data protection measures. 

To ensure end-to-end protection, security analysts need to employ advanced, all-encompassing data protection measures. This can be done by utilizing technology that addresses all critical aspects of data movement and handling, the “Who, What, Where, and How” of data’s origin. Capabilities like origin-based data identification, manipulation detection, and data egress controls allow security analysts to effectively monitor and manage data throughout its entire lifecycle, ensuring protection across all endpoints. However, the benefits of advanced data protection go beyond simply securing data in motion.  
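The “Who, What, Where, and How” of origin-based identification can be sketched as simple event tagging feeding an egress decision. The names and the rule below are hypothetical, not Next DLP’s or any vendor’s implementation:

```python
import datetime

def tag_event(who, what, where, how):
    """Record the Who / What / Where / How of a data movement so later
    egress decisions can reason about the file's origin."""
    return {"who": who, "what": what, "where": where, "how": how,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat()}

def is_risky_egress(lineage, destination):
    # Block egress of anything that originated in a sensitive repository
    # ("crm-export" here, as a stand-in) and is headed somewhere unmanaged.
    sensitive_origin = any(e["where"] == "crm-export" for e in lineage)
    unmanaged = destination not in {"corporate-sharepoint", "approved-s3"}
    return sensitive_origin and unmanaged

lineage = [tag_event("alice", "customers.csv", "crm-export", "download")]
print(is_risky_egress(lineage, "personal-gdrive"))  # True
```

Because the decision keys on recorded origin rather than content inspection alone, the same file moving to an approved destination passes while the risky route is flagged.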

In the age of AI, employees can access a world of different Generative AI (GenAI) platforms with just a single click. Although convenient, many employees don’t recognize or understand the potential threats caused by inputting sensitive data into GenAI platforms. As a result, modern solutions that prevent unauthorized sharing of sensitive information are a necessity given today’s cyberthreat landscape. 

A recent survey highlights a troubling trend in unauthorized SaaS application usage among organizations. Key findings show that a staggering 73% of security professionals admitted using non-approved SaaS apps, with significant risks such as data loss (65%) and data breaches (52%) cited among the top concerns. Despite this awareness, only 37% have established clear policies to address these risks, revealing a significant gap in security governance that urgently needs addressing to prevent serious compliance and security issues.  

There are numerous DLP solutions available in today’s market, but not every solution accounts for the evolving data security risks security teams face daily. Next DLP’s recent announcement of Secure Data Flow, a feature within the Reveal Platform, paves the way for high-performance risk detection and protection capabilities, streamlining data management, improving data sensitivity recognition, and reducing ongoing content inspection costs. Next-gen DLP solutions such as this have the capabilities to identify and track data through its entire lifecycle from origin to egress. By analyzing data’s origin and content, these platforms can prevent sensitive data from traveling to unauthorized external locations or networks.

Focusing on data lineage enables companies to enhance their cybersecurity strategies by precisely identifying and monitoring high-risk employee groups or individuals. This targeted approach not only allows for the early detection of potential data exfiltration activities but also aids in tracing the flow of data across the organization. For example, this can look like monitoring employee activity following a reduction in force (RIF) by detecting and immediately flagging suspicious actions. Or, for instance, consider a disgruntled employee following a return-to-office mandate. 

Whatever the scenario, Secure Data Flow tracks an employee downloading IP, renaming files, or archiving data and then exfiltrating it to a personal shadow SaaS service or application. 
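That download–stage–exfiltrate pattern can be expressed as a toy rule over a per-user, time-ordered event stream. This is purely illustrative — not Secure Data Flow’s implementation; production tools correlate far richer telemetry — but it shows why sequence matters more than any single action:

```python
SANCTIONED = {"corporate-box", "approved-s3"}  # hypothetical allow-list

def flags_exfiltration(events):
    """Flag the sequence: sensitive download -> rename/archive staging ->
    upload to an unsanctioned destination."""
    saw_download = saw_staging = False
    for e in events:
        if e["action"] == "download" and e.get("sensitive"):
            saw_download = True
        elif e["action"] in {"rename", "archive"} and saw_download:
            saw_staging = True
        elif (e["action"] == "upload" and saw_download and saw_staging
              and e["destination"] not in SANCTIONED):
            return True
    return False

events = [
    {"action": "download", "sensitive": True},           # pulls IP from a repo
    {"action": "archive"},                               # zips it for transfer
    {"action": "upload", "destination": "personal-dropbox"},
]
print(flags_exfiltration(events))  # True
```

The same upload to a sanctioned destination, or an upload with no prior staging step, would not trip the rule — which is how sequence-aware detection keeps false positives down.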

It’s crucial to recognize that not all insider risks and data loss incidents stem from intentional malice. In many instances, employees may inadvertently create security risks in their pursuit of efficiency—for instance, by downloading intellectual property or sensitive data from corporate repositories and uploading it to personal or unsanctioned GenAI and shadow SaaS solutions to enhance job performance. 

With the latest capabilities, data loss and insider risk solutions are designed to coach employees on handling sensitive information. By educating employees, reinforcing good behavior, and providing continuous feedback and training, this approach helps foster a culture of security within the company. This reduces the likelihood of insider threats by making employees more knowledgeable and vigilant about data protection. 

With the right technology, security teams can: 

  • Achieve Comprehensive Data Tracking: Advanced data protection enables organizations to secure critical business data not only during transit but also at rest and in use within SaaS applications. Comprehensive tracking capabilities ensure that sensitive information remains protected regardless of its location or state, whether stored in cloud environments or accessed from mobile devices and desktops. By monitoring data flows across the entire ecosystem, security teams gain visibility into how data moves within and outside the organization, facilitating proactive risk management and compliance with data privacy regulations.
  • Enhance Data Protection: Organizations can effectively safeguard their intellectual property, proprietary information, and other sensitive data from potential loss, leakage, or theft. Advanced data protection solutions leverage encryption, access controls, and data masking techniques to ensure that only authorized personnel have access to sensitive information. 
  • Provide Insightful Investigations: Security analysts benefit from advanced data protection tools by gaining contextual insights into the origin, manipulation, and lineage of data. These insights enable swift and accurate incident response, helping analysts identify and mitigate security incidents before they escalate. By tracing data movement and access patterns, security teams can conduct thorough investigations into suspicious activities, track data breaches, and assess the impact on business operations. 

As new technologies are introduced and our digital ecosystems expand, investing in advanced data protection is a strategic imperative for maintaining a strong security posture. By shifting to modern solutions that safeguard data in motion, at rest, and in use, organizations can proactively defend against emerging threats. These technologies enable security teams to fortify their defenses, transform their workflows, and significantly enhance their overall security infrastructure. 


The post How Advanced Data Protection Revolutionizes Security Analysts’ Workflow appeared first on Cybersecurity Insiders.