For all those who were accustomed to sharing their Netflix passwords with friends and family, here’s an important update on how the company has taken action against this practice. Netflix released an official statement last Wednesday, indicating that its efforts to curb password sharing have been highly successful, resulting in the addition of approximately 8.8 million new users to its platform.

In other words, the company’s strategy to discourage password sharing has proven to be a significant boost to its revenue this year. Last year, Netflix took a proactive stance against password sharing and informed its user base about its intention to crack down on this practice, as it was significantly impacting its monthly and yearly revenue collections.

Now, the popular video streaming service is reaping the rewards of this approach, having welcomed more than 8 million new customers, representing a substantial 30% increase in its subscriber base. This surge in subscribers can be seen as a summer bonus for the company.

Password sharing has long been a source of concern, as it can lead to various issues. Misuse of account credentials can result in account blocks and user account cancellations. Furthermore, in the wrong hands, these credentials can be used for scams, potentially draining the account holder’s e-wallets or bank accounts.

In a time when companies often struggle to generate revenue, such challenges can have a noticeable impact on the quality of services and content offered to users. Delays in service provision, subpar customer care experiences, and payment delays for content creators are some of the common consequences.

This summer, however, things are expected to be different from a revenue perspective. Netflix has implemented a price increase for its basic subscription in the United States, raising it by $2 to $11.99. Likewise, in the UK, subscription costs will see a £2 increase, bringing the overall cost to £18. This move is likely to contribute to the company’s revenue growth in the coming months.

The post Netflix password sharing crackdown yields excellent results appeared first on Cybersecurity Insiders.

During the shopping season, a significant portion of the United Kingdom’s population was eagerly turning to the Temu online shopping application, enticed by its promise of unbelievable prices. The application, adorned with an eye-catching orange logo, had generated high expectations for excellent profits during the Christmas 2023 shopping frenzy.

However, law enforcement authorities have issued a stark warning about this online marketplace. They have uncovered evidence of the app harvesting customer data and expressed concerns that this data may find its way into Chinese hands.

These growing concerns about data privacy have dominated headlines on Google, casting a shadow of doubt over the app’s trustworthiness. Many potential users are now hesitant to explore Temu for their shopping needs, especially considering that it markets Chinese products under the guise of a Singaporean company. Adding to these apprehensions is a recent alert circulating on the Telegram app, suggesting that the electronic devices supplied by Temu may harbor malware capable of espionage in the future.

Operating under the name Tee-Moo in Beijing, the app prides itself on delivering products directly from factories, boasting cost-effective prices that not only delight customers but also keep the business thriving.

Currently, these ultra-low prices are exclusively available to customers in the United Kingdom. However, the company has ambitions to expand its services across Europe, contingent on compliance with the General Data Protection Regulation (GDPR) and privacy regulations in the UK.

The owner of the Temu app, e-commerce giant PDD Holdings Inc., has said little publicly on the issue beyond clarifying that its products and services do not contain any hidden malware or malicious software. The company asserts that the data it collects is used solely to improve its services.

So, what’s the best course of action?

It is advisable to exercise caution. Avoid sharing sensitive banking details such as credit and debit card numbers or CVVs unless absolutely necessary. If possible, opt for the Cash on Delivery (COD) payment method when shopping through the Temu app to minimize potential risks.

The post China Temu App caused data privacy concerns in United Kingdom appeared first on Cybersecurity Insiders.

QR code phishing, also known as ‘quishing,’ is a cyberattack that leverages Quick Response (QR) codes to deceive individuals into revealing sensitive information or taking malicious actions. QR codes are two-dimensional barcodes that can store various types of data, including website URLs, contact information, and text. Cybercriminals use these codes to disguise their malicious intent.

Here’s how QR code phishing typically works:

Distribution: Attackers distribute QR codes through various means, such as emails, SMS messages, social media, or physical printouts. These QR codes may appear legitimate and may be accompanied by enticing offers, discounts, or urgent messages to lure victims.

Scanning: Victims scan the QR code using their smartphone or QR code scanner app, believing it to be a harmless link or promotion.

Redirect: Once scanned, the QR code redirects the victim to a malicious website or landing page designed to mimic a legitimate site. This fake website often closely resembles a well-known brand, a banking portal, or an e-commerce platform.

Phishing: On the fake website, victims are prompted to enter sensitive information, such as login credentials, credit card details, or personal identification information. Some QR code phishing attacks might also prompt victims to download malicious files or apps.

Data Theft or Malware Installation: Attackers collect the entered information for illegal purposes, such as identity theft or financial fraud. In some cases, malware may be installed on the victim’s device, allowing the attacker to gain further access and control.

To protect yourself from QR code phishing:

a.) Verify the Source: Only scan QR codes from trusted sources. Be cautious of QR codes received via unsolicited emails, text messages, or social media.

b.) Inspect the URL: Before providing any sensitive information, review the URL displayed after scanning the QR code. Ensure it matches the legitimate website of the organization in question.

c.) Use a QR Code Scanner with Security Features: Some QR code scanner apps have built-in security features that can check URLs for authenticity and flag potential threats.

d.) Enable Two-Factor Authentication (2FA): Whenever possible, enable 2FA on your accounts to add an extra layer of security, making it harder for attackers to gain access even if they obtain your credentials.

e.) Keep Your Device Secure: Regularly update your smartphone’s operating system and apps to patch vulnerabilities that attackers might exploit.

f.) Educate Yourself: Stay informed about common phishing tactics, including QR code phishing, to recognize and avoid potential threats.

g.) Report Suspected Phishing: If you encounter a suspicious QR code or website, report it to relevant authorities or the organization being impersonated.
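The URL inspection in tip (b) can be partly automated. Below is a minimal sketch of the kinds of checks a scanner might run on a URL decoded from a QR code; the `TRUSTED_DOMAINS` allow-list is a hypothetical placeholder that a real deployment would replace with the organization’s own domains.

```python
from urllib.parse import urlparse

# Hypothetical allow-list; a real deployment would use the organization's domains.
TRUSTED_DOMAINS = {"example-bank.com", "example-shop.com"}

def is_suspicious(url: str) -> bool:
    """Return True if a URL decoded from a QR code warrants a second look."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    # Legitimate login or payment pages should always use HTTPS.
    if parsed.scheme != "https":
        return True
    # Punycode (xn--) hosts can hide homoglyph lookalikes of known brands.
    if host.startswith("xn--") or ".xn--" in host:
        return True
    # Raw IP addresses are rarely used by legitimate organizations.
    if host.replace(".", "").isdigit():
        return True
    # Accept only an exact trusted domain or one of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_suspicious("https://login.example-bank.com/"))     # False: trusted subdomain
print(is_suspicious("http://example-bank.com/login"))       # True: not HTTPS
print(is_suspicious("https://example-bank.com.evil.example/"))  # True: lookalike suffix
```

Note that the suffix check matters: `example-bank.com.evil.example` contains the trusted brand name but is actually a subdomain of `evil.example`, a common quishing trick.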

QR code phishing is a relatively new form of cyberattack, and attackers are constantly evolving their techniques. Staying vigilant and exercising caution when scanning QR codes is crucial to protect your personal and financial information from potential threats.

The post Understanding and Safeguarding against QR Code Phishing Attacks aka Quishing appeared first on Cybersecurity Insiders.

The European Union has imposed a substantial €345 million fine on the popular video-sharing platform TikTok for its failure to adequately protect children’s data. The penalty comes following a notice from Ireland’s Data Protection Commission (DPC), an EU data privacy authority, which cited eight privacy and information processing violations and issued a three-month ultimatum for the company to rectify its practices.

One of the key issues identified by the DPC was that TikTok’s default profile settings for children were set to ‘public,’ exposing their content to anyone. Additionally, the ‘Family Pairing’ feature, designed to allow parents to connect with their child’s content and send direct messages, was also accessible to all users, posing significant risks to these accounts.

TikTok’s failure to adequately inform users under the age of 16 about the potentially invasive privacy options while posting videos constituted a clear violation of the European Data Protection Board (EDPB) regulations and a significant breach of the General Data Protection Regulation (GDPR).

In response to the imposed penalty, TikTok has initiated an appeal while taking steps to address the privacy concerns. They have now made the videos posted by users aged 12-15 private by default and enabled customization of viewership for users below the age of 16.

To ensure compliance with the newly enforced regulations, the company, owned by ByteDance and based in Singapore, has revamped its user account registration process for individuals above 17 years of age. They have also restricted parents from sending direct messages to accounts marked as ‘Private.’

Previously, TikTok primarily catered to users under the age of 40, but during the lockdown period, the platform experienced a remarkable 45% increase in registrations from users aged 40 and above.

In light of these developments, TikTok is committed to resolving these privacy issues discreetly, mindful of the potential adverse effects on its revenue, and separately from the ongoing business endeavors of Douyin and its affiliated platform, Musical.ly.

The post TikTok slapped with €345m Child Privacy penalty by EU appeared first on Cybersecurity Insiders.

Apple’s recent Wonderlust event has garnered significant attention, particularly in the realm of digital viewership statistics. However, one noteworthy message from the tech giant has reverberated across the globe: safeguard your data by keeping it out of the cloud.

Apple, the American iPhone maker that assembles many of its devices in the Indian subcontinent while branding them as US-developed on the strength of proprietary software designed by its Cupertino engineers, delivered this advice.

While Apple didn’t delve into the specific reasons for this emphasis on keeping data out of the cloud, it did provide a comprehensive explanation of how its A17 Pro chip, powering the iPhone 15 Pro, maintains data locally, away from cloud-based storage.

The advantages of processing data at the edge, rather than in the cloud, are well-established. Edge processing minimizes latency issues associated with weak Wi-Fi or cellular networks and mitigates threats such as man-in-the-middle attacks, orchestrated by sophisticated criminals.

Apple’s iPhone 15 Pro and its Watch Series 9 implement this strategy by retaining sensitive information, like health data, directly on the device. This approach not only ensures swift data access but also eliminates many security concerns since a significant portion of the data never traverses cloud environments.

This achievement is made possible by a 16-core Neural Engine that employs machine learning and AI language processing to handle personal data without transmitting it to the cloud. A Neural Engine with the same core count also appears in the M2 and M2 Max chips found in Mac devices.

In the United States, the iPhone 15 is priced between $800 and $1200, with the iPhone Pro Max version at the upper end of the spectrum. Interestingly, these prices translate to a relatively favorable exchange rate when considering the Dirham currency.

The post Apple says better keep data out of cloud appeared first on Cybersecurity Insiders.

In the ever-evolving landscape of cybersecurity, threats continue to take on new forms and adapt to advanced defense mechanisms. One such emerging threat that has gained prominence in recent years is “data poisoning.” Data poisoning is a covert tactic employed by cyber criminals to compromise the integrity of data, machine learning algorithms, and artificial intelligence systems.

This article delves into what data poisoning is, its implications for cybersecurity, and ways to mitigate this evolving threat.

Understanding Data Poisoning: Data poisoning is a form of cyberattack that involves manipulating or injecting malicious data into a dataset or system. Its primary goal is to corrupt the quality and reliability of data used for decision-making, analytics, and training machine learning models. Unlike traditional cyber threats, data poisoning operates by subtly altering data rather than directly infiltrating a system. It often goes unnoticed until it causes significant harm.

Implications for Cybersecurity:

1. Compromised Decision-Making: Data poisoning can deceive algorithms and AI systems into making incorrect decisions or predictions. For instance, it could impact the accuracy of autonomous vehicles, financial fraud detection, or even medical diagnoses, leading to potentially disastrous consequences.

2. Undermining Machine Learning: Machine learning models rely heavily on clean, unbiased data for training. Data poisoning attacks can introduce biases, rendering models less effective and potentially discriminatory.

3. Exploiting Vulnerabilities: Cybercriminals can manipulate data to exploit vulnerabilities in systems, paving the way for more significant cyberattacks, such as ransomware or data breaches.

4. Eroding Trust: Data poisoning erodes trust in data-driven decision-making, discouraging organizations from relying on advanced technologies.

Methods Employed by Data Poisoning Attacks:

Data poisoning attacks can take various forms, including:

1. Adversarial Attacks: Attackers make small, imperceptible changes to data, which can lead to significant errors in AI systems.

2. Label Flipping: Attackers manipulate data labels, causing models to misclassify information.

3. Data Injection: Malicious data is injected into training datasets to introduce bias or errors.

4. Model Inversion: Attackers exploit machine learning models to retrieve sensitive information.
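A toy illustration of data injection (method 3): mislabeled outliers slipped into the training set drag a class statistic away from its true value, so a model that was perfect on clean data starts misclassifying. This is a deliberately minimal sketch using an invented one-dimensional dataset and a nearest-centroid “model,” not a real attack against a production system.

```python
import statistics

# Toy 1-D training set: (feature, label) with 0 = benign, 1 = malicious.
clean = [(x, 0) for x in (1.0, 1.2, 0.9, 1.1)] + [(x, 1) for x in (5.0, 5.2, 4.8, 5.1)]

# Data injection: an attacker adds mislabeled outliers to the benign class,
# dragging the benign centroid toward the malicious region.
poison = [(x, 0) for x in (9.0, 9.2, 9.1, 8.9)]

def train_centroids(data):
    """A minimal 'model': the mean feature value of each class."""
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_class.items()}

def accuracy(train_data, test_data):
    centroids = train_centroids(train_data)
    predict = lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))
    return sum(predict(x) == y for x, y in test_data) / len(test_data)

test_set = [(1.05, 0), (5.05, 1)]
print(accuracy(clean, test_set))           # 1.0 on clean training data
print(accuracy(clean + poison, test_set))  # accuracy collapses once poison is added
```

Only four injected points were needed to move the benign centroid next to the malicious one, which is why poisoning is effective even at small scale.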

Mitigating Data Poisoning Threats:

To defend against data poisoning attacks, organizations must implement proactive measures:

1. Data Sanitization: Regularly audit and cleanse datasets to remove malicious or erroneous data.

2. Anomaly Detection: Implement robust anomaly detection mechanisms to identify unusual data patterns.

3. Model Robustness: Train models to resist adversarial attacks by incorporating security features.

4. Data Diversity: Collect diverse and representative datasets to reduce the risk of bias.

5. Regular Updates: Keep cybersecurity tools and models up-to-date to protect against evolving threats.
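Mitigations 1 and 2 can share machinery: an outlier filter both sanitizes a dataset and surfaces anomalous points for review. The sketch below uses a median/MAD “modified z-score” rather than mean/stdev, because median-based statistics resist being skewed by the very outliers being hunted; the data and threshold are illustrative, not tuned for any real workload.

```python
import statistics

def sanitize(values, threshold=3.5):
    """Drop values whose modified z-score (median/MAD based) exceeds the
    threshold. Unlike a mean/stdev filter, the median and MAD are barely
    moved by the outliers themselves, so large poison points can't hide."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return list(values)
    return [v for v in values if 0.6745 * abs(v - med) / mad <= threshold]

readings = [1.0, 1.1, 0.9, 1.2, 1.05, 50.0]  # 50.0 is an injected outlier
print(sanitize(readings))  # [1.0, 1.1, 0.9, 1.2, 1.05]
```

In practice the flagged values would be quarantined for human review rather than silently dropped, since an “anomaly” may also be a legitimate rare event.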

Conclusion:

Data poisoning represents a subtle yet potent threat to cybersecurity in our data-driven world. Cybercriminals are becoming increasingly adept at manipulating data to undermine decision-making processes and compromise AI systems. Recognizing the risks and implementing stringent data hygiene practices, as well as robust security measures, is crucial to defending against this evolving threat and ensuring the continued integrity of our digital ecosystems.

The post Data Poisoning: A Growing Threat to Cybersecurity and AI Datasets appeared first on Cybersecurity Insiders.

In an era marked by rapid technological advancement, data privacy experts like Ken Cox, president of private cloud provider Hostirian, are ringing alarm bells. Our recent conversation with Ken revealed a nuanced perspective on the capabilities of generative language models like ChatGPT and their implications for cybersecurity. This article dives into the crux of the discussion, including the risks these technologies pose, innovative threats emerging from AI, and the practical measures one could adopt for protection.

The Landscape of Risks

Ken Cox doesn’t paint OpenAI and ChatGPT as inherently malicious. In fact, he acknowledges that the creators have instilled a degree of ethical and moral guidelines into the system. However, the problem arises with the open-source versions of these large language models, which can be customized by anyone, for any purpose, ethical or otherwise.

Lowering Barriers to Entry for Bad Actors

The democratization of AI technologies has resulted in a new generation of “script kiddies,” only far more potent. These individuals can employ generative language models to create sophisticated attacks with minimal expertise. As Cox eloquently puts it, ChatGPT has “lowered the barrier to entry for bad players by a lot.”

The Current Threat Landscape

Cox indicated that AI-generated attack tools are becoming increasingly capable. One example is the evolution of keylogging, which has moved from capturing keystrokes at the system level to reconstructing what you’re typing by analyzing captured Wi-Fi signal patterns and even click sound waves, thanks to AI-assisted frequency mapping.

The Rise of Social Engineering 2.0

Perhaps the most harrowing example is the ability of these models to assist social engineering attacks at an unprecedented scale and sophistication. By ingesting rich data from social media profiles, attackers can easily impersonate people you know or entities you trust. This brings to light deeply rooted concerns about digital personas and even deep fakes, further exacerbating the battle between “good and bad” on the internet. Ken Cox believes that businesses must familiarize themselves with the current AI landscape, advocating for a more sophisticated level of AI literacy among organizations.

Authoritative Source of Authenticity

In the long term, Cox sees the need for an “authoritative source of authenticity,” and suggests that blockchain could offer a solution by establishing verifiable keys tied to individuals or businesses. Traditional measures like robust encryption and granular access controls still hold significant value in this new landscape, adds Cox.

From Pseudonymity to Full Exposure

Cox takes us back to the early days of the internet, when user handles were pseudonymous and using real names was taboo. This paradigm was shattered with the advent of Facebook in 2006, which encouraged people to be themselves online. The cultural shift led to the erosion of pure internet anonymity, transforming the internet into a space of variable anonymity levels.

The Case of Synonymous Blockchain Identities

With the emergence of blockchain technologies like Bitcoin, the modern internet landscape has become more nuanced. Cox describes this new form of identity as “synonymous.” While transactions within a blockchain can remain anonymous, the second a user’s wallet interacts with the real world—be it through a bank or a credit card—the anonymity cloak is lifted.

Future Directions in Identity Verification

Cox outlines his vision for the future of identity verification—blockchain-based personal keys. This approach would allow for pre-authenticated, encrypted communication channels between individuals, customized for each interaction. These personal keys could serve as a decentralized “secret word,” ensuring that communications are genuine.

Multi-Level Encryption Channels

Cox foresees a more intricate system where each entity you interact with has its unique encryption channel. Your bank, your family members, and your service providers will each have different keys to communicate with you, ensuring a multi-layered approach to security.

A Clarion Call for Trust Infrastructure

In his concluding thoughts, Cox underlines the dire need for a new paradigm of trust on the Internet. He believes that companies should focus more on building trust-based technologies to secure our digital future.

The conversation with Ken Cox serves as a vital check on the euphoria surrounding biometrics and other seemingly foolproof identity verification methods. It brings forth a pressing need for multi-layered, decentralized identity verification systems, and perhaps most importantly, a complete rethinking of how trust is established online. As we hurtle toward a future teeming with technological advances, Cox’s insights remind us that innovation must walk hand-in-hand with ethical considerations and security measures to build a safer, more reliable digital world.


The post The Double-Edged Sword of AI – How Artificial Intelligence is Shaping the Future of Privacy and Personal Identity appeared first on Cybersecurity Insiders.

By Erik Gaston, Vice President of Global Executive Engagement, Tanium

Cyber-criminals are nothing if not opportunistic. While the e-commerce industry is far from the “Wild, Wild West” – where infamous masked highway robbers ganged up and ran rampant – today’s outlaws are still seeking to exploit loose security and regulation to prey on vulnerable targets and make a quick buck.

For those in the market of selling counterfeit or stolen products online – their confidence may be wavering – and consumers can now hope to see a greater crackdown on scammers.

The aptly named INFORM Consumers Act – or the rather long-winded Integrity, Notification, and Fairness in Online Retail Marketplaces for Consumers Act – was signed into law after the bipartisan legislation was passed by Congress in December 2022 and has been in effect since June 27, 2023. The goal of the INFORM Consumers Act is to provide greater transparency in online marketplace transactions and ultimately, to protect buyers.

The reason for this Act is clear – in 2022, the Federal Trade Commission received over 350,000 complaints stemming from online fraud. Clearly, something had to be done, but these changes won’t be easy to navigate for what the act defines as “online marketplaces,” especially given that noncompliance carries hefty penalties, with fines that can exceed $50,000 per violation.

What’s changing?

The Act defines “high-volume third-party sellers” as those in online marketplaces with 200 or more transactions totaling at least $5,000 in revenue per year. This is significant because although it affects the Amazons and eBays of the world, there are a number of smaller marketplaces that will also be impacted. The e-commerce market accounted for 15.1% of total retail sales and reached $272.6 billion in Q1 2023 alone, so Amazon is by no means the only game in town.

The Act requires that the online marketplaces collect, verify, and disclose certain information about third-party sellers within 10 days of them qualifying as a “high-volume third-party seller.” If they are unable to do so, they must suspend activity for that seller. While this sounds simple enough, the information included is sensitive banking and financial data – known targets of malicious cyber criminals. Suddenly, these marketplaces must protect substantially more data and are being forced to assume increased liability brought on by the INFORM Act.

The challenges for marketplaces

Ultimately, the whole dynamic of online marketplaces is going to change. Originally, online marketplaces were established to bring buyers and sellers together. The INFORM Act changes this relationship significantly, as the marketplace itself is now an intermediary that must police its high-volume sellers and ensure the marketplace is safe for the consumer. The implications of this are clear – the cost of doing business for these marketplaces will inevitably go up and they will have to adopt robust, yet efficient, cybersecurity practices.

By having to collect and retain specific data like names, legal entities, bank accounts, tax IDs, and general contact information, online marketplaces will need to revisit their strategies for multi-factor authentication, Zero Trust, antivirus, etc., and establish an asset lifecycle-based program. It now becomes essential that these companies truly understand what their security posture is and recognize that they are a prime target for bad actors. This presents the necessity for an “outside-in” vantage point that places data management architects in the shoes of the scammers.

Complicating matters is the reality of the sudden shift in buying and working habits brought on by COVID-19. In response to the pandemic, many new infrastructures were hastily built to accommodate the online-only marketplace, and despite the recent resurgence in brick-and-mortar sales, online shopping remains much more prevalent than it was before COVID-19. This means many of the marketplaces haven’t had the breathing room to ever really catch up – but they will have no choice with the INFORM Act. It is truly an adapt or fail situation for many businesses.

In summary…

It is not all doom and gloom for marketplaces – they can certainly make the necessary changes to efficiently recognize, collect, and retain data on the identity and whereabouts of third-party sellers. Doing so, though, will require a well thought out and organized plan with cybersecurity at the core.

Some questions for online marketplaces to consider:

  • How many assets do I have and what is the scope of my network from the outside-in?
  • What is running on ALL devices in my network? (Do I need better version control and to deprecate assets per my lifecycle program?)
  • What is going in and out of our network at any given time?
  • What do I look like to an attacker/scammer?
  • Are our controls present and effective?
  • Where does our data come from and where is it stored?
  • Are our teams properly training the way they “race”? And do we have a common language across security and IT operations?

Implementing cybersecurity programs can achieve the goal of being able to ask and answer these questions in real time, at any time, 24/7. Once this is done, it will undoubtedly be an effective way to reduce the prevalence of online fraud.



The post Reading between the Lines – How the INFORM Consumers Act Impacts Online Retailers appeared first on Cybersecurity Insiders.

In today’s digital world, businesses are increasingly adopting hybrid cloud solutions to harness the benefits of both public and private cloud infrastructures. While hybrid cloud offers unprecedented flexibility and scalability, it also introduces complex challenges in securing sensitive data across these diverse environments. This article delves into essential strategies and best practices for effectively safeguarding data across hybrid cloud architectures.

Comprehensive Data Encryption: One of the fundamental steps in protecting data across hybrid cloud environments is implementing end-to-end encryption. This entails encrypting data both at rest and in transit. Utilizing encryption mechanisms ensures that even if data is intercepted, it remains unintelligible to unauthorized individuals. Employ industry-standard encryption protocols and manage encryption keys securely to maintain data confidentiality.

Robust Identity and Access Management (IAM): Implementing a robust IAM framework is crucial for managing user identities, roles, and permissions across the hybrid cloud. Apply the principle of least privilege (PoLP) to grant users only the permissions they require for their tasks. Multi-factor authentication (MFA) adds an extra layer of security by necessitating multiple forms of verification for accessing critical resources.
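The least-privilege and MFA ideas above can be reduced to a small authorization check. The role names, permission strings, and MFA-required set below are hypothetical placeholders, not any cloud provider’s actual API; the point is only the shape of the decision.

```python
# Hypothetical role-to-permission mapping illustrating least privilege:
# each role holds only the permissions its tasks require.
ROLE_PERMISSIONS = {
    "auditor":   {"logs:read"},
    "developer": {"app:deploy", "logs:read"},
    "dba":       {"db:read", "db:write"},
}

# Permissions sensitive enough to demand a verified second factor.
MFA_REQUIRED = {"db:write", "app:deploy"}

def authorize(role: str, permission: str, mfa_verified: bool) -> bool:
    """Grant access only if the role holds the permission and, for
    sensitive operations, a second authentication factor was verified."""
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        return False
    if permission in MFA_REQUIRED and not mfa_verified:
        return False
    return True

print(authorize("auditor", "db:write", mfa_verified=True))      # False: not granted
print(authorize("developer", "app:deploy", mfa_verified=False)) # False: MFA missing
print(authorize("developer", "app:deploy", mfa_verified=True))  # True
```

Unknown roles fall through to an empty permission set, so the default answer is deny, which is the safe failure mode for IAM logic.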

Data Classification and Segmentation: Categorize data based on its sensitivity and criticality. Apply appropriate security controls and policies based on data classifications. Segmenting data into different security zones helps in isolating critical assets and limiting lateral movement in case of a breach. This approach mitigates the potential impact of a security incident.
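The classification-to-controls mapping described above can be sketched as a simple policy table. The tier names, zones, and controls here are illustrative assumptions; a real hybrid deployment would derive them from its own compliance requirements.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

# Hypothetical policy table: which zone hosts data of each tier,
# and which controls apply there.
PLACEMENT_POLICY = {
    Sensitivity.PUBLIC:       {"zone": "public-cloud",  "encrypt_at_rest": False},
    Sensitivity.INTERNAL:     {"zone": "public-cloud",  "encrypt_at_rest": True},
    Sensitivity.CONFIDENTIAL: {"zone": "private-cloud", "encrypt_at_rest": True},
}

def controls_for(level: Sensitivity) -> dict:
    """Look up the security zone and controls a classification tier demands."""
    return PLACEMENT_POLICY[level]

print(controls_for(Sensitivity.CONFIDENTIAL))
# {'zone': 'private-cloud', 'encrypt_at_rest': True}
```

Routing each record through such a table at ingestion time is one way to keep confidential assets segmented in the private zone, limiting lateral movement if a public-facing zone is breached.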

Consistent Security Policies: Maintain consistent security policies across all cloud environments within the hybrid setup. This includes public and private clouds as well as on-premises infrastructure. Automated policy enforcement guarantees that security configurations remain uniform and aligned with compliance requirements.

Regular Monitoring and Auditing: Implement continuous monitoring of all activities within the hybrid cloud environment. Utilize intrusion detection systems (IDS) and intrusion prevention systems (IPS) to identify and thwart suspicious activities. Regular audits and log analysis help in identifying potential vulnerabilities or anomalies, allowing for timely corrective actions.

Data Backup and Recovery: Backup data regularly and ensure that backups are securely stored across both cloud environments. Establish a robust disaster recovery plan that outlines procedures for data restoration in case of data loss or breaches. Regularly test the recovery process to ensure its effectiveness.

Vendor Security Assessment: When using third-party services or solutions within the hybrid cloud environment, conduct thorough security assessments of vendors. Evaluate their security protocols, data handling practices, and compliance standards. Ensure that any third-party services adhere to your organization’s security standards.

Employee Training and Awareness: Educate employees about the importance of security in hybrid cloud environments. Offer training on recognizing phishing attempts, best practices for data handling, and the potential risks associated with cloud computing. An informed workforce is a critical line of defense against social engineering attacks.

Conclusion:

As organizations continue to adopt hybrid cloud architectures, securing data across these complex environments becomes paramount. By implementing a combination of encryption, robust access controls, data classification, consistent policies, monitoring, and other best practices, businesses can fortify their hybrid cloud security posture. Adopting a proactive and holistic approach ensures that data remains safe, even in the face of evolving cyber threats.

The post Best Practices to safeguard Data Across Hybrid Cloud Environments appeared first on Cybersecurity Insiders.

Any marketing company or team operating worldwide typically shares a common practice: extracting data from social media platforms and utilizing this information for digital marketing endeavors. Similarly, certain online marketing firms provide data scraping tools to premium users, enabling them to gather sensitive details from profiles sourced on the internet.

However, a concerted effort has emerged to curtail this practice. At least 12 international privacy watchdogs have joined forces to advocate for major social networking companies to block these deceptive marketing tactics that pose a threat to data privacy.

In response, industry giants such as Facebook, Microsoft, TikTok, YouTube, Twitter (also known as X), Instagram, and WhatsApp have collectively issued a stern warning to both individuals and companies, cautioning against engaging in data mining practices that could result in legal complications and potentially substantial penalties.

Extracting publicly available information from online platforms runs afoul of data protection and privacy regulations in numerous countries worldwide. Consequently, widespread data scraping will be treated as a breach of data privacy and could lead to severe consequences.

A notable case that serves as an example is that of Cambridge Analytica, a UK-based political consulting firm that came under scrutiny for harvesting data from Facebook users in the run-up to the 2016 US presidential election, which ultimately saw Donald Trump emerge victorious. Notably, Trump is currently facing potential legal consequences in Georgia.

Cambridge Analytica’s activities were exposed by a data watchdog, leading to Congressional actions against the firm following testimony from Mark Zuckerberg. However, those behind the deceptive data marketing practices managed to dissolve the company before final legal action could be taken.

A similar scenario unfolded in the realm of AI training data with ChatGPT, developed by OpenAI in conjunction with Microsoft. This led to a lawsuit against the company, accusing it of scraping substantial amounts of data from various online platforms to train its chat-based conversational AI.

Consequently, these practices are now being classified as illegal, and privacy regulators from countries including Australia, Switzerland, Norway, New Zealand, Colombia, Jersey, Morocco, India, Argentina, and Mexico—each a member of the Global Privacy Assembly International Enforcement Cooperation—have collectively taken the stance that such activities constitute a breach of data privacy regulations, except in the case of the Indian subcontinent.

The post Social media companies to stop data scraping appeared first on Cybersecurity Insiders.