Amazon France Logistique, a subsidiary of the American retail behemoth Amazon.com Inc., has been fined €32 million (about $35 million) by the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority. The penalty was imposed over the allegedly intrusive surveillance of warehouse staff.

CNIL asserted that the surveillance camera systems employed in the warehouse were deemed illegal, crossing boundaries of privacy and causing excessive stress among employees. The watchdog argued that the monitoring systems, designed to track employee inactivity and scanning speed, constituted a clear violation of articles 12 and 13 of the General Data Protection Regulation (GDPR).

After a 31-day investigation prompted by multiple employee complaints, CNIL found that the warehouse management systems tracked inventory and package processing in ways that fell short of accepted standards. These systems were reported to place undue pressure on workers, creating an environment at odds with reasonable working conditions.

In response, Amazon defended its position, stating that the surveillance systems were implemented to prevent processing errors commonly encountered during order fulfillment. The tech giant argued that these measures were necessary to uphold the quality and efficiency of its business operations.

CNIL disclosed these findings after notifying Amazon of the penalty on December 27, 2023. The authority released an official statement to the press on January 23, 2024, outlining the breach of privacy and the subsequent penalty imposed on Amazon France Logistique.

The post France slaps 32 million Euros penalty on Amazon for data privacy concerns among employees appeared first on Cybersecurity Insiders.

Fujitsu, a Japanese multinational company specializing in software and technology services, has issued an apology in response to the IT scandal that unfolded within the UK Post Office. The company is currently facing allegations that its IT staff, tasked with serving the Post Offices in the United Kingdom, had unauthorized access to manipulate databases, raising serious privacy concerns among the British public.

Investigations into the Post Office IT scandal have revealed that the Fujitsu Software Support Centre (SSC) had privileged access to servers storing user information from August 2002 until 2023. The lack of IT service audits during this period allowed the access to persist, resulting in potential financial fraud, data errors leading to service disruptions, and overall violations of data protection regulations. Consequently, this issue has become a prominent and widely discussed topic across the UK.

John Simpkins, a spokesperson for SSC, acknowledged that the software giant did indeed have unrestricted access to the systems as part of a technology contract. However, he denied claims that staff committed financial fraud, manipulated transactions on Horizon POS machines, or stole data for illicit purposes.

MPs on the Business and Trade Committee are set to question Paul Patterson, head of Fujitsu's European division, on Thursday. The aim is to determine whether Post Office branch managers were prosecuted on the basis of flawed data and manipulated account software.

For those unfamiliar with the background, Fujitsu has been providing digitalization and IT services to Post Offices across the UK since 1996, securing a Horizon contract from the government in 1999. Prior to Fujitsu, the British company ICL was responsible for delivering related services to the government organization.

Further updates on this unfolding situation will be provided shortly.

The post Fujitsu issues apology for IT and Data Privacy scandal of UK Post Offices appeared first on Cybersecurity Insiders.

Lack of online data security globally

In today’s almost entirely digitized, cyber world, it’s imperative that private data and passwords remain secure and protected at all times. According to Business Insider (2022), Bitcoin investors are likely to lose up to $545 million in 2023, owing to various reasons like forgetting passwords to their wallets or wrongly recording their seed phrases. In most cases, safeguarding sensitive access credentials requires entrusting them to third-party databases. This has proven to be a highly unreliable strategy, with data servers being unavailable or becoming compromised more and more frequently. A new type of solution is needed to combat the ever-growing proliferation of unauthorized data breaches. 

A new level of data cyber safety 

Cyqur is an easy-to-install browser extension, geared towards developers, DeFi enthusiasts, NFT collectors, remote workers, artists and creators, digital natives, and anyone else who is reluctant to entrust third parties with their most sensitive credentials. The simple, yet powerful encryption and decryption web extension facilitates the storage and transmission of private data (passwords, seed phrases, etc.). The patented solution helps users achieve unparalleled peace of mind for their digital profiles. 

What makes Cyqur different

By design, Cyqur does not store user data on the user's behalf, whether on third-party servers or through any other centralized means. Instead, it encrypts the data and then fragments the encrypted result across a number of cloud storage locations chosen and controlled by the user. Thanks to this fragmentation and decentralization of storage, it is virtually impossible for all of the cloud storage locations to be compromised at the same time, a level of security that modern password managers and vaults can't offer.
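Cyqur's patented scheme is proprietary, but the general encrypt-then-fragment idea can be illustrated with a minimal sketch (an assumption-laden illustration, not Cyqur's actual algorithm). XOR-based secret splitting produces fragments that are each indistinguishable from random noise; every fragment is required to reconstruct the original, so any single compromised storage location yields nothing useful:

```python
import os

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n fragments (XOR sharing).
    Each fragment alone looks like random noise; ALL n are needed to rebuild."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        # XOR the secret with every random share; the remainder is the last share.
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def join_secret(shares: list[bytes]) -> bytes:
    """Recombine all fragments by XORing them together."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out
```

In a real deployment the secret would first be encrypted (for example with AES) and the ciphertext fragmented, with each fragment uploaded to a different user-controlled cloud account, so that even full reassembly still requires the encryption key.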

How Cyqur helps users achieve peace of mind

  1. Secure storage: User access credentials are duplicated, fragmented, encrypted, and scattered across multiple cloud storage locations that the user alone owns and controls, with the data fragments remaining encrypted at rest. Cyqur has no access to any of this data and does not scrape or share user browsing data.
  2. Proprietary approach: Safeguards user data by creating an immutable, automated, unique, independent, public blockchain proof of record for every access credential secured. Digital profile succession planning is ensured through a Custodian of Last Resort.
  3. Breach protection: In the event of a breach, hackers access only incomplete, useless fragments, while the user retains uninterrupted, complete access to credentials that remain protected and safe.
  4. Crypto wallet protection: Specifically designed to provide next-level peace of mind by securing all access credentials to valuable wallets, including seed phrases.
  5. Uninterrupted access and credential sharing: Users retain complete and uninterrupted access to their most important data, even offline.

Limited opportunity to purchase at a discounted rate

Users who purchase Cyqur at this time will receive a 70% early-bird discount (from €48 to €15) for the first year. Users can get their annual license and start securing their cyber data here.

About Cyqur

Cyqur was brought to market by Binarii Labs with the goal of offering a new way of securing data. Designed with the utmost care and attention to detail, it provides unprecedented security in online data storage, which isn’t reliant on third-party solutions. Whether it’s seed phrases, passwords, NFTs, pins, private blockchain keys, usernames, exchange accounts, hot & cold wallets, or any other access credentials that need to remain safe, Cyqur offers its users this high level of protection.

Cyqur. Patented Password Protection.

The post Cyqur Launches A Game-Changing Data Encryption and Fragmentation Web Extension appeared first on Cybersecurity Insiders.

Wow:

To test PIGEON’s performance, I gave it five personal photos from a trip I took across America years ago, none of which have been published online. Some photos were snapped in cities, but a few were taken in places nowhere near roads or other easily recognizable landmarks.

That didn’t seem to matter much.

It guessed a campsite in Yellowstone to within around 35 miles of the actual location. The program placed another photo, taken on a street in San Francisco, to within a few city blocks.

Not every photo was an easy match: The program mistakenly linked one photo taken on the front range of Wyoming to a spot along the front range of Colorado, more than a hundred miles away. And it guessed that a picture of the Snake River Canyon in Idaho was of the Kawarau Gorge in New Zealand (in fairness, the two landscapes look remarkably similar).

This kind of thing will likely get better. And even if it is not perfect, it has some pretty profound privacy implications (but so did geolocation in the EXIF data that accompanies digital photos).

In today’s modern era, the concept of privacy in our daily lives seems elusive, given the widespread sharing of personal details on various social media platforms. From Facebook and Instagram to WhatsApp, Twitter, and even the notorious Telegram, users often divulge extensive aspects of their lives, only to later express concerns about privacy.

A notable trend on Instagram, a popular platform for sharing photos and videos, is the “Get to Know Me” phenomenon. Users willingly disclose intimate details such as age, phobias, tattoos, piercings, birthmarks, preferences, height, and various personal likes and dislikes.

Eliana Shiloh, a cybersecurity expert at Deloitte, has raised a red flag regarding this trend, labeling it a threat to privacy. Ms. Shiloh voiced her concerns in a TikTok video, which reportedly prompted dozens of similar privacy complaints and led over 100 Instagram users to consider filing grievances with the platform's Meta-owned parent.

The issue is particularly noticeable among female users who share videos and photos meant to be private but inadvertently expose themselves to a wider audience. Disclosing additional information like age and personal preferences poses a significant risk, potentially attracting the attention of hackers who, with minimal effort, can exploit this information through phishing to uncover more sensitive details.

So, what’s the solution to mitigate this risk?

The solution is straightforward: only disclose necessary details and keep everything else private. Sharing information like age may inadvertently enable hackers to deduce the date of birth, providing a potential entry point for accessing an individual’s private life by navigating through security questions designed to protect online accounts.

It is essential for online users to refrain from sharing critical information such as account credentials, bank details, contact numbers, and personal details about family or children on the internet. Such revelations can draw the unwarranted attention of hackers, who are always on the lookout for digital activities to exploit and invade private lives.

The post Beware of this Instagram trend that compromise Data Privacy appeared first on Cybersecurity Insiders.

Google is poised to delve into a potential data privacy quagmire in its pursuit of AI advancement with the impending release of its ChatGPT counterpart, Gemini, stemming from the 2017 ‘Project Ellmann’ and slated for an April 2024 debut. With Android smartphones deeply entrenched in the fabric of daily life, housing the digital troves of emails, documents, audio files, videos, and photos within the tech giant’s cloud data centers, privacy concerns loom large.

Given that Google possesses a wealth of personal data in digital format on its servers, speculations arise about the potential utilization of this information to nourish its AI chatbot. This chatbot not only comprehends text but also exhibits the capability to analyze and extract content from images, videos, and audio files stored by users.

Recent revelations suggest that Google intends to permit its new Gemini Chatbot access to user photos and search histories for constructing search response content, a move that is increasingly perceived as a privacy infringement under the guise of AI development.

Insiders from Alphabet Inc’s subsidiary have reportedly leaked information on Telegram, disclosing that Google has already incorporated AI into users’ phones and Chrome devices to enhance service delivery. The introduction of the app ‘Private Compute Services,’ running inconspicuously in the background as an update on Pixel phones, raises questions about its true nature as an AI tool.

The opaqueness surrounding the operations of private companies within their data centers and their interactions with devices integral to everyday life, such as smartphones and chat assistants, adds a layer of uncertainty for users. Notably, Google faced allegations in 2019 of amassing billions of medical records pertaining to the U.S. public through ‘Project Nightingale.’ However, this issue seemingly waned amidst the chaos of the Covid-19 pandemic and subsequent crises, eclipsed by more sensational headlines.

The pivotal question emerges: is Project Gemini, Google’s expansive language AI model, a potential threat to the privacy of online users? While technology itself isn’t inherently at fault, the focus shifts to the ethical considerations and decisions made by the individuals developing and utilizing it. An introspective examination by both employees and the user base becomes imperative in navigating the delicate balance between technological innovation and safeguarding privacy.

The post Google to do data privacy invasion in the name of Gemini AI Development appeared first on Cybersecurity Insiders.

Elon Musk has been making headlines recently, not only for his contentious remarks against his company's investors but also for the abrupt dismissal of his Information Security head. The focus of the controversy lies in allegations made by Alan Rosa, the former head of Information Security and Technology at Twitter (now X).

Rosa accused Musk of pushing for financial cuts in both physical and digital security, a move that he believes compromises data privacy and security. According to Rosa, these budget reductions not only pose potential risks but also contradict existing compliance and regulatory laws in the United States.

In response to Rosa’s outspoken objections, he was promptly asked to leave the company, a termination that reportedly violated labor laws by not providing prior notice. Subsequently, a complaint was filed on Tuesday in the Federal Court of New Jersey on behalf of Rosa, with the legal proceedings scheduled for the second week of January next year.

Elon Musk has yet to respond to the circulating claims on social media, as his legal team is actively seeking more information on the matter. Steve Davis, an advisor at X (formerly known as Twitter), is also entangled in the legal dispute over his role in initiating mass layoffs at the beginning of 2023. However, investigations suggest that he was acting on directives from Musk and that his decisions were not specifically targeted at Twitter's C-suite employees.

Further updates will be provided as the investigation unfolds.

The post Twitter fired its Information Security head for cutting budget on data security and privacy appeared first on Cybersecurity Insiders.

Artificial intelligence is no longer just the stuff of science fiction; generative AI tools are seeing massive adoption rates. Unsurprisingly, the marketing and advertising industry has embraced AI-driven tools with the most enthusiasm. According to the latest data from January 2023, 23% of marketers and ad professionals in the US use so-called AI to assist them in their daily work.

If you’re a marketing professional, you’ve probably spent a lot of time this year experimenting with these new tools, learning about use cases, reading guides on how to make perfect prompts, and figuring out productivity hacks. And that’s great.

But like all revolutionary inventions except, perhaps, for the wheel, AI-driven tools provide as many answers as they raise questions. The sheer scope of possibilities is hard to imagine. The same can be said of the risks, many of which we still need to address as countries, companies, or individuals.

There is indeed widespread confusion among policymakers about what to do with AI-generated content and how to establish healthy boundaries without stymying growth. Some companies have developed internal rules. Some industries are more directly concerned than others (think of the actors’ strike in Hollywood).

While regulations catch up with rapid progress, let’s look at the risks associated with AI today and how to safeguard your privacy online, both as a professional and a private individual.

AI-driven tools: a playground for creative people and hackers

On March 20, 2023, during a nine-hour window, payment-related data belonging to roughly 1.2% of ChatGPT Plus subscribers was exposed. The breach included an alarming number of data points: first and last name, email address, payment address, credit card type, the last four digits of the credit card number, and the card's expiration date. Scary? It's a foretaste of what's to come. Let's look at the two main AI-related privacy concerns.

Where does all of this data come from?

AI-driven marketing tools collect copious amounts of data to optimize output and train algorithms. This data is often personal and sensitive and can include information such as Social Security numbers and health records.

AI systems collect data for machine learning from various sources using a range of methods:

  • Web scraping: to extract information such as text, images, reviews, and prices from websites.
  • APIs (Application Programming Interfaces): offered by online platforms.
  • User inputs: data provided directly and voluntarily via surveys and forms.
  • Internet of Things (IoT): connected devices like your fitness watch or home assistant.
  • Social media APIs: allowing AI systems to access user-generated content, social media profiles, and interactions.
  • Data partnerships: where data providers share their data for machine learning purposes.

That’s a lot of data! On top of that, AI-driven tools also collect the prompts you use when generating content. Let’s take a closer look at the fine print.

The privacy policy of your favorite AI-driven tool

If you’ve ever used Midjourney to create visuals for a blog post or an ad campaign, information such as your username, IP address, text and image prompts, public chats, email address, and more has been collected. Midjourney also generously shares this information with “service providers, third-party vendors, consultants, and other business partners.”

Midjourney is an example here, but it’s no exception. Other popular tools, including Woodpecker for cold outreach emails or rapide.ly for social media content, are just as “generous.” Now, think of how many times you used financial figures or your name, the name of your company, or that of your clients when writing prompts.

AI best practices you can adopt right now

If you’re concerned about the privacy of your data and that of your clients, it’s best not to wait for legislators to draft and vote on privacy compliance measures. Create best practices for your team, even if you’re a one-person band. Here are our recommendations:

  • Read the privacy policy before you decide to use a tool or a subscription.
  • Ask your team to be transparent about the AI tools they are using and review them.
  • Make sure your team understands the privacy and security risks associated with AI tools.
  • Do not use names, including your company name, in your prompts. If you need a name, use a fake one instead. You can replace it later.
  • Avoid using any information that could harm you or your business when in the wrong hands: tax identification numbers, Social Security numbers, phone numbers, or financial information. Assume this information could be part of a data breach.
  • AI extensions often ask for extensive permissions. Read them carefully before accepting.
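The "no real names or identifiers in prompts" rules above can be partly automated. As a minimal sketch (the patterns and placeholder names are illustrative assumptions, US-centric, and far from exhaustive), a small redaction pass can scrub obvious identifiers from a prompt before it ever leaves your machine:

```python
import re

# Illustrative patterns for a few common identifier formats.
# Real PII detection needs far more coverage (names, addresses, IBANs, etc.).
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders before sending a prompt."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

A regex pass like this is a safety net, not a guarantee: it catches formatted identifiers but not free-text names or company names, which still have to be swapped out by hand.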

Once the damage is done, it can be undone (to some extent)

What if your personal information was part of a data breach, or you’ve been carelessly sharing it online? Don’t despair; it’s hard to keep anything private online, and this is bound to happen to most of us. There are several things you can do.

Let’s say you find your name and other data (e.g., age and address) when performing an online search. You can remove it or set it to private if it’s one of your social media profiles. When it’s a website you don’t own, you can contact the owner and kindly ask them to take it down. If that doesn’t work, you can ask the search engine to remove it from the results anyway. Google offers this option, but only if specific conditions are met.

Lastly, you’ll find that your data also lives in the vast repositories of people search sites. Fortunately, they are all legally obliged to offer a way to opt out, and you can usually do this in a couple of “easy” steps.

That said, if your personal information has been exposed in a data breach, chances are that it’s stored, shared, and sold without your knowledge in other places, too. To delete it from these obscure places, you can try an automated data removal service, which will contact data brokers for you and get your data off their extensive lists. An additional perk is that a good data removal tool will perform this clean-up regularly and periodically so that once deleted, your data doesn’t find its way back onto the data market.

Let’s celebrate but stay on guard

It goes without saying that AI-driven tools are the way of the future. We’re only getting started. Imperfect as they are for now, AI systems will gradually become better at processing data and “understanding” our needs. We’ll keep improving their algorithms and feeding them data to make this happen. Safeguarding our own data and using others’ data responsibly will become just as important. Given the current legislative limbo, this responsibility is all on our shoulders for now.

The post Privacy in the Age of AI: Strategies for Protecting Your Data appeared first on Cybersecurity Insiders.

In today’s digital landscape, many online service providers offer the convenience of using a single password across multiple services. A prime example of this is Google, which allows users to access various platforms like Gmail, Drive, Google Photos, Maps, Sheets, and more with a single login. In this era of interconnected digital services, the art of creating a strong password has become paramount, as a single misjudgment can expose an innocent online user to potential hacking threats.

Here are some valuable tips for crafting a robust and cybersecure password:

1.) Resist Predictability: Gone are the days when hackers relied on basic personal details like birthdates, favorite foods, or colors to guess an individual’s password. Modern cybercriminals employ Artificial Intelligence-powered software to streamline this process. Such tools employ vast datasets, including common phrases, foods, birthdates, and color combinations, to rapidly deduce passwords through permutations and combinations. Therefore, it’s crucial to avoid passwords that are easily guessable, such as the names of celebrities or favorite sports teams.

2.) Opt for Passphrases: Consider using a passphrase as your password—a combination of words that is memorable for you but challenging for cybercriminals to crack. For example, “ilikechickennoodles” is a passphrase that is easy for you to recall but highly unlikely for a hacker to predict.
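For passphrases that are random rather than merely memorable, the standard approach is to pick words at random from a large list (the tiny word list below is a placeholder; a real generator would use something like the EFF diceware list of 7,776 words, giving about 12.9 bits of entropy per word). A minimal sketch using Python's cryptographically secure `secrets` module:

```python
import secrets

# Placeholder list for illustration only; use a large published word list
# (e.g. ~7,776 words) so that four words yield roughly 51 bits of entropy.
WORDS = ["chicken", "noodle", "river", "planet", "maple", "copper",
         "signal", "harbor", "velvet", "tundra", "orbit", "lantern"]

def make_passphrase(num_words: int = 4, sep: str = " ") -> str:
    """Join randomly chosen words into a passphrase using a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(num_words))
```

Unlike a phrase you invent yourself, a randomly generated passphrase has measurable entropy, so its strength does not depend on an attacker failing to guess your habits.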

3.) Length Matters: The National Institute of Standards and Technology (NIST) recommends a minimum password length of 8 characters and advises that systems accept passwords of up to at least 64 characters. Extremely long random strings, however, can be error-prone to type. NIST also advises allowing spaces between words and the full range of printable characters, since longer passphrases greatly increase the time it takes a threat actor to guess your password.

4.) Avoid Frequent Changes: Contrary to some conventional wisdom, frequently changing your password by just altering a character or interchanging a few characters can be counterproductive. It’s often more effective to maintain a consistent but strong password.

5.) Utilize a Password Manager: A password manager lets you keep a unique, strong password for every account without having to memorize them all. They are not risk-free, however: threat actors have exploited vulnerabilities in password manager software to access sensitive information, so choose a reputable, audited product and keep it updated.

6.) Implement Multi-Factor Authentication: Relying solely on a password does not guarantee the security of your account. Using multi-factor authentication, which involves receiving a passcode through email, text messages, or a dedicated app, is an effective way to fortify your data security.
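The codes produced by authenticator apps follow the open TOTP standard (RFC 6238): the server and the app share a secret, and each independently derives a short code from the current 30-second time window. A minimal stdlib sketch of the algorithm (for illustration; in production use a maintained library and constant-time comparison):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of whole intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: last nibble selects a 4-byte slice.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code changes every interval and is derived from a secret that never travels with it, a stolen password alone is not enough to log in.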

7.) Steer Clear of Common Passwords: Avoid using easily guessable passwords, such as “123456,” “iloveyou,” “qwerty,” common names of politicians, sports figures, or Hollywood celebrities, and the names of football teams. These passwords can be cracked within seconds by experienced hackers.

By following these guidelines, you can significantly enhance your online security and reduce the risk of unauthorized access to your accounts and personal information.

The post How to craft a password meticulously appeared first on Cybersecurity Insiders.