Longtime NSA-watcher James Bamford has a long article on the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA).
In today’s interconnected world, the advent of smart cars has brought convenience and innovation to the automotive industry. However, with this connectivity comes a new set of cybersecurity challenges, particularly concerning user privacy. Modern cars, equipped with sophisticated onboard systems and internet connectivity, are susceptible to cyber threats that can compromise the personal data and safety of their users.
1. Data Collection and Privacy Concerns: Smart cars gather an extensive amount of data, ranging from GPS locations and driving patterns to personal preferences and vehicle diagnostics. While this data is intended to enhance user experience and improve vehicle performance, it also raises significant privacy concerns. Unauthorized access to this data can reveal sensitive information about users’ daily routines, travel habits, and even their physical locations, opening the door to privacy breaches and misuse.
2. Hacking and Remote Control Risks: The connectivity of smart cars makes them vulnerable to hacking attempts. Malicious actors can exploit vulnerabilities in car software or wireless networks to gain unauthorized access to vehicle systems. This could enable them to remotely control critical functions such as brakes, steering, and acceleration, posing severe safety risks to passengers and other road users.
3. Tracking and Surveillance Issues: The continuous collection of data by smart cars opens the door to potential tracking and surveillance concerns. Without adequate security measures, third parties could track a vehicle’s movements or monitor its occupants without their knowledge or consent. This invasion of privacy undermines user trust and raises ethical questions about data ownership and usage.
4. Manufacturer and Third-Party Data Handling: Automakers and third-party service providers often store and process user data to deliver personalized services and improve product performance. However, the handling of this data may not always align with stringent privacy standards. Data breaches or unauthorized data sharing could expose users to identity theft, fraud, or other malicious activities, highlighting the importance of robust data protection measures throughout the automotive ecosystem.
Addressing Cybersecurity in Smart Cars
To mitigate these risks and safeguard user privacy, stakeholders in the automotive industry must prioritize cybersecurity measures:
Encryption and Secure Communication: Implement strong encryption protocols to protect data transmission between the car, external systems, and cloud services (a minimal sketch follows this list).
Regular Software Updates: Ensure timely updates and patches to address vulnerabilities and improve system security.
User Consent and Transparency: Provide clear information to users about data collection practices, purposes, and rights, obtaining explicit consent for data processing.
Cybersecurity Testing and Audits: Conduct regular security assessments and audits of vehicle systems and connected infrastructure to identify and address potential weaknesses proactively.
Regulatory Compliance: Adhere to regulatory frameworks such as GDPR (General Data Protection Regulation) and industry standards to uphold data protection principles and accountability.
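As a concrete illustration of the first point, here is a minimal sketch of a telemetry client that refuses to transmit diagnostics over anything weaker than certificate-validated TLS 1.2. It is a sketch under stated assumptions, not a production design: the endpoint URL, the payload fields, and the send_telemetry helper are all hypothetical.

```python
# Minimal sketch: a vehicle telemetry client that only ever sends data
# over an authenticated TLS channel. The endpoint URL and payload
# fields are hypothetical, for illustration only.
import json
import ssl
import urllib.request

TELEMETRY_URL = "https://telemetry.example-automaker.com/v1/ingest"  # hypothetical

def send_telemetry(vehicle_id: str, diagnostics: dict) -> int:
    # Build a TLS context that requires certificate validation and a
    # modern protocol version; there is no plaintext fallback path.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    payload = json.dumps(
        {"vehicle_id": vehicle_id, "diagnostics": diagnostics}
    ).encode()
    req = urllib.request.Request(
        TELEMETRY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        return resp.status

# Example: report an engine diagnostic code without ever touching plain HTTP.
# send_telemetry("VIN-EXAMPLE-123", {"dtc": "P0301", "odometer_km": 48210})
```

The design choice worth noting is that the secure channel is constructed inside the client rather than left to the caller, so no code path can downgrade to an unencrypted transport.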
Conclusion
While smart cars offer numerous benefits in terms of connectivity and functionality, they also introduce complex cybersecurity challenges, particularly concerning user privacy. As technology continues to evolve, collaboration between automakers, cybersecurity experts, regulators, and consumers is essential to develop robust security measures that protect personal data and ensure safe driving experiences. By addressing these challenges proactively, the automotive industry can foster trust and confidence among users while embracing the potential of smart and connected vehicles.
Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was spying on its AI users, looking for harmful hackers at work.
Some pushed back at characterizing Microsoft’s actions as “spying.” Of course cloud service providers monitor what users are doing. And because we expect Microsoft to be doing something like this, it’s not fair to call it spying.
We see this argument as an example of our shifting collective expectations of privacy. To understand what’s happening, we can learn from an unlikely source: fish.
In the mid-20th century, scientists began noticing that the number of fish in the ocean—so vast as to underlie the phrase “There are plenty of fish in the sea”—had started declining rapidly due to overfishing. They had already seen a similar decline in whale populations, when the post-WWII whaling industry nearly drove many species extinct. In whaling and later in commercial fishing, new technology made it easier to find and catch marine creatures in ever greater numbers. Ecologists, specifically those working in fisheries management, began studying how and when certain fish populations had gone into serious decline.
One scientist, Daniel Pauly, realized that researchers studying fish populations were making a major error when trying to determine acceptable catch size. It wasn’t that scientists didn’t recognize the declining fish populations. It was just that they didn’t realize how significant the decline was. Pauly noted that each generation of scientists had a different baseline to which they compared the current statistics, and that each generation’s baseline was lower than that of the previous one.
Pauly called this “shifting baseline syndrome” in a 1995 paper. The baseline most scientists used was the one that was normal when they began their research careers. By that measure, each subsequent decline wasn’t significant, but the cumulative decline was devastating. Each generation of researchers came of age in a new ecological and technological environment, inadvertently masking an exponential decline.
Pauly’s insights came too late to help those managing some fisheries. The ocean suffered catastrophes such as the complete collapse of the Northwest Atlantic cod population in the 1990s.
Internet surveillance, and the resultant loss of privacy, is following the same trajectory. Just as certain fish populations in the world’s oceans have fallen 80 percent, from previously having fallen 80 percent, from previously having fallen 80 percent (ad infinitum), our expectations of privacy have similarly fallen precipitously. The pervasive nature of modern technology makes surveillance easier than ever before, while each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.
Historically, people controlled their computers, and software was standalone. The always-connected cloud-deployment model of software and services flipped the script. Most apps and services are designed to be always-online, feeding usage information back to the company. A consequence of this modern deployment model is that everyone—cynical tech folks and even ordinary users—expects that what you do with modern tech isn’t private. But that’s because the baseline has shifted.
AI chatbots are the latest incarnation of this phenomenon: They produce output in response to your input, but behind the scenes there’s a complex cloud-based system keeping track of that input—both to improve the service and to sell you ads.
Shifting baselines are at the heart of our collective loss of privacy. The U.S. Supreme Court has long held that our right to privacy depends on whether we have a reasonable expectation of privacy. But expectation is a slippery thing: It’s subject to shifting baselines.
The question remains: What now? Fisheries scientists, armed with knowledge of shifting-baseline syndrome, now look at the big picture. They no longer consider relative measures, such as comparing this decade with the last decade. Instead, they take a holistic, ecosystem-wide perspective to see what a healthy marine ecosystem, and thus a sustainable catch, should look like. They then turn these scientifically derived sustainable-catch figures into limits to be codified by regulators.
In privacy and security, we need to do the same. Instead of comparing to a shifting baseline, we need to step back and look at what a healthy technological ecosystem would look like: one that respects people’s privacy rights while also allowing companies to recoup costs for services they provide. Ultimately, as with fisheries, we need to take a big-picture perspective and be aware of shifting baselines. A scientifically informed and democratic regulatory process is required to preserve a heritage—whether it be the ocean or the Internet—for the next generation.
This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.
Brian Krebs reports on research into geolocating routers:
Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geolocate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally—including non-Apple devices like Starlink systems—and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.
Really fascinating implications of this research.
Research paper: “Surveilling the Masses with Wi-Fi-Based Positioning Systems”:
Abstract: Wi-Fi-based Positioning Systems (WPSes) are used by modern mobile devices to learn their position using nearby Wi-Fi access points as landmarks. In this work, we show that Apple’s WPS can be abused to create a privacy threat on a global scale. We present an attack that allows an unprivileged attacker to amass a worldwide snapshot of Wi-Fi BSSID geolocations in only a matter of days. Our attack makes few assumptions, merely exploiting the fact that there are relatively few dense regions of allocated MAC address space. Applying this technique over the course of a year, we learned the precise locations of over 2 billion BSSIDs around the world. The privacy implications of such massive datasets become more stark when taken longitudinally, allowing the attacker to track devices’ movements. While most Wi-Fi access points do not move for long periods of time, many devices—like compact travel routers—are specifically designed to be mobile.
We present several case studies that demonstrate the types of attacks on privacy that Apple’s WPS enables: We track devices moving in and out of war zones (specifically Ukraine and Gaza), the effects of natural disasters (specifically the fires in Maui), and the possibility of targeted individual tracking by proxy—all by remotely geolocating wireless access points.
We provide recommendations to WPS operators and Wi-Fi access point manufacturers to enhance the privacy of hundreds of millions of users worldwide. Finally, we detail our efforts at responsibly disclosing this privacy vulnerability, and outline some mitigations that Apple and Wi-Fi access point manufacturers have implemented both independently and as a result of our work.
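The enumeration step the abstract alludes to is simple to picture: instead of guessing across the entire 2^48 MAC address space, an attacker generates candidate BSSIDs only inside OUI prefixes that are densely allocated to access points. Below is a minimal sketch of that candidate-generation idea; the prefixes listed and the query_wps function are hypothetical placeholders, since the paper’s actual queries go to Apple’s undocumented WPS endpoint.

```python
# Minimal sketch of the candidate-generation idea from the paper:
# rather than guessing across the full 2^48 MAC space, enumerate only
# within OUI prefixes densely allocated to Wi-Fi routers.
import random
from typing import Iterator

# Hypothetical example prefixes; real dense OUIs would come from the
# IEEE registry plus observed hit rates, as the researchers describe.
DENSE_OUI_PREFIXES = ["00:1A:2B", "3C:84:6A", "B0:4E:26"]

def candidate_bssids(prefix: str, n: int) -> Iterator[str]:
    """Yield n random BSSIDs under a given 24-bit OUI prefix."""
    for _ in range(n):
        suffix = random.getrandbits(24)
        yield (f"{prefix}:{(suffix >> 16) & 0xFF:02X}:"
               f"{(suffix >> 8) & 0xFF:02X}:{suffix & 0xFF:02X}")

def query_wps(bssid: str):
    """Placeholder for a WPS lookup. A real attack would submit the
    BSSID to the positioning service and parse any returned
    (latitude, longitude); per the paper, responses that also report
    nearby BSSIDs further amplify the search. Returns None here."""
    return None

# Probe batches of guesses under each dense prefix and keep the hits.
hits = {}
for prefix in DENSE_OUI_PREFIXES:
    for bssid in candidate_bssids(prefix, 1000):
        location = query_wps(bssid)
        if location is not None:
            hits[bssid] = location
```

Even with a modest per-query hit rate, concentrating guesses in dense allocation regions is what lets the enumeration scale to billions of BSSIDs in days rather than centuries.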