Lukasz Olejnik writes about device fingerprinting, and why Google’s policy change to allow it in 2025 is a major privacy setback.
EDITED TO ADD (1/12): Slashdot thread.
This is going to be interesting.
It’s a video of someone trying on a variety of printed full-face masks. They won’t fool anyone for long, but will survive casual scrutiny. And they’re cheap and easy to swap.
Researchers at Google have developed a watermark for LLM-generated text. The basics are pretty obvious: the LLM chooses between tokens partly based on a cryptographic key, and someone with knowledge of the key can detect those choices. What makes this hard is (1) how much text is required for the watermark to work, and (2) how robust the watermark is to post-generation editing. Google’s version looks pretty good: it’s detectable in text as short as 200 tokens.
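To make the idea concrete, here is a minimal sketch of one published style of keyed watermark (a "green list" scheme in the spirit of academic work on LLM watermarking; this is not Google’s actual algorithm, and all names and parameters below are illustrative assumptions). A secret key plus the preceding token pseudorandomly splits the vocabulary in half; generation favors the "green" half; the key holder then detects the resulting statistical bias.

```python
# Illustrative "green list" watermark sketch -- NOT Google's actual scheme.
# The key idea: an HMAC of (key, previous token) partitions the vocabulary,
# generation is biased toward the "green" partition, and detection measures
# how far the green-token fraction deviates from the 50% expected by chance.
import hmac
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
KEY = b"secret watermark key"             # hypothetical shared secret

def is_green(prev_token: str, token: str) -> bool:
    """Keyed PRF decides whether `token` is green given the previous token."""
    digest = hmac.new(KEY, f"{prev_token}|{token}".encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is green each step

def generate(n_tokens: int, bias: float = 0.9, seed: int = 0) -> list[str]:
    """Stand-in for an LLM sampler: prefer green candidates with probability `bias`."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(n_tokens):
        candidates = [rng.choice(VOCAB) for _ in range(8)]
        greens = [t for t in candidates if is_green(out[-1], t)]
        if greens and rng.random() < bias:
            out.append(rng.choice(greens))
        else:
            out.append(rng.choice(candidates))
    return out[1:]

def detect(tokens: list[str]) -> float:
    """z-score of the green fraction: near 0 for plain text, large if watermarked."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(["<s>"] + tokens, tokens))
    n = len(tokens)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

The two hard problems from the paragraph above fall out of the math: the z-score grows like the square root of the text length (so short texts are hard to flag confidently), and every edit that swaps a token dilutes the green fraction (so robustness is a fight against that dilution).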
Two students have created a demo of a smart-glasses app that performs automatic facial recognition and then information lookups. Kind of obvious—something similar was done in 2011—but the sort of creepy demo that gets attention.
News article.
It’s possible to cancel other people’s voter registrations:
On Friday, four days after Georgia Democrats began warning that bad actors could abuse the state’s new online portal for canceling voter registrations, the Secretary of State’s Office acknowledged to ProPublica that it had identified multiple such attempts…
…the portal suffered at least two security glitches that briefly exposed voters’ dates of birth, the last four digits of their Social Security numbers and their full driver’s license numbers—the exact information needed to cancel others’ voter registrations.
I get that this is a hard problem to solve. We want the portal to be easy for people to use—even non-tech-savvy people—and hard for fraudsters to abuse, and it turns out to be impossible to do both without an overarching digital identity infrastructure. But Georgia is making it easy to abuse.
EDITED TO ADD (8/14): There was another issue with the portal, making it easy to request cancellation of any Georgian’s registration. The elections director said that cancellations submitted this way wouldn’t have been processed because they didn’t have all the necessary information, which I guess is probably true, but it shows just how sloppy the coding is.
A group of cryptographers has analyzed the eIDAS 2.0 regulation (electronic identification and trust services) that defines the new EU Digital Identity Wallet.
The Washington Post is reporting on the FBI’s increasing use of push notification data—”push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.
The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.
But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.
[…]
Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.
But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.
The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:
“This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.
“But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”
A helpful summary of which US retail stores are using facial recognition, thinking about using it, or currently not planning on using it. (This, of course, can all change without notice.)
Three years ago, I wrote that campaigns to ban facial recognition are too narrow. The problem here is identification, correlation, and then discrimination. There’s no difference whether the identification technology is facial recognition, the MAC address of our phones, gait recognition, license plate recognition, or anything else. Facial recognition is just the easiest technology right now.
They’re not that good:
Security researchers Jesse D’Aguanno and Timo Teräs write that, with varying degrees of reverse-engineering and using some external hardware, they were able to fool the Goodix fingerprint sensor in a Dell Inspiron 15, the Synaptics sensor in a Lenovo ThinkPad T14, and the ELAN sensor in one of Microsoft’s own Surface Pro Type Covers. These are just three laptop models from the wide universe of PCs, but one of these three companies usually does make the fingerprint sensor in every laptop we’ve reviewed in the last few years. It’s likely that most Windows PCs with fingerprint readers will be vulnerable to similar exploits.