The memorial service for Ross Anderson will be held on Saturday, at 2:00 PM BST. People can attend remotely on Zoom. (The passcode is “L3954FrrEF”.)
Last week, I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.
EDITED TO ADD (4/11): Two weeks before he passed away, Ross gave an 80-minute interview where he told his life story.
Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.
I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.
I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.
I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and the security of large language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.
He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti’s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.
My heart goes out to his wife Shireen and his family. We lost him much too soon.
EDITED TO ADD (4/10): I wrote a longer version for Communications of the ACM.
The Atlantic Council has published a report on securing the Internet of Things: “Security in the Billions: Toward a Multinational Strategy to Better Secure the IoT Ecosystem.” The report examines the regulatory approaches taken by four countries—the US, the UK, Australia, and Singapore—to secure home, medical, and networking/telecommunications devices. The report recommends that regulators should 1) enforce minimum security standards for manufacturers of IoT devices, 2) incentivize higher levels of security through public contracting, and 3) try to align IoT standards internationally (for example, international guidance on handling connected devices that stop receiving security updates).
This report looks to existing security initiatives as much as possible—both to leverage existing work and to avoid counterproductively suggesting an entirely new approach to IoT security—while recommending changes and introducing more cohesion and coordination to regulatory approaches to IoT cybersecurity. It walks through the current state of risk in the ecosystem, analyzes challenges with the current policy model, and describes a synthesized IoT security framework. The report then lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting a baseline of minimally acceptable security (or “Tier 1”), incentivizing above the baseline (or “Tier 2” and above), and pursuing international alignment on standards and implementation across the entire IoT product lifecycle (from design to sunsetting). It also includes implementation guidance for the United States, Australia, UK, and Singapore, providing a clearer roadmap for countries to operationalize the recommendations in their specific jurisdictions—and push towards a stronger, more cohesive multinational approach to securing the IoT worldwide.
Note: One of the authors of this report was a student of mine at Harvard Kennedy School, and did this work with the Atlantic Council under my supervision.
This is an interesting attack I had not previously considered.
The variants are interesting, and I think we’re just starting to understand their implications.
Yet another adversarial ML attack:
Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.
So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose, for example, a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set, then let initialisation bias do the rest of the work.
Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
Research paper.
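To make the batch-ordering idea concrete, here is a minimal sketch of the effect. It is not the paper’s method or code: the synthetic “credit” data, the group feature, the batch size, and the learning rate are all invented for illustration. It trains the same logistic-regression model twice on exactly the same data—once in random order, once with hand-picked front batches that pair one group with good outcomes and the other with bad ones—and compares the weight the model ends up putting on the group flag.

```python
# Toy sketch of a data-ordering attack on SGD. Everything here -- the
# synthetic data, the group feature, the batch size, the learning rate --
# is an invented illustration, not the setup from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: feature 0 is income, feature 1 is a protected group
# flag (0 or 1). The true label depends on income only, never on the group.
n = 1000
income = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)
X = np.column_stack([income, group.astype(float)])
y = ((income + rng.normal(0.0, 0.3, n)) > 0).astype(float)


def train_sgd(X, y, order, lr=0.5, batch=20):
    """One pass of logistic-regression SGD, visiting samples in `order`."""
    w, b = np.zeros(X.shape[1]), 0.0
    for start in range(0, len(order), batch):
        idx = order[start:start + batch]
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w + b)))
        g = p - y[idx]                      # per-sample logistic gradient
        w -= lr * X[idx].T @ g / len(idx)
        b -= lr * g.mean()
    return w, b


# Honest training: the whole dataset in one random order.
w_fair, _ = train_sgd(X, y, rng.permutation(n))

# Attack: identical dataset, but the first batches pair group 0 with good
# outcomes and group 1 with bad ones, so early updates tie the group flag
# to the label before the balanced remainder of the data is ever seen.
prefix = np.concatenate([
    np.where((group == 0) & (y == 1))[0][:150],
    np.where((group == 1) & (y == 0))[0][:150],
])
rest = np.setdiff1d(np.arange(n), prefix)
attack_order = np.concatenate([prefix, rng.permutation(rest)])
w_biased, _ = train_sgd(X, y, attack_order)

print(f"weight on group flag, random order:      {w_fair[1]:+.3f}")
print(f"weight on group flag, adversarial order: {w_biased[1]:+.3f}")
```

In a single pass, the adversarially ordered run ends with a noticeably more negative weight on the group flag even though the label never depends on it. With a convex toy model like this, enough further epochs would eventually wash the bias out; the real concern is deep networks trained for a limited number of steps, where early batches help pick the basin the model settles into.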