Consumer Reports has analyzed a bunch of popular Internet-connected video doorbells. Their security is terrible.

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals.

[…]

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed.
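A rough way to check for this kind of plaintext exposure on your own network is to capture the doorbell's outbound traffic and look for your WiFi network name or public IP address in unencrypted payloads. The sketch below is a minimal illustration using scapy; the doorbell's LAN address, the SSID, and the public IP are placeholder values, and it assumes you can capture the device's traffic (for example, from a machine acting as its access point).

```python
# Minimal sketch: flag packets from a device that carry known secrets in cleartext.
# All addresses and strings below are placeholders, not values from the report.
from scapy.all import sniff, Raw, IP

DOORBELL_IP = "192.168.1.50"                 # hypothetical LAN address of the doorbell
SECRETS = [b"MyHomeSSID", b"203.0.113.7"]    # your WiFi name and public IP (examples)

def check_packet(pkt):
    # Only inspect traffic originating from the doorbell that has a raw payload.
    if pkt.haslayer(IP) and pkt[IP].src == DOORBELL_IP and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for secret in SECRETS:
            if secret in payload:
                print(f"Plaintext leak to {pkt[IP].dst}: found {secret!r}")

# Requires root privileges and a capture point that sees the doorbell's traffic.
sniff(filter=f"src host {DOORBELL_IP}", prn=check_packet, store=False)
```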

The widely reported story last week that three million smart toothbrushes were hacked and used in a DDoS attack is false.

Near as I can tell, a German reporter talking to someone at Fortinet got it wrong, and then everyone else ran with it without reading the German text. It was a hypothetical, which Fortinet eventually confirmed.

Or maybe it was a stock-price hack.

New law journal article:

Smart Device Manufacturer Liability and Redress for Third-Party Cyberattack Victims

Abstract: Smart devices are used to facilitate cyberattacks against both their users and third parties. While users are generally able to seek redress following a cyberattack via data protection legislation, there is no equivalent pathway available to third-party victims who suffer harm at the hands of a cyberattacker. Given how these cyberattacks are usually conducted by exploiting a publicly known and yet un-remediated bug in the smart device’s code, this lacuna is unreasonable. This paper scrutinises recent judgments from both the Supreme Court of the United Kingdom and the Supreme Court of the Republic of Ireland to ascertain whether these rulings pave the way for third-party victims to pursue negligence claims against the manufacturers of smart devices. From this analysis, a narrow pathway is proposed, outlining how, given a limited set of circumstances, a duty of care can be established between the third-party victim and the manufacturer of the smart device.

In 2016, I wrote about an Internet that affected the world in a direct, physical manner. It was connected to your smartphone. It had sensors like cameras and thermostats. It had actuators: Drones, autonomous cars. And it had smarts in the middle, using sensor data to figure out what to do and then actually do it. This was the Internet of Things (IoT).

The classical definition of a robot is something that senses, thinks, and acts—that’s today’s Internet. We’ve been building a world-sized robot without even realizing it.

In 2023, we upgraded the “thinking” part with large language models (LLMs) like GPT. ChatGPT both surprised and amazed the world with its ability to understand human language and generate credible, on-topic, humanlike responses. But what these are really good at is interacting with systems formerly designed for humans. Their accuracy will get better, and they will be used to replace actual humans.

In 2024, we’re going to start connecting those LLMs and other AI systems to both sensors and actuators. In other words, they will be connected to the larger world, through APIs. They will receive direct inputs from our environment, in all the forms I thought about in 2016. And they will increasingly control our environment, through IoT devices and beyond.

It will start small: Summarizing emails and writing limited responses. Arguing with customer service—on chat—for service changes and refunds. Making travel reservations.

But these AIs will interact with the physical world as well, first controlling robots and then having those robots as part of them. Your AI-driven thermostat will turn the heat and air conditioning on based also on who’s in what room, their preferences, and where they are likely to go next. It will negotiate with the power company for the cheapest rates by scheduling usage of high-energy appliances or car recharging.
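The rate-shopping part of that is easy to picture as a scheduling problem. Assuming the utility exposes an hourly price forecast (the numbers below are invented), the agent just has to slot a high-energy task, say a car-charging session, into the cheapest window that fits:

```python
# Minimal sketch: schedule a fixed-length, high-energy task into the cheapest
# contiguous window of an hourly price forecast. Prices are made up.
HOURLY_PRICES = [0.31, 0.29, 0.24, 0.12, 0.10, 0.09, 0.11, 0.18,
                 0.26, 0.30, 0.33, 0.35]   # $/kWh for the next 12 hours
CHARGE_HOURS = 4                           # hours needed to finish charging

def cheapest_window(prices, hours_needed):
    """Return (start_hour, price_sum) of the cheapest contiguous window."""
    best_start, best_sum = None, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        window_sum = sum(prices[start:start + hours_needed])
        if window_sum < best_sum:
            best_start, best_sum = start, window_sum
    return best_start, best_sum

start, price_sum = cheapest_window(HOURLY_PRICES, CHARGE_HOURS)
print(f"Charge during hours {start}-{start + CHARGE_HOURS - 1} "
      f"(price sum {price_sum:.2f})")
```

A real agent would also fold in preferences and constraints (the car must be ready by morning, some appliances can't run overnight), but the core decision is this kind of lookup.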

This is the easy stuff. The real changes will happen when these AIs group together in a larger intelligence: A vast network of power generation and power consumption with each building just a node, like an ant colony or a human army.

Future industrial-control systems will include traditional factory robots, as well as AI systems to schedule their operation. They will automatically order supplies, as well as coordinate final product shipping. The AI will manage its own finances, interacting with other systems in the banking world. It will call on humans as needed: to repair individual subsystems or to do things too specialized for the robots.

Consider driverless cars. Individual vehicles have sensors, of course, but they also make use of sensors embedded in the roads and on poles. The real processing is done in the cloud, by a centralized system that is piloting all the vehicles. This allows individual cars to coordinate their movement for more efficiency: braking in synchronization, for example.
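To make “braking in synchronization” concrete, here is a toy sketch of the decision such a centralized controller might make: pick one deceleration that every car in the platoon can achieve and send it to all of them for the same instant, rather than letting each car react to the one ahead of it. The message format and numbers are hypothetical, not taken from any real system.

```python
# Toy sketch of centralized platoon braking: one deceleration value and one start
# time, sent to every vehicle, so spacing is preserved instead of compressing
# car by car. All fields and values are hypothetical.
from dataclasses import dataclass

@dataclass
class VehicleState:
    vehicle_id: str
    speed_mps: float        # reported speed, meters per second
    max_decel_mps2: float   # braking capability, meters per second squared

def plan_synchronized_brake(platoon, target_speed_mps, start_at):
    """Choose one deceleration every vehicle can achieve, starting at one instant."""
    decel = min(v.max_decel_mps2 for v in platoon)  # limited by the weakest braker
    return [
        {"vehicle_id": v.vehicle_id,
         "command": "brake",
         "decel_mps2": decel,
         "target_speed_mps": target_speed_mps,
         "start_at": start_at}
        for v in platoon
    ]

platoon = [VehicleState("car-1", 27.0, 6.0),
           VehicleState("car-2", 27.1, 5.5),
           VehicleState("car-3", 26.9, 6.5)]
for command in plan_synchronized_brake(platoon, 13.0, "2024-01-01T12:00:00Z"):
    print(command)
```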

These are robots, but not the sort familiar from movies and television. We think of robots as discrete metal objects, with sensors and actuators on their surface, and processing logic inside. But our new robots are different. Their sensors and actuators are distributed in the environment. Their processing is somewhere else. They’re a network of individual units that become a robot only in aggregate.

This turns our notion of security on its head. If massive, decentralized AIs run everything, then who controls those AIs matters a lot. It’s as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.

This future requires us to see ourselves less as individuals, and more as parts of larger systems. It’s AI as nature, as Gaia—everything as one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (And also with science-fiction dystopias, like Skynet from the Terminator movies.) It will require a rethinking of much of our assumptions about governance and economy. That’s not going to happen soon, but in 2024 we will see the first steps along that path.

This essay previously appeared in Wired.

The Biden administration today issued its vision for beefing up the nation’s collective cybersecurity posture, including calls for legislation establishing liability for software products and services that are sold with little regard for security. The White House’s new national cybersecurity strategy also envisions a more active role by cloud providers and the U.S. military in disrupting cybercriminal infrastructure, and it names China as the single biggest cyber threat to U.S. interests.

The strategy says the White House will work with Congress and the private sector to develop legislation that would prevent companies from disavowing responsibility for the security of their software products or services.

Coupled with this stick would be a carrot: An as-yet-undefined “safe harbor framework” that would lay out what these companies could do to demonstrate that they are making cybersecurity a central concern of their design and operations.

“Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios,” the strategy explains. “To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services.”

Brian Fox, chief technology officer and founder of the software supply chain security firm Sonatype, called the software liability push a landmark moment for the industry.

“Market forces are leading to a race to the bottom in certain industries, while contract law allows software vendors of all kinds to shield themselves from liability,” Fox said. “Regulations for other industries went through a similar transformation, and we saw a positive result — there’s now an expectation of appropriate due care, and accountability for those who fail to comply. Establishing the concept of safe harbors allows the industry to mature incrementally, leveling up security best practices in order to retain a liability shield, versus calling for sweeping reform and unrealistic outcomes as previous regulatory attempts have.”

THE MOST ACTIVE, PERSISTENT THREAT

In 2012 (approximately three national cyber strategies ago), then-director of the U.S. National Security Agency (NSA) Keith Alexander made headlines when he remarked that years of successful cyber espionage campaigns from Chinese state-sponsored hackers represented “the greatest transfer of wealth in history.”

The document released today says the People’s Republic of China (PRC) “now presents the broadest, most active, and most persistent threat to both government and private sector networks,” and says China is “the only country with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military, and technological power to do so.”

Many of the U.S. government’s efforts to restrain China’s technology prowess involve ongoing initiatives like the CHIPS Act, a new law signed by President Biden last year that sets aside more than $50 billion to expand U.S.-based semiconductor manufacturing and research and to make the U.S. less dependent on foreign suppliers; the National Artificial Intelligence Initiative; and the National Strategy to Secure 5G.

As the maker of most consumer gizmos with a computer chip inside, China is also the source of an incredible number of low-cost Internet of Things (IoT) devices that are not only poorly secured, but are probably more accurately described as insecure by design.

The Biden administration said it would continue its previously announced plans to develop a system of labeling that could be applied to various IoT products and give consumers some idea of how secure the products may be. But it remains unclear how those labels might apply to products made by companies outside of the United States.

FIGHTING BADNESS IN THE CLOUD

One could convincingly make the case that the world has witnessed yet another historic transfer of wealth and trade secrets over the past decade — in the form of ransomware and data ransom attacks by Russia-based cybercriminal syndicates, as well as Russian intelligence agency operations like the U.S. government-wide SolarWinds compromise.

On the ransomware front, the White House strategy seems to focus heavily on building the capability to disrupt the digital infrastructure used by adversaries that are threatening vital U.S. cyber interests. The document points to the 2021 takedown of the Emotet botnet — a cybercrime machine that was heavily used by multiple Russian ransomware groups — as a model for this activity, but says those disruptive operations need to happen faster and more often.

To that end, the Biden administration says it will expand the capacity of the National Cyber Investigative Joint Task Force (NCIJTF), the primary federal body for coordinating cyber threat investigations across law enforcement agencies, the intelligence community, and the Department of Defense.

“To increase the volume and speed of these integrated disruption campaigns, the Federal Government must further develop technological and organizational platforms that enable continuous, coordinated operations,” the strategy observes. “The NCIJTF will expand its capacity to coordinate takedown and disruption campaigns with greater speed, scale, and frequency. Similarly, DoD and the Intelligence Community are committed to bringing to bear their full range of complementary authorities to disruption campaigns.”

The strategy anticipates the U.S. government working more closely with cloud and other Internet infrastructure providers to quickly identify malicious use of U.S.-based infrastructure, share reports of malicious use with the government, and make it easier for victims to report abuse of these systems.

“Given the interest of the cybersecurity community and digital infrastructure owners and operators in continuing this approach, we must sustain and expand upon this model so that collaborative disruption operations can be carried out on a continuous basis,” the strategy argues. “Threat specific collaboration should take the form of nimble, temporary cells, comprised of a small number of trusted operators, hosted and supported by a relevant hub. Using virtual collaboration platforms, members of the cell would share information bidirectionally and work rapidly to disrupt adversaries.”

But here, again, there is a carrot-and-stick approach: The administration said it is taking steps to implement Executive Order (EO) 13984, issued by the Trump administration in January 2021, which requires cloud providers to verify the identity of foreign persons using their services.

“All service providers must make reasonable attempts to secure the use of their infrastructure against abuse or other criminal behavior,” the strategy states. “The Administration will prioritize adoption and enforcement of a risk-based approach to cybersecurity across Infrastructure-as-a-Service providers that addresses known methods and indicators of malicious activity including through implementation of EO 13984.”

Ted Schlein, founding partner of the cybersecurity venture capital firm Ballistic Ventures, said how this gets implemented will determine whether it can be effective.

“Adversaries know the NSA, which is the elite portion of the nation’s cyber defense, cannot monitor U.S.-based infrastructure, so they just use U.S.-based cloud infrastructure to perpetrate their attacks,” Schlein said. “We have to fix this. I believe some of this section is a bit pollyannaish, as it assumes a bad actor with a desire to do a bad thing will self-identify themselves, as the major recommendation here is around KYC (‘know your customer’).”

INSURING THE INSURERS

One brief but interesting section of the strategy titled “Explore a Federal Cyber Insurance Backstop” contemplates the government’s liability and response to a too-big-to-fail scenario or “catastrophic cyber incident.”

“We will explore how the government can stabilize insurance markets against catastrophic risk to drive better cybersecurity practices and to provide market certainty when catastrophic events do occur,” the strategy reads.

When the Bush administration released the first U.S. national cybersecurity strategy 20 years ago after the 9/11 attacks, the popular term for that same scenario was a “digital Pearl Harbor,” and there was a great deal of talk then about how the cyber insurance market would soon help companies shore up their cybersecurity practices.

In the wake of countless ransomware intrusions, many companies now hold cybersecurity insurance to help cover the considerable costs of responding to such intrusions. Leaving aside the question of whether insurance coverage has helped companies improve security, what happens if every one of these companies has to make a claim at the same time?

The notion of a digital Pearl Harbor incident struck many experts at the time as a hyperbolic justification for expanding the government’s digital surveillance capabilities, and an overstatement of the capabilities of our adversaries. But back in 2003, most of the world’s companies didn’t host their entire business in the cloud.

Today, nobody questions the capabilities, goals and outcomes of dozens of nation-state level cyber adversaries. And these days, a catastrophic cyber incident could be little more than an extended, simultaneous outage at multiple cloud providers.

The full national cybersecurity strategy is available from the White House website (PDF).

California just legalized digital license plates, which seems like a solution without a problem.

The Rplate can reportedly function in extreme temperatures, has some customization features, and is managed via Bluetooth using a smartphone app. Rplates are also equipped with an LTE antenna, which can be used to push updates, change the plate if the vehicle is reported stolen or lost, and notify vehicle owners if their car may have been stolen.

Perhaps most importantly to the average car owner, Reviver said Rplate owners can renew their registration online through the Reviver mobile app.

That’s it?

Right now, an Rplate for a personal vehicle (the battery version) runs $19.95 a month for 48 months, which comes to $957.60 if kept for the full term. If opting to pay a year at a time, the price is $215.40 a year for the same four-year period, totaling $861.60. Wired plates for commercial vehicles run $24.95 a month for 48 months, or $275.40 a year if paid annually.
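A quick check of those totals from the published rates:

```python
# Four-year totals implied by the per-period prices (battery Rplate).
monthly_rate = 19.95   # $/month
yearly_rate = 215.40   # $/year
print(f"48 months at ${monthly_rate}/month: ${monthly_rate * 48:.2f}")   # $957.60
print(f"4 years at ${yearly_rate}/year: ${yearly_rate * 4:.2f}")         # $861.60
```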

That’s a lot to pay for the luxury of not having to find an envelope and stamp.

Plus, the privacy risks:

Privacy risks are an obvious concern when thinking about strapping an always-connected digital device to a car, but the California law has taken steps that may address some of those concerns.

“The bill would generally prohibit an alternative device [i.e. digital plate] from being equipped with GPS or other vehicle location tracking capability,” California’s legislative digest said of the new law. Commercial fleets are exempt from the rule, unsurprisingly.

More important are the security risks. Do we think for a minute that your digital license plate is secure from denial-of-service attacks, or number swapping attacks, or whatever new attacks will be dreamt up? Seems like a piece of stamped metal is the most secure option.

The Atlantic Council has published a report on securing the Internet of Things: “Security in the Billions: Toward a Multinational Strategy to Better Secure the IoT Ecosystem.” The report examines the regulatory approaches taken by four countries—the US, the UK, Australia, and Singapore—to secure home, medical, and networking/telecommunications devices. The report recommends that regulators should 1) enforce minimum security standards for manufacturers of IoT devices, 2) incentivize higher levels of security through public contracting, and 3) try to align IoT standards internationally (for example, international guidance on handling connected devices that stop receiving security updates).

This report looks to existing security initiatives as much as possible—both to leverage existing work and to avoid counterproductively suggesting an entirely new approach to IoT security—while recommending changes and introducing more cohesion and coordination to regulatory approaches to IoT cybersecurity. It walks through the current state of risk in the ecosystem, analyzes challenges with the current policy model, and describes a synthesized IoT security framework. The report then lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting a baseline of minimally acceptable security (or “Tier 1”), incentivizing above the baseline (or “Tier 2” and above), and pursuing international alignment on standards and implementation across the entire IoT product lifecycle (from design to sunsetting). It also includes implementation guidance for the United States, Australia, UK, and Singapore, providing a clearer roadmap for countries to operationalize the recommendations in their specific jurisdictions—and push towards a stronger, more cohesive multinational approach to securing the IoT worldwide.

Note: One of the authors of this report was a student of mine at Harvard Kennedy School, and did this work with the Atlantic Council under my supervision.