In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.

First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department’s roughly $5.45 trillion in annual federal payments.

Then, we learned that uncleared DOGE personnel had gained access to classified data from the US Agency for International Development, possibly copying it onto their own systems. Next, the Office of Personnel Management—which holds detailed personal data on millions of federal employees, including those with security clearances—was compromised. After that, Medicaid and Medicare records were compromised.

Meanwhile, only partially redacted names of CIA employees were sent over an unclassified email account. DOGE personnel are also reported to be feeding Education Department data into artificial intelligence software, and they have started working at the Department of Energy.

This story is moving very fast. On Feb. 8, a federal judge blocked the DOGE team from accessing the Treasury Department systems any further. But given that DOGE workers have already copied data and possibly installed and modified software, it’s unclear how this fixes anything.

In any case, breaches of other critical government systems are likely to follow unless federal employees stand firm on the protocols protecting national security.

The systems that DOGE is accessing are not esoteric pieces of our nation’s infrastructure—they are the sinews of government.

For example, the Treasury Department systems contain the technical blueprints for how the federal government moves money, while the Office of Personnel Management (OPM) network contains information on who and what organizations the government employs and contracts with.

What makes this situation unprecedented isn’t just the scope, but also the method of attack. Foreign adversaries typically spend years attempting to penetrate government systems such as these, using stealth to avoid being seen and carefully hiding any tells or tracks. The Chinese government’s 2015 breach of OPM was a significant US security failure, and it illustrated how personnel data could be used to identify intelligence officers and compromise national security.

In this case, external operators with limited experience and minimal oversight are doing their work in plain sight and under massive public scrutiny: gaining the highest levels of administrative access and making changes to the United States’ most sensitive networks, potentially introducing new security vulnerabilities in the process.

But the most alarming aspect isn’t just the access being granted. It’s the systematic dismantling of security measures that would detect and prevent misuse—including standard incident response protocols, auditing, and change-tracking mechanisms—by removing the career officials in charge of those security measures and replacing them with inexperienced operators.

The Treasury’s computer systems have such an impact on national security that they were designed with the same principle that guides nuclear launch protocols: No single person should have unlimited power. Just as launching a nuclear missile requires two separate officers turning their keys simultaneously, making changes to critical financial systems traditionally requires multiple authorized personnel working in concert.

This approach, known as “separation of duties,” isn’t just bureaucratic red tape; it’s a fundamental security principle as old as banking itself. When your local bank processes a large transfer, it requires two different employees to verify the transaction. When a company issues a major financial report, separate teams must review and approve it. These aren’t just formalities—they’re essential safeguards against corruption and error. These measures have been bypassed or ignored. It’s as if someone found a way to rob Fort Knox by simply declaring that the new official policy is to fire all the guards and allow unescorted visits to the vault.
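The two-person rule described above can be sketched in a few lines of code. This is a minimal illustration, not any real system's implementation (the class and officer names are invented); it shows the invariant such systems enforce: no change is applied without two distinct authorized approvers, and a requester can never approve their own change.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A proposed modification to a critical system."""
    description: str
    requested_by: str
    approvals: set = field(default_factory=set)

class TwoPersonControl:
    """Sketch of separation of duties: a change is applied only after
    two *distinct* authorized officers approve it, and the requester
    cannot approve their own change."""

    def __init__(self, authorized_officers):
        self.authorized = set(authorized_officers)

    def approve(self, request, officer):
        if officer not in self.authorized:
            raise PermissionError(f"{officer} is not an authorized officer")
        if officer == request.requested_by:
            raise PermissionError("requester cannot approve their own change")
        request.approvals.add(officer)

    def can_apply(self, request):
        # Two independent approvals required, mirroring dual-key launch control.
        return len(request.approvals) >= 2
```

The point of the sketch is that authorization is a property checked at apply time, not a one-time gate: stripping out the check (or the people who enforce it) is exactly the bypass the essay describes.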

The implications for national security are staggering. Sen. Ron Wyden said his office had learned that the attackers gained privileges that allow them to modify core programs in Treasury Department computers that verify federal payments, access encrypted keys that secure financial transactions, and alter audit logs that record system changes. Over at OPM, reports indicate that individuals associated with DOGE connected an unauthorized server into the network. They are also reportedly training AI software on all of this sensitive data.

This is much more critical than the initial unauthorized access. These new servers have unknown capabilities and configurations, and there’s no evidence that this new code has gone through any rigorous security testing protocols. The AIs being trained are certainly not secure enough for this kind of data. All are ideal targets for any adversary, foreign or domestic, also seeking access to federal data.

There’s a reason why every modification—hardware or software—to these systems goes through a complex planning process and includes sophisticated access-control mechanisms. The national security crisis is that these systems are now much more vulnerable to dangerous attacks at the same time that the legitimate system administrators trained to protect them have been locked out.

By modifying core systems, the attackers have not only compromised current operations, but have also left behind vulnerabilities that could be exploited in future attacks—giving adversaries such as Russia and China an unprecedented opportunity. These countries have long targeted these systems. And they don’t just want to gather intelligence—they also want to understand how to disrupt these systems in a crisis.

The technical details of how these systems operate, their security protocols, and their vulnerabilities are now potentially exposed to unknown parties without any of the usual safeguards. Instead of having to breach heavily fortified digital walls, these parties can simply walk through doors that are being propped open—and then erase evidence of their actions.

The security implications span three critical areas.

First, system manipulation: External operators can now modify operations while also altering audit trails that would track their changes. Second, data exposure: Beyond accessing personal information and transaction records, these operators can copy entire system architectures and security configurations—in one case, the technical blueprint of the country’s federal payment infrastructure. Third, and most critically, is the issue of system control: These operators can alter core systems and authentication mechanisms while disabling the very tools designed to detect such changes. This is more than modifying operations; it is modifying the infrastructure that those operations use.

To address these vulnerabilities, three immediate steps are essential. First, unauthorized access must be revoked and proper authentication protocols restored. Next, comprehensive system monitoring and change management must be reinstated—which, given the difficulty of cleaning a compromised system, will likely require a complete system reset. Finally, thorough audits must be conducted of all system changes made during this period.

This is beyond politics—this is a matter of national security. Foreign national intelligence organizations will be quick to take advantage of both the chaos and the new insecurities to steal US data and install backdoors to allow for future access.

Each day of continued unrestricted access makes the eventual recovery more difficult and increases the risk of irreversible damage to these critical systems. While the full impact may take time to assess, these steps represent the minimum necessary actions to begin restoring system integrity and security protocols.

Assuming that anyone in the government still cares.

This essay was written with Davi Ottenheimer, and originally appeared in Foreign Policy.

KnowBe4, a cybersecurity platform that comprehensively addresses human risk management, today released a new white paper that provides data-driven evidence on the effectiveness of security awareness training (SAT) in reducing data breaches.

Over 17,500 data breaches from the Privacy Rights Clearinghouse database were analysed along with KnowBe4’s extensive customer data to quantify the impact of SAT on organisational cybersecurity. This research provides an in-depth perspective on the effectiveness of security awareness training in preventing data breaches.

Key findings from the research include:

  1. Organisations with effective SAT programs are 8.3 times less likely to appear on public data breach lists annually compared to general statistics.
  2. 97.6% of KnowBe4’s current U.S. customers have not suffered a public data breach since 2005.
  3. Customers who experienced breaches were 65% less likely to suffer subsequent breaches after becoming KnowBe4 customers.
  4. 73% of breaches involving current KnowBe4 customers occurred before they implemented the company’s SAT program.


KnowBe4 advises organisations to implement SAT programs with at least quarterly training sessions and simulated phishing tests, noting that more frequent engagement can lead to even greater risk mitigation. The study addresses a critical question in cybersecurity: Does security awareness training measurably reduce an organisation’s risk of real-world cyberattacks? The analysis demonstrates that organisations practicing regular and effective SAT see significant decreases in human risk factors and fewer real-world compromises.

“If you add up all other causes for successful cyberattacks together, they do not come close to equaling the damage done by social engineering and phishing alone,” said Roger Grimes, data-driven defence evangelist at KnowBe4. “The evidence is compelling and clear. Effective security awareness training, with regular simulated phishing exercises, educates employees and significantly reduces the human risk of cybersecurity threats.”

This research provides valuable insights into the substantial role that security awareness training plays in preventing data breaches, particularly given that social engineering and phishing account for 70% to 90% of data breaches. KnowBe4 defines an effective SAT program as one that includes at least monthly training and simulated phishing campaigns.

The full white paper, “Effective Security Awareness Training Really Does Reduce Breaches,” is available for download here.

The post KnowBe4 Research Confirms Effective Security Awareness Training Significantly Reduces Data Breaches appeared first on IT Security Guru.

In the interconnected digital world we live in today, a single cyber incident can trigger a chain reaction of consequences, often referred to as the “domino effect.” This concept describes how a small event, such as a security breach or cyberattack on one organization or system, can lead to a cascading series of negative impacts—affecting not only the direct targets but also their partners, customers, industries, and even entire economies. Understanding this domino effect is critical for businesses, governments, and individuals in managing cybersecurity risks.

1. The Initial Breach: How It All Begins

A domino effect in cybersecurity often starts with a seemingly small breach. This could be anything from a phishing email tricking an employee into revealing login credentials, to a vulnerability in a software system being exploited by cybercriminals. Once the attacker gains access, they can move laterally through the network, compromising sensitive data or disrupting operations.

For example, a cyberattack on a retail company may start with the breach of an employee’s email account. From there, the attacker could infiltrate the company’s customer database, stealing sensitive payment information. While the initial breach might seem limited, it sets off a chain of events with far-reaching consequences.

2. Financial Consequences: Direct and Indirect Costs

Once the initial attack has occurred, the financial repercussions can spread like falling dominos. Direct costs include the immediate expenses related to the breach, such as paying for IT support, legal fees, and notification to affected customers. For instance, if customer data is compromised, the company might face the costs of providing credit monitoring services to those impacted.

Indirect costs are even more damaging in the long term. They may involve loss of business due to reputational damage, decreased customer trust, and stock market drops (for publicly traded companies). For example, the 2017 Equifax breach cost the company an estimated $1.4 billion in settlements, fines, and reputational damage, with the consequences extending far beyond the breach itself.

3. Impact on Customers and Supply Chains

The domino effect doesn’t stop with the breached organization. The impact spreads outward to customers, suppliers, and business partners. If customer data is stolen, individuals may suffer from identity theft, fraudulent charges, or compromised privacy. In turn, customers may lose confidence in the company’s ability to protect their data, resulting in reduced business.

Additionally, supply chains can be severely impacted. Cyberattacks can cripple suppliers, disrupt logistics, and cause delays in production. For example, the 2020 SolarWinds cyberattack—where Russian hackers infiltrated the company’s software updates—had a ripple effect across thousands of organizations, including major U.S. government agencies and private sector firms. This attack disrupted operations and forced organizations to divert resources to mitigate its impact.

4. Damage to Critical Infrastructure and National Security

As the domino effect progresses, cybersecurity incidents can escalate to threaten critical infrastructure. For instance, if a cyberattack targets an energy provider or a water treatment facility, the attack can lead to widespread service outages, affecting entire cities or regions. The 2007 cyberattacks on Estonia are a prime example of how a large-scale incident can bring down government websites, banking services, and media outlets, paralyzing the country’s digital infrastructure.

Similarly, cyberattacks on healthcare organizations—especially those involving ransomware—can have grave consequences for public health. Hospitals, medical centers, and even research institutions may face disruptions in critical services, potentially delaying patient care and treatment. In the worst-case scenario, lives can be lost due to delayed medical procedures or misdiagnoses caused by compromised data.

5. Legal and Regulatory Fallout

In addition to financial losses, companies may face significant legal and regulatory consequences following a cybersecurity incident. Breached organizations could be subject to lawsuits from affected customers or partners, as well as penalties for failing to comply with data protection laws, such as the European Union’s General Data Protection Regulation (GDPR) or the U.S. Health Insurance Portability and Accountability Act (HIPAA).

Furthermore, as the domino effect continues, lawmakers and regulators may impose stricter cybersecurity regulations on entire industries. A high-profile breach may lead to new cybersecurity laws or requirements for companies to improve their data protection practices, thereby increasing operational costs and compliance burdens for businesses.

6. Widespread Societal Impact and Loss of Trust

Beyond the immediate business consequences, the domino effect of cyber incidents can lead to a broader societal impact. Public trust in digital services may erode, especially if sensitive data, such as healthcare records or financial information, is compromised. As more organizations fall victim to cyberattacks, the public may become more hesitant to use digital services, affecting everything from e-commerce to online banking.

The ongoing rise of cybercrime—ranging from data breaches to ransomware attacks—can also create an environment of fear and uncertainty. Citizens may feel increasingly vulnerable to identity theft, financial fraud, or the loss of privacy. This eroded trust can diminish the effectiveness of digital platforms and stymie technological progress in areas like e-governance, online education, and telemedicine.

7. The Global Ripple Effect: Cybersecurity as a Geopolitical Tool

In the most severe cases, the domino effect of cyber incidents can extend to the global stage. State-sponsored cyberattacks, such as those allegedly launched by Russia, China, or North Korea, may target not just specific countries but entire regions or industries. The 2007 cyberattacks on Estonia, which some attributed to Russian hackers, serve as a stark example of how cyberattacks can be used as a tool of political warfare.

Similarly, cyberattacks on critical infrastructure in one country can have a ripple effect on international relations, trade, and security. In 2020, the SolarWinds hack—which affected U.S. government agencies and businesses—demonstrated the extent to which a well-coordinated cyberattack could undermine international trust and cooperation. Such attacks can strain diplomatic relations, provoke retaliatory cyberattacks, or even escalate into physical conflicts.

8. Preparing for the Domino Effect: Proactive Cybersecurity Measures

Given the cascading nature of cyber incidents, it’s crucial for organizations to adopt a proactive approach to cybersecurity. Strong security measures, such as regular patching, multi-factor authentication, and employee training, can help mitigate the risk of breaches and limit their potential impact. Additionally, organizations should develop robust incident response plans to contain and manage breaches quickly, preventing the domino effect from spiraling out of control.

Collaboration across industries and governments is also essential to prevent the spread of cyber incidents. Information sharing, threat intelligence, and international cybersecurity agreements can help reduce vulnerabilities and enhance global cybersecurity resilience.

Conclusion

The domino effect of cyber incidents illustrates how deeply interconnected our digital ecosystem has become. A single breach, whether it’s a ransomware attack, data leak, or espionage effort, can set off a chain of events with devastating consequences for businesses, governments, and individuals. As the digital landscape continues to evolve, understanding and mitigating the ripple effects of cyber incidents will be crucial in maintaining trust, security, and stability in an increasingly interconnected world.

The post The Domino Effect of Cyber Incidents: Understanding the Ripple Impact of Cybersecurity Breaches appeared first on Cybersecurity Insiders.

Last year, 1 in 3 people in the US were hit by healthcare data breaches in a record year for cyber-attacks on the sector, while this year has already seen one of the most serious attacks in history, when Change Healthcare was hit by ransomware gang ALPHV. The ongoing digitalization of health services data may bring convenience for providers and patients alike, but it’s clear that security infrastructure is not keeping up with the rapidly increasing risk level faced by hospitals and the vendors that support them.

Such breaches are disastrous for everyone involved. The immediate impact is a delay in medical treatment if health systems are shut down by an attack, while protected health information (PHI) leaking can result in patients becoming targets for further crimes if sensitive data is sold via online black markets. As for healthcare and healthtech companies, they can be hit with hefty fines for HIPAA violations and find themselves on the receiving end of class action lawsuits, not to mention the reputational damage that might ultimately be more costly in the long run.

It’s too late to put the brakes on digitalization, so what can the healthcare industry do to secure its data?

How healthcare became the number one target for cybercriminals

The healthcare sector is the ideal target for cybercriminals. For one, PHI is especially valuable on the black market due to its sensitivity and the intimate details it reveals about the patient. This data is stored and processed in vast quantities, and a single breach can see attackers take off with thousands or even millions of records. Then there is the massive potential for serious, life-threatening disruption, which means that ransomware attacks can demand a higher price to bring systems back online.

Not only is the incentive high for cybercriminals but there are numerous vulnerabilities they can exploit due to the complexity of today’s healthcare systems. Hospitals, clinics, pharmacies, payment processors, insurance providers, and professional and patient-owned medical devices have all been brought online, all transfer data between them, and all provide vectors for attack. One link in this data supply chain might have airtight security but, if the link next to it is weak, then it is still vulnerable.

As healthcare systems become more vulnerable to attacks, cybercriminals are becoming more sophisticated. For example, where typical attacks used to rely on an unwitting victim downloading executable code, we now see a rise in “fileless attacks” where trusted programs running in memory are corrupted to become malware instead, making them much harder to detect.

The barrier to entry for being a cybercriminal has also lowered thanks to the proliferation of ransomware-as-a-service (RaaS). In the same way software-as-a-service (SaaS) has simplified access to various technologies, RaaS allows people with little to no development knowledge to launch ransomware attacks with “leased” malware. Cybercrime has proven to be an innovative technology sector of its own.

Why emails are still the biggest vulnerability in healthcare cybersecurity

The first and most important step healthcare companies can take to protect themselves is fortifying their email security, since email is the most common attack vector in cyber-attacks. Healthcare companies must also scrutinize the security of their entire email supply chain; the massive HCA Healthcare hack that exposed 11 million records — last year’s largest healthcare breach — originated at an external location used for automated email formatting.

Phishing — where seemingly legitimate emails are used to trigger an action in the receiver that creates a vulnerability — is the classic email-based attack, but more concerning is the rise in business email compromise (BEC) attacks. Whereas phishing emails can be detected by email security systems if the sender is flagged as suspicious, BEC attacks are launched from compromised or spoofed legitimate organizational emails, making them more convincing to security systems and users alike.

Basic email security relies on blocklists and greylists — constantly updated records of suspicious IP addresses, sender domains, and web domains — to filter out phishing and spam in real-time, but the rise in BEC attacks has rendered this approach obsolete. Blocklists can even be counterproductive, as a legitimate email address being used to launch an attack can result in an organization’s entire email system or even its wider network being blocked.
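As a concrete picture of the baseline this paragraph calls obsolete, here is a minimal sketch of blocklist-plus-greylist filtering. The class name, return codes, and parameters are invented for illustration (real mail servers implement this inside the SMTP dialogue), but the logic is the standard one: reject known-bad sender IPs outright, and temporarily defer mail from unknown senders, accepting it only if the sending server retries after a minimum delay, which most bulk spam software never does.

```python
import time

class InboundMailFilter:
    """Illustrative sketch (not a production MTA filter) of basic
    blocklist + greylist email filtering."""

    def __init__(self, blocklist, greylist_delay=300, clock=time.time):
        self.blocklist = set(blocklist)   # known-bad sender IPs
        self.greylist = {}                # (ip, from, to) -> first-seen time
        self.delay = greylist_delay       # seconds before a retry is accepted
        self.clock = clock                # injectable clock for testing

    def check(self, sender_ip, mail_from, rcpt_to):
        if sender_ip in self.blocklist:
            return "REJECT"
        triple = (sender_ip, mail_from, rcpt_to)
        now = self.clock()
        first_seen = self.greylist.setdefault(triple, now)
        if now - first_seen < self.delay:
            return "DEFER"    # temporary failure; legitimate servers retry
        return "ACCEPT"
```

The weakness the article identifies is visible in the code: a BEC attack arrives from a legitimate, previously seen sender, so neither the blocklist nor the greylist ever fires.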

There are many steps healthcare companies can take to bolster their email security: mandatory multi-factor authentication (MFA) can prevent unauthorized logins; domain key identified email (DKIM) uses cryptography to ensure emails come from authorized servers; access to distribution lists should be restricted to limit the damage of a BEC attack; and removing open relays can prevent hackers from hijacking trusted mail servers.

But even with multi-layered protection controls deployed, email attacks can bypass security programs because they exploit human gullibility through carefully tuned social engineering. Staff training on how to identify and avoid phishing and BEC attacks can reduce risk, but it cannot eliminate it; it takes only one compromised person for cybercriminals to gain a foothold from which to launch attacks.

AI is the new arms race between email security and cybercriminals

The sheer scale of the healthcare sector — which accounts for almost 10% of employment in the United States and reaches almost the entire population — means that training-based phishing and BEC attack prevention is always going to be a Band-Aid on a bullet wound. Recent advances in AI technology — particularly machine learning (ML) and large language models (LLMs) — can finally provide effective and scalable mitigation against email attacks that exploit human error.

A large part of email security has always involved pattern recognition to detect and block anomalies, and AI takes this principle — usually applied to data signals like IP addresses and domains — and expands it to the body of emails. Apply an adaptive learning engine to an organization’s entire email system, and it can be trained to recognize normal communication, right down to language and syntax, allowing immediate alerts to any emails that don’t align with established patterns.
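As a toy illustration of that principle (nothing like a production model, which would learn syntax, sender behavior, and much richer signals), one can score an email by how much of its vocabulary was never seen in the organization's normal mail. All names and thresholds here are invented for the sketch.

```python
from collections import Counter
import re

class EmailAnomalyScorer:
    """Toy sketch of body-level anomaly detection: learn the vocabulary
    of an organization's normal mail, then flag messages whose wording
    departs sharply from it."""

    def __init__(self, threshold=0.5):
        self.vocab = Counter()        # word -> frequency in normal mail
        self.threshold = threshold    # fraction of unseen words that trips a flag

    @staticmethod
    def _tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    def train(self, normal_emails):
        for body in normal_emails:
            self.vocab.update(self._tokens(body))

    def score(self, body):
        toks = self._tokens(body)
        if not toks:
            return 0.0
        unseen = sum(1 for t in toks if t not in self.vocab)
        return unseen / len(toks)     # fraction of never-before-seen words

    def is_anomalous(self, body):
        return self.score(body) > self.threshold
```

Even this crude version captures the shift the article describes: the signal comes from the content of the message, not from IP addresses or domains, so a BEC email sent from a compromised legitimate account can still stand out.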

Of course, it’s not just email security systems that have access to AI, and now that the technology’s genie is out of the bottle, cybercriminals are deploying it as well. AI-generated phishing kits enable rapid, automated, multi-prompt engagements that can closely mimic normal communications, and can even be trained to become more effective over time, while AI-assisted coding makes it easier to develop ransomware tailored to exploit specific systems.

The best defense against AI will be more AI, which sets the scene for the next decade of cybersecurity innovation and shows where healthcare companies should be investing their resources. Staying ahead in this arms race will be vital to resisting the rising tide of email-based cyber-attacks, and email security systems without AI capabilities are already hurtling towards obsolescence against cybercriminals who are more sophisticated and more incentivized than ever before.

The post Digital diagnosis: Why are email security breaches escalating in healthcare? appeared first on Cybersecurity Insiders.

Joe Sullivan, Uber’s CISO during its 2016 data breach, is appealing his conviction.

Prosecutors charged Sullivan, whom Uber hired as CISO after the 2014 breach, with withholding information about the 2016 incident from the FTC even as its investigators were scrutinizing the company’s data security and privacy practices. The government argued that Sullivan should have informed the FTC of the 2016 incident, but instead went out of his way to conceal it from them.

Prosecutors also accused Sullivan of attempting to conceal the breach itself by paying $100,000 to buy the silence of the two hackers behind the compromise. Sullivan had characterized the payment as a bug bounty similar to ones that other companies routinely make to researchers who report vulnerabilities and other security issues to them. His lawyers pointed out that Sullivan had made the payment with the full knowledge and blessing of Travis Kalanick, Uber’s CEO at the time, and other members of the ride-sharing giant’s legal team.

But prosecutors described the payment and an associated nondisclosure agreement that Sullivan’s team wanted the hackers to sign as an attempt to cover up what was in effect a felony breach of Uber’s network.

[…]

Sullivan’s fate struck a nerve with many peers and others in the industry who perceived CISOs as becoming scapegoats for broader security failures at their companies. Many argued, and continue to argue, that Sullivan acted with the full knowledge of his supervisors but in the end became the sole culprit for the breach and the associated failures for which he was charged. They believed that if Sullivan could be held culpable for his failure to report the 2016 breach to the FTC—and for the alleged hush payment—then so should Kalanick at the very least, and probably others as well.

It’s an argument that Sullivan’s lawyers once again raised in their appeal of the obstruction conviction this week. “Despite the fact that Mr. Sullivan was not responsible at Uber for the FTC’s investigation, including the drafting or signing any of the submissions to the FTC, the government singled him out among over 30 of his co-employees who all had information that Mr. Sullivan is alleged to have hidden from the FTC,” Swaminathan said.

I have some sympathy for that view. Sullivan was almost certainly scapegoated here. But I do want executives personally liable for what their company does. I don’t know enough about the details to have an opinion in this particular case.

New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys“:

Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.

So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.

New reporting from Wired reveals that the Department of Justice detected the SolarWinds attack six months before Mandiant detected it in December 2020, but didn’t realize what it detected—and so ignored it.

WIRED can now confirm that the operation was actually discovered by the DOJ six months earlier, in late May 2020—but the scale and significance of the breach wasn’t immediately apparent. Suspicions were triggered when the department detected unusual traffic emanating from one of its servers that was running a trial version of the Orion software suite made by SolarWinds, according to sources familiar with the incident. The software, used by system administrators to manage and configure networks, was communicating externally with an unfamiliar system on the internet. The DOJ asked the security firm Mandiant to help determine whether the server had been hacked. It also engaged Microsoft, though it’s not clear why the software maker was also brought onto the investigation.

[…]

Investigators suspected the hackers had breached the DOJ server directly, possibly by exploiting a vulnerability in the Orion software. They reached out to SolarWinds to assist with the inquiry, but the company’s engineers were unable to find a vulnerability in their code. In July 2020, with the mystery still unresolved, communication between investigators and SolarWinds stopped. A month later, the DOJ purchased the Orion system, suggesting that the department was satisfied that there was no further threat posed by the Orion suite, the sources say.

EDITED TO ADD (5/4): More details about the SolarWinds attack from Wired.com.

In early 2021, IEEE Security and Privacy asked a number of board members for brief perspectives on the SolarWinds incident while it was still breaking news. This was my response.

The penetration of government and corporate networks worldwide is the result of inadequate cyberdefenses across the board. The lessons are many, but I want to focus on one important one we’ve learned: the software that’s managing our critical networks isn’t secure, and that’s because the market doesn’t reward that security.

SolarWinds is a perfect example. The company was the initial infection vector for much of the operation. Its trusted position inside so many critical networks made it a perfect target for a supply-chain attack, and its shoddy security practices made it an easy target.

Why did SolarWinds have such bad security? The answer is because it was more profitable. The company is owned by Thoma Bravo, a private-equity firm known for radical cost-cutting in the name of short-term profit. Under CEO Kevin Thompson, the company underspent on security even as it outsourced software development. The New York Times reports that the company’s cybersecurity advisor quit after his “basic recommendations were ignored.” In a very real sense, SolarWinds profited because it secretly shifted a whole bunch of risk to its customers: the US government, IT companies, and others.

This problem isn’t new, and, while it’s exacerbated by the private-equity funding model, it’s not unique to it. In general, the market doesn’t reward safety and security—especially when the effects of ignoring those things are long term and diffuse. The market rewards short-term profits at the expense of safety and security. (Watch and see whether SolarWinds suffers any long-term effects from this hack, or whether Thoma Bravo’s bet that it could profit by selling an insecure product was a good one.)

The solution here is twofold. The first is to improve government software procurement. Software is now critical to national security. Any system of procuring that software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure that they are sufficient to meet the security needs of the network they’re being installed in. If these evaluations are made public, along with the list of companies that meet them, all network buyers can benefit from them. It’s a win for everybody.

But that isn’t enough; we need a second part. The only way to force companies to provide safety and security features for customers is through regulation. This is true whether we want seat belts in our cars, basic food safety at our restaurants, pajamas that don’t catch on fire, or home routers that aren’t vulnerable to cyberattack. The government needs to set minimum security standards for software that’s used in critical network applications, just as it sets software standards for avionics.

Without these two measures, it’s just too easy for companies to act like SolarWinds: save money by skimping on safety and security and hope for the best in the long term. That’s the rational thing for companies to do in an unregulated market, and the only way to change that is to change the economic incentives.

This essay originally appeared in the March/April 2021 issue of IEEE Security & Privacy. I forgot to publish it here.

Last August, LastPass reported a security breach, saying that no customer information—or passwords—were compromised. Turns out the full story is worse:

While no customer data was accessed during the August 2022 incident, some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.

[…]

To date, we have determined that once the cloud storage access key and dual storage container decryption keys were obtained, the threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.

The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.

That’s bad. It’s not an epic disaster, though.

These encrypted fields remain secured with 256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password using our Zero Knowledge architecture. As a reminder, the master password is never known to LastPass and is not stored or maintained by LastPass.

So, according to the company, if you chose a strong master password—here’s my advice on how to do it—your passwords are safe. That is, you are secure as long as your password is resilient to a brute-force attack. (That they lost customer data is another story….)
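To make the "derived from each user's master password" claim concrete, here is a minimal sketch of that kind of key derivation using only Python's standard library. The salt handling and iteration count here are illustrative assumptions, not LastPass's actual parameters; the point is that the service only ever needs the derived key's ciphertext output, never the master password itself.

```python
import hashlib
import os


def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 100_000) -> bytes:
    """Derive a 256-bit encryption key from a master password.

    Uses PBKDF2-HMAC-SHA256. The salt and iteration count are
    illustrative assumptions for this sketch, not any vendor's
    exact parameters. A slow, salted derivation like this is what
    makes offline brute-force attacks expensive: each guess costs
    the attacker the full iteration count.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        iterations,
        dklen=32,  # 32 bytes = 256 bits, matching AES-256
    )


# The salt is stored alongside the (encrypted) vault; only the
# master password stays in the user's head.
salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32
```

Because the derivation is deterministic given the same password and salt, the client can re-derive the key on any device, which is why a stolen vault is only as safe as the master password behind it.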

Fair enough, as far as it goes. My guess is that many LastPass users do not have strong master passwords, even though the compromise of your encrypted password file should be part of your threat model. But, even so, note this unverified tweet:

I think the situation at @LastPass may be worse than they are letting on. On Sunday the 18th, four of my wallets were compromised. The losses are not significant. Their seeds were kept, encrypted, in my lastpass vault, behind a 16 character password using all character types.

If that’s true, it means that LastPass has some backdoor—possibly unintentional—into the password databases that the hackers are accessing. (Or that @Cryptopathic’s “16 character password using all character types” is something like “P@ssw0rdP@ssw0rd.”)
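The distinction matters because "all character types" measures composition, not unpredictability. A rough back-of-the-envelope comparison makes this clear (the passwords here are hypothetical examples; the arithmetic is standard):

```python
import math


def entropy_bits_random(length: int, alphabet_size: int) -> float:
    """Entropy of a password drawn uniformly at random.

    Only valid for passwords actually chosen at random; a
    human-chosen pattern has far less entropy than this formula
    suggests.
    """
    return length * math.log2(alphabet_size)


# A truly random 16-character password over ~95 printable ASCII
# characters: about 105 bits, far beyond any brute-force attack.
random16 = entropy_bits_random(16, 95)

# By contrast, "P@ssw0rdP@ssw0rd" ticks every character-class box,
# but an attacker trying common words with leetspeak substitutions
# and doubling searches a tiny space -- closer to guessing a single
# dictionary word than to searching 95**16 possibilities.
print(round(random16, 1))
```

In other words, a password can satisfy every complexity rule and still fall quickly to a dictionary-plus-mangling attack, which is why "16 characters, all types" alone tells us little about the tweet's claim.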

My guess is that we’ll learn more during the coming days. But this should serve as a cautionary tale for anyone who is using the cloud: the cloud is another name for “someone else’s computer,” and you need to understand how much or how little you trust that computer.

If you’re changing password managers, look at my own Password Safe. Its main downside is that you can’t sync between devices, but that’s because I don’t use the cloud for anything.

News articles. Slashdot thread.

EDITED TO ADD: People choose lousy master passwords.