For the past 25 years, I’ve watched the digital world evolve from the early days of the Internet to the behemoth it is today.

Related: Self-healing devices on the horizon

What started as a decentralized, open platform for innovation has slowly but surely been carved up, controlled, and monetized by a handful of tech giants.

Now, a new wave of technological development—edge computing, decentralized identity, and privacy-first networking—is promising to reverse that trend. Companies like Munich-based semiconductor manufacturer Infineon Technologies are embedding intelligence directly into sensors and controllers, giving devices the ability to process data locally instead of shipping everything off to centralized cloud servers.

Meanwhile, privacy-focused projects like Session and Veilid are pushing for decentralized communication networks that don’t rely on Big Tech.

On the surface, this all sounds like a step in the right direction. But I can’t help but ask: Does any of this actually change the power dynamics of the digital world? Or will decentralization, like so many tech revolutions before it, just get absorbed into the existing system?

Disrupting business as usual

The move toward decentralized control at the edge is more than just hype. Companies like Infineon are developing zonal computing architectures in modern vehicles, where instead of having a single central control unit, intelligence is distributed throughout the car. This makes the system more responsive, more efficient, and less dependent on a cloud connection.

In smart cities, factories, and even consumer devices, similar trends are taking shape. Edge AI chips, secure microcontrollers, and embedded processors are allowing real-time decision-making without needing to send every bit of data to a distant data center.
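
Here is a minimal sketch of that pattern in Python: decide locally, escalate only what matters. The function names and threshold are invented for illustration; real edge firmware is far more involved.

    THRESHOLD_C = 80.0  # illustrative alert threshold; real systems tune this

    def send_to_cloud(event: dict) -> None:
        print("escalating to cloud:", event)  # stand-in for a network call

    def log_locally(value: float) -> None:
        pass  # stand-in for an on-device ring buffer or local store

    def process_reading(celsius: float) -> None:
        """Decide on-device; only anomalies ever leave the sensor."""
        if celsius > THRESHOLD_C:
            send_to_cloud({"event": "overheat", "value": celsius})
        else:
            log_locally(celsius)

    process_reading(25.0)   # common case: nothing leaves the device
    process_reading(91.5)   # rare case: one small event goes upstream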

Less data movement means fewer security risks, lower latency, and—potentially—less corporate control over user data.

But here’s the catch: technology alone doesn’t change who profits. The entire economic foundation of Big Tech is built on centralization, data extraction, and monetization. And unless that changes, decentralized infrastructure will just be a more sophisticated way for companies to keep controlling users.

We’ve seen this play out before. Apple, for instance, touts privacy as a key feature—offering on-device encryption, Secure Enclave, and privacy-first AI processing. Yet Apple’s actual business model still locks users into its ecosystem and rakes in billions through services, cloud storage, and app store commissions.

The same thing could happen with decentralization—Big Tech could give us just enough edge computing to improve efficiency while still keeping all the real control.

Needed change

For decentralization to actually shift power back to users, we need more than just technical advancements. We need a fundamental shift in the way digital businesses make money.

Right now, most of Big Tech runs on:

•Data extraction (Google, Meta, OpenAI) – AI models are hungry for data, and companies will keep finding ways to feed them, whether through search history, chat inputs, or enterprise contracts.

•Subscription lock-in (Microsoft, Adobe, Amazon AWS) – Even as infrastructure becomes more decentralized, companies still design services that tether users to their ecosystem through proprietary features and recurring fees.

•Cloud dependency (IoT, Smart Devices, Enterprise AI) – Even if devices get smarter at the edge, they’re still linked back to centralized platforms that dictate the rules.

So how do we break that cycle?

Reversing the pendulum

There are a handful of efforts trying to disrupt the status quo. Some of the more promising ones include:

•Decentralized identity (DID) – Projects like DXC Technology’s decentralized identity initiatives allow users to control their own authentication credentials, instead of relying on Google, Apple, or Microsoft to log into everything.

•Privacy-first communication – Apps like Session (a decentralized, onion-routed messaging service) and Secure Scuttlebutt (a peer-to-peer social network) are proving that people don’t need to rely on Big Tech to communicate securely.

•Distributed storage and compute – Technologies like IPFS (InterPlanetary File System) and Urbit are moving away from cloud-based storage in favor of fully decentralized data ownership.

But there’s a problem: most people still opt for convenience over privacy. That’s why Facebook survived the Cambridge Analytica privacy debacle. That’s why people still use Gmail despite deep-rooted privacy concerns. That’s why Amazon’s smart home ecosystem remains dominant, even though it’s clear that users are giving up control to a monetization-obsessed corporation.

Role, limits of regulation

Regulators—particularly in Europe—are trying to push back.

The Digital Markets Act (DMA) and GDPR enforcement actions have forced some minor course corrections, and OpenAI, Google, and Meta have all faced scrutiny for how they handle personal data.

But is it enough? History suggests that Big Tech would rather pay fines than change its core business model. In the U.S., regulators have been even more reluctant to intervene, allowing tech companies to grow unchecked under the guise of “innovation.”

So while regulatory efforts help, they’re not the real solution. The real change will only happen if decentralized business models become financially competitive with centralized ones.

The wildcard may yet prove to be hardware-driven decentralization. One of the biggest reasons Big Tech has been able to maintain its grip is the cloud-based nature of digital services. But edge computing advancements could change that—not because of privacy concerns, but because they make devices cheaper, faster, and more resilient.

Infineon’s work on zonal computing in vehicles, for example, isn’t driven by ideology—it’s a practical, cost-saving innovation that also happens to decentralize control. If similar trends take hold in smart factories, industrial automation, and consumer electronics, companies may start decentralizing for efficiency reasons rather than because of user demand.

That could be the key. If decentralization delivers real cost, speed, and security benefits, businesses might start shifting in that direction—even if reluctantly.

Course change is possible

Where does this leave us? We’re at a turning point. The technology for decentralization is here, but the business models haven’t caught up. If companies continue monetizing user control the way they always have, then decentralization will just be a buzzword—absorbed into the existing system without shifting power in any meaningful way.

For real change, we need:

•Economic incentives that make privacy-preserving, user-controlled services profitable.

•Hardware-driven decentralization that forces change from the bottom up.

•Regulatory frameworks that go beyond fines and actually reshape the competitive landscape.

•Consumer awareness that demands real control, not just convenience.

The next few years will decide whether decentralization actually shifts power to users or just becomes another selling point for Big Tech.

The technical advancements in IoT infrastructure—decentralized control, edge computing, and embedded intelligence—are promising steps toward reducing reliance on centralized data processing and improving privacy, efficiency, and system resilience.

But without a corresponding shift in business models, these innovations could still end up reinforcing the same exploitative data practices we’ve seen in cloud computing and social media.

For decentralization to truly matter, companies need to rethink how they monetize technology. The entrenched tech giants won’t change on their own; it’s going to take pressure from consumers and regulators, and competition from innovators with a different mindset.

Companies like Infineon are providing the technical foundation that could enable a different model—if startups, policymakers, and forward-thinking enterprises push in that direction.

So the key question is: Will the next wave of tech entrepreneurs build on this decentralized foundation, or will Big Tech co-opt it into another walled garden? Right now, it could go either way.

I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


The post My Take: Will decentralizing connected systems redistribute wealth or reinforce Big Tech’s grip? first appeared on The Last Watchdog.

Augmented reality use cases have become prevalent in our society.

The technology, which first emerged primarily in the world of gaming and entertainment, now promises to reshape our reality with interactive information and immersive experiences. In short, AR is undoubtedly a groundbreaking technology that will reinvent how we interact with the digital world.

Related: Is the Metaverse truly secure?

However, before we get too carried away, it is crucial to explore the symbiotic relationship between AR and cybersecurity.

This is primarily because AR is still a relatively new and rapidly evolving technology, which ultimately means that it is bound to bring about unprecedented opportunities, challenges, and even risks to cybersecurity.

Are there any applications of augmented reality in cybersecurity?

Whenever we explore the impact of a new technology on cybersecurity, the unpleasant thought of looming cyberthreats comes along with it. There is no doubt that AR will bring a new wave of sophisticated cyberattacks that may transform the dynamics of the cybersecurity world. However, looking at the brighter side, the immersive nature of this technology also makes it applicable in various cybersecurity domains.

Quite like how pilots use AR simulation in training, cybersecurity professionals can use AR-enabled training simulations that immerse them in hyper-realistic scenarios, offering hands-on cyber defense training and education. For example, AR-based training programs can simulate a phishing attack, allowing users to learn detection methods and experience the process of neutralizing the threat. It could also help users identify various cybersecurity attacks, whether they are types of spoofing, phishing, social engineering, or malware.

Waqas

Apart from the training aspect, AR technology can also be used to enhance threat detection in real-time. Since threat detection often requires analyzing complex, multi-layered patterns, AR can help SOC professionals and cybersecurity analysts interact with this data visually, making it easier to identify anomalies and weak points in security protocols. With AR interfaces, alerts for potential threats could be flagged and displayed on-screen as layered icons, instantly allowing personnel to assess risks and prioritize responses.

Possible challenges

Integrating AR into cybersecurity may come with several benefits, but it is not without its own set of challenges. Foremost among these are privacy and security concerns. Since AR programs rely on collecting and processing a significant amount of data, using them in cybersecurity training means exposing them to sensitive information. The data fed to AR modules could include images of secure environments, system layouts, and other confidential information. Therefore, unauthorized access to cybersecurity-centric AR technology could lead to serious security breaches.

Apart from the security and privacy concerns, another main challenge is the implementation cost. There is no doubt that AR technology, especially AR glasses and other custom-built training application systems or modules, can be expensive. For an organization planning to integrate AR into its cybersecurity infrastructure, it is necessary to consider the cost of integrating AR with existing infrastructure and whether the benefits justify the investment.

Are there any security risks involved?

Although the use of AR technology in cybersecurity might seem promising, there is a real chance that these technologies could become a conduit for live cyberattacks. One significant risk is the potential for the technology to become a host to sophisticated social engineering attacks.

Additionally, there is a possibility that cybercriminals might misuse AR technology to create convincing deepfakes, duping gullible victims into revealing sensitive information.

Privacy risks

Another major area of concern is that AR devices collect vast amounts of data when in use, specifically tracking, GPS, and mapping information. Malicious actors could gain unauthorized access to this information and track an individual without their knowledge.

Furthermore, the advent of AR can also lead to digital vandalism. To overlay digital objects in the real world, AR technology must process live images through a device. Without proper protocols in place, criminals could hijack these overlays to digitally prank or vandalize a user’s space, potentially causing mental distress or even physical incidents.

Is using AR in cybersecurity worth it? – A summary

The future of AR technology in cybersecurity looks promising, particularly as the technology becomes more affordable, advanced, and accessible. The convergence of AR with cybersecurity could further enhance its impact, providing proactive threat detection with predictive capabilities for identifying potential attack vectors before they occur.

AR in cybersecurity is still an emerging field, yet it holds tremendous promise for redefining how organizations approach threat management, incident response, and training. As AR technology continues to evolve, its role in cybersecurity will likely expand, equipping professionals with powerful tools to address the dynamic challenges of digital security. However, for the technology to reach its true potential, it is crucial for developers to address the security risks associated with it and to mitigate them as much as possible.

About the essayist: Iam Waqas is a cybersecurity blogger and the Founder of DontSpoof, a dedicated project focused on cybersecurity awareness and phishing prevention.

The post GUEST ESSAY: The promise and pitfalls of using augmented reality – ‘AR’ – in cybersecurity first appeared on The Last Watchdog.

What does the recent CrowdStrike outage tell us about the state of digital resiliency?

Related: CrowdStrike’s consolation backfires

On a resiliency scale of one to 10, most enterprises are at about two. This was clear over the weekend, when more than 4,000 flights were grounded, hospitals had to postpone services, and financial systems went down.

The only reason the impact was not broader was luck – not everybody runs CrowdStrike, and not all processes have been digitized.

The world was also lucky that this outage was due to a mistake by a legitimate vendor, and that the recovery steps were relatively straightforward, albeit laborious. This made it possible for all users to recover cleanly and verify that their recovery was complete.

Barde

Imagine that instead of a mistake by CrowdStrike, it was a malicious actor who subverted the CrowdStrike distribution channel and leveraged it as a Trojan for a data theft or ransomware attack. By the time such an attack was detected, it would be all but impossible to size up the damage exactly, and the damage would be so distributed that no single vendor (not CrowdStrike, not Microsoft) would be able to provide full recovery guidance.

So, what is it going to take to start making meaningful steps towards achieving digital resiliency across Internet-centric services?

Redundancy is vital

A multi-pronged approach is needed to ensure resiliency. Both vendors and enterprises have a role to play in this.

The first prong is prevention. Vendors need to test every update thoroughly, and have a clear rollback mechanism for every update. They need to release every update in phases to their users, starting with a small set of users who have opted to take the latest bits, so that if they missed an issue in their testing, it is at least detected early before it goes out to all users.

This is common practice among telecom and cloud service providers. Vendors should ideally also let enterprises control how updates are distributed within the enterprise, as Microsoft does with Windows updates. Enterprises, in turn, need to adopt such controls where they are offered.
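
A minimal sketch of how such a phased rollout is often implemented: hash each user to a stable bucket, then widen the set of eligible buckets phase by phase. The ring percentages and hashing scheme here are illustrative assumptions, not any particular vendor's mechanism.

    import hashlib

    ROLLOUT_RINGS = [1, 5, 25, 100]  # percent eligible per phase (assumed values)

    def user_bucket(user_id: str) -> int:
        """Map a user ID to a stable bucket in [0, 100) by hashing."""
        digest = hashlib.sha256(user_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % 100

    def update_enabled(user_id: str, phase: int) -> bool:
        """True if this user falls inside the current rollout ring."""
        return user_bucket(user_id) < ROLLOUT_RINGS[phase]

    # During phase 0 roughly 1% of users receive the update; an issue
    # missed in testing is detected before the remaining 99% are exposed.
    print(update_enabled("device-1234", phase=0))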

The second prong is containment. Mistakes happen. Enterprises need to avoid single points of failure, by diversifying their supply chain and their own technical implementations, so that a mistake in one component does not bring down all their systems.

The final prong is governance. Every commercial entity weighs how much it invests in containing risks against the impact of failing to do so. In some cases, the financial incentives to invest are simply not big enough.

In the end, the ones who suffer are consumers. In select cases, regulatory bodies may need to consider regulations for vendors and enterprises, to protect consumers.

About the essayist: Sumedh Barde is Head of Product at Simbian, which supplies fully autonomous security systems for intelligent defense.


The post GUEST ESSAY: CrowdStrike outage fallout — stricter regulations required to achieve resiliency first appeared on The Last Watchdog.

The rapid adoption of mobile banking has revolutionized how we manage our finances.

Related: Deepfakes aimed at mobile banking apps

With millions of users worldwide relying on mobile apps for their banking needs, the convenience is undeniable. However, this surge in digital banking also brings about substantial security concerns.

Alarmingly, 85% of banks are predicted to be at risk from rising cyber threats. This essay offers insights into best practices for secure mobile banking to help mitigate these risks.

Surging attacks

Mobile banking has become a prime target for cybercriminals. The increasing sophistication of cyber attacks, including phishing, malware, and man-in-the-middle attacks, poses a serious threat to both users and financial institutions. The recent surge in mobile banking fraud highlights the pressing need for enhanced security measures.

Implementing robust security practices is essential for safeguarding mobile banking transactions. According to a comprehensive analysis on cybersecurity in banking, adopting stringent measures is crucial. Here are some best practices that can help mitigate the risks associated with mobile banking:

Users’ best practices:

•Use Strong Passwords and Biometrics: A strong password is crucial for protecting your account. Users should create complex passwords that are difficult to guess. Additionally, enabling biometric authentication (such as fingerprint or facial recognition) adds an extra layer of security.

•Enable Two-Factor Authentication (2FA): Two-factor authentication significantly enhances account security by requiring a second form of verification, such as a code sent to your mobile device, in addition to your password. This makes it much harder for attackers to gain access to your accounts. (A minimal sketch of how such one-time codes are generated appears after this list.)

•Regularly Update Software: Keeping your mobile banking app and operating system up-to-date ensures that you have the latest security patches. Regular updates help protect against known vulnerabilities that cybercriminals might exploit.

•Be Cautious with Public Wi-Fi: Avoid accessing your mobile banking app over public Wi-Fi networks, which are often unsecured. If you must use public Wi-Fi, consider using a virtual private network (VPN) to encrypt your internet connection and protect your data.

•Monitor Account Activity: Regularly checking your bank statements and account activity for any unauthorized transactions can help detect and prevent significant financial loss.
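
As promised in the 2FA item above, here is a minimal sketch of the time-based one-time password (TOTP) scheme behind most authenticator-app codes, following RFC 6238. The base32 shared secret is a made-up example; a real one is enrolled when you scan the bank’s QR code.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // interval)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Both the app and the bank derive the same 6-digit code from the
    # shared secret and the current 30-second window.
    print(totp("JBSWY3DPEHPK3PXP"))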

Banks’ best practices:

•Implement Advanced Encryption: Financial institutions should use advanced encryption methods to protect data transmitted between the mobile app and the bank’s servers. End-to-end encryption ensures that even if data is intercepted, it cannot be read by unauthorized parties.

•Conduct Regular Security Audits: Regular security audits and vulnerability assessments can help identify and rectify potential security weaknesses in mobile banking applications. This proactive approach helps prevent security breaches before they occur.

•Provide User Education: Educating users about the importance of mobile banking security and how to protect themselves can significantly reduce the risk of cyber attacks. Financial institutions should offer resources and tips on secure mobile banking practices.

•Utilize Behavioral Analytics: Implementing behavioral analytics can help detect unusual patterns of behavior that may indicate fraudulent activity. By monitoring how users typically interact with their accounts, financial institutions can identify and respond to anomalies in real-time. (A toy illustration of this idea appears after this list.)

•Develop a Robust Incident Response Plan: Having a comprehensive incident response plan in place ensures that financial institutions can quickly and effectively respond to security breaches. This plan should include procedures for communication, mitigation, and recovery to minimize the impact of any incidents.
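
To illustrate the behavioral-analytics item above: at its simplest, anomaly detection compares a new event against a user’s own history. This toy z-score check shows only the shape of the idea; production systems use far richer features and models.

    from statistics import mean, stdev

    def is_anomalous(history: list[float], new_value: float,
                     threshold: float = 3.0) -> bool:
        """Flag a value more than `threshold` standard deviations
        away from this user's own history."""
        if len(history) < 2:
            return False  # not enough data to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_value != mu
        return abs(new_value - mu) / sigma > threshold

    # A user who normally transfers about $100 suddenly sends $5,000.
    print(is_anomalous([90, 110, 95, 105, 100], 5000))  # True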

The trend towards mobile banking is set to continue, making it imperative for both users and financial institutions to prioritize security. By following these best practices, we can mitigate the risks and protect sensitive financial information.

Author Bio: Hira Ehtesham is a Senior Content Writer at VPNRanks, focusing on cybersecurity, AI, and privacy. With a passion for writing and a commitment to providing insightful and engaging content, Hira helps users navigate the complexities of digital security.

The post GUEST ESSAY: Consumers, institutions continue to shoulder burden for making mobile banking secure first appeared on The Last Watchdog.

Passwords have been the cornerstone of basic cybersecurity hygiene for decades.

Related: Passwordless workplace long way off

However, as users engage with more applications across multiple devices, the digital security landscape is shifting from passwords and password managers towards including passwordless authentication, such as multi-factor authentication (MFA), biometrics, and, as of late, passkeys.

But as secure and user-friendly as these authentication methods are, cybercriminals are already busily sidestepping all forms of authentication – passwords, MFA, and passkeys – to sometimes devastating effect.

Passwordless workarounds

Without a doubt, passwordless authentication is a significant improvement over traditional passwords, and it effectively addresses the persistent risks of easy-to-guess passwords and password reuse. Most passkeys available to consumers leverage unique biometric authentication data and cryptographically secure means to authenticate users when they access websites and applications.
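
At its core, a passkey login is public-key challenge-response. Here is a minimal sketch using Ed25519 via Python’s cryptography package; real passkeys follow the richer WebAuthn/FIDO2 protocol, which adds origin binding, attestation, counters, and a local biometric or PIN check before signing.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the device generates a key pair; only the public key is
    # registered with the website. The private key never leaves the device.
    device_key = Ed25519PrivateKey.generate()
    registered_public_key = device_key.public_key()

    # Login: the site sends a fresh random challenge; the device signs it.
    challenge = os.urandom(32)
    signature = device_key.sign(challenge)

    try:
        registered_public_key.verify(signature, challenge)
        print("login accepted: no reusable secret crossed the network")
    except InvalidSignature:
        print("login rejected")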

This new authentication technique is gaining traction, especially since the FIDO Alliance has advocated for its implementation over the last year. Moreover, leading tech companies like Google, Microsoft, and Apple have developed robust frameworks to integrate this system of authentication.

Yet history reminds us that cyber threats evolve alongside our defenses. As we move towards a passwordless world, bad actors are finding new avenues to exploit, including simply working around passwordless authentication with session hijacking attacks and other forms of next-generation account takeover – and the tradeoff is significant.

The most alarming threat to users and businesses today, bar none, is malware. Criminals increasingly use infostealer malware and other low-cost and highly effective malware-as-a-service tools to exfiltrate valid identity data needed for authentication, like session cookies.

The role of infostealers

Hilligoss

Infostealers pose a significant challenge for websites and servers that validate user identities. Armed with an anti-detect browser and a valid cookie, bad actors can mimic a trusted device or user, easily sidestep authentication methods, and seamlessly blend in without raising any red flags. Once the session is hijacked, criminals can access a user’s accounts, and masquerade as the user to perpetrate additional cyber incidents such as fraud and ransomware.

And this attack method is on the rise. In 2023, infostealer malware use tripled, with 61% of breaches attributable to this threat. SpyCloud researchers highlighted how malware infections are a major driver of identity exposures in the recent 2024 Identity Exposure Report.

While most infostealer malware is non-persistent, and extraction of information takes only a matter of seconds, leaving the device with nary a sign, the threat the stolen data poses to user and organizational security is far more persistent. A stolen session cookie remains valid until it expires or a proactive security team invalidates it. Some cookies can last for months or years. As long as cookie data remains valid, it can be sold and traded multiple times and used to perpetrate different attacks.
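
One structural mitigation is to treat the cookie as a pointer to revocable server-side state rather than as a bearer credential in its own right. A minimal sketch, with invented names and an in-memory store standing in for a shared datastore such as Redis:

    import os, time

    SESSION_TTL = 3600  # one-hour server-side lifetime (an assumed policy)
    _sessions: dict[str, dict] = {}  # token -> {"user": ..., "expires": ...}

    def create_session(user: str) -> str:
        token = os.urandom(32).hex()  # the value set in the cookie
        _sessions[token] = {"user": user, "expires": time.time() + SESSION_TTL}
        return token

    def validate(token: str):
        """Return the user for a live session, or None if expired/revoked."""
        session = _sessions.get(token)
        if session is None or session["expires"] < time.time():
            _sessions.pop(token, None)
            return None
        return session["user"]

    def revoke_all_for(user: str) -> None:
        """Invalidate every session for a user whose cookies were stolen."""
        for token in [t for t, s in _sessions.items() if s["user"] == user]:
            del _sessions[token]

With server-side state like this, a stolen cookie dies the moment the session is revoked, no matter how many times it has been resold.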

Lateral exposures

Criminals are interested in the data but even more so, the level of access the data can grant. So beyond cookies they are also accessing keychains, local files, single-sign on logins, and escalating privileges – essentially instigating a wide range of actions from a single entry point, whether it’s within a browser or on a device.

The use of single sign-on (SSO) only exacerbates the problem, as a successful breach can potentially grant unauthorized access to multiple linked accounts and services across multiple business and personal devices.

Case in point: In January 2023, the continuous integration and delivery platform CircleCI announced it had experienced a data breach caused by infostealer malware deployed to an engineer’s laptop. The malware stole a valid, two-factor-backed SSO session, executed a session cookie theft, impersonated the employee, and escalated access to a subset of the company’s production systems, potentially accessing and stealing encrypted customer data.

Security practitioners often fail to recognize the extensive scope of the session hijacking issue or take steps to mitigate it. Even when teams have visibility into stolen session cookies, our research has found that 39% fail to terminate them.

Even with short session timeouts, MFA, and passkeys in place, there will still be security gaps. This is particularly true of third parties with unmanaged or under-managed devices, which security teams may not have access to or sufficient control over.

Additional strategies

Passwordless security authentication is still an important part of any layered security strategy, but since it can still be sidestepped via stolen cookies for session hijacking, it’s not a silver bullet to combat cyber attacks.

Additional strategies, such as monitoring for compromised web sessions, invalidating stolen cookies, and promptly resetting exposed user credentials, are critical. This means being able to quickly and accurately determine when any component of an employee, contractor, vendor, or customer identity is compromised, and moving fast to remediate and negate the value of the stolen identity data. It takes the traditional response of cleaning and re-imaging a machine one step further: remediating the data that could still be floating around the criminal underground, and nullifying its value.

As criminals step up their game, failing to make this shift could leave organizations vulnerable to a wide array of next-generation attack methods. And with passkeys and other passwordless authentication methods soaring in popularity, time is of the essence.

About the essayist: Trevor Hilligoss served nine years in the U.S. Army and has an extensive background in federal law enforcement, tracking threat actors for both the DoD and FBI. He is a member of the Joint Ransomware Task Force and serves in an advisory capacity for multiple cybersecurity-focused non-profits. He currently serves as Vice President of SpyCloud Labs at SpyCloud.

The post GUEST ESSAY: How cybercriminals are using ‘infostealers’ to sidestep passwordless authentication first appeared on The Last Watchdog.

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. I think by writing; what you’re going to hear is the current draft of my writing—and of my thinking. So everything is subject to change without notice.

OK, so let’s go.

We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology—social media—breaks them both.

So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem solving and feedback mechanisms.

That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, Senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.

We have accepted these costs of capitalism—and democracy—because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems—complex, wicked, global problems—that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated—they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better—and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver’s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding research, development, production and dispensing. We have all these regulations because this stuff can kill you.

The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide—or the people with the money.

Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only in the margins—and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.

I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

To be fair, the democratic republic was the best form of government that mid 18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid 18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

Why can’t we have a game where everybody wins?

This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws—and especially security technologies—are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Driving a taxi used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.

In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war”—or climate migration—are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. And a metric more suited to the environment we’re in right now.

Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking—and also red teaming. In this context, both journalism and civil society are important checks on government.

I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.
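
Liquid democracy, in particular, is simple enough to state as an algorithm. A toy sketch in Python, with invented names, ignoring the tie-breaking, privacy, and incentive questions a real system would face: each voter either casts a direct ballot or delegates, and votes flow along delegation chains, with cycles and dead ends discarded.

    def tally_liquid(direct_votes: dict[str, str],
                     delegations: dict[str, str]) -> dict[str, int]:
        """Count ballots where voters either vote directly or delegate a proxy."""
        def resolve(voter: str, seen: set[str]):
            if voter in direct_votes:
                return direct_votes[voter]
            proxy = delegations.get(voter)
            if proxy is None or proxy in seen:  # dead end or delegation cycle
                return None
            return resolve(proxy, seen | {voter})

        counts: dict[str, int] = {}
        for voter in set(direct_votes) | set(delegations):
            choice = resolve(voter, set())
            if choice is not None:
                counts[choice] = counts.get(choice, 0) + 1
        return counts

    # Alice votes directly; Bob delegates to Alice, Carol to Bob.
    # Alice's choice therefore carries a weight of three.
    print(tally_liquid({"alice": "yes"}, {"bob": "alice", "carol": "bob"}))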

Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation? It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war—right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in—and in charge of—the process of governance.

And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.

I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about—incentives, hacking, power, complexity—also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance: more cooperation and less competition, at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

This feels like a challenge worthy of our security expertise.

This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in Cyberscoop. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn’t.

Quantum computers are probably coming, though we don’t know when—and when they arrive, they will, most likely, be able to break our standard public-key cryptography algorithms. In anticipation of this possibility, cryptographers have been working on quantum-resistant public-key algorithms. The National Institute of Standards and Technology (NIST) has been hosting a competition since 2017, and there are already several proposed standards. Most of these are based on lattice problems.

The mathematics of lattice cryptography revolve around combining sets of vectors—that’s the lattice—in a multi-dimensional space. These lattices are filled with multi-dimensional periodicities. The hard problem that’s used in cryptography is to find the shortest periodicity in a large, random-looking lattice. This can be turned into a public-key cryptosystem in a variety of different ways. Research has been ongoing since 1996, and there has been some really great work since then—including many practical public-key algorithms.
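
Stated a bit more formally (this is the standard textbook formulation, not anything specific to the proposed standards): the lattice generated by basis vectors b_1, ..., b_n is the set of their integer combinations, and the core hard problem, the shortest vector problem, asks for its shortest nonzero element:

\[
\mathcal{L}(\mathbf{B}) = \Big\{ \textstyle\sum_{i=1}^{n} x_i \mathbf{b}_i \;:\; x_i \in \mathbb{Z} \Big\},
\qquad
\lambda_1(\mathcal{L}) = \min_{\mathbf{v} \in \mathcal{L} \setminus \{\mathbf{0}\}} \lVert \mathbf{v} \rVert .
\]

Lattice-based cryptosystems rest on the assumption that finding, or even usefully approximating, a vector of length \(\lambda_1\) is infeasible in high dimensions, even for a quantum computer.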

On April 10, Yilei Chen from Tsinghua University in Beijing posted a paper describing a new quantum attack on that shortest-vector lattice problem. It’s a very dense mathematical paper—63 pages long—and my guess is that only a few cryptographers are able to understand all of its details. (I was not one of them.) But the conclusion was pretty devastating, breaking essentially all of the lattice-based fully homomorphic encryption schemes and coming significantly closer to attacks against the recently proposed (and NIST-approved) lattice key-exchange and signature schemes.

However, there was a small but critical mistake in the paper, at the bottom of page 37. It was independently discovered eight days later by Hongxun Wu from Berkeley and Thomas Vidick from the Weizmann Institute in Israel. The attack algorithm in its current form doesn’t work.

This was discussed last week at the Cryptographers’ Panel at the RSA Conference. Adi Shamir, the “S” in RSA and a 2002 recipient of ACM’s A.M. Turing award, described the result as psychologically significant because it shows that there is still a lot to be discovered about quantum cryptanalysis of lattice-based algorithms. Craig Gentry—inventor of the first fully homomorphic encryption scheme using lattices—was less impressed, basically saying that a nonworking attack doesn’t change anything.

I tend to agree with Shamir. There have been decades of unsuccessful research into breaking lattice-based systems with classical computers; there has been much less research into quantum cryptanalysis. While Chen’s work doesn’t provide a new security bound, it illustrates that there are significant, unexplored research areas in the construction of efficient quantum attacks on lattice-based cryptosystems. These lattices are periodic structures with some hidden periodicities. Finding a different (one-dimensional) hidden periodicity is exactly what enabled Peter Shor to break the RSA algorithm in polynomial time on a quantum computer. There are certainly more results to be discovered. This is the kind of paper that galvanizes research, and I am excited to see what the next couple of years of research will bring.
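
For intuition, here is a toy classical sketch of that skeleton. Everything in Shor’s algorithm is ordinary number theory except the period finding, which is brute-forced below; that is exactly the step a quantum computer performs in polynomial time. The modulus is a deliberately tiny illustration:

```python
# Toy sketch of Shor's algorithm with the quantum step replaced by brute
# force. The quantum computer's only job is finding the period r of
# f(k) = a^k mod N; once r is known, factoring N is easy classical math.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N). The 'quantum' step, brute-forced."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def factor(N):
    """Return a nontrivial factor of N (odd N with two prime factors)."""
    for a in range(2, N):
        if gcd(a, N) != 1:
            return gcd(a, N)           # lucky guess: a shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1:
            continue                   # odd period: try another a
        candidate = pow(a, r // 2, N)  # a square root of 1 mod N
        if candidate != N - 1:         # a nontrivial root reveals a factor
            return gcd(candidate - 1, N)

print(factor(3233))                    # 3233 = 61 * 53; prints 61
```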

To be fair, there are lots of difficulties in making any quantum attack work—even in theory.

Breaking lattice-based cryptography with a quantum computer seems to require orders of magnitude more qubits than breaking RSA, because the key size is much larger and processing it requires more quantum storage. Consequently, testing an algorithm like Chen’s is completely infeasible with current technology. However, the error was mathematical in nature and did not require any experimentation. Chen’s algorithm consisted of nine different steps; the first eight prepared a particular quantum state, and the ninth step was supposed to exploit it. The mistake was in step nine; Chen believed that his wave function was periodic when in fact it was not.

Should NIST be doing anything differently now in its post–quantum cryptography standardization process? The answer is no. They are doing a great job in selecting new algorithms and should not delay anything because of this new research. And users of cryptography should not delay in implementing the new NIST algorithms.

But imagine how different this essay would be had that mistake not yet been discovered. If anything, this work emphasizes the need for systems to be crypto-agile: able to easily swap algorithms in and out as research continues. And for using hybrid cryptography—multiple algorithms where the security rests on the strongest—where possible, as in TLS.
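
To illustrate the hybrid construction, here is a minimal sketch assuming X25519 as the classical component, with the post-quantum KEM stubbed out by a placeholder secret (a real deployment would use something like ML-KEM). Because both secrets feed a single key-derivation function, an attacker must break both components to recover the session key:

```python
# Minimal sketch of hybrid key derivation, in the spirit of hybrid TLS key
# exchange. Requires the 'cryptography' package; the post-quantum KEM is
# stubbed with random bytes purely for illustration.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical component: an X25519 exchange. Both keys are generated locally
# here for illustration; in practice they live on opposite ends of the wire.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum component: placeholder for a real KEM shared secret (e.g.,
# the encapsulated secret from ML-KEM). Stubbed with random bytes here.
pq_secret = os.urandom(32)

# Combine both secrets in one KDF. Recovering the session key requires
# breaking BOTH the classical and the post-quantum component.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```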

And—one last point—hooray for peer review. A researcher proposed a new result, and reviewers quickly found a fatal flaw in the work. Efforts to repair the flaw are ongoing. We complain about peer review a lot, but here it worked exactly the way it was supposed to.

This essay originally appeared in Communications of the ACM.

Meeting the demands of the modern-day SMB is one of the challenges facing many business leaders and IT operators today. Traditional, office-based infrastructure was fine until businesses needed more capacity than those servers could deliver, vendor support became an issue, or the needs of a hybrid workforce weren’t being met.

Related: SMB brand spoofing

In the highly competitive SMB space, maintaining and investing in a robust and efficient IT infrastructure can be one of the ways to stay ahead of competitors.

Thankfully, with the advent of cloud offerings, a new scalable model has entered the landscape; whether it be 20 or 20,000 users, the cloud will fit all, and with it comes a much simpler, per-user cost model. This facility to integrate modern computing environments into the day-to-day workplace means businesses can stop rushing to catch up, with the invaluable peace of mind that these operations will scale up or down as required. Added to which, the potential cost savings and added value will better serve each business and help to future-proof the organisation, even on a tight budget. Cloud service solutions are also far more flexible than traditional on-premises options and won’t require in-house maintenance.

Cloud-sourced sustainability

When it comes to environmental impact and carbon footprint, data centres are often thought of as a threat, contributing to climate change, but in reality cloud is a great option. The scalability of cloud infrastructure and the economies of scale it leverages deliver not just cost savings but carbon savings too. Rather than the traditional model, where a server runs in-house at 20% capacity, drawing power 24/7/365 and pumping out heat, cloud data centres are specifically designed to serve multiple users more efficiently, utilising white-space cooling, for example, to optimise energy consumption.
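
As a back-of-the-envelope illustration, consider consolidating ten lightly loaded office servers onto two heavily utilised cloud machines. Every figure below is an assumption chosen for the arithmetic, not measured data:

```python
# Back-of-the-envelope comparison (illustrative numbers only) of energy used
# by lightly loaded on-premises servers versus consolidated cloud capacity.
HOURS_PER_YEAR = 24 * 365

# Assumption: 10 businesses each run one 300 W server at ~20% utilisation.
on_prem_servers = 10
server_watts = 300
on_prem_kwh = on_prem_servers * server_watts * HOURS_PER_YEAR / 1000

# Assumption: the same workloads consolidated onto 2 shared servers at high
# utilisation, with a data-centre PUE of 1.2 to account for cooling overhead.
cloud_servers = 2
pue = 1.2
cloud_kwh = cloud_servers * server_watts * HOURS_PER_YEAR * pue / 1000

print(f"on-premises: {on_prem_kwh:,.0f} kWh/year")
print(f"cloud:       {cloud_kwh:,.0f} kWh/year")
print(f"saving:      {1 - cloud_kwh / on_prem_kwh:.0%}")
```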

The bigger players like Microsoft and Amazon are investing heavily in sustainable, on-site energy generation to power their data centres, even planning to feed excess power back into the National Grid. Simply put, it’s more energy efficient for individual businesses to use a cloud offering than to run their own servers – the carbon footprint of each business using a cloud solution becomes much smaller.

Simplified scaling 

With many security solutions now being cloud-based too, security doesn’t need to be compromised and can be managed remotely by SOC teams, either in-house or via the security provider (where resources are greater and specialist expertise runs far deeper).

Ultimately, a cloud services solution, encompassing servers, storage, security and more, will best serve SMBs; it’s scalable, provides economies of scale and relieves in-house IT teams of many mundane yet critical tasks, allowing them to focus on more profitable activities.

About the essayist: Brian Sibley is a Solutions Architect at Espria, with over 40 years of industry experience, more than 25 of them based on Microsoft and associated third-party technologies, reinforced by relevant certifications and training.

Businesses today need protection from increasingly frequent and sophisticated DDoS attacks. Service providers, data center operators, and enterprises delivering critical infrastructure all face risks from attacks.

Related: The care and feeding of DDoS defenses

But to protect their networks, they’ll need to enable accurate attack detection while keeping operations manageable and efficient.

Traditional static baselining methods fall short on both counts. To begin with, they rely on resource-intensive manual processes to define an organization’s “normal” traffic patterns, imposing a burden both on the protected organization and on the security personnel doing the work. The uncertainty and approximation inherent in this approach lead to tradeoffs on exactly where to establish the baseline. Set it too high and you’ll miss smaller attacks. Set it too low and you’ll deal with constant false positives.

Dynamic baselining makes it possible to offer more accurate and efficient DDoS protection and protection-as-a-service. By allowing the system to learn its own baseline traffic patterns, set its own thresholds, and adapt automatically as traffic changes, service providers and large enterprises can simplify operations while ensuring more accurate attack detection.

Limits of static baselining

Under ordinary circumstances, an increase in network traffic can seem like good news. A DDoS attack, on the other hand, is distinctly bad news. By flooding a victim’s network with bogus traffic, an attacker can slow performance or even knock its services offline entirely.

Organizations can help mitigate the threat of a DDoS attack, but first they need to be able to recognize the difference between normal or “peacetime” activity and abnormal, malicious traffic. This can be tricky: thresholds are often set simply to detect large-scale DDoS attacks, with the smaller attacks they miss written off as an acceptable risk.

A security team, seeking a more accurate level of detection, may query the protected organization or application owners on what their normal traffic levels are in order to establish tailored baselines. This seems reasonable, except that many companies don’t have this kind of detail readily available. It also imposes an additional operational burden.

Another approach employed by security teams is to assume the burden of monitoring the traffic for a period of weeks and come up with a proposed baseline. This is likely more effective in terms of accuracy, but it’s far from scalable as a service model for DDoS protection-as-a-service.

Choose your poison

When organizations can’t tailor a DDoS detection threshold to specific needs or specific end subscribers, they have two options. One is to set a level that’s much higher than what normal traffic would realistically reach. You’ll catch large-scale attacks, but you’ll be exposed to any number of smaller attacks that degrade performance for the business and its end users.

Or you can set the threshold lower in order to catch more attacks. Unfortunately, you’ll also get more false positives. In that event, traffic is diverted to a mitigation device, subjecting end users to an unnecessary increase in latency and a degraded user experience. This is particularly noticeable to users and application owners when the mitigation device or facility is in a different geographic location from the servers.

Accurate, efficient protection 

Static baselining imposes too much of an operational burden on organizations — and even then, the resulting attack detection is too inaccurate.

Dynamic baselining alleviates that operational workload while enabling a better understanding of normal and suspicious network activity. The system automatically learns the peacetime baseline for customers, sets thresholds that reflect the observed patterns, and then adapts those thresholds over time as traffic changes. Because it can differentiate between the kinds of increases associated with a dynamic business environment or end-user behavior on one hand, and malicious surges originating from botnets on the other, the system can alert accurately on genuine attacks of all sizes while avoiding both false positives and false negatives.
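
A minimal sketch of the idea, assuming an exponentially weighted moving average with a variance band (one common way to implement dynamic baselining; production systems model traffic seasonality and per-subscriber profiles far more richly):

```python
# Minimal sketch of dynamic baselining: an exponentially weighted moving
# average (EWMA) with a variance band. Alerts fire only on surges well
# outside the learned band, and attack traffic never updates the baseline.

class DynamicBaseline:
    def __init__(self, alpha=0.05, k=4.0, warmup=5):
        self.alpha = alpha    # how quickly the baseline adapts
        self.k = k            # deviations above baseline that mean "attack"
        self.warmup = warmup  # samples to observe before alerting at all
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, rate):
        """Feed one traffic sample (e.g., requests/sec); True means alert."""
        self.n += 1
        if self.mean is None:
            self.mean = rate  # first sample seeds the baseline
            return False
        deviation = rate - self.mean
        threshold = self.mean + self.k * self.var ** 0.5
        if self.n > self.warmup and rate > threshold:
            return True       # attack: don't let it poison the baseline
        # Peacetime: adapt the learned mean and variance.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation**2)
        return False

detector = DynamicBaseline()
for rate in [1000, 1100, 950, 1050, 1020, 980, 1075]:
    detector.update(rate)     # peacetime traffic: learns the normal band
print(detector.update(9000))  # sudden surge: True, flagged as an attack
```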

The efficiency of automated, dynamic baselining allows organizations, whether service providers or digital business enterprises, to deliver better DDoS protection for critical infrastructure.

As organizations tackle the critical need for DDoS protection, the key to success will be a combination of autonomous learning capabilities and operational efficiency. By moving from static baselining to automated, dynamic baselining, you can provide more accurate and responsive protection while easing the workload for strapped security teams.

About the essayist: Ahmed Abdelhalim, Senior Director, Security Solutions, A10 Networks.

For all the discussion around the sophisticated technology, strategies, and tactics hackers use to infiltrate networks, sometimes the simplest attack method can do the most damage.

The recent Unitronics hack, in which attackers took control over a Pennsylvania water authority and other entities, is a good example. In this instance, hackers are suspected to have exploited simple cybersecurity loopholes, including the fact that the software shipped with easy-to-guess default passwords.

Related: France hit by major DDoS attack

The Unitronics hack was particularly effective given the nature of the target. Unitronics software is used by critical infrastructure (CI) organizations throughout the U.S., in industries including energy, manufacturing, and healthcare. Many Unitronics systems are exposed to the Internet, and a single intrusion caused a ripple effect felt across organizations in multiple states.
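
Given that the suspected initial foothold was nothing more exotic than unchanged default credentials, even a small audit script can catch this class of exposure. A minimal sketch, with a hypothetical device inventory and an illustrative list of vendor defaults:

```python
# Minimal sketch of a default-credential audit over a device inventory.
# The inventory records and default-password list are illustrative; a real
# audit would pull both from asset management and vendor advisories.
KNOWN_DEFAULTS = {"1111", "admin", "password", "0000"}

inventory = [  # hypothetical device records
    {"host": "plc-01.example.internal", "password": "1111", "internet_facing": True},
    {"host": "plc-02.example.internal", "password": "S7!rotated-2024", "internet_facing": False},
]

for device in inventory:
    findings = []
    if device["password"] in KNOWN_DEFAULTS:
        findings.append("default password")
    if device["internet_facing"]:
        findings.append("internet-facing")
    if findings:
        print(f"{device['host']}: {', '.join(findings)}")
```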

Attacks like the one on Unitronics are a good reminder for all CI organizations to reassess their cybersecurity policies and procedures to ensure they can repel and mitigate cybersecurity threats. Here are three strategies they should pursue in 2024 to minimize the chance of a Unitronics-style hack.

Attack surface

Building perimeter defense systems and keeping services in-house have traditionally been two of the most common ways to defend IT infrastructure. The problem with this from a security perspective is that there tends to be no segregation between services. All an attacker needs to do is infiltrate one application to have access to the entire network.

Moving services to the cloud segregates applications and significantly reduces the potential blast radius. Years ago there was some skepticism about public cloud service providers’ security policies, but the reality is that most of those services are now highly secure. The largest ones, such as Amazon and Microsoft, have stringent protocols for securing their cloud infrastructures.

Still, CI organizations need to perform the appropriate due diligence before signing any agreements. At a minimum, cloud providers should have the same robust security practices as the organizations themselves. It’s also important to assess the provider’s patching environment and cadence, the processes they use to discover and manage vulnerabilities, whether they have a security operations center, and so forth.

Vetting process

Normally, the vetting process for a technology provider falls strictly under the purview of IT. But as cybersecurity threats evolve, it’s equally important to involve the chief information security officer (CISO) and their team in the due diligence process for any vendor an organization may consider using.

Once again, the Unitronics attack offers a great example of why involving security teams early and often is a good idea. An advisory issued by the Cybersecurity and Infrastructure Security Agency (CISA) noted that attackers achieved their mission “likely by compromising internet-accessible devices with default passwords” included in Unitronics software. An IT team primarily interested in functionality, features, and integration capabilities may overlook such flaws. Security experts, however, are trained to identify these issues and can help ensure that the software avoids known vulnerabilities and follows cybersecurity best practices.

Eventually, more organizations may want to consider appointing their CISOs to head all of IT. An organizational structure in which IT reports directly to the CISO will help ensure that both the technical and security needs of the organization are met, and that security is at the forefront of all technology purchasing decisions.

In the meantime, security teams should be the points of contact for Cybersecurity Maturity Model Certification (CMMC) audits. These audits are performed by third-party assessor organizations and are used to gauge the cybersecurity maturity of organizations that supply technology to the defense industrial base, including CI organizations. The CMMC program includes a progressive framework to ensure vendors meet National Institute of Standards and Technology (NIST) cybersecurity standards. Vendors that meet these standards are less likely to ship products containing vulnerabilities that could infect CI organizations through their supply chains.

Continual testing

While performing rigorous assessments before vendors are onboarded is important, so is performing ongoing internal and external penetration tests to simulate attacks and probe for potential weaknesses. For example, OT systems have become highly connected, making them an obvious target for hackers. Penetration testing can identify vulnerabilities within these systems and allow security teams to find areas where traditional network segmentation techniques aren’t effective, as is often the case against nation-state threats and other highly skilled threat actors.

Once the systems are physically separated, organizations can install data diodes and data guards to ensure the secure transfer of information between networks in ways that prevent threat actors from compromising them. A data diode facilitates a uni-directional stream of information from one device to another, preventing bi-directional data flow. A data guard, meanwhile, ensures that only the intended structured and unstructured data is transferred across these networks.
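
As a conceptual sketch (not modeled on any particular product), a data guard can be thought of as a validator on the receiving side of a one-way link: messages matching a strict, pre-agreed schema are released, and everything else is dropped. The schema and sensor names below are illustrative:

```python
# Conceptual sketch of a data guard on a one-way (diode) link: only messages
# matching a strict, pre-agreed schema are released to the receiving network.
# The schema, field names, and sensor list are illustrative.
import json

ALLOWED_SENSORS = {"flow_rate", "tank_level", "chlorine_ppm"}
REQUIRED_FIELDS = {"sensor", "value", "timestamp"}

def guard(raw: bytes):
    """Validate one message; return sanitized bytes to forward, or None to drop."""
    try:
        msg = json.loads(raw)
    except ValueError:
        return None                      # not well-formed JSON: drop
    if not isinstance(msg, dict) or set(msg) != REQUIRED_FIELDS:
        return None                      # missing or unexpected fields: drop
    if msg["sensor"] not in ALLOWED_SENSORS:
        return None                      # unknown sensor type: drop
    if not isinstance(msg["value"], (int, float)):
        return None                      # non-numeric reading: drop
    return json.dumps(msg).encode()      # re-serialize; never pass raw bytes

print(guard(b'{"sensor": "flow_rate", "value": 42.5, "timestamp": 1700000000}'))
print(guard(b'{"sensor": "flow_rate", "value": "rm -rf /", "timestamp": 0}'))  # dropped
```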

These strategies denote a shift from reactive to proactive cybersecurity and a new way of thinking about cybersecurity defense. Organizations must move from a “trust but verify” mindset to a Zero Trust approach. Organizations that adopt this mindset while embracing the cloud, employing a shared responsibility model, and performing continual testing will take the fight to the attackers and gain a much-needed advantage.

About the essayist: Joseph Bell is Chief Information Security Officer at Everfox.