SAN FRANCISCO — Cloud security is stirring buzz as RSA Conference 2024 ramps up at Moscone Convention Center here.

Related: The fallacy of ‘security-as-a-cost-center’

Companies are scrambling to mitigate unprecedented exposures spinning out of their increasing reliance on cloud-hosted resources. The unfolding disruption of Generative AI, along with rising compliance requirements, adds to the mix.

Thus, cloud-native security tools have risen to the fore. I’ve reported in years past on the introduction of cloud access security brokers (CASBs), cloud workload protection platforms (CWPP), and cloud security posture management (CSPM) tools.

In 2024, it’s all about integrating cloud-native security solutions and improving orchestration.

I had the chance to discuss this with Kevin Kiley, chief revenue officer of Lacework, a Mountain View, Calif.-based supplier of advanced tools for tackling complex cloud security challenges. For a full drill down, please give the accompanying podcast a listen.

Lacework is a cloud security platform that saves teams time and resources by ingesting massive amounts of threat and risk data and monitoring it for anomalous activity. It’s a Cloud-Native Application Protection Platform (CNAPP) that offers code-to-cloud coverage on a single platform, including cloud workload protection, threat detection, code security and compliance monitoring. That gives customers visibility into environments ranging from pre-deployment code to containers, identities and entitlements, and runtime apps, he told me.

For instance, Lacework’s CSPM capabilities enable organizations to continually assess their cloud security posture and identify any vulnerabilities; remediation is automated.

This includes automated checks to assure compliance with PCI DSS, HIPAA, GDPR and CIS benchmarks. Lacework’s platform also integrates with cloud platforms, DevOps tools and legacy security systems.
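To make that concrete, here is a minimal sketch of the kind of automated posture check a CSPM runs continuously, written against the AWS boto3 SDK. It flags S3 buckets that lack a full public-access block, a common CIS-benchmark-style control; it illustrates the pattern only and is not Lacework’s implementation.

# Minimal CSPM-style check (illustrative sketch, not Lacework's code):
# flag S3 buckets that do not fully block public access.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"
            ]
            if not all(conf.values()):  # any of the four block settings left off
                findings.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append(name)  # no public-access block configured at all
            else:
                raise
    return findings

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Finding: bucket allows public access: {name}")

A commercial CSPM runs thousands of such checks across accounts and maps each finding to the relevant compliance framework; the value of automated remediation is that findings like this one can be fixed as soon as they are detected.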

The shift from reactive, on-premises defense to proactive edge-oriented security is picking up steam. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

SAN FRANCISCO — On the eve of what promises to be a news-packed RSA Conference 2024, opening here on Monday, Microsoft is putting its money where its mouth is.

Related: Shedding light on LLM vulnerabilities

More precisely, the software titan is putting money within reach of its senior executives’ mouths.


In a huge development, Microsoft announced today that it is revising its security practices, organizational structure and, most importantly, its executive compensation in an attempt to address major security issues with its flagship products, not to mention quell rising pressure from regulators and customers.

A shout-out to my friend Todd Bishop, co-founder of GeekWire, for staying on top of this development. His breaking-news coverage is as thorough as you’d expect from a Microsoft beat writer with institutional knowledge going back a couple of decades.

Org overhaul

As Todd reports, not only is Microsoft basing a portion of senior executive compensation on progress toward security goals, it also will install deputy chief information security officers (CISOs) in each product group and bring together engineers from across its major platform and product teams in “engineering waves” to overhaul security.

This instantly brought to mind something eerily similar that happened 22 years ago – something both Todd and I wrote about at the time. On January 15, 2002, Bill Gates issued his famous “Trustworthy Computing” (TC) company-wide memo, slamming the brakes on Windows Server 2003 development and temporarily redirecting his top engineers to emphasize security as a top priority.

Gates

This “security stand down” allowed Microsoft to conduct a comprehensive review and overhaul of its software design practices, part of a broad effort to integrate security deeply into its software development process. Given its stature as an 800-lb. gorilla, Microsoft certainly influenced cybersecurity as a whole, arguably setting a course for the application security principles and practices that evolved in the wake of TC.

Pressure redux

But now, once again, Microsoft is feeling enough pressure from its enterprise customers to recalibrate its approach to security. Just as Gates’ memo became a charter to infuse security, privacy, and reliability across all Windows products, Satya Nadella’s Secure Future Initiative (SFI) is aimed at deepening this ethos in an environment now dominated by sophisticated cyber threats, cloud-based data and pervasive AI technologies.

The common denominator is trust—critical then and now. Initially, TC was about setting a security baseline within the fabric of software development during the internet’s formative years. SFI expands this vision, emphasizing intrinsic security in the design, deployment, and operation of Microsoft’s vast array of products and services, focusing notably on the challenges posed by AI and cloud vulnerabilities.

Under Gates, TC catalyzed a transformation within Microsoft that rippled out across the tech industry, prompting a heightened focus on developing software that was secure by design.

TC’s legacy

An argument certainly can be made that TC foreshadowed “shift left” software security development practices and, ultimately, DevSecOps. The core principle is that every phase of software development should be infused with some aspect of security.

Nadella

I’d argue that TC laid the groundwork for continuous security integration, a core component of DevSecOps. This approach ensures that security considerations are not an afterthought but are embedded throughout the development lifecycle. Extending from this foundation, SFI seems well-positioned to push these boundaries further, integrating AI to proactively manage security threats and embedding robust security measures as default settings in new products.
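For readers who have not seen “shift left” in practice, here is a small sketch of the sort of gate a DevSecOps pipeline runs before code ever reaches a build server: a pre-commit check that scans staged files for obvious hard-coded secrets. The patterns and wiring are simplified assumptions for illustration, not any particular vendor’s tooling.

# Sketch of a "shift left" pre-commit gate: scan staged files for obvious
# hard-coded secrets before the commit is accepted. (Illustrative only;
# real pipelines layer SAST, dependency and policy checks on top.)
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                               # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),   # private key material
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def staged_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main():
    failures = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue  # deleted or unreadable path; skip it
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                failures.append((path, pattern.pattern))
    for path, pat in failures:
        print(f"possible secret in {path} (pattern: {pat})")
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()

The point is not the specific patterns; it is that the check runs at the developer’s desk, long before the code ships, which is the essence of continuous security integration.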

While TC reshaped traditional software security, SFI has a chance to help not just Microsoft customers, but the tech sector as a whole. The massive task at hand is to reconcile privacy and security concerns when it comes to securing complex AI algorithms and sprawling cloud networks.

Funny how even as the pace of change accelerates, the core privacy and security concerns remain the same. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


 

At the start, Distributed Denial of Service (DDoS) attacks were often motivated by bragging rights or mischief.

Related: The role of ‘dynamic baselining’

DDoS attack methodology and defensive measures have advanced steadily since then. Today, DDoS campaigns are launched by political activists, state-sponsored operatives and even by business rivals.

Targets can be high-profile web services and critical infrastructure: not just utilities like power and water, but also the telco companies that supply the Internet backbone. Major DDoS attacks have spun out of Russia’s invasion of Ukraine, the Israel-Hamas War and unrest in France.

As RSA Conference 2024 gets underway next week at San Francisco’s Moscone Center, dealing with the security fallout of these disruptive attacks will command a lot of attention.

Ahead of the conference, I had the chance to visit with Ahmed Abdelhalim, senior director of security solutions at A10 Networks. We discussed how defensive tools and strategies have advanced as well, and why it’s more crucial than ever for organizations to make proactive and continuous use of them.

For a full drill down, please give the accompanying podcast a listen.

Notable strides have been made in enhancing detection technologies. A10, for instance, has helped pioneer the development of “dynamic baselining,” a means of adapting detection thresholds in real time, learning from traffic patterns to differentiate between normal fluctuations and potential threats.

“The old static models just don’t cut it anymore,” Abdelhalim observes. “We need systems that learn and adapt as quickly as the attackers do.”

No one expects the frequency of DDoS attacks to decline; companies need to stay vigilant. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Businesses today need protection from increasingly frequent and sophisticated DDoS attacks. Service providers, data center operators, and enterprises delivering critical infrastructure all face risks from attacks.

Related: The care and feeding of DDoS defenses

But to protect their networks, they’ll need to enable accurate attack detection while keeping operations manageable and efficient.

Traditional static baselining methods fall short on both counts. To begin with, they rely on resource-intensive manual processes to define an organization’s “normal” traffic patterns, imposing a burden on both the protected organization and the security team serving it. The uncertainty and approximation inherent in this approach lead to tradeoffs over exactly where to set the baseline. Set it too high and you’ll miss smaller attacks. Set it too low and you’ll deal with constant false positives.

Dynamic baselining makes it possible to offer more accurate and efficient DDoS protection and protection-as-a-service. By allowing the system to learn its own baseline traffic patterns, set its own thresholds, and adapt automatically as traffic changes, service providers and large enterprises can simplify operations while ensuring more accurate attack detection.

Limits of static baselining

Under ordinary circumstances, an increase in network traffic can seem like good news. A DDoS attack, on the other hand, is distinctly bad news. By flooding a victim’s network with bogus traffic, an attacker can slow performance or even knock its services offline entirely.

Organizations can help mitigate the threat of a DDoS attack, but first, they need to be able to recognize the difference between normal or “peacetime” activity and abnormal, malicious traffic. This can be tricky; too often, thresholds are simply set to detect large-scale DDoS attacks while missing smaller ones, with the misses written off as an acceptable risk.

A security team, seeking a more accurate level of detection, may ask the protected organization or application owners what their normal traffic levels are in order to establish tailored baselines. This seems reasonable, except that many companies don’t have this kind of detail readily available. It also imposes an additional operational burden.

Another approach employed by security teams is to assume the burden of monitoring the traffic for a period of weeks and come up with a proposed baseline. This is likely more effective in terms of accuracy, but it’s far from scalable as a service model for DDoS protection-as-a-service.

Choose your poison

When organizations can’t tailor a DDoS detection threshold to specific needs or specific end subscribers, they have two options. One is to set a level that’s much higher than what normal traffic would realistically reach. You’ll catch large-scale attacks, but the organization remains exposed to any number of smaller attacks that degrade performance for the business and its end users.

The other is to set the threshold lower in order to catch more attacks. Unfortunately, that also produces more false positives. In that event, traffic is needlessly diverted to a mitigation device, subjecting end users to added latency and a degraded experience. This is particularly noticeable to users and application owners when the mitigation device or facility sits in a different geographic location than the servers.

Accurate, efficient protection 

Static baselining imposes too much of an operational burden on organizations — and even then, the resulting attack detection is too inaccurate.

Abdelhalim

Dynamic baselining alleviates that operational workload while enabling a better understanding of normal and suspicious network activity. The system automatically learns the peacetime baseline for each customer, sets thresholds that reflect the observed patterns, and then adapts those thresholds over time as traffic changes. Because it can differentiate between increases driven by the dynamic business environment or end-user behavior and malicious surges originating from botnets, the system can alert accurately on genuine attacks of all sizes while avoiding the disruption of false positives and the blind spots of false negatives.
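A rough sketch of the idea looks like this: learn a rolling “peacetime” baseline per customer, derive the alert threshold from the observed mean and variability, and keep updating as traffic drifts. This is a toy illustration of dynamic baselining in general, not A10’s algorithm; the window size and tolerance values are arbitrary assumptions.

# Toy sketch of dynamic baselining: learn a rolling "peacetime" baseline per
# customer and adapt the alert threshold as traffic patterns change.
# (Illustrative only; not A10's implementation.)
from collections import deque
from statistics import mean, pstdev

class DynamicBaseline:
    def __init__(self, window=288, tolerance=4.0):
        # window: recent samples kept (e.g., 288 five-minute samples = 24 hours)
        # tolerance: standard deviations above the mean that count as anomalous
        self.samples = deque(maxlen=window)
        self.tolerance = tolerance

    def threshold(self):
        if len(self.samples) < 12:      # still learning; no threshold yet
            return None
        mu, sigma = mean(self.samples), pstdev(self.samples)
        return mu + self.tolerance * max(sigma, 0.05 * mu)

    def observe(self, rate):
        """Return True if this traffic sample looks like an attack."""
        limit = self.threshold()
        is_attack = limit is not None and rate > limit
        if not is_attack:
            self.samples.append(rate)   # only peacetime samples update the baseline
        return is_attack

# Usage: feed per-customer traffic rates (Mbps) as they arrive.
baseline = DynamicBaseline()
for rate in [120, 130, 125, 140, 135, 128, 132, 138, 127, 131, 129, 133, 900]:
    if baseline.observe(rate):
        print(f"alert: {rate} Mbps exceeds learned threshold of {baseline.threshold():.0f} Mbps")

Production systems add seasonality, per-protocol baselines and rate-of-change signals on top of this, but the core loop of learn, set and adapt is the same.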

The efficiency of automated, dynamic baselining allows organizations, whether service providers or digital enterprises, to deliver better DDoS protection for critical infrastructure.

As organizations tackle the critical need for DDoS protection, the key to success will be a combination of autonomous learning capabilities and operational efficiency. By moving from static baselining to automated, dynamic baselining, you can provide more accurate and responsive protection while easing the workload for strapped security teams.

About the essayist: Ahmed Abdelhalim is Senior Director of Security Solutions at A10 Networks.

 

It took the World Wide Web some five years to reach 100 million users, and it took Facebook about four and a half years to get there.

Related: LLM risk mitigation strategies

Then along came GenAI and Large Language Models (LLMs), and ChatGPT reached 100 million users in just two months.

LLMs are a game changer in the same vein as the Gutenberg press and the Edison light bulb: they give any literate human the ability to extract value from data.

Companies in all sectors are in a mad scramble to reap its benefits, even as cyber criminals feast on a new tier of exposures. As RSAC 2024 gets under way next week in San Francisco, the encouraging news is that the cybersecurity industry is racing to protect business networks, as well.

Case in point, the open-source community has coalesced to produce the OWASP Top Ten for Large Language Model Applications. Amazingly, just a little over a year ago this was a mere notion dreamt up by Exabeam CPO Steve Wilson.

“I spent some time on a weekend drawing up a scratch version of a Top Ten list, partly by having a discussion with ChatGPT about it,” Wilson told me. “The first thing I asked was, ‘Do you know what an OWASP Top Ten list is?’ And it said, ‘Yes.’  And I said, ‘Build me one for LLM.’  It did, but it wasn’t very good . . . I then spent a lot of time feeding it data about things and coaching it and cajoling it and having a discussion.”

By the end of an afternoon of prompting, Wilson had a list he thought was “pretty interesting,” which he socialized in his professional communities. That was a little over a year ago. What happened next is unprecedented. For a full drill down, please give the accompanying podcast a listen.

The pace of change is accelerating. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

At the close of 2019, API security was a concern, though not necessarily a top priority for many CISOs.

Related: GenAI ignites 100x innovation

Then Covid-19 hit, and API growth skyrocketed, a trajectory that only steepened when Generative AI (GenAI) and Large Language Models (LLMs) burst onto the scene.

As RSA Conference 2024 gets underway next week at San Francisco’s Moscone Center, dealing with the privacy and security fallout of those back-to-back disruptive developments will command a lot of attention.

Ahead of the conference, I had the chance to visit with Sanjay Nagaraj, CTO and co-founder of Traceable.ai, a supplier of advanced API security systems.

We discussed how enterprises in 2019 were deep into making the transition from on-premises networks to cloud-centric, edge-oriented operations when the global pandemic hit. Instantly, API connections skyrocketed to support connected services for a quarantined world. Then machine learning made a giant leap forward as GenAI and LLMs made AI capabilities directly accessible to every man, woman and child.

At this moment, companies are in a mad scramble to innovate cool new user experiences, and thus drive up revenue, Nagaraj observes. Of course, cybercriminals are in intensive innovation mode, as well.

It has become table stakes for companies to discover all of their APIs; now it is imperative not just to discover them, but also to understand them and categorize them according to risk level, Nagaraj argues. For a full drill down, please give the accompanying podcast a listen.
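To illustrate what “understand and categorize” can mean in practice, here is a small sketch that walks a discovered API inventory and assigns a coarse risk tier based on exposure, authentication and data sensitivity. The fields and scoring rules are invented for the example; they are not Traceable’s model.

# Sketch: assign a coarse risk tier to discovered API endpoints based on
# exposure, authentication and data sensitivity. (Illustrative scoring only;
# not Traceable's model.)
SENSITIVE_HINTS = ("ssn", "card", "password", "token", "dob", "email")

def risk_tier(endpoint):
    score = 0
    if endpoint.get("internet_facing"):
        score += 2                       # reachable from the public internet
    if not endpoint.get("auth_required", True):
        score += 3                       # unauthenticated endpoints are the biggest red flag
    if any(hint in field.lower()
           for field in endpoint.get("fields", [])
           for hint in SENSITIVE_HINTS):
        score += 2                       # appears to handle sensitive data
    if endpoint.get("method", "GET") in ("POST", "PUT", "DELETE"):
        score += 1                       # state-changing operation
    return "high" if score >= 5 else "medium" if score >= 3 else "low"

inventory = [
    {"path": "/v1/users/{id}", "method": "GET", "internet_facing": True,
     "auth_required": True, "fields": ["email", "name"]},
    {"path": "/v1/payments", "method": "POST", "internet_facing": True,
     "auth_required": False, "fields": ["card_number", "amount"]},
]

for endpoint in inventory:
    print(f"{endpoint['method']:6} {endpoint['path']:20} -> {risk_tier(endpoint)} risk")

Even a crude tiering like this lets a security team decide where to spend testing and monitoring effort first.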

APIs are the synaptic connections of our hyper-interconnected existence. Securing them has become paramount. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Tel Aviv, Israel – April 30, 2024 – Cybersixgill, the global cyber threat intelligence data provider, broke new ground today by introducing its Third-Party Intelligence module.

The new module delivers vendor-specific cybersecurity and threat intelligence to organizations’ security teams, enabling them to continuously monitor and detect risks to their environment arising from third-party suppliers and take preemptive action before an attack executes.

The Third-Party Intelligence module combines vendor-specific cyber threat intelligence (CTI) with cybersecurity posture data from suppliers’ tech environments, exposing a critical blind spot for security teams. With this intelligence, threat analysts and security operations teams can identify threats from the supply chain and expand their threat exposure management efforts.

Research shows that in 2023, there were 245,000 software supply chain attacks, costing organizations $46 billion. That amount will likely rise to $60 billion in 2025. Additionally, nearly two-thirds (61%) of U.S. businesses were directly impacted by a software supply chain attack in the 12-month period ending in April 2023, while 66% of companies say they do not trust their third parties to notify them of a major breach.

“Cybersixgill’s new Third-Party Intelligence is a significant advancement in delivering actionable threat intelligence insights to security teams and CISOs to help them strengthen and protect the organization’s risk posture,” said Chris Steffen, Vice President of Research, Security, and Risk Management for Enterprise Management Associates (EMA). “Threat intelligence that shines a broad, bright light on threats from within a company’s third-party network has been a glaring missing piece in organizations’ cybersecurity programs. I applaud their efforts to bring this much-needed solution to market.”

“Security teams can take every precaution to protect their organization’s environment. But if they lack intelligence about the risks facing their third-party supply chain and the impact on their security posture, the consequences can be costly to the company’s brand and bottom line,” said Gabi Reish, Chief Product Officer for Cybersixgill. “With the rising cost of supply chain attacks, our new Third-Party Intelligence module gives security operations and threat analysts critical insights to protect their organization and its network of suppliers and partners.”

For more information, including a video walk-through of Cybersixgill’s new Third-Party Intelligence, visit https://cybersixgill.com/products/cyber-threat-intelligence/third-party-intelligence.

About Cybersixgill: Cybersixgill continuously collects and exposes the earliest indications of risk by threat actors moments after they surface on the clear, deep, and dark web. The company’s vast intelligence data lake, derived from millions of underground sources, is processed, correlated, and enriched using automation and advanced AI. Cybersixgill captures, processes, and alerts teams to emerging threats, TTPs, IOCs, and their exposure to risk based on each organization’s complete attack surface and internal context. Its expert intelligence and insights, available through a range of seamlessly integrated options, enable customers to pre-empt threats before they materialize into attacks. The company serves and partners with global enterprises, financial institutions, MSSPs, and government and law enforcement agencies. For more information, visit https://www.cybersixgill.com/ and follow us on Twitter and LinkedIn. To schedule a demo, please visit https://cybersixgill.com/book-a-demo.  

For all the discussion around the sophisticated technology, strategies, and tactics hackers use to infiltrate networks, sometimes the simplest attack method can do the most damage.

The recent Unitronics hack, in which attackers took control of systems at a Pennsylvania water authority and other entities, is a good example. In this instance, hackers are suspected to have exploited simple cybersecurity loopholes, including the fact that the software shipped with easy-to-guess default passwords.

Related: France hit by major DDoS attack

The Unitronics hack was particularly effective given the nature of the target. Unitronics software is used by critical infrastructure (CI) organizations throughout the U.S. in different industries, including energy, manufacturing, and healthcare. Unitronics systems are exposed to the Internet and a single intrusion caused a ripple effect felt across organizations in multiple states.

Attacks like the one on Unitronics are a good reminder for all CI organizations to reassess their cybersecurity policies and procedures to ensure they can repel and mitigate cybersecurity threats. Here are three strategies they should pursue in 2024 to minimize the chance of a Unitronics-style hack.

Attack surface

Building perimeter defense systems and keeping services in-house have traditionally been two of the most common ways to defend IT infrastructure. The problem with this from a security perspective is that there tends to be no segregation between services. All an attacker needs to do is infiltrate one application to have access to the entire network.

Moving services to the cloud segregates applications and significantly reduces the potential blast radius. Years ago there was some skepticism about public cloud service providers’ security policies, but the reality is that most of those services are now highly secure. The largest ones, such as Amazon and Microsoft, have stringent protocols for securing their cloud infrastructures.

Still, CI organizations need to perform the appropriate due diligence before signing any agreements. At a minimum, cloud providers should have the same robust security practices as the organizations themselves. It’s also important to assess the provider’s patching environment and cadence, the processes they use to discover and manage vulnerabilities, whether they have a security operations center, and so forth.

Vetting process

Normally, the vetting process for a technology provider falls strictly under the purview of IT. But as cybersecurity threats evolve, it’s equally important to involve the chief information security officer (CISO) and their team in the due diligence process for any vendor an organization may consider using.

Once again, the Unitronics attack offers a great example of why involving security teams early and often is a good idea. An advisory issued by the Cybersecurity and Infrastructure Security Agency (CISA) noted that attackers achieved their mission “likely by compromising internet-accessible devices with default passwords” included in Unitronics software. An IT team primarily interested in functionality, features and integration capabilities may overlook such flaws. Security experts, however, are trained to spot these issues and can verify that the software follows cybersecurity best practices before it goes into production.
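Security teams can also automate part of that vetting. Below is a sketch of the kind of pre-deployment check that would have caught the weakness CISA described: testing reachable devices for vendor default credentials before they are exposed. The device list, credential pairs and try_login() helper are hypothetical placeholders, not a real product’s interface.

# Sketch: audit internet-facing devices for vendor default credentials before
# deployment. The device list, credential pairs and try_login() helper are
# hypothetical placeholders; substitute your own inventory and protocol client.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "1234"),
    ("user", "password"),    # extend with vendor-specific default lists
]

def try_login(host, username, password):
    """Placeholder: attempt a login over the device's management interface.

    A real audit would use the vendor's HTTP, SSH or PLC protocol client;
    here it always fails so the sketch runs standalone."""
    return False

def audit(devices):
    findings = []
    for host in devices:
        for username, password in DEFAULT_CREDENTIALS:
            if try_login(host, username, password):
                findings.append((host, username))
                break                    # one accepted default is enough to flag the device
    return findings

if __name__ == "__main__":
    for host, username in audit(["10.0.0.15", "10.0.0.22"]):
        print(f"{host}: accepts default credentials for user '{username}'; change before exposure")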

Eventually, more organizations may want to consider appointing their CISOs to head all of IT. Having a shared organizational structure in which IT reports directly to the CISO will help make certain that both the technical and security needs of the organizations are met, and that security is at the forefront of all technology purchasing decisions.

In the meantime, security teams should be the points of contact for Cybersecurity Maturity Model Certification (CMMC) audits. These audits are performed by third-party assessor organizations and are used to gauge the cybersecurity maturity of organizations that supply technology to the defense industrial base, including CI organizations. The CMMC program includes a progressive framework to ensure vendors meet National Institute of Standards and Technology (NIST) cybersecurity standards. Vendors that meet these standards are less likely to ship products containing vulnerabilities that could reach CI organizations through their supply chains.

Continual testing

While performing rigorous assessments before vendors are onboarded is important, so is performing ongoing internal and external penetration tests to simulate attacks and test for potential weaknesses. For example, OT systems have become highly connected, making them an obvious target for hackers. Penetration testing can identify vulnerabilities within these systems and allow security teams to find areas where traditional network segmentation techniques aren’t effective. This is often the case with nation-state threats and other highly skilled threat actors.

Once IT and OT networks are physically separated, organizations can install data diodes and data guards to ensure the secure transfer of information between networks in ways that prevent threat actors from compromising them. A data diode facilitates a uni-directional stream of information from one device to another, preventing bi-directional data flow. A data guard, meanwhile, ensures that only the intended structured and unstructured data is transferred across these networks.
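As a simplified illustration of the data-guard role, the sketch below validates records against an expected structure before they are released across the one-way link; anything that does not conform is dropped and logged. The schema and sample messages are invented for the example.

# Sketch of a data-guard check: only well-formed, expected records are allowed
# to cross the one-way link; everything else is dropped and logged.
# (The schema and messages below are invented for illustration.)
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-guard")

# Expected structure of a sensor reading leaving the OT network.
SCHEMA = {
    "sensor_id": str,
    "timestamp": str,
    "reading_ppm": (int, float),
}

def conforms(record):
    if set(record) != set(SCHEMA):       # no missing or unexpected fields
        return False
    return all(isinstance(record[key], expected) for key, expected in SCHEMA.items())

def guard(raw_lines, send):
    """Validate each incoming message and forward only conforming records."""
    for line in raw_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            log.warning("dropped malformed message")
            continue
        if conforms(record):
            send(record)                  # hand off to the diode's transmit side
        else:
            log.warning("dropped record with unexpected fields: %s", sorted(record))

# Usage with an in-memory stand-in for the transmit side:
guard(
    ['{"sensor_id": "naoh-01", "timestamp": "2024-05-01T12:00:00Z", "reading_ppm": 100}',
     '{"sensor_id": "naoh-01", "cmd": "set", "reading_ppm": 11100}'],
    send=lambda record: print("forwarded:", record),
)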

These strategies denote a shift from reactive to proactive cybersecurity and a new way of thinking about cybersecurity defense. Organizations must move from a “trust but verify” mindset to a Zero Trust approach. Organizations that adopt this mindset while embracing the cloud, employing a shared responsibility model, and performing continual testing will take the fight to the attackers and gain a much-needed advantage.

About the essayist: Joseph Bell is Chief Information Security Officer at Everfox.

At the end of 2000, I was hired by USA Today to cover Microsoft, which at the time was being prosecuted by the U.S. Department of Justice.

Related: Why proxies aren’t enough

Microsoft had used illegal monopolistic practices to crush Netscape Navigator, thereby elevating Internet Explorer (IE) to far and away the No. 1 web browser.

IE’s reign proved to be fleeting. Today, Google’s Chrome browser, built on the open-source Chromium code, reigns supreme.

I bring all this up because in 2019 Microsoft ditched its clunky browser source code and rebuilt its Edge browser on open-source Chromium. And this opened the door to a great leap forward in web browser security: enterprise browsers.

As RSAC 2024 gets ready to open next week, the practicality of embedding advanced security tools in company-sanctioned web browsers is in the spotlight. I had a wide-ranging discussion about this with Uy Huynh, vice president of solutions engineering at Island, a leading supplier of enterprise browsers. For a full drill down, please give the accompanying podcast a listen.

As an open-source project, Chromium promotes web standards compliance, ensuring that web developers can create content that works consistently across different browsers. Island has seized the opportunity to build browser-level security features that enable companies to reduce their reliance on VDI environments and shrink their SaaS authentication sprawl, Huynh told me.

Enterprise browsers could emerge as a key component of the evolving network security platforms that will carry us forward. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Critical infrastructure like electrical, emergency, water, transportation and security systems are vital for public safety but can be taken out with a single cyberattack. How can cybersecurity professionals protect their cities?

In 2021, a lone hacker infiltrated a water treatment plant in Oldsmar, Florida. One of the plant operators noticed abnormal activity but assumed it was one of the technicians remotely troubleshooting an issue.

Only a few hours later, the employee watched as the hacker remotely accessed the supervisory control and data acquisition (SCADA) system to raise the amount of sodium hydroxide to 11,100 parts per million, up from 100 parts per million. Such an increase would make the drinking water caustic.

The plant operator hurriedly took control of the SCADA system and reversed the change. In a later statement, the utility said that redundancies and alarms would have caught the change regardless. Still, the fact that the intrusion was possible in the first place highlights a severe issue with smart cities.

The hacker was able to infiltrate the water treatment plant because its computers were running on an outdated operating system, shared the same password for remote access and were connected to the internet without a firewall.

Deadly exposure

Securing critical infrastructure is crucial for the safety and comfort of citizens. Cyberattacks on smart cities aren’t just inconvenient — they can be deadly. They can result in:

•Injuries and fatalities. When critical infrastructure fails, people can get hurt. The Oldsmar water treatment plant hacking is an excellent example: a city of 15,000 people could have drunk caustic water without realizing it. Malicious tampering can cause crashes, contamination and casualties.

Amos

•Service interruption. Unexpected downtime can be deadly when it happens to critical infrastructure. Smart security and emergency alert systems ranked No. 1 for attack impact because the entire city relies on them for awareness of impending threats like tornadoes, wildfires and flash floods.

•Data theft. Hackers can steal a wealth of personally identifiable information (PII) from smart city critical infrastructure to sell or trade on the dark web. While this action doesn’t impact the city directly, it can harm citizens. Stolen identities, bank fraud and account takeover are common outcomes.

•Irreversible damage. Hackers can irreversibly damage critical infrastructure. For example, ransomware could permanently encrypt Internet of Things (IoT) traffic lights, making them unusable. Proactive action is essential since experts predict this type of cyberattack will occur every two seconds by 2031.

Security level of smart cities

While no standard exists to objectively rank smart cities’ infrastructure since their adoption pace and scale vary drastically, experts recognize most of their efforts are lacking. Their systems are interconnected, complex and expansive — making them highly vulnerable.

Despite the abundance of guidance, best practices and expert advice available, many smart cities make the mistake the Oldsmar water treatment plant did. They neglect updates, vulnerabilities and security weaknesses for convenience and budgetary reasons.

Minor changes can have a massive impact on smart cities’ cybersecurity posture. Here are a few essential components of securing critical infrastructure:

•Data cleaning and anonymization. Cleaning and anonymization make smart cities less likely targets — de-identified details aren’t as valuable. These techniques verify that information is accurate and genuine, lowering the chances of data-based attacks. Also, pseudonymization can protect citizens’ PII.

•Network segmentation. Network segmentation confines attackers to a single space, preventing them from moving laterally through a network. It minimizes the damage they do and can even deter them from attempting future attacks.

•Zero-trust architecture. The concept of zero-trust architecture revolves around the principle of least privilege and authentication measures. It’s popular because it’s effective. Over eight in 10 organizations say implementing it is a top or high priority. Limiting access decreases attack risk.

•Routine risk assessments. Smart cities should conduct routine risk assessments to identify likely threats to their critical infrastructure. When they understand what they’re up against, they can handcraft robust detection and incident response practices.

•Real-time system monitoring. The Oldsmar water treatment plant hack is a good example of why real-time monitoring is effective, since the operator immediately detected and reversed the attacker’s changes. Smart cities should implement these systems to protect themselves; a simple version of such a check is sketched below.
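As a simplified illustration of that last point, the sketch below checks each incoming setpoint change against a safe operating band and raises an alert on an out-of-range command, the sort of jump seen at Oldsmar. The limits and the alert hook are assumptions made up for the example.

# Sketch of a real-time setpoint monitor: flag SCADA commands that push a
# process value outside its safe operating band. The limits and alert hook are
# illustrative; a real deployment would pull both from the PLC or historian config.
SAFE_LIMITS_PPM = {"sodium_hydroxide": (50, 200)}      # safe dosing band, parts per million

def check_setpoint(parameter, new_value, alert):
    low, high = SAFE_LIMITS_PPM[parameter]
    if not (low <= new_value <= high):
        alert(f"{parameter} setpoint {new_value} ppm is outside safe band {low}-{high} ppm")
        return False                                   # hold the change for operator confirmation
    return True

# Usage: the Oldsmar-style jump from 100 ppm to 11,100 ppm trips the alert instantly.
check_setpoint("sodium_hydroxide", 100, alert=print)      # accepted
check_setpoint("sodium_hydroxide", 11_100, alert=print)   # rejected, alert raised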

Although smart city cyberattacks don’t make the news daily, they’re becoming more frequent. Proactive effort is essential to prevent them from growing worse. Public officials must collaborate with cybersecurity leaders to find permanent, reliable solutions.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.