For the past 25 years, I’ve watched the digital world evolve from the early days of the Internet to the behemoth it is today.

Related: Self-healing devices on the horizon

What started as a decentralized, open platform for innovation has slowly but surely been carved up, controlled, and monetized by a handful of tech giants.

Now, a new wave of technological development—edge computing, decentralized identity, and privacy-first networking—is promising to reverse that trend. Companies like Munich-based semiconductor manufacturer Infineon Technologies are embedding intelligence directly into sensors and controllers, giving devices the ability to process data locally instead of shipping everything off to centralized cloud servers.

Meanwhile, privacy-focused projects like Session and Veilid are pushing for decentralized communication networks that don’t rely on Big Tech.

On the surface, this all sounds like a step in the right direction. But I can’t help but ask: Does any of this actually change the power dynamics of the digital world? Or will decentralization, like so many tech revolutions before it, just get absorbed into the existing system?

Disrupting business as usual

The move toward decentralized control at the edge is more than just hype. Companies like Infineon are developing zonal computing architectures in modern vehicles, where instead of having a single central control unit, intelligence is distributed throughout the car. This makes the system more responsive, more efficient, and less dependent on a cloud connection.

In smart cities, factories, and even consumer devices, similar trends are taking shape. Edge AI chips, secure microcontrollers, and embedded processors are allowing real-time decision-making without needing to send every bit of data to a distant data center.

Less data movement means fewer security risks, lower latency, and—potentially—less corporate control over user data.

But here’s the catch: technology alone doesn’t change who profits. The entire economic foundation of Big Tech is built on centralization, data extraction, and monetization. And unless that changes, decentralized infrastructure will just be a more sophisticated way for companies to keep controlling users.

We’ve seen this play out before. Apple, for instance, touts privacy as a key feature—offering on-device encryption, Secure Enclave, and privacy-first AI processing. Yet Apple’s actual business model still locks users into its ecosystem and rakes in billions through services, cloud storage, and app store commissions.

The same thing could happen with decentralization—Big Tech could give us just enough edge computing to improve efficiency while still keeping all the real control.

Needed change

For decentralization to actually shift power back to users, we need more than just technical advancements. We need a fundamental shift in the way digital businesses make money.

Right now, most of Big Tech runs on:

•Data extraction (Google, Meta, OpenAI) – AI models are hungry for data, and companies will keep finding ways to feed them, whether through search history, chat inputs, or enterprise contracts.

•Subscription lock-in (Microsoft, Adobe, Amazon AWS) – Even as infrastructure becomes more decentralized, companies still design services that tether users to their ecosystem through proprietary features and recurring fees.

•Cloud dependency (IoT, Smart Devices, Enterprise AI) – Even if devices get smarter at the edge, they’re still linked back to centralized platforms that dictate the rules.

So how do we break that cycle?

Reversing the pendulum

There are a handful of efforts trying to disrupt the status quo. Some of the more promising ones include:

•Decentralized identity (DID) – Projects like DXC Technology’s decentralized identity initiatives allow users to control their own authentication credentials, instead of relying on Google, Apple, or Microsoft to log into everything.

•Privacy-first communication – Apps like Session (a decentralized, onion-routed messaging service) and Secure Scuttlebutt (a peer-to-peer social network) are proving that people don’t need to rely on Big Tech to communicate securely.

•Distributed storage and compute – Technologies like IPFS (InterPlanetary File System) and Urbit are moving away from cloud-based storage in favor of fully decentralized data ownership.

But there’s a problem: most people still opt for convenience over privacy. That’s why Facebook survived the Cambridge Analytica privacy debacle. That’s why people still use Gmail despite deep-rooted privacy concerns. That’s why Amazon’s smart home ecosystem remains dominant, even though it’s clear that users are giving up control to a monetization-obsessed corporation.

Role, limits of regulation

Regulators—particularly in Europe—are trying to push back.

The Digital Markets Act (DMA) and GDPR enforcement actions have forced some minor course corrections, and OpenAI, Google, and Meta have all faced scrutiny for how they handle personal data.

But is it enough? History suggests that Big Tech would rather pay fines than change its core business model. In the U.S., regulators have been even more reluctant to intervene, allowing tech companies to grow unchecked under the guise of “innovation.”

So while regulatory efforts help, they’re not the real solution. The real change will only happen if decentralized business models become financially competitive with centralized ones.

The wildcard may yet prove to be hardware-driven decentralization. One of the biggest reasons Big Tech has been able to maintain its grip is the cloud-based nature of digital services. But edge computing advancements could change that—not because of privacy concerns, but because they make devices cheaper, faster, and more resilient.

Infineon’s work on zonal computing in vehicles, for example, isn’t driven by ideology—it’s a practical, cost-saving innovation that also happens to decentralize control. If similar trends take hold in smart factories, industrial automation, and consumer electronics, companies may start decentralizing for efficiency reasons rather than because of user demand.

That could be the key. If decentralization delivers real cost, speed, and security benefits, businesses might start shifting in that direction—even if reluctantly.

Course change is possible

Where does this leave us? We’re at a turning point. The technology for decentralization is here, but the business models haven’t caught up. If companies continue monetizing user control the way they always have, then decentralization will just be a buzzword—absorbed into the existing system without shifting power in any meaningful way.

For real change, we need:

•Economic incentives that make privacy-preserving, user-controlled services profitable.

•Hardware-driven decentralization that forces change from the bottom up.

•Regulatory frameworks that go beyond fines and actually reshape the competitive landscape.

•Consumer awareness that demands real control, not just convenience.

The next few years will decide whether decentralization actually shifts power to users or just becomes another selling point for Big Tech.

The technical advancements in IoT infrastructure—decentralized control, edge computing, and embedded intelligence—are promising steps toward reducing reliance on centralized data processing and improving privacy, efficiency, and system resilience.

But without a corresponding shift in business models, these innovations could still end up reinforcing the same exploitative data practices we’ve seen in cloud computing and social media.

For decentralization to truly matter, companies need to rethink how they monetize technology. The entrenched tech giants will have to be forced to change; it’s going to require pressure from consumers and regulators – and competition from innovators with a different mindset.

Companies like Infineon are providing the technical foundation that could enable a different model—if startups, policymakers, and forward-thinking enterprises push in that direction.

So the key question is: Will the next wave of tech entrepreneurs build on this decentralized foundation, or will Big Tech co-opt it into another walled garden? Right now, it could go either way.

I’ll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

The post My Take: Will decentralizing connected systems redistribute wealth or reinforce Big Tech’s grip? first appeared on The Last Watchdog.

Critical infrastructure like electrical, emergency, water, transportation and security systems are vital for public safety but can be taken out with a single cyberattack. How can cybersecurity professionals protect their cities?

In 2021, a lone hacker infiltrated a water treatment plant in Oldsmar, Florida. One of the plant operators noticed abnormal activity but assumed it was one of the technicians remotely troubleshooting an issue.

Only a few hours later, the employee watched as the hacker remotely accessed the supervisory control and data acquisition (SCADA) system to raise the amount of sodium hydroxide to 11,100 parts per million, up from 100 parts per million. Such an increase would make the drinking water caustic.

The plant operator hurriedly took control of the SCADA system and reversed the change. In a later statement, the utility revealed that redundancies and alarms would have alerted it regardless. Still, the fact that it was able to happen in the first place highlights a severe issue with smart cities.

The hacker was able to infiltrate the water treatment plant because its computers were running on an outdated operating system, shared the same password for remote access and were connected to the internet without a firewall.

Deadly exposure

Securing critical infrastructure is crucial for the safety and comfort of citizens. Cyberattacks on smart cities aren’t just inconvenient — they can be deadly. They can result in:

•Injuries and fatalities. When critical infrastructure fails, people can get hurt. The Oldsmar water treatment plant hacking is an excellent example of this fact, as a city of 15,000 people would have drunk caustic water without realizing it. Malicious tampering can cause crashes, contamination and casualties.

•Service interruption. Unexpected downtime can be deadly when it happens to critical infrastructure. Smart security and emergency alert systems ranked No. 1 for attack impact because the entire city relies on them for awareness of impending threats like tornadoes, wildfires and flash floods.

•Data theft. Hackers can steal a wealth of personally identifiable information (PII) from smart city critical infrastructure to sell or trade on the dark web. While this action doesn’t impact the city directly, it can harm citizens. Stolen identities, bank fraud and account takeover are common outcomes.

•Irreversible damage. Hackers can irreversibly damage critical infrastructure. For example, ransomware could permanently encrypt Internet of Things (IoT) traffic lights, making them unusable. Proactive action is essential since experts predict this type of cyberattack will occur every two seconds by 2031.

Security level of smart cities

While no standard exists to objectively rank smart cities’ infrastructure since their adoption pace and scale vary drastically, experts recognize most of their efforts are lacking. Their systems are interconnected, complex and expansive — making them highly vulnerable.

Despite the abundance of guidance, best practices and expert advice available, many smart cities make the mistake the Oldsmar water treatment plant did. They neglect updates, vulnerabilities and security weaknesses for convenience and budgetary reasons.

Minor changes can have a massive impact on smart cities’ cybersecurity posture. Here are a few essential components of securing critical infrastructure:

•Data cleaning and anonymization. Cleaning and anonymization make smart cities less likely targets — de-identified details aren’t as valuable. These techniques verify that information is accurate and genuine, lowering the chances of data-based attacks. Also, pseudonymization can protect citizens’ PII.

•Network segmentation. Network segmentation confines attackers to a single space, preventing them from moving laterally through a network. It minimizes the damage they do and can even deter them from attempting future attacks.

•Zero-trust architecture. The concept of zero-trust architecture revolves around the principle of least privilege and authentication measures. It’s popular because it’s effective. Over eight in 10 organizations say implementing it is a top or high priority. Limiting access decreases attack risk.

•Routine risk assessments. Smart cities should conduct routine risk assessments to identify likely threats to their critical infrastructure. When they understand what they’re up against, they can handcraft robust detection and incident response practices.

•Real-time system monitoring. The Oldsmar water treatment plant’s hacking is a good example of why real-time monitoring is effective since the operator immediately detected and reversed the attacker’s changes. Smart cities should implement these systems to protect themselves.
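The pseudonymization step mentioned above can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not production code: the record fields and the in-code key are hypothetical, and a real deployment would keep the key in a secrets vault and rotate it.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice, store and rotate it in a secrets vault.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(pii: str, key: bytes = SECRET_KEY) -> str:
    """Replace a PII value with a keyed, non-reversible token.

    Using HMAC instead of a bare hash means an attacker without the key
    cannot rebuild the mapping by hashing guessed names.
    """
    return hmac.new(key, pii.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# A hypothetical smart-meter record: the name is tokenized, usage data kept.
record = {"name": "Jane Doe", "meter_id": "WTR-0042", "usage_gal": 312}
safe_record = {**record, "name": pseudonymize(record["name"])}
```

The same input always maps to the same token, so analysts can still join records across systems, while the token itself reveals nothing about the citizen behind it.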

Although smart city cyberattacks don’t make the news daily, they’re becoming more frequent. Proactive effort is essential to prevent them from growing worse. Public officials must collaborate with cybersecurity leaders to find permanent, reliable solutions.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

CISOs can sometimes be their own worst enemy, especially when it comes to communicating with the board of directors.

Related: The ‘cyber’ case for D&O insurance

Vanessa Pegueros knows this all too well. She serves on the board of several technology companies and also happens to be steeped in cyber risk governance.

I recently attended an IoActive-sponsored event in Seattle at which Pegueros gave a presentation titled “Merging Cybersecurity, the Board & Executive Team.”

Pegueros shed light on the land mines that enshroud cybersecurity presentations made at the board level. She noted that most board members are non-technical, especially when it comes to the intricate nuances of cybersecurity, and that their decision-making is primarily driven by concerns about revenue and costs.

Thus, presenting a sky-is-falling scenario to justify a fatter security budget, “does not resonate at the board level,” she said in her talk. “Board members must be very optimistic; they have to believe in the vision for the company. And to some extent, they don’t always deal with the reality of what the situation really is.

“So when a CISO or anybody comes into a board room and says, ‘if we don’t do this, this is going to happen,’ it makes them all feel anxious and they start to close down their thought processes around it.”

This suggests that CISOs must take a strategic approach, Pegueros observed, which includes building relationships up the chain of command and mastering the art of framing messages to fit the audience.

Last Watchdog engaged Pegueros after her presentation to drill down on some of the notions she highlighted in her talk. Here’s that exchange, edited for clarity and length.

LW: Why do so many CISOs still not get it that FUD and doom-and-gloom don’t work?

Pegueros: I think this is the case where CISOs understand the true gravity and risk of the situation and they feel a sense of urgency to drive action by senior management and the board. When that action does not materialize as they think it should, they start to use worst-case scenarios to drive action.

In the end, the CISOs are just trying to do the right thing and resolve the issues threatening the organization. What they fail to realize is that the Board does not truly understand the risk of the situation and since nothing has happened up until that point, why would it happen now?

LW: What are fundamental steps CISOs can take to start to think and act strategically and communicate more effectively?

Pegueros: First, they need to understand the business, including financials, customer concerns, product deficiencies and any macro-level issues, and how they are impacting the business. Next, they need to understand the priorities of the business and frame all the security priorities in the context of the business priorities.

If the CISO wants to drive better compliance, then they talk about how compliance is key to enabling sales and how the customers are demanding compliance to do business with the company.  If they want better patching, then the CISOs should talk about how patched systems will improve availability of the product and therefore service to the customers.

If they want improved visibility around security logs, they can talk about the benefits of better visibility to the overall troubleshooting and improved efficiencies in operations. Boards won’t argue with more revenue, better availability (which drives revenue) or greater efficiencies (which save money).

LW: Is compliance an ace in the hole, in a sense, for CISOs? How do the SEC’s stricter rules come into play, for instance?

Pegueros: Compliance is not going to fix all the security risks. Many companies that are compliant with various regulations or frameworks have had breaches. I believe compliance sets a minimum bar, and a CISO must leverage compliance initiatives to drive overall better security, but it is not sufficient in and of itself.

Compliance brings visibility to a topic.  For example, with the SEC Cybersecurity Rules, Boards are now much more aware of the importance of cyber and are having more robust conversations relative to cybersecurity.

LW: Is it overly optimistic to suggest that companies will soon start viewing security as a business enabler instead of a cost center?

Pegueros: Sound cybersecurity practices and risk management are a differentiator for many non-regulated companies and are table stakes for highly regulated organizations. Enterprise customers are demanding and driving the conversation around cybersecurity.

They are demanding to understand how their vendors could potentially impact their customers and their reputation.  The evolving and interrelated ecosystem that most companies exist in has the entrance fee of sound cybersecurity practices.  In time, organizations who do not pay this entrance fee will be kicked out.

LW: Massively interconnected, highly interoperable digital systems of the near future hold great promise. Don’t we have to solve security to get there?

Pegueros: Understanding digital connectedness, the benefits and risks of that relationship, and how it enables strategic objectives is key for the board to understand. Security is just one risk element of this reality.

Boards need to dig in and understand all the key connection points and how they could enable or potentially hinder growth for the organization.  We have a long way to go relative to boards because technology is disrupting the established norms and modes of operations relative to governance.  Boards must evolve or their organizations will fail.


The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF) — a free, widely respected landmark guidance document for reducing cybersecurity risk.

Related: More background on CSF

However, it’s important to note that most of the framework core has remained the same. Here are the core components the security community knows:

•Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. This is the newest function; governance was implied in earlier versions but is now explicitly shown to touch every aspect of the framework. It seeks to establish and monitor your company’s cybersecurity risk management strategy, expectations and policy.

•Identify (ID): Entails cultivating a comprehensive organizational comprehension of managing cybersecurity risks to systems, assets, data, and capabilities.

•Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

•Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

•Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

•Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.

Noteworthy updates

The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

Emphasis is placed on the framework’s expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make the CSF 2.0 easier for a wide variety of organizations to implement, NIST has developed quick-start guides customized for various audiences, case studies showcasing successful implementations, and a searchable catalog of references.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, facilitating easier integration into their cybersecurity practices.

The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents – facilitating comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, facilitating communication across all levels of an organization.

NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF’s international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST’s collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

About the essayist: Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant.

Achieving “digital trust” is not going terribly well globally.

Related: How decentralized IoT boosts decarbonization

Yet, more so than ever, infusing trustworthiness into modern-day digital services has become mission critical for most businesses. Now come survey findings that could perhaps help to move things in the right direction.

According to DigiCert’s 2024 State of Digital Trust Survey results, released today, companies proactively pursuing digital trust are seeing boosts in revenue, innovation and productivity. Conversely, organizations lagging may be flirting with disaster.

“The gap between the leaders and the laggards is growing,” says Brian Trzupek, DigiCert’s senior vice president of product. “If you factor in where we are in the world today with things like IoT, quantum computing and generative AI, we could be heading for a huge trust crisis.”

DigiCert polled some 300 IT, cybersecurity and DevOps professionals across North America, Europe and APAC. I sat down with Trzupek and Mike Nelson, DigiCert’s Global Vice President of Digital Trust, to discuss the wider implications of the survey findings. My takeaways:

Bungled innovation

Digital trust refers to companies meeting the reasonable expectation that the digital services they offer not only protect users, but also uphold societal expectations and values. The tech sector has been preaching this for several years, acknowledging the fact that preserving trust, as digital services advance, is proving to be extremely difficult — yet crucial nonetheless.

“Trust has become absolutely paramount in the world,” Nelson observes. “Trust can be lost when you introduce digital connectivity — and digital connectivity is everywhere.”

DigiCert’s survey presents hard evidence that trust can be the basis of a winning business model. The top 33 percent of digital ‘trust leaders’ identified in DigiCert’s poll said they can respond more effectively to outages and incidents and found themselves to be in a much better position to effectively leverage innovation. Meanwhile, the bottom 33 percent found it increasingly difficult to tap into innovation.

This tug-and-pull is happening in an operating environment where digital innovation, from a global perspective, is being bungled. That’s the assessment of the 2024 Edelman Trust Barometer, a study highlighting the rapid erosion of digital trust, to the point of exacerbating polarized political views.

In such an environment, companies have a terrific opportunity to set themselves apart as being trustworthy, Trzupek argues. “The companies we view as the most trustworthy on the planet are able to provide very reliable digital services in consistent ways,” he says. “They’re able to connect people through trusted experiences.”

Emerging standards

Indeed, advanced technologies, new protocols and emerging best practices are at hand to help companies build and sustain trust.

And supply chain participants and individual consumers are eager recipients, naturally gravitating to trusted services, Nelson observes. Digital trust has, in fact, become a crucial factor in consumer purchasing decisions and corporate procurement strategies, he says.

This dynamic is highlighted by support of the Matter smart home devices standard. Matter is part of a fresh slate of technical standards that must take hold to enable massively interconnected, highly interoperable digital systems.

Since it was introduced two years ago, Matter has been embraced by some 400 manufacturers of IoT devices and close to one million Matter certificates have been issued, Nelson told me. “It’s not just in smart homes,” he says. “We’re building trust into devices in automotive and we’re seeing it in healthcare, as well.”

For its part, DigiCert has continued to advance its DigiCert ONE platform of tools and services to help companies manage their digital certificates and Public Key Infrastructure (PKI). DigiCert’s clients and prospects are steadily modernizing the way digital connections get authenticated and sensitive assets get encrypted, Trzupek told me.

“In visiting our customers over the past 18 months, I’ve seen a newfound energy for closely examining and more effectively managing PKI infrastructure, both internally and externally,” he says.  “Companies are moving to update decades old PKI systems because they realize how pivotal this is to digital trust and everything they do.”

DigiCert has also been a leader in championing the concept of “crypto agility” — the capacity to update and adapt cryptographic routines swiftly — something Trzupek and Nelson argued is rapidly becoming a business imperative.

A starting point

Leveraging advanced tools and embracing emerging best practices is all well and good for the trust leaders. But what about the laggards? For the organizations just starting down the path towards achieving and sustaining digital trust, Nelson outlined this framework:

•Knowledge and inventory: Begin with taking inventory of cryptographic assets and understanding how they’re utilized within the organization.

•Policies and enforcement: Next, establish organizational policies that outline appropriate and inappropriate behaviors regarding digital assets. Assure that these policies are enforceable.

•Centralized security: Streamline control over various business units that may have disparate practices, thereby improving visibility and the ability to mitigate risks.

•Factor in business impact: Finally, prioritize security efforts based on the potential business impact. Evaluate the consequences should certain assets go offline; focus on protecting the most critical areas first.
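The first step of that framework, taking inventory of cryptographic assets, can start as something as simple as a table of keys and certificates with expiry dates, checked on a schedule. A minimal sketch, with entirely hypothetical asset names and dates:

```python
from datetime import date

# Hypothetical inventory: asset name -> (algorithm, expiry date).
INVENTORY = {
    "api-gateway-tls": ("RSA-2048", date(2025, 3, 1)),
    "device-signing-key": ("ECDSA-P256", date(2024, 11, 15)),
}

def expiring_soon(today: date, within_days: int = 90) -> list[str]:
    """List assets that expire within the given window (or already have)."""
    return sorted(
        name
        for name, (_alg, expires) in INVENTORY.items()
        if (expires - today).days <= within_days
    )
```

Run daily, a report like this turns certificate expiry from a surprise outage into a routine ticket, and the algorithm column doubles as the starting point for a crypto-agility migration plan.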

Lagging really is no longer an option. Geo-political conflict, remote work exposures, unpredictable usage of generative AI; these all stand to further undermine digital trust for months and years to come.

Will the laggards follow the trust leaders? I’ll keep watch and keep reporting.

(LW provides consulting services to the vendors we cover.)

AI chatbots are computer programs that talk like humans, gaining popularity for quick responses. They boost customer service, efficiency and user experience by offering constant help, handling routine tasks, and providing prompt and personalized interactions.

Related: The security case for AR, VR

AI chatbots use natural language processing, which enables them to understand and respond to human language, and machine learning algorithms, which help them improve their performance over time by learning from interactions.

In 2022, 88% of users relied on chatbots when interacting with businesses. These tools saved 2.5 billion work hours in 2023 and helped raise customer satisfaction to 69% at a cost of $0.50 to $0.70 per interaction. Forty-eight percent of consumers favor chatbots that prioritize efficiency.

Popular AI platforms

Communication channels like websites, messaging apps and voice assistants are increasingly adopting AI chatbots. By 2026, the integration of conversational AI in contact centers will lead to a substantial $80 billion reduction in labor costs for agents.

This widespread integration enhances accessibility and user engagement, allowing businesses to provide seamless interactions across various platforms. Examples of AI chatbot platforms include:

•Dialogflow: Developed by Google, Dialogflow is renowned for its comprehension capabilities. It excels in crafting human-like interactions in customer support. In e-commerce, it facilitates smooth product inquiries and order tracking. Health care benefits from its ability to interpret medical queries with precision.

•Microsoft Bot Framework: Microsoft’s offering is a robust platform providing bot development, deployment and management tools. In customer support, it seamlessly integrates with Microsoft’s ecosystem for enhanced productivity. E-commerce platforms leverage its versatility for order processing and personalized shopping assistance tasks. Health care adopts it for appointment scheduling and health-related inquiries.

•IBM Watson Assistant: IBM Watson Assistant stands out for its AI-powered capabilities, enabling sophisticated interactions. Customer support experiences a boost with its ability to understand complex queries. In e-commerce, it aids in crafting personalized shopping experiences. Health care relies on it for intelligent symptom analysis and health information dissemination.

Checklist of vulnerabilities

AI chatbots expose several potential attack vectors, including:

Input validation and sanitization: User inputs are gateways, and ensuring they are validated and sanitized is paramount. Neglecting this can lead to injection attacks, jeopardizing user data integrity.
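For illustration, a minimal validate-then-sanitize step might look like the following. The length limit and deny-list patterns are hypothetical examples; a deny-list complements, but never replaces, parameterized queries and output encoding:

```python
import html
import re

MAX_LEN = 500
# Heuristic deny-list for obvious injection payloads (illustrative only).
SUSPICIOUS = re.compile(r"(<script\b|;\s*drop\s+table|'\s*or\s+1=1)", re.IGNORECASE)

def validate_input(text: str) -> str:
    """Return a sanitized copy of user input, or raise ValueError."""
    if not text or len(text) > MAX_LEN:
        raise ValueError("input empty or too long")
    if SUSPICIOUS.search(text):
        raise ValueError("input rejected by injection filter")
    # Escape HTML so the text is safe to echo back into a web UI.
    return html.escape(text.strip())

print(validate_input("Where is my order?"))  # passes through unchanged
```

The key design point is that validation rejects bad input outright, while sanitization neutralizes whatever is allowed through before it reaches a database, template or model prompt.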

Authentication and authorization vulnerabilities: Weak authentication methods and compromised access tokens can provide unauthorized access. Inadequate authorization controls may result in unapproved interactions and data exposure, posing significant security threats.
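To make the authorization risk concrete, here is a hypothetical sketch of the check a chatbot backend should run on every request: verify the token's integrity and expiry, then confirm the caller actually holds the required scope. The token format, key and scope names are invented for illustration:

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"demo-key"  # illustrative only; load from a secret store in practice

def sign(payload: str) -> str:
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def authorize(payload: str, signature: str, required_scope: str) -> bool:
    if not hmac.compare_digest(sign(payload), signature):
        return False                      # tampered or forged token
    user, scope, expires = payload.split("|")
    if time.time() > float(expires):
        return False                      # expired token
    return required_scope == scope        # inadequate scope -> deny

token = f"alice|chat:read|{time.time() + 3600}"
print(authorize(token, sign(token), "chat:read"))    # True
print(authorize(token, sign(token), "admin:write"))  # False
```

Note the use of a constant-time comparison for the signature check; skipping any one of the three checks reproduces one of the vulnerabilities described above.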

Privacy and data leakage vulnerability: Handling sensitive user information requires robust measures to prevent breaches. Data leakage compromises user privacy and has legal implications, emphasizing the need for stringent protection protocols.

Malicious intent or manipulation: AI chatbots can be exploited to spread misinformation, execute social engineering attacks or launch phishing. Such manipulation can harm user trust, tarnish brand reputation and have broader social consequences.

Machine learning helps AI chatbots adapt to and prevent new cyber threats. Its anomaly detection identifies suspicious behavior, proactively defending against potential breaches. Organizations should implement systems that continuously monitor and respond to security incidents for swift and effective defense.
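As a toy illustration of the anomaly-detection idea (a single-metric z-score stand-in, not the multi-feature trained models real products use), consider flagging an unusual spike in request rates:

```python
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Requests per minute hitting a chatbot endpoint (hypothetical data);
# the burst of 300 is the kind of spike worth investigating.
rates = [52, 48, 50, 51, 49, 47, 300, 50, 53]
print(find_anomalies(rates))  # [6]
```

Real monitoring systems apply the same principle across many signals at once (message length, failed logins, request origin), and feed flagged events into the continuous-response loop described above.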

Best security practices

Implementing these best practices establishes a robust security foundation for AI chatbots, ensuring a secure and trustworthy interaction environment for organizations and users:


•Periodic security assessments: Conduct regular security assessments and penetration testing to identify and address vulnerabilities in AI chatbot systems.

•Multi-factor authentication: Implement multi-factor authentication for administrators and privileged users to enhance access control and prevent unauthorized entry. MFA can block 99.9% of automated attacks on accounts.

•Secure communication channels: Ensure all communication channels between the chatbot and users are secure and encrypted, safeguarding sensitive data from potential breaches.

•Educating users for safe interaction: Provide clear instructions on how users can identify and report suspicious activities, fostering a collaborative approach to security.

•Avoiding sensitive information sharing: Encourage users to refrain from sharing sensitive information with chatbots, promoting responsible and secure interaction.
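On the MFA recommendation above, a minimal sketch of the time-based one-time password (TOTP) scheme behind many authenticator apps helps show why the second factor matters: the code is derived from a shared secret and the clock, so a stolen password alone is not enough. This is a bare-bones RFC 6238 illustration, not production code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", timestamp // step)   # index of the time window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
print(totp(b"12345678901234567890", int(time.time())))
```

Verifiers typically also accept codes from adjacent time windows to tolerate clock drift between the server and the user's device.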

While AI chatbots have cybersecurity vulnerabilities, adopting proactive measures like secure development practices and regular assessments can effectively mitigate risks. These practices allow AI chatbots to provide valuable services while maintaining user trust and organizational security.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

San Mateo, Calif., Feb. 13, 2024 – The U.S. White House announced a groundbreaking collaboration between OpenPolicy and leading innovation companies, including Kiteworks, which delivers data privacy and compliance for sensitive content communications through its Private Content Network.

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) Artificial Intelligence Safety Institute Consortium (AISIC) will act as a collaborative platform where both public sector and private sector leading organizations will provide guidance on standards and methods in the development of trustworthy AI.

The Kiteworks platform provides customers with a Private Content Network that enables them to employ zero-trust policy management in the governance and protection of sensitive content communications, including the ingestion of sensitive content into generative AI (GenAI).

Kiteworks unifies, tracks, controls, and secures sensitive content moving within, into, and out of organizations. With Kiteworks, organizations can significantly improve risk management and ensure regulatory compliance on all sensitive content communications.

Raimondo

The consortium, AISIC, brings together over 200 of the nation’s foremost AI stakeholders to support the development and deployment of trustworthy and safe AI technologies. This initiative aligns with President Biden’s Executive Order on Artificial Intelligence, focusing on key priorities such as red-teaming, capability evaluations, risk management, safety and security guidelines, and watermarking synthetic content.

According to U.S. Commerce Secretary Gina M. Raimondo, “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

Freestone

Tim Freestone, Chief Strategy Officer at Kiteworks, expressed his enthusiasm about the collaboration: “Kiteworks’ selection underscores our commitment to protect sensitive content from being ingested into public GenAI large language models (LLMs). Kiteworks is very excited to play a pivotal role as a groundbreaking member of the NIST AI Safety Institute Consortium, tapping our expertise in data security and compliance to help guide the responsible development and management of AI solutions.”

For further insights into this groundbreaking collaboration and Kiteworks’ involvement, Kiteworks’ Freestone is available for interviews and discussions.

About Kiteworks: Kiteworks’ mission is to empower organizations to effectively manage risk in every send, share, receive, and save of sensitive content. The Kiteworks platform provides customers with a Private Content Network that delivers content governance, compliance, and protection. The platform unifies, tracks, controls, and secures sensitive content moving within, into, and out of their organization, significantly improving risk management and ensuring regulatory compliance on all sensitive content communications. Headquartered in Silicon Valley, Kiteworks protects over 100 million end users for over 3,650 global enterprises and government agencies.

Silver Spring, Maryland, Jan. 30, 2024 — Aembit, the Workload Identity and Access Management (IAM) platform that enables DevOps and security teams to discover, manage, enforce and audit access between workloads, today announced the availability of a new integration with the industry-leading CrowdStrike Falcon® platform to give enterprises the ability to dynamically manage and enforce conditional access policies based on the real-time security posture of their applications and services. The integration marks a significant leap in Aembit’s mission to empower organizations to apply Zero Trust principles to make workload-to-workload access more secure and manageable.

Workload IAM transforms enterprise security by securing workload-to-workload access through policy-driven, identity-based, and secretless access controls, moving away from the legacy unmanaged, secrets-based approach.

Bernard

“Today’s attacks are increasingly identity-based, which is why enforcing identity-protection across the enterprise at every layer is critical for modern security. The CrowdStrike Falcon platform is rapidly becoming the center of cybersecurity’s ecosystem. This integration with Aembit enables organizations to secure machine identities as part of a holistic approach to security,” said Daniel Bernard, chief business officer at CrowdStrike.

Through this partnership, the Aembit Workload IAM solution checks to see if a CrowdStrike Falcon agent is running on the workload and evaluates its real-time security posture to drive workload access decisions to applications and data.

With this approach, enterprises can now protect their workloads from unauthorized access, even against the backdrop of changing conditions and dynamic access requirements.

Goldschlag

“The launch of the Aembit Workload IAM Platform on the CrowdStrike Marketplace represents a significant advancement in our joint mission to securely manage workload-to-workload access,” said David Goldschlag, CEO and co-founder at Aembit.

Additional customer benefits from this partnership include:

•Managed workload-to-workload access: Enforce and manage workload access to other applications, SaaS services, and third-party APIs based on identity and policy set by the security team, driving down risk.

•Seamless deployment: Drive consolidation by effortlessly integrating the Aembit Workload IAM Platform with the Falcon platform in a few clicks, providing a unified experience for managing workload identities while understanding workload security posture.

•Zero Trust security model: Embrace a Zero Trust approach, ensuring that every access request, regardless of the source, is verified before granting access rights. Aembit’s solution enforces the principle of least privilege based on identity, policy, and workload security posture, minimizing potential security vulnerabilities.

•Visibility and monitoring: Gain extensive visibility into workload identities and access permissions, enabling swift detection and response to potential security threats. Monitor and audit access logs based on identity for comprehensive security oversight.

This industry-first collaboration builds on the recent CrowdStrike Falcon Fund strategic investment in Aembit, underscoring the global cybersecurity leader’s commitment to fostering innovation within the space.

“We are excited to bring the power of Aembit’s Workload IAM to the CrowdStrike Marketplace. This collaboration enables us to deliver Zero Trust for workload access in a way that simplifies and automates the evolving security challenges faced by DevOps and DevSecOps teams,” said Apurva Dave, CMO at Aembit.

The investment reflects the recognition of the growing demands for securing workload access.

Aembit Workload IAM is available in the CrowdStrike Marketplace, a one-stop destination and world-class ecosystem of third-party products.

About Aembit: Aembit is the Workload Identity and Access Management (IAM) Platform that lets every business safely build its next generation of applications by inherently trusting how it connects to partners, customers, and cloud services. Aembit provides seamless and secure access from your workloads to the services they depend on, like APIs, databases, and cloud resources, while simplifying application development, delivery, compliance, and audit.

Media contact: Apurva Dave, Chief Marketing Officer, press@aembit.io

Notable progress was made in 2023 in the quest to elevate Digital Trust.

Related: Why IoT standards matter

Digital Trust refers to the level of confidence both businesses and consumers hold in digital products and services – not just that they are suitably reliable, but also that they are as private and secure as they need to be.

We’re not yet at a level of Digital Trust needed to bring the next generation of connected IT into full fruition – and the target keeps moving. This is because the hyper interconnected, highly interoperable buildings, transportation systems and utilities of the near future must necessarily spew forth trillions of new digital connections.

And each new digital connection must be trustworthy. Therein lies the monumental challenge of achieving the level of Digital Trust needed to carry us forward. And at this moment, wild cards – especially generative AI and quantum computing – are adding to the complexity of that challenge.

I had the opportunity to sit down with DigiCert’s Jason Sabin, Chief Technology Officer, and Avesta Hojjati, Vice President of Engineering, to chew this over. We met at DigiCert Trust Summit 2023.

We drilled down on a few significant developments expected to play out in 2024 and beyond. Here are my takeaways:

PKI renaissance

Trusted digital connections. This is something we’ve come to take for granted. And while most of our digital connections are, indeed, robustly protected, a material percentage are not; these range from loosely configured cloud IT infrastructure down to multiplying API connectors that many companies are leaving wide open, with all too many APIs simply going unaccounted for.

Each time we use a mobile app or website-hosted service, digital certificates and the Public Key Infrastructure (PKI) come into play — to assure authentication and encrypt sensitive data transfers. This is a fundamental component of Digital Trust – and the foundation for securing next-gen digital connections.

The goal is lofty: companies and consumers need to feel very confident that each device, each document, and each line of code can be trusted implicitly. And PKI is the best technology we’ve got to get us there.

Sabin

“PKI has been around for 30 years in lots of different reincarnations,” Sabin noted. “We’re hitting a massive resurgence, almost a renaissance of PKI right now, because there are so many use cases where the simple ingredients of PKI can be used very effectively to solve the business needs of today.”

Enter the concept of “cryptographic agility” – a reference to the rise of a new, much more flexible approach to encrypting digital assets. Crypto agility has arisen because digital connections are firing off more dynamically than ever before. Thus, companies increasingly require the ability to update encrypted assets in a timely manner and even switch them out as needed, Sabin says.
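The idea can be sketched in a few lines (a generic illustration of the concept, not DigiCert's implementation): route every cryptographic operation through a named profile, so swapping algorithms becomes a configuration change rather than a code rewrite at every call site.

```python
import hashlib

# Central registry of algorithm choices; migrating means editing one entry.
ALGORITHMS = {"current": "sha256", "next": "sha3_256"}

def digest(data: bytes, profile: str = "current") -> str:
    """Hash `data` with whichever algorithm the named profile maps to."""
    return hashlib.new(ALGORITHMS[profile], data).hexdigest()

print(digest(b"payload"))           # hashed with today's algorithm
print(digest(b"payload", "next"))   # candidate replacement, same call site
```

The same indirection applies to signatures and key exchange, which is exactly what makes an eventual migration to post-quantum algorithms tractable.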

Post-quantum crypto

A high level of Digital Trust, one that leverages crypto agility, is needed for companies to thrive in an environment where cyber attacks are becoming more targeted and severe – and with generative AI providing a great boon to the attackers.

What’s more, a fresh layer of risks posed by the rise of quantum computing looms large. And this is where something called “post-quantum cryptography” (PQC) comes into play.

The National Institute of Standards and Technology (NIST) is in the late stages of formally adopting established standards for PQC; this will result in NIST-recommended encryption algorithms that can withstand potential threats posed by quantum computers.

Sabin pointed me to a recent Ponemon Institute polling of 1,426 IT security pros that reveals a worrying lack of PQC-readiness among companies across the US, Europe, the Middle East and Asia-Pacific. The survey found that a skills shortage, budget constraints and uncertainty about PQC have left some 61 percent of respondents acknowledging that their organizations are not prepared.

Yet quantum computing exposures are happening today. Threat actors are pursuing a “harvest now, decrypt later” strategy, Sabin told me. They’re hoarding stolen cyber assets encrypted with current-day algorithms, he says, and patiently waiting for quantum hacking routines to emerge that will enable them to crack in.

PKI playground

To help drive the PQC transition, DigiCert has been collaborating with industry partners to develop encryption methods that can withstand the threats posed by quantum computing. DigiCert recently released the DigiCert PQC Playground—a part of DigiCert Labs designed to let security code writers and tech enthusiasts experiment with the NIST-endorsed PQC algorithms which are slated to go into effect in 2024.

Hojjati

Playground visitors can practice issuing certificates and PKI keys under NIST’s three most advanced encryption algorithms: CRYSTALS-Dilithium, FALCON, and SPHINCS+. Hojjati told me this free tool is intended to be an incubator for development and innovation, demystifying PQC by providing a user-friendly environment for experimentation.

The aim is to alleviate apprehension surrounding the deployment of PQC algorithms and certificates, Hojjati says. This will give software developers, CISOs and other stakeholders a sandbox to test and understand the practical implications of integrating the new NIST algorithms into their systems, he says.

As standards and best practices solidify, a new senior leadership role, the Chief Digital Trust Officer, has cropped up. The office of the CDTO is gaining traction in large enterprises that are proactively pursuing Digital Trust. These new security leaders are not just technologists, Sabin says; they are strategists and visionaries.

“In the last 18 months we’re already seeing a number of companies create this new C-level role, recognizing that Digital Trust is critical to their capabilities, their business objectives and the vision of the company,” Sabin says.

As we turn the corner into 2024, Digital Trust is in sight. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

 

Threat intelligence sharing has come a long way since Valentine’s Day 2015.

Related: How ‘Internet Access Brokers’ fuel ransomware

I happened to be in the audience at Stanford University when President Obama took to the stage to issue an executive order challenging the corporate sector and federal government to start collaborating as true allies.

Obama’s clarion call led to the passage of the Cybersecurity Information Sharing Act, the creation of Information Sharing and Analysis Organizations (ISAOs) and the jump-starting of several private-sector sharing consortiums.

Material progress in threat intel sharing, indeed, has been made. Yet, there remains much leeway for improvements. I had the chance to discuss this with Christopher Budd, director of Sophos X-Ops, the company’s cross-operational task force of security defenders.

Budd explained how Sophos X-Ops is designed to dismantle security silos internally, while also facilitating external sharing, for the greater good.

For a full drill down, please view the accompanying videocast. Here are my takeaways.

Overcoming inertia

Threat actors haven’t exactly been resting on their laurels. Case in point: fresh intel just released in Sophos’ Active Adversary Report for Security Practitioners discloses how telemetry measuring network activity has begun turning up missing on a grand scale – in nearly 42 percent of the incident response cases examined by Sophos’ analysts between January 2022 and June 2023.

These gaps in telemetry illustrate just how deep and dynamic the cat-and-mouse chase has become; in some 82 percent of these cases the attackers purposefully disabled or wiped out the telemetry to hide their tracks.

“Because of improved network defenses, the attackers are innovating ways to get in and out as fast as they can,” Budd says.  “We’ve been dealing with this arms race for decades; at this point, not only is it an arms race, but it is also a highly caffeinated arms race.”

Budd

Overcoming inertia remains a big challenge, Budd adds. Historically, network security has been marked by siloed security operations; unilateral teams got stood up to carry out email security, vulnerability patching, incident response, etc. — interoperability really wasn’t on anyone’s radar.

Meanwhile, the network attack surface has inexorably expanded, even more so post-Covid-19, as companies intensified their reliance on cloud-centric IT resources. And today, with the mainstreaming of next-gen AI tools, attackers enjoy an abundance of viable attack vectors, putting security teams that operate unilaterally at a huge disadvantage.

Joint task force approach

Sophos X-Ops launched in July 2022 to apply a joint task force approach to protecting enterprises in this environment. Budd directs a cross-operational unit linking SophosLabs, Sophos SecOps and SophosAI, bringing together three established teams of seasoned experts.

From this command center perspective, real-world strategic analysis happens continuously and in real time. The task force can deploy leading-edge detection and response tools and leverage the timeliest intelligence. It’s much the same approach that has proven effective time and again in military and emergency response scenarios.

“The benefit of a joint task force model is you maintain excellence and expertise in each domain area,” Budd says. “You don’t dilute the expertise in that domain area; you break down the silos by bringing each piece that you need for that unique threat to build a unique solution.”

The incident response team, for instance, might zero in on suspicious activity to gather hard evidence that gets turned over to malware experts for deeper analysis. AI specialists might then jump on board to develop an automated mitigation routine, suitable for scaling. And the entire mitigation effort gets added to the overall knowledge base.

This is how the Sophos X-Ops team helped neutralize a recent spike in ransomware attacks against Microsoft SQL servers. The joint task force unraveled how the attackers were able to leverage a fake downloading site and grey-market remote access tools to distribute multiple ransomware families. The campaign was thwarted by pooling resources and jointly analyzing the attackers’ tactics.

External sharing

It struck me in discussing this with Budd that the joint task force approach directly aligns with Obama’s call for stronger alliances on the part of the good guys. Notably, Sophos X-Ops from day one has actively participated in external sharing, via the Cyber Threat Alliance (CTA) and the Microsoft Active Protections Program (MAPP).

The CTA is a coalition of some two dozen companies and organizations, led by Cisco, Palo Alto Networks, Fortinet and Check Point, committed to sharing actionable threat intel in real time. Members proactively share information on emerging threats, malware samples and attack patterns.

With MAPP, Microsoft aims to share fresh vulnerability patching alerts with security vendors before public disclosure. This gives the security vendors a head start in developing and distributing patches, strengthening the overall Windows ecosystem, Budd noted.

As cyber threats continue to evolve and scale up, the urgency for companies and government agencies to do much more of this is intensifying. The good news is that the advanced technologies and vetted best practices required to dismantle security silos and to extend external sharing far and wide are readily available.

This all aligns with the notion that deeper levels of sharing must coalesce if we are to have any hope of tempering continually rising cyber threats. I’ll keep watch and keep reporting.
