CISOs can sometimes be their own worst enemy, especially when it comes to communicating with the board of directors.

Related: The ‘cyber’ case for D&O insurance

Vanessa Pegueros knows this all too well. She serves on the boards of several technology companies and also happens to be steeped in cyber risk governance.

I recently attended an IoActive-sponsored event in Seattle at which Pegueros gave a presentation titled “Merging Cybersecurity, the Board & Executive Team.”

Pegueros shed light on the land mines that enshroud cybersecurity presentations made at the board level. She noted that most board members are non-technical, especially when it comes to the intricate nuances of cybersecurity, and that their decision-making is primarily driven by concerns about revenue and costs.

Thus, presenting a sky-is-falling scenario to justify a fatter security budget, “does not resonate at the board level,” she said in her talk. “Board members must be very optimistic; they have to believe in the vision for the company. And to some extent, they don’t always deal with the reality of what the situation really is.

“So when a CISO or anybody comes into a board room and says, ‘if we don’t do this, this is going to happen,’ it makes them all feel anxious and they start to close down their thought processes around it.”

This suggests that CISOs must take a strategic approach, Pegueros observed, which includes building relationships up the chain of command and mastering the art of framing messages to fit the audience.

Last Watchdog engaged Pegueros after her presentation to drill down on some of the notions she highlighted in her talk. Here’s that exchange, edited for clarity and length.

LW: Why do so many CISOs still not get it that FUD and doom-and-gloom don’t work?

Pegueros: I think this is the case where CISOs understand the true gravity and risk of the situation and they feel a sense of urgency to drive action by senior management and the board. When that action does not materialize as they think it should, they start to use worst-case scenarios to drive action.

Pegueros

In the end, the CISOs are just trying to do the right thing and resolve the issues threatening the organization. What they fail to realize is that the Board does not truly understand the risk of the situation and since nothing has happened up until that point, why would it happen now?

LW: What are fundamental steps CISOs can take to start to think and act strategically and communicate more effectively?

Pegueros: First, they need to understand the business, including financials, customer concerns, product deficiencies and any macro-level issues and how they are impacting the business. Next, they need to understand the priorities of the business and frame all the security priorities in the context of the business priorities.

If the CISO wants to drive better compliance, then they talk about how compliance is key to enabling sales and how the customers are demanding compliance to do business with the company.  If they want better patching, then the CISOs should talk about how patched systems will improve availability of the product and therefore service to the customers.

If they want improved visibility around security logs, they can talk about the benefits of better visibility for overall troubleshooting and improved efficiencies in operations. Boards won’t argue with more revenue, better availability (which drives revenue) or greater efficiencies (which save money).

LW: Is compliance an ace in the hole, in a sense, for CISOs? How do the SEC’s stricter rules come into play, for instance?

Pegueros: Compliance is not going to fix all the security risks. Many companies that are compliant with various regulations or frameworks have had breaches. I believe compliance sets a minimum bar, and a CISO must leverage compliance initiatives to drive overall better security, but it is not sufficient in and of itself.

Compliance brings visibility to a topic.  For example, with the SEC Cybersecurity Rules, Boards are now much more aware of the importance of cyber and are having more robust conversations relative to cybersecurity.

LW: Is it overly optimistic to suggest that companies will soon start viewing security as a business enabler instead of a cost center?

Pegueros: Sound cybersecurity practices and risk management are a differentiator for many non-regulated companies and are table stakes for highly regulated organizations. Enterprise customers are demanding and driving the conversation around cybersecurity.

They are demanding to understand how their vendors could potentially impact their customers and their reputation. Sound cybersecurity practices have become the entrance fee to the evolving, interrelated ecosystem most companies operate in. In time, organizations that do not pay this entrance fee will be kicked out.

LW: Massively interconnected, highly interoperable digital systems of the near future hold great promise. Don’t we have to solve security to get there?

Pegueros: Boards need to understand digital connectedness, its benefits and risks, and how it enables strategic objectives. Security is just one risk element of this reality.

Boards need to dig in and understand all the key connection points and how they could enable or potentially hinder growth for the organization. We have a long way to go with boards, because technology is disrupting established norms and modes of operation relative to governance. Boards must evolve or their organizations will fail.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


 

It’s a digital swindle as old as the internet itself, and yet, as the data tells us, the vast majority of security incidents are still rooted in the low-tech art of social engineering.

Related: AI makes scam email look real

Fresh evidence comes from  Mimecast’s “The State of Email and Collaboration Security” 2024 report.

The London-based supplier of email security technology surveyed 1,100 information technology and cybersecurity professionals worldwide and found:

•Human risk remains a massive exposure. Some 74 percent of cyber breaches are caused by human factors, including errors, stolen credentials, misuse of access privileges, or social engineering.

•New AI risks have lit a fire under IT teams. Eight out of 10 of those polled expressed concern about the threats posed by AI, and 67 percent said AI-driven attacks will soon become the norm.

•Email remains the primary attack vector. The newest wrinkle: generative AI tools, like ChatGPT, are giving rise to new attack paths, compounding the pressure from old standby threats such as phishing, spoofing and ransomware.

van Zadelhoff

“Emerging tools and technologies like AI and deepfakes, along with the proliferation of collaboration platforms, are changing the way threat actors work; but people remain the biggest barrier to protecting companies from cyber threats,” observes Marc van Zadelhoff, Mimecast CEO.

One type of email-borne exposure that continues to gut-punch companies large and small is Business Email Compromise (BEC) fraud. A study issued last August by Gartner analysts Satarupa Patnaik and Franz Hinner drills down on how legacy endpoint protections are falling short in the post-Covid, GenAI operating environment.

BEC = big losses

Attackers finagle their way into corporate communications, mimicking or outright hijacking legitimate email accounts. They no longer bother with malware or malicious links, instead focusing more than ever on human failings. And it’s paying off to the tune of $2.7 billion in losses in just one year, according to the FBI.

The Gartner report highlights how BEC fraud often begins with an Account Takeover (ATO). Attackers infiltrate a user’s account to orchestrate their grand larceny, and the collateral damage can be significant: loss of trust from customers and business partners.

Patnaik and Hinner lay out an argument as to why companies need to get on with their due diligence and move towards upgrading to AI-based secure email gateway solutions, equipped with behavioral analysis and imposter detection. Indeed, the technology and best practices to do this are readily available. For enterprises looking to bolster their cyber-defenses, Gartner recommends:

•Leveraging GenAI in what amounts to a counterattack to granularly monitor and apply security policies to every email.

•Tapping proven controls such as DMARC, SOAR, IAM and MFA to serve as an effective layered defense (a sample DMARC policy record is sketched after this list).

•Updating antiquated email protocols for financial transactions. Email alone should never be the gatekeeper for moving money or sensitive data.

•Implementing effective training to teach users and partners how to spot and sidestep BEC traps.
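For readers unfamiliar with the first of those controls, DMARC works by publishing a policy as a DNS TXT record that tells receiving mail servers how to handle messages that fail authentication checks. A minimal sketch, using the placeholder domain example.com and an illustrative reporting mailbox:

```
; Hypothetical DMARC policy for example.com, published at _dmarc.example.com.
; p=quarantine asks receivers to treat failing mail as suspicious;
; rua= is the mailbox that collects aggregate authentication reports.
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; adkim=s; aspf=s; pct=100"
```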

We now know what the post-Covid-19/GenAI threat landscape looks like, folks. One crucial layer to button down is human factors, which means advanced security for the most ubiquitous communication tool: email. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF) — a free, widely respected landmark guidance document for reducing cybersecurity risk.

Related: More background on CSF

However, it’s important to note that most of the framework core has remained the same. Here are the core components the security community knows:

•Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. This is the newest addition; it was implied in earlier versions but is now explicitly shown to touch every aspect of the framework. It seeks to establish and monitor your company’s cybersecurity risk management strategy, expectations, and policy.

•Identify (ID): Entails cultivating a comprehensive organizational comprehension of managing cybersecurity risks to systems, assets, data, and capabilities.

•Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

•Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

•Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

•Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.
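For organizations that track their posture in code or spreadsheets, the six functions map naturally onto a simple data structure. Here is a minimal, hypothetical Python sketch (the tier numbers are illustrative placeholders, not NIST guidance) for comparing current and target maturity across the functions:

```python
# Hypothetical self-assessment profile keyed by CSF 2.0 function code.
csf_profile = {
    "GV": {"function": "Govern",   "target_tier": 3, "current_tier": 1},
    "ID": {"function": "Identify", "target_tier": 3, "current_tier": 2},
    "PR": {"function": "Protect",  "target_tier": 3, "current_tier": 2},
    "DE": {"function": "Detect",   "target_tier": 2, "current_tier": 1},
    "RS": {"function": "Respond",  "target_tier": 2, "current_tier": 1},
    "RC": {"function": "Recover",  "target_tier": 2, "current_tier": 1},
}

# Rank the functions by the gap between target and current state.
gaps = sorted(csf_profile.items(),
              key=lambda kv: kv[1]["target_tier"] - kv[1]["current_tier"],
              reverse=True)
for code, entry in gaps:
    gap = entry["target_tier"] - entry["current_tier"]
    print(f'{code} ({entry["function"]}): gap = {gap}')
```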

Noteworthy updates

The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

Emphasis is placed on the framework’s expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also  introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make it easier for a wide variety of organizations to implement the CSF 2.0, NIST has developed quick-start guides customized for various audiences, along with case studies showcasing successful implementations and a searchable catalog of references.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, facilitating easier integration into their cybersecurity practices.

Swenson

The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents – facilitating comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, facilitating communication across all levels of an organization.

NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF’s international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST’s collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

About the essayist: Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant.

Congressional bipartisanship these days seems nigh impossible.

Related: Rising tensions spell need for tighter cybersecurity

Yet by a resounding vote of 352-65, the U.S. House of Representatives recently passed a bill that would ban TikTok unless its China-based owner, ByteDance Ltd., relinquishes its stake.

President Biden has said he will sign the bill into law, so its fate is now in the hands of the U.S. Senate.

I fervently hope the U.S. Senate does not torpedo this long overdue proactive step to protect its citizens and start shoring up America’s global stature.

Weaponizing social media

How did we get here? A big part of the problem is a poorly informed general populace. Mainstream news media gravitates to chasing the political antics of the moment. This tends to diffuse sober analysis of the countless examples of Russia, in particular, weaponizing social media to spread falsehoods, interfere in elections, target infrastructure and even radicalize youth.

Finally, Congress appears to be heeding lessons available to be learned since the hacking of John Podesta’s email account – not to mention all of the havoc Russia was able to foment in our 2016 elections, attempting to interfere in 39 states.

One of the most chilling examples of Russia methodically continuing to leverage social media as a strategic weapon has attracted barely any news coverage at all. In 2011, Russia launched a social media site called iFunny aimed at disaffected young men. In short order, iFunny was downloaded 10 million times and became a tool for neo-Nazi terror groups to recruit Gen-Z males.

In the weeks leading up to the 2020 U.S. presidential election, authorities in North Carolina arrested a 19-year-old male with a van full of guns and explosives and charged him with plotting to assassinate then Democratic presidential nominee Biden. Federal court documents describe how the teenager had posted memes on iFunny questioning whether he should kill Biden, and also run numerous Google searches for things like Biden’s home address and information about automatic weapons and night-vision goggles.

During this same time frame, investigators at Pixalate, a Palo Alto, Calif.-based supplier of fraud management technology, documented how iFunny distributed data-stealing malware specifically targeting smartphone users in the key swing states of Pennsylvania, Michigan and Wisconsin.

50 upcoming elections

It’s logical to assume China has been and will continue to borrow from Russia’s social media manipulation playbook.

Sanchez

“If the amount of data harvested by TikTok is similar to all other social media platforms then there is a bigger problem to deal with as misinformation and deepfakes are threats that are quickly growing,” observes Antonio Sanchez, principal evangelist at Fortra. “This could impact election outcomes and there are 50 countries having elections this year.”

Senate detractors insist that this bill – or any legislation that puts any hint of rails around social media – will stifle innovation and impinge on civil liberties. Brandon Hart, CTO at Everything Blockchain, argues that this divest-or-be-banned mandate, aimed squarely at China, “could inadvertently infringe upon (civil) liberties, potentially eroding public trust and individual autonomy.”

Safety first

Hart advocates a more laissez-faire approach.

Hart

“A more fitting approach would be for the government to focus on identifying and elucidating potential threats, thereby empowering citizens to make informed decisions regarding the technologies they use,” Hart says.

Empowering citizens is all fine and well, but it is also true that the fundamental role of government is to keep the citizenry safe.

Clemens

“A nation-state must protect its citizens and today, protection extends beyond bodily or physical harm,” observes Daniel Clemens, CEO of ShadowDragon. “The protection of a democratic government’s citizens may mean the protection of citizens’ data, which now justifies the intervention of nations.”

Clemens opines that forcing China to divest would be a “great step in countering the influence and outcomes from TikTok against a free society that does not need to be influenced by a regime that ignores basic human rights.”

Clemens further notes that if China is made to divest, it still stands to strike a windfall in profits off the sale of TikTok. “China will continue to break international laws and push the boundaries on digital surveillance to advance its interests,” he says. “There’s no change there.”

Careful calibrations

Proponents also point out that this bill has been carefully calibrated to stop a specific, tangible threat: the likelihood that China will use TikTok strategically against the U.S.

Strand

“Consideration needs to be given to determine what kind of data has and is being collected, and to what extent,” says Chris Strand, vice president, risk and compliance, at Cybersixgill. “Even in the event that no personal data is collected, there can still be reason to take action to prevent the abuse of data that relates to behaviors, emotions, and preferences, that can lead to nefarious outcomes, identity theft, and military operational intelligence.”

Clemens also adds that the West’s private sector has been moving away from China for years due to China’s rampant intellectual property thievery and censorship. “This sets an important precedent that signifies the US Government’s willingness to step in when consumer data is threatened,” Clemens says. “I hope to see more material regulatory actions against China in the future.”

I’d note that the concerted efforts by Chinese officials to downplay the significance of this bill are a sure sign that it has teeth – and, indeed, would deter America’s rivals from wielding social media as a strategic weapon against the U.S.

There’s no baseless paranoia here. Quite the opposite. The imperative for legislative intervention couldn’t be any clearer. We’re deep into a digital Pearl Harbor. Which way will the U.S. Senate pivot? We’ll soon find out. I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


A close friend of mine, Jay Morrow, has just authored a book titled “Hospital Survival.”

Related: Ransomware plagues healthcare

Jay’s book is very personal. He recounts a health crisis he endured that began to manifest at the start of what was supposed to be a rejuvenation cruise.

Jay had to undergo several operations, including one where he died on the operating table and had to be resuscitated. Jay told me he learned about managing work stress, the fragility and preciousness of good health and the importance of family. We also discussed medical technology and how his views about patient privacy evolved. Here are excerpts of our discussion, edited for clarity and length:

LW: Your book is pretty gripping. It starts with you going on a cruise, but then ending up on this harrowing personal journey.

Morrow: That’s right. I was a projects manager working hard at a high-stress job and not necessarily paying any attention to the stress toll that it was taking on me over a number of years. Professionally, my plates were full. I was working 60 to 70 hours a week and that was probably too much.

Finally, my wife, Malia, said, ‘That’s enough!’ and she arranged for us to take a short cruise down the California coast to Mexico and back. By the time we got to the cruise terminal, my leg was hurting a little bit, it was just a little sore and I was limping a bit. Things quickly got a whole lot worse.

LW: It took quite some time to finally discover what was wrong.

Morrow: Initially, I went through a battery of different tests and even a series of operations, and they still weren’t sure. Finally, an orthopedic surgeon figured out that it was a cyst on my colon that would leak when I was under stress. This caused poisons to leak into my hip and infect the bone to the point where I contracted osteomyelitis, an excruciating bone infection.

All through this, I had to have three major operations, including removal of my femur. During one of my surgeries, I died on the operating table. I quit breathing. My heart stopped. There was no pulse or blood pressure and they had to use the paddles to bring me back to life and I was in a coma after that.

LW: How did technology come into play?

Morrow: Probably about every week I’d have to undergo an MRI. You’re inserted into a huge machine, and you’re not allowed to move. Then they spend what seems like hours checking various items. I couldn’t have survived without modern medical technology.

It helped the doctors, but it helped me even more so. The MRIs, the CAT scans and ultrasounds that I endured provided information that helped me understand what was going on. Knowing how things were progressing was very important to me.

LW: You told me your views on patient privacy shifted through the course of all this.

Morrow: It used to be you could just walk into the hospital and see a doctor with minimal fuss. Now, often times, you have to check in through layered technologies that require several levels of proving you are who you say you are. This is because of HIPAA privacy functions but also because of the waves of ransomware attacks against health care facilities.

LW: Were you at any point concerned about your privacy being invaded?

Morrow: What I came to realize is that survival trumps privacy. By default, you give up all your personal privacy to receive medical treatment in a tightly controlled environment. In fact, once you’re in a hospital, you need to be assertive. The hospital staff is overworked and most often will fall back on protocol, and sometimes protocol just does not work; sometimes you need to push back.

LW: What’s the main thing you’d like your book to convey?

Morrow: To survive a hospital, you’re going to need a care advocate other than yourself. I’m assertive by nature. But if I didn’t have my wife, and on occasion my mother or my daughter with me, I would probably not have survived. It took all of us to figure out how the place actually functioned, and how to actually get certain things done.

The nursing staff and orderlies do a good job of taking care of most things, but if you’re not assertive, you’re going to find yourself at the low end of the chain. Someone must make sure you’re not falling through the cracks.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


Charities and nonprofits are particularly vulnerable to cybersecurity threats, primarily because they maintain personal and financial data, which are highly valuable to criminals.

Related: Hackers target UK charities

Here are six tips for establishing robust nonprofit cybersecurity measures to protect sensitive donor information and build a resilient organization.

•Assess risks. Creating a solid cybersecurity foundation begins with understanding the organization’s risks. Many nonprofits are exposed to potential daily threats and don’t even know it. A recent study found only 27% of charities undertook risk assessments in 2023 and only 11% said they reviewed risks posed by suppliers. These worrying statistics underscore the need to be more proactive in preventing security breaches.

•Keep software updated. Outdated software and operating systems are known risk factors in cybersecurity. Keeping these systems up to date and installing the latest security patches can help minimize the frequency and severity of data breaches among organizations. Investing in top-notch firewalls is also essential, as they serve as the first line of defense against external threats.

•Strengthen authentication. Nonprofits can bolster their network security by insisting on strong login credentials. This means using longer passwords — at least 16 characters, as recommended by experts — in a random string of uppercase and lowercase letters, numbers and symbols. Next, implement multi-factor authentication to make gaining access even more difficult for hackers. (A short password-generation sketch follows this list.)

•Train staff regularly. A robust security plan is only as good as its weakest link. In most organizations, that exposure comes from the employees. Roughly 95% of cybersecurity incidents begin with a staff member clicking on a malicious link, usually in an email. A solid cybersecurity culture requires regular training on the latest best practices so people know what to look out for and what to do.

•Get board involvement. Effective nonprofit cybersecurity starts at the top. Just as it’s common practice to task board members with budget reviews for fraud prevention, organizations can appoint trustees to oversee cybersecurity explicitly. Board involvement can cut through red tape and implement improved safeguards for donor information and funds.

•Conduct internal reviews. In a 2023 survey, 30% of CISOs named insider threats one of the biggest cybersecurity threats for the year. The risk factor is higher among nonprofits, as they store data about high-net-worth donors. A disgruntled employee or persons with malicious intentions can gain unauthorized access to these records to demand payments from patrons, knowing full well they can afford it.
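To illustrate the password guidance above, here is a minimal Python sketch that uses the standard library’s secrets module to generate a random credential; the character set and default length are illustrative choices, not a formal policy:

```python
import secrets
import string

# Draw from uppercase and lowercase letters, digits and symbols,
# in line with the guidance above.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a random password of at least 16 characters."""
    if length < 16:
        raise ValueError("use at least 16 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password(20))
```

Pairing a generated credential like this with a password manager and multi-factor authentication covers the authentication tip end to end.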

Charity exposures

Threat actors continue to explore new methods to steal information. The usual attack vectors include:

•Data theft: Charities are rich in valuable data, whether in their email list or donor database. The hackers then sell the information or use it themselves for financial gain.

•Ransomware: This attack involves criminals holding a network and its precious data hostage until the enterprise pays the demanded amount.

•Social engineering: These attacks exploit human error to gain unauthorized access to organizational systems. Lack of proper staff training is the biggest culprit in this case.

•Malware: Hackers deploy malicious software designed to cause significant disruptions and compromise data integrity.

Amos

If any of these attacks proves successful, the consequences for nonprofits are often severe and far-reaching. In the immediate term, there’s the loss of funds or sensitive information. There’s also the risk of financial penalties for breaching data protection laws. Beyond financial and reputational loss, the ripple effects become more evident with a decline in donor confidence.

Cybersecurity is a must for charities. Cyber attacks have become an increasing concern, so charities and nonprofits must commit to safeguarding private data as part of their success. By adopting proactive measures, they can stay on top of cybersecurity trends and foster enduring relationships with donors.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

Zurich, Switzerland, Feb. 27, 2024 — Chipmaking has become one of the world’s most critical technologies in the last two decades. The main driver of this explosive growth has been the continuous scaling of silicon technology (widely known as Moore’s Law).

But these advances in silicon technology are slowing down as we reach the physical limits of silicon. For this reason, the industry has been investing heavily in nanomaterials like carbon nanotubes, graphene and TMDs, which are expected to enable chips with unprecedented functionality. However, making electronic devices with these extremely small materials at speed, with precision, and without compromising on quality has been a long-standing obstacle.

Nanotechnology company Chiral is today announcing a $3.8m funding round to address this challenge head on, innovating the way nanomaterials are integrated into devices. Its expertise in nanotechnology, automation, and high-precision robotics will be pivotal in the industry’s move beyond silicon to the next generation of electronics. The pre-seed funding round was co-led by Founderful (formerly Wingman Ventures) and HCVC and includes grants from ETH Zurich and Venture Kick.

Research has evidenced the use case and impact of nanomaterials across a range of electronics including high-performance transistors, low-power sensors, quantum devices, and many more. However, existing production methods, mostly based on chemistry, are not controllable, which has thus far prevented commercialization of these devices.

Chiral has built high-speed, automated, robotic machines that integrate nanomaterials into devices. These machines can robotically place micrometer-sized (or even nanometer-sized) materials on small chips. Repeating these motions in a fast and automated manner requires a very high level of engineering, which, when done right, ensures the precision and control that conventional chemistry-based methods lack.

The development of Chiral’s technology started as a national research project conducted at the Swiss Federal Institutes of Technology (ETH Zurich, EPFL, and Empa), in which the company’s co-founders, Seoho Jung, Natanael Lanz, and Andre Butzerin participated as PhD students. After 4 years of R&D, the research team finished its first prototype machine, which was 100 times faster than the other systems available at the time. The immediate reaction of the market to the prototype, which quickly led to the company’s first batch of pilot customers, convinced the co-founders that they should continue their activity as a company. They incorporated Chiral in June 2023 as a result.

Jung

Seoho Jung, Co-founder and CEO at Chiral, commented: “At Chiral, we are pioneering the next generation of electronic devices across industry. Chipmakers are aware of the potential of nanomaterials and we’re bringing that potential to life. This funding will accelerate the development of our next machine, which will unlock new market opportunities with its versatility and performance. We are also excited to scale our team to keep up with the growing demand and customer base.”

The global nanotechnology market size is projected to grow from $79.14 billion in 2023 to $248.56 billion by 2030, at a CAGR of 17.8% (Fortune Business Insights research). One of the largest chipmakers in the world, Taiwan Semiconductor Manufacturing Company (TSMC), presented its development roadmap showing nanomaterial-based transistors as its future architecture.

Pascal Mathis, Founding Partner at Founderful, commented: “We’re thrilled to join forces with Chiral alongside HCVC. Chiral’s AI- and robotics-based technology lets us envision a future where nanomaterial-based chips are being produced at the scale needed for commercialization – a major bottleneck up until now. We look forward to supporting Seoho, Natanael and André in their journey to introduce a new paradigm of chips beyond silicon.”

Alexis Houssou, Founding Partner at HCVC, commented: “With the current boom in AI applications, we stand at a pivotal moment where the slowdown of Moore’s law threatens to decelerate the pace of technological progress significantly. The team at Chiral has embarked on a critical mission to pave the way toward a groundbreaking post-silicon era, promising to transcend current limitations and unlock new possibilities for advancement. We couldn’t be more excited to support their mission, in collaboration with Founderful, as they build the future of computing infrastructure.”

Seoho Jung added: “In the future, it will be normal for electronic devices or chips to contain nanomaterials. The development roadmaps of the world’s leading chipmakers like TSMC, Samsung, and Intel all share our vision. We are confident that Chiral technology will empower the industry to make this transition faster.”

About Chiral: Chiral is a nanotechnology company that produces advanced electronic devices with nanomaterials. The core of the company’s technology is its robotic machines that enable the fully automated integration of clean nanomaterials with unprecedented precision and speed. Incorporated in 2023, the company is a spin-off from ETH Zurich and Empa, and is headquartered in Zurich, Switzerland. Learn more about Chiral here: https://www.chiralnano.com/ 

About Founderful: Founderful is Switzerland’s leading pre-seed fund. We give every founder our deepest understanding and highest levels of support, and together, we’re building the future of the Swiss startup ecosystem. For more information, please visit https://www.founderful.com/ or follow via LinkedIn.

About HCVC: HCVC is a venture capital firm that helps founders tackle hard problems with capital, resources and collaboration with $130m in assets under management. With offices in Paris, London and San Francisco, HCVC invests in pre-seed and seed companies that leverage breakthrough technology to digitize, automate and decarbonize the world. For more information, please visit https://www.hcvc.co/

Media contact: Bilal Mahmood, Stockwood Strategy, Mob: +44 (0) 771 400 7257

Achieving “digital trust” is not going terribly well globally.

Related: How decentralized IoT boosts decarbonization

Yet, more so than ever, infusing trustworthiness into modern-day digital services has become mission critical for most businesses. Now comes survey findings that could perhaps help to move things in the right direction.

According to DigiCert’s 2024 State of Digital Trust Survey results, released today, companies proactively pursuing digital trust are seeing boosts in revenue, innovation and productivity. Conversely, organizations lagging may be flirting with disaster.

“The gap between the leaders and the laggards is growing,” says Brian Trzupek, DigiCert’s senior vice president of product. “If you factor in where we are in the world today with things like IoT, quantum computing and generative AI, we could be heading for a huge trust crisis.”

DigiCert polled some 300 IT, cybersecurity and DevOps professionals across North America, Europe and APAC. I sat down with Trzupek and Mike Nelson, DigiCert’s Global Vice President of Digital Trust, to discuss the wider implications of the survey findings. My takeaways:

Bungled innovation

Digital trust refers to companies meeting the reasonable expectation that the digital services they offer not only protect users, but also uphold societal expectations and values. The tech sector has been preaching this for several years, acknowledging the fact that preserving trust, as digital services advance, is proving to be extremely difficult — yet crucial nonetheless.

“Trust has become absolutely paramount in the world,” Nelson observes. “Trust can be lost when you introduce digital connectivity — and digital connectivity is everywhere.”

DigiCert’s survey presents hard evidence that trust can be the basis of a winning business model. The top 33 percent of digital ‘trust leaders’ identified in DigiCert’s poll said they can respond more effectively to outages and incidents and found themselves to be in a much better position to effectively leverage innovation. Meanwhile, the bottom 33 percent found it increasingly difficult to tap into innovation.

This tug-and-pull is happening in an operating environment where digital innovation, from a global perspective, is being bungled. That’s the assessment of the 2024 Edelman Trust Barometer, a study highlighting the rapid erosion of digital trust, to the point of exacerbating polarized political views.

Trzupek

In such an environment, companies have a terrific opportunity to set themselves apart as being trustworthy, Trzupek argues. “The companies we view as the most trustworthy on the planet are able to provide very reliable digital services in consistent ways,” he says. “They’re able to connect people through trusted experiences.”

Emerging standards

Indeed, advanced technologies, new protocols and emerging best practices are at hand to help companies build and sustain trust.

And supply chain participants and individual consumers are eager recipients, naturally gravitating to trusted services, Nelson observes. Digital trust has, in fact, become a crucial factor in consumer purchasing decisions and corporate procurement strategies, he says.

This dynamic is highlighted by support of the Matter smart home devices standard. Matter is part of a fresh slate of technical standards that must take hold to enable massively interconnected, highly interoperable digital systems.

Since it was introduced two years ago, Matter has been embraced by some 400 manufacturers of IoT devices and close to one million Matter certificates have been issued, Nelson told me. “It’s not just in smart homes,” he says. “We’re building trust into devices in automotive and we’re seeing it in healthcare, as well.”

For its part, DigiCert has continued to advance its DigiCert ONE platform of tools and services to help companies manage their digital certificates and Public Key Infrastructure (PKI). DigiCert’s clients and prospects are steadily modernizing the way digital connections get authenticated and sensitive assets get encrypted, Trzupek told me.

“In visiting our customers over the past 18 months, I’ve seen a newfound energy for closely examining and more effectively managing PKI infrastructure, both internally and externally,” he says. “Companies are moving to update decades-old PKI systems because they realize how pivotal this is to digital trust and everything they do.”

DigiCert has also been a leader in championing the concept of “crypto agility” — the capacity to update and adapt cryptographic routines swiftly — something Trzupek and Nelson argued is rapidly becoming a business imperative.
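To make the idea concrete, here is a minimal Python sketch of crypto agility, assuming the open-source cryptography library; the configuration value, algorithm names and payload are illustrative, not DigiCert’s implementation:

```python
# The signing algorithm is chosen from configuration rather than hard-coded,
# so it can be swapped (for example, to a future post-quantum scheme) without
# rewriting application code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding

SIGNING_ALGORITHM = "ecdsa-p256-sha256"  # hypothetical config value

def sign(private_key, message: bytes, algorithm: str = SIGNING_ALGORITHM) -> bytes:
    if algorithm == "ecdsa-p256-sha256":
        return private_key.sign(message, ec.ECDSA(hashes.SHA256()))
    if algorithm == "rsa-pss-sha256":
        return private_key.sign(
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
    raise ValueError(f"unsupported signing algorithm: {algorithm}")

# Example: generate an EC key and sign a payload under the configured algorithm.
key = ec.generate_private_key(ec.SECP256R1())
signature = sign(key, b"renewal request payload")
```

The design point is that switching algorithms becomes a configuration change rather than a code rewrite, which is what makes rapid migration practical.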

A starting point

Nelson

Leveraging advanced tools and embracing emerging best practices is all well and good for the trust leaders. But what about the laggards? For the organizations just starting down the path towards achieving and sustaining digital trust, Nelson outlined this framework:

•Knowledge and inventory: Begin by taking inventory of cryptographic assets and understanding how they’re utilized within the organization (see the inventory sketch after this list).

•Policies and enforcement: Next, establish organizational policies that outline appropriate and inappropriate behaviors regarding digital assets. Ensure that these policies are enforceable.

•Centralized security: Streamline control over various business units that may have disparate practices, thereby improving visibility and the ability to mitigate risks.

•Factor in business impact: Finally, prioritize security efforts based on the potential business impact. Evaluate the consequences should certain assets go offline; focus on protecting the most critical areas first.
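As a concrete starting point for the first step, here is a minimal Python sketch, using only the standard library, that inventories one class of cryptographic asset: the TLS certificates a set of hosts present, along with their expiry dates. The hostnames are placeholders.

```python
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["www.example.com", "api.example.com"]  # hypothetical asset list

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Connect over TLS and return the expiry time of the presented certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)

for host in HOSTS:
    expires = cert_expiry(host)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{host}: certificate expires {expires:%Y-%m-%d} ({days_left} days left)")
```

The same loop generalizes to internal endpoints, code-signing certificates and other keys once they are catalogued.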

Lagging really is no longer an option. Geo-political conflict, remote work exposures, unpredictable usage of generative AI; these all stand to further undermine digital trust for months and years to come.

Will the laggards follow the trust leaders? I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

AI chatbots are computer programs that talk like humans, gaining popularity for quick responses. They boost customer service, efficiency and user experience by offering constant help, handling routine tasks, and providing prompt and personalized interactions.

Related: The security case for AR, VR

AI chatbots use natural language processing, which enables them to understand and respond to human language, along with machine learning algorithms that help them improve their performance over time by learning from interaction data.

In 2022, 88% of users relied on chatbots when interacting with businesses. These tools saved 2.5 billion work hours in 2023 and helped raise customer satisfaction to 69%, at a cost of $0.50 to $0.70 per interaction. Forty-eight percent of consumers favor their prioritization of efficiency.

Popular AI platforms

Communication channels like websites, messaging apps and voice assistants are increasingly adopting AI chatbots. By 2026, the integration of conversational AI in contact centers will lead to a substantial $80 billion reduction in labor costs for agents.

This widespread integration enhances accessibility and user engagement, allowing businesses to provide seamless interactions across various platforms. Examples of AI chatbot platforms include:

•Dialogflow: Developed by Google, Dialogflow is renowned for its comprehension capabilities. It excels in crafting human-like interactions in customer support. In e-commerce, it facilitates smooth product inquiries and order tracking. Health care benefits from its ability to interpret medical queries with precision.

•Microsoft Bot Framework: Microsoft’s offering is a robust platform providing bot development, deployment and management tools. In customer support, it seamlessly integrates with Microsoft’s ecosystem for enhanced productivity. E-commerce platforms leverage its versatility for order processing and personalized shopping assistance tasks. Health care adopts it for appointment scheduling and health-related inquiries.

•IBM Watson Assistant: IBM Watson Assistant stands out for its AI-powered capabilities, enabling sophisticated interactions. Customer support experiences a boost with its ability to understand complex queries. In e-commerce, it aids in crafting personalized shopping experiences. Health care relies on it for intelligent symptom analysis and health information dissemination.

Checklist of vulnerabilities

Several potential attack vectors can be exploited in AI chatbots, such as:

•Input validation and sanitation: User inputs are gateways, and ensuring their validation and sanitation is paramount. Neglecting this can lead to injection attacks, jeopardizing user data integrity (a minimal sanitization sketch follows this list).

•Authentication and authorization vulnerabilities: Weak authentication methods and compromised access tokens can provide unauthorized access. Inadequate authorization controls may result in unapproved interactions and data exposure, posing significant security threats.

•Privacy and data leakage vulnerability: Handling sensitive user information requires robust measures to prevent breaches. Data leakage compromises user privacy and has legal implications, emphasizing the need for stringent protection protocols.

•Malicious intent or manipulation: AI chatbots can be exploited to spread misinformation, execute social engineering attacks or launch phishing campaigns. Such manipulation can harm user trust, tarnish brand reputation and have broader social consequences.
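To make the first item concrete, here is a minimal Python sketch of validating and sanitizing a chatbot message before it is processed; the length limit and allowlist pattern are illustrative assumptions, not a one-size-fits-all rule:

```python
import html
import re

MAX_INPUT_LENGTH = 500
# Allow letters, digits, whitespace and common punctuation; reject everything else.
ALLOWED_PATTERN = re.compile(r"^[\w\s.,!?@'\"()-]+$")

def sanitize_user_input(raw: str) -> str:
    """Validate and sanitize a chatbot message before further processing."""
    text = raw.strip()
    if not text or len(text) > MAX_INPUT_LENGTH:
        raise ValueError("input is empty or too long")
    if not ALLOWED_PATTERN.match(text):
        raise ValueError("input contains disallowed characters")
    # Escape HTML so the text is safe to echo back into a web interface.
    return html.escape(text)

print(sanitize_user_input("Where is my order 12345?"))
```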

Machine learning helps AI chatbots adapt to and prevent new cyber threats. Its anomaly detection identifies suspicious behavior, proactively defending against potential breaches. Implement systems that continuously monitor and respond to security incidents for swift and effective defense.
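As an illustration of that kind of anomaly detection, here is a minimal sketch assuming the scikit-learn library; the two features (message length and requests per minute) and the traffic values are hypothetical placeholders:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" chatbot traffic: [message_length, requests_per_minute]
normal_traffic = np.array([
    [42, 2], [55, 1], [38, 3], [60, 2], [47, 1], [51, 2], [44, 3], [58, 1],
])

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(normal_traffic)

# Score new sessions; predict() returns -1 for anomalies, 1 for normal traffic.
new_sessions = np.array([[50, 2], [480, 90]])
for features, label in zip(new_sessions, model.predict(new_sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(f"session {features.tolist()} -> {status}")
```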

Best security practices

Implementing these best practices establishes a robust security foundation for AI chatbots, ensuring a secure and trustworthy interaction environment for organizations and users:

Amos

•Guidelines for organizations and developers: Conduct periodic security assessments and penetration testing to identify and address vulnerabilities in AI chatbot systems.

•Multi-factor authentication: Implement multi-factor authentication for administrators and privileged users to enhance access control and prevent unauthorized entry. Using MFA can prevent 99.9% of cybersecurity attacks.

•Secure communication channels: Ensure all communication channels between the chatbot and users are secure and encrypted, safeguarding sensitive data from potential breaches.

•Educating users for safe interaction: Provide clear instructions on how users can identify and report suspicious activities, fostering a collaborative approach to security.

•Avoiding sensitive information sharing: Encourage users to refrain from sharing sensitive information with chatbots, promoting responsible and secure interaction.

While AI chatbots have cybersecurity vulnerabilities, adopting proactive measures like secure development practices and regular assessments can effectively mitigate risks. These practices allow AI chatbots to provide valuable services while maintaining user trust and organizational security.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

San Mateo, Calif., Feb. 13, 2024 – The U.S. White House announced a groundbreaking collaboration between OpenPolicy and leading innovation companies, including Kiteworks, which delivers data privacy and compliance for sensitive content communications through its Private Content Network.

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) Artificial Intelligence Safety Institute Consortium (AISIC) will act as a collaborative platform where both public sector and private sector leading organizations will provide guidance on standards and methods in the development of trustworthy AI.

The Kiteworks platform provides customers with a Private Content Network that enables them to employ zero-trust policy management in the governance and protection of sensitive content communications, including the ingestion of sensitive content into generative AI (GenAI).

Kiteworks unifies, tracks, controls, and secures sensitive content moving within, into, and out of organizations. With Kiteworks, organizations can significantly improve risk management and ensure regulatory compliance on all sensitive content communications.

Raimondo

The consortium, AISIC, brings together over 200 of the nation’s foremost AI stakeholders to support the development and deployment of trustworthy and safe AI technologies. This initiative aligns with President Biden’s Executive Order on Artificial Intelligence, focusing on key priorities, such as red-teaming, capability evaluations, risk management, safety, and security guidelines, and watermarking synthetic content.

According to U.S. Commerce Secretary Gina M. Raimondo, “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

Freestone

Tim Freestone, Chief Strategy Officer at Kiteworks, expressed his enthusiasm about the collaboration: “Kiteworks’ selection underscores our commitment to protect sensitive content from being ingested into public GenAI large language models (LLMs). Kiteworks is very excited to play a pivotal role as a groundbreaking member of the NIST AI Safety Institute Consortium, tapping our expertise in data security and compliance to help guide the responsible development and management of AI solutions.”

For further insights into this groundbreaking collaboration and Kiteworks’ involvement, Kiteworks’ Freestone is available for interviews and discussions.

About Kiteworks: Kiteworks’ mission is to empower organizations to effectively manage risk in every send, share, receive, and save of sensitive content. The Kiteworks platform provides customers with a Private Content Network that delivers content governance, compliance, and protection. The platform unifies, tracks, controls, and secures sensitive content moving within, into, and out of their organization, significantly improving risk management and ensuring regulatory compliance on all sensitive content communications. Headquartered in Silicon Valley, Kiteworks protects over 100 million end users for over 3,650 global enterprises and government agencies.