‘Clean Code’ is a simple concept rooted in common sense. This software-writing principle cropped up some 50 years ago and might seem quaint in today’s era of speedy software development.

Related: Setting IoT security standards

At Black Hat 2023, I had the chance to visit with Olivier Gaudin, founder and co-CEO, and Johannes Dahse, head of R&D, at SonarSource, a Geneva, Switzerland-based supplier of systems for achieving Clean Code. Gaudin outlined the characteristics all coding should have and Dahse explained how healthy code can be fostered. For a drill down, please give the accompanying podcast a listen.

Responsibility for Clean Code, Gaudin told me, needs to be placed with the developer, whether he or she is creating a new app or an update. Caring for source code when developing and deploying applications at breakneck speed mitigates technical debt – the snowballing rework costs that pile up when defects and shortcuts go unaddressed.

Guest experts: Olivier Gaudin, co-CEO, Johannes Dahse, Head of R&D, SonarSource

“If you try to go faster but don’t take good care of the code, you are actually going slower,” Gaudin argues. “Any change is going to cost you more than it should because your code is bad, dirty, junky or whatever you want to call it that’s the opposite of clean code.”

What’s more, Clean Code improves security — by reinforcing “shift left,” the practice of testing as early as feasible in the software development lifecycle.
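To make the shift-left idea concrete, here is a minimal sketch of the kind of flaw an early static-analysis gate catches, next to its clean counterpart. The function names are hypothetical, and this illustrates the principle rather than anything specific to SonarSource's tooling:

```python
import sqlite3

def get_user_dirty(conn: sqlite3.Connection, username: str):
    # "Dirty" code: builds SQL by string interpolation, inviting SQL injection.
    # A static-analysis check run early in CI would flag this line.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()

def get_user_clean(conn: sqlite3.Connection, username: str):
    # Clean code: a parameterized query lets the driver handle escaping,
    # removing the injection risk before the code ever ships.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchone()
```

Catching the first version at commit time costs minutes; catching it in production costs an incident response.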

Gaudin and Dahse make a persuasive argument that Clean Code can and should serve as the innermost layer of security. The transformation progresses. I’ll keep watch and keep reporting.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)

Something simply must be done to slow, and ultimately reverse, attack surface expansion.

Related: What Cisco’s buyout of Splunk really signals

We’re in the midst of driving towards a dramatically scaled-up and increasingly connected digital ecosystem. Companies are obsessed with leveraging cloud-hosted IT infrastructure and the speedy software development and deployment that goes along with that.

And yet it remains all too easy for malicious hackers to gain deep access, steal data, spread ransomware, disrupt infrastructure and maintain long-term unauthorized access.

I heard a cogent assessment of the shift that must take place at the Omdia Analyst Summit at Black Hat USA 2023. In a keynote address, Omdia’s Eric Parizo, managing principal analyst, and Andrew Braunberg, principal analyst, unveiled an approach they call “proactive security.”

What I came away with is that many of the new cloud-centric security frameworks and tools fit as components of proactive security, while familiar legacy solutions, like firewalls and SIEMs, can be categorized as either preventative or reactive security. This is a useful way to look at it.

Rising reliance on proactive tools seems inevitable, although legacy tools continue to advance and have their place. The Omdia analysts called out a handful of key proactive methodologies: Risk-Based Vulnerability Management (RBVM), Attack Surface Management (ASM), and Incident Simulation and Testing (IST).

RBVM solutions don’t merely identify vulnerabilities; they quantify and prioritize them, making risk management more strategic. Notably, some 79 percent of enterprises recently polled by Omdia consider this risk-ranking capability indispensable.
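As a rough illustration of what “risk-based” means in practice, consider a toy scoring pass that weighs raw severity against asset criticality and active exploitation. The fields and weights here are hypothetical, not any vendor's actual model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # raw severity, 0-10
    asset_criticality: int  # 1 (lab box) to 5 (revenue-critical system)
    exploited_in_wild: bool

def risk_score(f: Finding) -> float:
    # Weight raw severity by business context and active exploitation, so a
    # medium CVE on a crown-jewel asset can outrank a critical one on a lab box.
    score = f.cvss * (f.asset_criticality / 5)
    return score * 2 if f.exploited_in_wild else score

findings = [
    Finding("CVE-2023-0001", 9.8, 1, False),  # critical, but on a throwaway box
    Finding("CVE-2023-0002", 6.5, 5, True),   # medium, but exploited and central
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, round(risk_score(f), 2))  # the medium CVE ranks first
```

The point is the ordering, not the arithmetic: context, not raw CVSS, drives the remediation queue.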

Last Watchdog followed up with Braunberg to ask him, among other things, what RBVM solutions signal about the ramping up of proactive security. Here’s what he had to say:

LW: What is ‘proactive security’ and why is it gaining traction?

Braunberg: Proactive solutions seek out and mitigate likely threats and threat conditions before they pose a danger to the environment. These tools provide visibility, assessment, and control of an organization’s attack surface and an understanding of viable attack paths based on asset exposures and the effectiveness of deployed security controls. Omdia believes it is gaining traction because, for too long, enterprises have been investing in security solutions that only help after an attack is already on their doorstep – or has broken down the door! Proactive Security finally helps get ahead of adversaries, finding and fixing the opportunities they seek to exploit, before they can exploit them.

LW: Legacy on-prem tools tend to be preventative, advanced on-prem tools are reactive and the shiny new cloud-centric solutions are proactive. Is that fair?

Braunberg: Well, it’s fair to say that modern software defined architectures, such as cloud, can introduce many more potential exposures and that a proactive approach is particularly effective in identifying and controlling configuration drift in these environments. But Omdia believes that a mix of preventative, reactive, and proactive tools are appropriate across all components of the digital landscape.

LW: Your ‘continuous security protection lifecycle’ argument suggests we’re in an early phase of what: co-mingling, consolidation or integration of these three categories?


Braunberg: Omdia sees several trends at work in the market today. There is a strong trend of consolidation in proactive security segments. We predict that proactive security functionality will roll up into comprehensive proactive security platforms over the next several years. But we also see traditional reactive security suites incorporating proactive features. So, we expect consolidation, co-mingling, and integration for the foreseeable future.

LW: How would you characterize where we are today?

Braunberg:  There is significant innovation and investment in many traditional segments of proactive security. This is driven primarily by a desire to support better risk-based analytics to prioritize risk and better inform remediations. But as noted, we are also in the early stages of market consolidation.

LW: What does Cisco’s $28 billion acquisition of Splunk signal about the trajectory that network security is on?

Braunberg: It’s less about network security than about filling a need for Cisco. The networking giant sees Splunk as a premium brand in a market segment, SIEM, that it had yet to enter, giving Cisco a strong opportunity to upsell existing Cisco Secure customers.

LW: Won’t companies have to rethink and revamp long-engrained budgeting practices?

Braunberg: Absolutely. Omdia believes that over the coming years, enterprises should and will increase the percentage of their cybersecurity technology budgets allocated for proactive security solutions. Not only will this provide a forward-leaning approach to get ahead of threats and threat conditions before they can hurt the enterprise, but it will also reduce cybersecurity risk, in turn providing improved ROI for the security solution.

LW: How does ‘risk-based vulnerability management’ factor in?

Braunberg: RBVM will play a key role in proactive strategies. These products are already expanding into more comprehensive tools for addressing security hygiene issues across the entire digital domain for both production code and code in development.

LW: Can you characterize what’s happening in the field today with early adopters of this approach?

Braunberg: Omdia’s recent primary research, the 2023 Omdia Cybersecurity Decision Maker Survey, querying global security practitioners, found an overwhelming need to rank vulnerabilities and to prioritize next actions based on risk. Early adopters of proactive tools are primarily focused on this need.

LW: What are you hearing from these early adopters?

Braunberg: We’re hearing about the obvious benefit of more efficient, effective security practices from specific product categories like risk-based vulnerability management, which prioritizes remediation decisions based on contextual risk to the organization. But we’re also hearing increased emphasis on the core tenets of Proactive Security: visibility and risk.

Proactive security helps underscore the importance of being able to detect, define, categorize, and understand the risk of all assets in the extended enterprise environment. From there, it becomes possible to identify opportunities to address threat conditions, such as missing software patches, vulnerable configurations, or even poor practices and policies.

Going forward, this will drive further maturation around security risk, leading to more dedicated risk teams and to ROI from security solutions being judged by their ability to reduce risk.

LW: Five years from now, will it be equal parts proactive, preventative and reactive — or some other mix?

Braunberg: It’s too early to say what the pie chart might look like, but for most organizations today, the priority is to increase the emphasis on and shift toward Proactive Security, from both a strategic and technical planning perspective. Omdia believes it’s time to shift the conversation to one of ROI based on risk reduction, and vendors offering Proactive Security solutions will be best positioned to make that case.

LW: Anything else?

Braunberg: We just published our new report on the Fundamentals of Proactive Security, which is a 6,000-word deep dive on the topic. It’s available to Omdia Cyber clients. Plus, we’ll have more on Proactive Security on our sister site Dark Reading, and elsewhere, in the near future.


APIs. The glue of hyperconnectivity; yet also the wellspring of risk.

Related: The true scale of API breaches

I had an enlightening discussion at Black Hat USA 2023 with Traceable.ai Chief Security Officer Richard Bird about how these snippets of code have dramatically expanded the attack surface in ways that have largely been overlooked.

Please give the accompanying podcast a listen. Traceable supplies systems that treat APIs as delicate assets requiring robust protection. At the moment, Bird argues, that’s not how most companies view them.

All too many organizations, he told me, have no clue about how many APIs they have, where they reside and what they do. A good percentage of APIs, he says, lie dormant – low hanging fruit for hackers who are expert at ferreting them out to utilize in multi-stage breaches.
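Surfacing those dormant endpoints is conceptually simple once gateway logs are in hand. Here's a minimal sketch, with a made-up inventory and log format, of flagging routes that exist but haven't seen legitimate traffic recently:

```python
from datetime import datetime, timedelta

# Hypothetical inputs: a route inventory, plus each route's last-seen
# timestamp parsed from API gateway access logs.
inventory = {"/v1/users", "/v1/orders", "/v0/export", "/internal/debug"}
last_seen = {
    "/v1/users": datetime(2023, 8, 1),
    "/v1/orders": datetime(2023, 7, 28),
    "/v0/export": datetime(2022, 11, 3),   # stale legacy route
}

def dormant_routes(now: datetime, window_days: int = 90):
    cutoff = now - timedelta(days=window_days)
    for route in sorted(inventory):
        seen = last_seen.get(route)
        if seen is None or seen < cutoff:
            # Never observed, or silent beyond the window: review or retire.
            yield route, seen

for route, seen in dormant_routes(datetime(2023, 8, 9)):
    print(route, "last seen:", seen or "never")
```

The hard part in the real world, as Bird notes, is that many organizations don't even have the inventory to start from.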

Companies have been obsessed with using APIs to unlock business value while turning a blind eye to API exposures.

Guest expert: Richard Bird, CSO, Traceable.ai

What’s more, APIs continue to fuel speedy software development in an environment where standardization has been absent, Bird told me.

“There hasn’t been a lot of motion around the idea of developing boundaries and protocols from an industry standpoint,” he says.

The Biden-Harris Administration has stepped forward to stir the pot.

“Compliance is implied and inferred in the most recent executive orders and in other items coming out of NIST and the SEC,” Bird noted. “They’re basically saying, ‘Look, you have this data transport capability with APIs, so you need to include them in your security requirements.’ ”

The transformation progresses. I’ll keep watch and keep reporting.


If you’re a small business looking for the secret sauce to cybersecurity, the secret is out: start with a cybersecurity policy and make the commitment to security a business-wide priority.

Related: SMBs too often pay ransom

Small businesses, including nonprofit organizations, are not immune to cyberattacks. The average cost of a data breach was $4.45 million in 2023, according to IBM’s Cost of a Data Breach Report, and over 700,000 small businesses were targeted in cyberattacks in 2020, according to the Small Business Administration.

Nonprofits are equally at risk, and often lack cybersecurity measures. According to Board Effect, 80% of nonprofits do not have a cybersecurity plan in place.

Given the risk involved, small businesses and nonprofits must consider prioritizing cybersecurity policies and practices to stay protected, retain customers, and remain successful. Financial information is one of the most frequently targeted areas, so it’s crucial your cybersecurity policies start with your finance team.

Taking an active role

Your cybersecurity policy should address your employees and technology systems.

Employee training is crucial. According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches involved a human element, with phishing and text-message phishing (smishing) scams among the leading causes.

Training team members regularly with real-life scenarios will help them spot potential threats and keep them from exposing your business.


It’s also essential your business evaluates its technology and keeps it regularly updated to the latest security standards. For example, your accounting technology should have features that work to protect your data, like internal controls, multi-factor authentication, or an audit trail that documents changes to your data.

Consider these four best practices as the core of your finance team and business’ cybersecurity plan:

•Regularly update and back up your data systems. Security plays a crucial role in your technology. In the era of cloud computing, where programs and your information can be accessed anywhere, your business needs to keep its software up to date and back up critical systems. Cloud vendors often handle the security and backup processes automatically, so examine your technology and see if that is the case. If not, implement a plan to back up your information regularly and update your technology to the latest versions. These backups can also form the basis of a disaster recovery plan in the event of a natural disaster.

•Set access privileges and internal controls. Best practice is to require teams to use enhanced security measures like strong passwords that are changed regularly and multi-factor authentication to ensure your team is the only one accessing financial information.

Also consider creating a policy for which employees can access which types of data. When multiple members of your team can easily access a wide range of data without internal controls, it creates vulnerability. Your team’s information is crucial, especially regarding financial information. Your technology should feature internal controls, which segment your company’s information by title or role and grant each employee access to only the data they need.

•Monitor team member access through audit trails. Your accounting technology should be equipped with an audit trail that logs every change made to your data, including the user and the workstation from which the change was made. Monitoring who has made what changes protects your business and holds team members accountable for safe IT practices; a minimal sketch of how internal controls and audit logging fit together follows this list.

•Maintain adequate IT compliance. Every business has a standard of IT compliance that team members are accountable for upholding. First, it is crucial to have systems that adhere to regulations, laws, and general industry standards. If you have concerns about protecting your financial data, consider hiring a data protection officer or an outside firm to help you maintain compliance.
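Here is the minimal sketch promised above of how internal controls and an audit trail fit together in code. The role names and fields are illustrative, not tied to any particular accounting product:

```python
import json
from datetime import datetime, timezone

# Illustrative role-to-permission map: each role sees only the data it needs.
PERMISSIONS = {
    "bookkeeper": {"invoices.read", "invoices.write"},
    "auditor": {"invoices.read", "audit_log.read"},
}

AUDIT_LOG = []

def change_record(user: str, role: str, workstation: str, action: str, record_id: str):
    # Internal control: the role must explicitly grant the requested action.
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{user} ({role}) may not perform {action}")
    # Audit trail: every change logs who, from where, what, and when.
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "workstation": workstation,
        "action": action,
        "record": record_id,
    })

change_record("alice", "bookkeeper", "WS-042", "invoices.write", "INV-1001")
print(json.dumps(AUDIT_LOG, indent=2))
```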

No one person can prevent cyberattacks alone. The secret sauce is that it takes a thorough cybersecurity policy and a team committed to keeping your business finance and accounting teams safe. Stay proactive. Stay educated. Stay safe.

About the essayist: Neil Taurins is the General Manager of Nonprofit Solutions at MIP Fund Accounting by Community Brands. He has been with the company for over 12 years and is passionate about working with government organizations and municipalities to provide them with solutions to improve efficiency.

New government rules, coupled with industry standards meant to give formal shape to the Internet of Things (IoT), are rapidly taking hold around the globe.

Related: The need for supply chain security

This is to be expected. After all, government mandates combined with industry standards are the twin towers of public safety. Without them, the integrity of our food supplies, the efficacy of our transportation systems and the reliability of our utilities would not be what they are.

When it comes to IoT, we must arrive at specific rules of the road if we are to tap into the full potential of smart cities, autonomous transportation and advanced healthcare.

In the absence of robust, universally implemented rules of the road, cybercriminals will continue to have the upper hand and wreak even more havoc than they now do. Threat actors all too readily compromise, disrupt and maliciously manipulate the comparatively simple IoT systems we have in operation today.

I had an eye-opening conversation about all of this with Steve Hanna, distinguished engineer at Infineon Technologies, a global semiconductor manufacturer based in Neubiberg, Germany. We went over how governments around the world are stepping up their efforts to impose IoT security legislation and regulations designed to keep users safe.

This is happening at the same time as tech industry consortiums are hashing out standards to universally embed security deep inside next-gen IoT systems, down to the chip level. There’s a lot going on behind the scenes. For a full drill down on my discussion with Hanna, please view the accompanying videocast. Here are a few takeaways:

Minimum requirements

A few years back, a spate of seminal IoT hacks grabbed the full attention of governments worldwide. The Mirai botnet, initially discovered in October 2016, infected Internet-connected routers, cameras and digital video recorders at scale. Mirai then carried out massive distributed denial-of-service (DDoS) attacks that knocked down Twitter, Netflix, PayPal and other major web properties.

Then in 2017, clever attackers managed to compromise a smart thermometer in a fish tank, thereby gaining access to the high-roller database of a North American casino. Soon thereafter, white hat researchers discovered and disclosed pervasive vulnerabilities in hundreds of millions of smart home devices such as cameras, thermostats and door locks.

In 2018, UK regulators got the regulatory ball rolling, taking steps that would eventually result in mandated minimum requirements for IoT data storage, communications and firmware update capabilities. The U.S., other European nations and Singapore soon began moving in this direction as well. The U.S. National Institute of Standards and Technology (NIST), for instance, has since developed a comprehensive set of recommended IoT security best practices.

In 2023, the U.S. announced a cybersecurity certification and labeling program to help Americans more easily choose smart devices that are safer and less vulnerable to cyberattacks. The new “U.S. Cyber Trust Mark” program raises the bar for cybersecurity across common devices, including smart refrigerators, smart microwaves, smart televisions, smart climate control systems, smart fitness trackers, and more.

Guest expert: Steve Hanna, Distinguished Engineer, Infineon Technologies

“We’re moving to a world where IoT cybersecurity will be table stakes,” Hanna told me. “It’s going to be required in every IoT product, and governments will have their own checklist of IoT requirements, similar to what we have for electrical equipment.”

Harmonizing the baseline

The efforts by regulators and technologists to establish a baseline for IoT safety have, as might’ve been expected, given rise to conflicts and redundancies. “At the moment, we have a Tower of Babel situation where each nation has its own set of requirements and it’s a big challenge for a manufacturer how they get their product certified in multiple places,” Hanna says.

Harmonizing different requirements across multiple nations needs to happen, Hanna argues, and this quest is made even more challenging by the sprawling array of IoT device types. This is, in fact, precisely what a tech industry consortium calling itself the Connectivity Standards Alliance has set out to tackle head-on, he says.

“Basically, we’re creating, shall we say, one certification to rule them all,” Hanna told me. “We’re going to bring together all the requirements from these national and regional certifications and say if you get this one certification from CSA, then that indicates you’re compliant with all of the national or regional requirements, no matter where they might come from. And your product can then be sold in all of those different regions.”

The technologists are striving to resolve a profound pain point, in particular, for IoT device makers facing the prospect of needing to test and certify their IoT products in 50 different locales. “If I can test it once against a set of requirements that I understand, then that’s much less expensive,” Hanna says.

Safety labels

The give-and-take vetting of emerging standards that’s now unfolding reflects a tried-and-true dynamic; it’s how we arrived at having detailed food additive labels we can trust on every item on supermarket shelves and it’s why we can be sure no electrical appliance in our homes poses an egregious hazard.

The ramping up of IoT rulemaking and standards-building portends a day when we won’t have to worry as much as we now do about directly encountering badness on the Internet.

I asked Hanna about what individual citizens and small business owners can do, and he indicated that staying generally informed should be enough. He noted that regulators and tech industry leaders are cognizant of the need to foster consumer awareness about the incremental steps forward. The push behind the new Matter home automation connectivity standard, introduced in late 2022, is a case in point.

“We can’t expect the consumer to be an expert on IoT cybersecurity, that’s just not realistic,” he says. “What we can ask them to do is to look for these security labels coming soon to IoT products . . . you just can’t buy an unsafe extension cord anywhere today; only the ones with the proper safety inspections get sold. I hope the same will be true in five or 10 years for IoT products, that all of them are adequately secure and they all have that label.”

This is all part of a maturation process that must happen for digital systems to rise to the next level. I’ll keep watch and keep reporting.


Phone number spoofing involves manipulating caller ID displays to mimic legitimate phone numbers, giving scammers a deceptive veil of authenticity.

Related: The rise of ‘SMS toll fraud’

The Bank of America scam serves as a prime example of how criminals exploit this technique. These scammers impersonate Bank of America representatives, spoofing the bank’s genuine phone number (+18004321000) to gain trust and deceive their targets.

Victims of the Bank of America scam have shared their experiences, shedding light on the deceptive tactics employed by these fraudsters. One common approach involves a caller with an Indian accent posing as a Bank of America representative. They may claim that a new credit card or checking account has been opened in the victim’s name, providing specific details such as addresses and alleged deposits to sound convincing.

Scam tactic exposed

Nicolas Girard shared his experience with the Bank of America scam. He received a call claiming a new checking account was opened in his name, complete with his correct address and a $5,000 deposit. To verify their authenticity, Nicolas asked for proof, but the scammers insisted he Google the Bank of America number.

Suspicious, he trusted his instincts and called the bank directly. Genuine representatives confirmed it was a scam, with no new accounts linked to his Social Security number. Research revealed the widespread practice of spoofing the Bank of America number.

Nicolas took immediate action, freezing his credit accounts to protect himself. His story serves as a reminder to stay vigilant against phone scams, ensuring our financial well-being and personal security.

Scope of the threat


Search statistics make the scale of the problem evident: the spoofed Bank of America phone number, +18004321000, drew almost 600 lookups per month, for an estimated 6,000-plus searches in 2023 alone. This statistic alone highlights the alarming and widespread nature of this scam. It serves as a stark reminder of the importance of raising awareness about phone number spoofing and its potential risks.

It is crucial to be aware of the red flags associated with phone scams like the Bank of America scam. Victims have reported several warning signs, such as unsolicited calls, requests for sensitive information, and high-pressure tactics. Recognizing these indicators can help individuals protect themselves from falling victim to such scams.

To combat phone harassment and protect against scams like the Bank of America scam, the tellows caller ID app offers valuable features. This app provides reverse phone number lookup, allowing users to identify potential scammers or suspicious callers. With a vast database of reported numbers and user feedback, the app provides essential information to help individuals make informed decisions about answering or blocking calls.

Practical protection

To safeguard yourself from falling victim to phone number spoofing scams, consider the following preventive measures:

•Verify Caller Authenticity: Independently contact your bank using official contact information to verify the legitimacy of any calls claiming to be from financial institutions.

•Be Wary of Sharing Personal Information: Never share sensitive information, such as account numbers or Social Security numbers, over the phone unless you initiated the call and are confident in the caller’s identity.

•Install tellows Caller ID App: Use the tellows caller ID app to identify potential scam calls and protect yourself from phone harassment. The app’s reverse phone number lookup feature provides insights into caller reputation and user-reported experiences.

By using the tellows app, users can identify and block unwanted and potentially scam calls. With its extensive global database and user-generated ratings, tellows provides insights into caller identities and their reputation. This empowers users to make informed decisions about answering or blocking calls, saving them time and frustration.

Phone number spoofing poses a growing threat. Stay vigilant and informed to protect against such fraud.

About the essayist: Richard Grant is a country content manager at tellows. He is responsible for overseeing the content strategy, user-generated ratings and data management for a specific country. Richard’s expertise in call identification and spam detection contributes to tellows’ mission of empowering individuals to avoid annoying and potentially fraudulent calls.

LAS VEGAS – Just when we appeared to be on the verge of materially shrinking the attack surface, along comes an unpredictable, potentially explosive wild card: generative AI.

Related: Can ‘CNAPP’ do it all?

Unsurprisingly, generative AI was in the spotlight at Black Hat USA 2023, which returned to its full pre-Covid grandeur here last week.

Maria Markstedter, founder of Azeria Labs, set the tone in her opening keynote address. Artificial intelligence has been in commercial use for many decades; Markstedter recounted why this potent iteration of AI is causing so much fuss just now.

Generative AI makes use of a large language model (LLM) – an advanced algorithm that applies deep learning techniques to massive data sets. The popular service, ChatGPT, is based on OpenAI’s LLM, which taps into everything available across the Internet through 2021, plus anything a user cares to feed into it. Generative AI ingests it all, then applies algorithms to understand, generate and predict new content – in text-based summaries that any literate human can grasp.

I spoke to technologists, hackers, marketers, company founders, researchers, academics, publicists and fellow journalists about the promise and pitfalls of commoditizing AI in this fashion. I came away with a much better understanding of the disruption/transformation that is gaining momentum, with respect to privacy and cybersecurity.

Shadow IT on steroids

Generative AI, in point of fact, has, for the moment, dramatically accelerated attack surface expansion. I spoke with Casey Ellis, founder of Bugcrowd, which supplies crowd-sourced vulnerability testing, all about this. We discussed how elite hacking collectives already are finding ways to use it as a force multiplier, streamlining repetitive tasks and enabling them to scale up their intricate, multi-staged attacks.


What’s more, generative AI has exacerbated the longstanding problem of well-intentioned employees unwittingly creating dangerous new exposures, especially in hybrid and multi-cloud networks. I spoke with Uy Huynh, vice president of solutions engineering at Island.io, about how generative AI has quickly become like BYOD and Shadow IT on steroids. Island supplies an advanced web browser security solution.

“The days of localized data loss is over,” says Huynh. “With ChatGPT, when you post sensitive content as part of a query, it subsequently makes its way to OpenAI, the underlying LLM. Every piece of information becomes a part of the model’s vast knowledge base. This unintentional leakage can have dire consequences, as sensitive information can thereafter be accessed through the right prompts.”
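One stopgap, pending purpose-built controls, is screening prompts for obviously sensitive patterns before they leave the corporate boundary. A toy sketch follows; the patterns are illustrative, and regexes alone fall well short of real data-loss prevention:

```python
import re

# Illustrative patterns only; production DLP needs far broader coverage.
SENSITIVE = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    # Replace anything matching a known sensitive pattern before the
    # prompt is forwarded to an external LLM service.
    for label, pattern in SENSITIVE.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact_prompt("Summarize: customer SSN 123-45-6789, token sk-abcdef1234567890XY"))
```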

Of course, the good guys aren’t asleep at the wheel. Another theme that stood out at Black Hat: security innovators are, at this moment, creating and testing new ways to leverage generative AI – as a force multiplier – for their respective security specialties.

Threat intelligence vendor Cybersixgill, for instance, launched Cybersixgill IQ at Black Hat. This new service feeds vast data sets of threat intel into a customized LLM tuned to generate answers to nuanced security questions.

The idea is to shrink the time analysts spend sifting through data, says Brad Liggett, director of global sales engineering. Cybersixgill’s researchers, for instance, are finding they can quickly gain insights they might have missed or taken much longer to uncover.

This all really boils down to intuitive questioning of generative AI by clever human experts. Bugcrowd’s stable of independent white hat hackers, for instance, is probing for the edges of the envelope, striving to determine where usefulness ends and inaccuracy kicks in, Ellis told me.

Defense-in-depth redux

I also spoke just ahead of the conference with Horizon3.ai, Syxsense and Trustle – and we touched on how they are factoring in generative AI; for a deeper dive, please give a listen to my podcast discussions with each. At the conference, I had deep conversations with experts from Bugcrowd, Island.io, Traceable.ai, Data Theorem, Sonar and Flexxon; stay tuned for upcoming Last Watchdog podcasts with each.

Generative AI is sure to rivet everyone’s attention for some time to come. When it comes to cybersecurity, Markstedter, the keynote presenter, astutely observed how generative AI is on track to match the original iPhone’s adoption trajectory: massive popularity followed by an extended period of companies scrambling to gain security equilibrium.


“Do you remember the first version of the iPhone? It was so insecure — everything was running as root. It was riddled with critical bugs. It lacked exploit mitigations or sandboxing,” she said. “That didn’t stop us from pushing out the functionality and for businesses to become part of that ecosystem.”

Cybersecurity is undergoing a tectonic shift, folks. To get us where we need to be, traditional, perimeter-centric IT defenses need to be reconstituted and security services delivery models need to be reshaped. A new tier of overlapping, interoperable, highly automated security platforms is taking shape. Defense-in-depth remains a mantra, but one that is morphing into something altogether new.

Automation and interoperability must take over and several new security layers must coalesce and interweave to address attack surface expansion. Generative AI has come along as a two-edged sword, accelerating attack surface expansion, but also stirring cybersecurity innovation. In short, the arms race has taken on a critical new dimension.

Cutting against the grain


A few off-the-cuff discussions I had on the exhibits floor at Black Hat resonated. One was with Saryu Nayyar, CEO of Gurucul, supplier of a unified security and risk analysis solution. Gurucul, too, launched a “generative AI assistant” at Black Hat and has been in the vanguard of another major trend: competing to shape the multi-faceted security platforms we’ll need to carry us forward.

“We’ve always had a vision, right from the beginning, of supplying a unified, open platform,” Nayyar told me. “Our data ingestion framework supports more than a thousand integrations. . . Our biggest differentiator is our threat content. We use machine learning, and we have a large research team producing threat content that’s all use-case driven, content that can be used for proactive response and proactive risk reduction.”

I also had a fascinating chat with Jonathan Desrocher and Ian Amit, co-founders of Gomboc.ai, which emerged from stealth at Black Hat with a $5 million seed funding round and a strikingly unique solution. With generative AI all the rage, Gomboc is tapping into what Amit and Desrocher characterized as the polar opposite – “deterministic AI.”

Gomboc’s innovation appears to be a simplified way to drag-and-drop robust security policy onto cloud IT resources, such as AWS processing and storage. Instead of using generative AI to guess, based on information about the feature sets it can see, deterministic AI runs through a series of predetermined checks, then applies reasoning to conclude whether a cloud asset is securely configured; it either is, or it isn’t, Desrocher told me.
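To illustrate the contrast: a deterministic engine doesn't predict, it evaluates explicit rules against a configuration and returns a definite verdict. This sketch uses made-up rule names and is not Gomboc's actual engine:

```python
# Hypothetical cloud-storage configuration pulled from an infrastructure template.
bucket = {"public_read": False, "encryption": "aes256", "versioning": False}

# Predetermined checks: each is a named predicate, not a statistical guess.
CHECKS = {
    "no-public-read": lambda b: not b["public_read"],
    "encryption-at-rest": lambda b: b.get("encryption") in {"aes256", "kms"},
    "versioning-enabled": lambda b: b["versioning"],
}

results = {name: check(bucket) for name, check in CHECKS.items()}
verdict = "securely configured" if all(results.values()) else "not securely configured"
print(results, "->", verdict)   # it either passes every check, or it doesn't
```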

Baked-in security

“It’s deterministic and it also changes the focus of what you’re modeling,” he says. “Do you model past behavior and try to extract rules to predict the future? Or are you actually modeling the problem domain to understand the physics of how it works, so that you can predict the future based on the laws of nature, if you will.”

Fresh out of stealth mode, Gomboc has a ways to go to prove it can gain traction. Amit and Desrocher, of course, have high hopes to make a big difference.

Here’s what Amit told me: “Over the medium term, we’re going to change the way that security is being managed for cloud infrastructure. And in the long term, we’re going to change the way that cloud infrastructure, in general, is being managed . . . our policy engine can also be applied to performance, cost and resilience so that DevOps won’t need to inundate themselves with those intricacies of finding the correct parameters to make things run correctly. Security is going to be baked into the way you deploy your architecture.”

Along these same lines, I had a deep conversation with Camellia Chan, co-founder and CEO of Flexxon, a Singapore-based hardware vendor that’s also cutting against the grain. Chan walked me through how Flexxon has won partnerships with Lenovo, HP and other OEMs to embed Flexxon solid state memory drives in new laptops. Branded “X-Phy,” these advanced SSDs contain AI-infused mechanisms that provide a last-line security check, she told me. A full drill down is coming in my podcast discussion with Chan, so stay tuned.

The transformation progresses. I’ll keep watch and keep reporting.


Accessing vital information to complete day-to-day tasks at our jobs still requires using a password-based system at most companies.

Related: Satya Nadella calls for facial recognition regulations

Historically, this relationship has been effective from both the user experience and host perspectives; passwords unlocked a world of possibilities, acted as an effective security measure, and were simple to remember. That all changed rather quickly.

Today, bad actors are ruthlessly skilled at cracking passwords – whether through phishing attacks, social engineering, brute force, or buying them on the dark web. In fact, according to Verizon’s most recent data breach report, approximately 80 percent of all breaches are caused by phishing and stolen credentials. Not only are passwords vulnerable to brute force attacks, but they can also be easily forgotten and reused across multiple accounts.

They are simply not good enough. The sudden inadequacy of passwords has prompted broad changes to how companies must create, store, and manage them. The problem is these changes have made the user experience more convoluted and complicated. In other words, we’ve lost the balance between ease-of-use and adequate security under the increasingly antiquated system of password-based access.

Under the current system, companies have two choices: subject employees to burdensome processes to access work servers or become low-hanging fruit for a cyber attack.

By choosing the former – which most companies do as a shortcut to compensate for weak passwords without having to adopt new and innovative solutions – end users must comply with unintuitive experiences such as creating complicated passwords and dealing with complex password reset procedures. I would say companies that take this shortcut are still low-hanging fruit on top of inconveniencing their employees.

Combining IDs, keys

What is the solution, then? The next big thing is passwordless authentication. Let’s remove that point of attack and start fixing the problem at the source. Many organizations have already begun to jump to passwordless, but adoption is slow, and solutions are still in their infancy.


On the consumer side, we see solutions that work now and are incredibly easy to use. For example, we have passwordless facial and fingerprint biometric logins on our mobile phones and the thousands of apps that we use, as well as on our laptops and similar portable devices. However, no clear passwordless solutions offer easy adoption, enterprise-grade security, and interoperability to our large corporations and critical organizations.

Security remains one of the significant issues that need to be addressed on the enterprise level. Solutions need to tackle this problem by establishing trust at the user level to the point that trust is unnecessary. That sounds counterintuitive, but that is what we need to protect organizations from the relentless attacks we are seeing.

What is needed is a solution that combines biometric identification with device-bound cryptographic keys and interoperable global validation standards. By combining who the user is (through biometrics) with something they have (the device-bound cryptographic key), solutions can establish user identity with sufficient confidence at the enterprise level.
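A compressed sketch of the device-bound half of that flow: a server challenge signed by a private key that never leaves the device, with the biometric step (not shown) serving to unlock the key locally. It uses the Python cryptography package and illustrates the general shape, not Kelvin Zero's protocol:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the device generates a keypair; only the public key is registered.
device_key = Ed25519PrivateKey.generate()        # would live in secure hardware
registered_public_key = device_key.public_key()  # held by the server

# Login: the server issues a one-time challenge; the device signs it after the
# local biometric check unlocks the private key.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)  # raises if forged
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because only the public key is stored server-side, a database breach yields nothing a phisher can replay.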

Some solutions do this today. However, security and interoperability remain an issue. First and foremost, most solutions rely on connected devices like mobile phones to authenticate users. This leaves the door open to phishing and man-in-the-middle attacks.

New standards needed

Alternatively, some organizations are adopting physical security measures to keep private keys secure and offline. However, these solutions are often criticized for their lack of ease of use, limited interoperability across organizations, and lack of support.

We must keep thinking ahead on security. Attackers will continue to find ways to breach our systems, and authentication cryptography will become increasingly vulnerable to attack. Finding new methods of validation that are resistant to quantum and AI attacks is critical. Our job is to create and implement better systems.

The bottom line is user authentication is vital for securing access to data and systems. To establish trust with the user, the future of secure authentication lies in new passwordless solutions. Emerging technology and innovation in cryptography, biometrics, and device-linked authentication will also be crucial for advancing authentication.

Furthermore, driving authentication forward in our digital ecosystem can be achieved by developing new standards, collaborating with industry peers, and raising awareness. For a system to be introduced and adopted at scale, ease of use is crucial, and security must be uncompromising. The time has come for passwordless systems that seamlessly integrate into businesses without significant user experience disruptions and provide a simple, intuitive, yet secure experience for all.

About the essayist: Thierry Gagnon is Co-Founder and Chief Technology Officer (CTO) at Kelvin Zero, a start-up redefining the way organizations interact with their users in a secure digital world. Kelvin Zero is enabling highly regulated enterprises to secure authentication and know who is on the other side of every transaction.

A fledgling security category referred to as Cloud-Native Application Protection Platforms (CNAPP) is starting to reshape the cybersecurity landscape.

Related: Computing workloads return on-prem

CNAPP solutions assemble a varied mix of security tools and best practices and focus them on intensively monitoring and managing cloud-native software, from development to deployment.

Companies are finding that CNAPP solutions can materially improve the security postures of both cloud-native and on-premises IT resources by unifying security and compliance capabilities. However, to achieve this higher-level payoff, CISOs and CIOs must first bury the hatchet and truly collaborate – a bonus return.

In a ringing endorsement, Microsoft recently unveiled its CNAPP offering, Microsoft Defender for Cloud; this is sure to put CNAPP on a rising adoption curve with many of the software giant’s enterprise customers, globally. Meanwhile, Cisco on May 24 completed its acquisition of Lightspin, boosting its CNAPP capabilities, and Palo Alto Networks has continued to steadily sharpen its CNAPP chops, most recently with the acquisition of Cider Security.

At RSA Conference 2023, I counted at least 35 other vendors aligning their core services to CNAPP, in one way or another; many more seem likely to jump on the CNAPP bandwagon, going forward.

Newer vendors now primarily pitching CNAPP services include Uptycs, Runecast and Ermetic. Others range from vulnerability management (VM) stalwarts Tenable, Rapid7 and Qualys, to vendors crossing over from the cloud security posture management (CSPM) space, like Caveonix, Lacework and Wiz. Even endpoint security giants Trend Micro and Sophos have commenced pitching CNAPP solutions; so too have API security supplier Data Theorem and secure services edge (SSE) vendor Zscaler.


CNAPP at this juncture appeals mainly to enterprises that maintain large software development communities in the public cloud, Charlie Winckless, Gartner Senior Director Analyst, told me. “CNAPP products are tied to cloud maturity,” he explains. “This will continue to grow, but other security controls will remain important as well. CNAPPs protect cloud environments and the majority of organizations will be hybrid for a significant amount of time.”

Managing dynamic risks

Several developments have converged to put CNAPP on a fast track. Massive interconnectivity at the cloud edge is just getting started and will only intensify, going forward. This portends amazing advancements for humankind – and fresh revenue streams for innovative enterprises – but first a tectonic shift in network security must fully play out.

This is because the attack surface of cloud-native applications is expanding rapidly, with malicious hackers targeting insecure code up and down the software supply chain. Ransomware, email fraud and data theft continue to run rampant aided and abetted by insecure configurations of the myriad access points connecting on-premises and cloud IT assets.

The cybersecurity industry’s competitive bent hasn’t made it easy for companies to understand, much less gain control of, these escalating exposures spinning out of such a highly dynamic operating environment. To protect new cloud-native assets, rival vendors have pushed forward an alphabet soup of upgraded iterations of legacy tools and all-new technologies – without paying much attention to interoperability.

The result has been a stark lack of integration which has translated into an excessive volume of alerts, a good percentage of them trivial or even false. Tension between security teams trying to cope and software developers striving to innovate as fast as possible has boiled over. Something in the form of CNAPP (as coined by Gartner) was bound to come along.

According to Gartner’s March 2023 CNAPP market guide, CNAPP solutions consolidate multiple security and protection capabilities into a single platform capable of identifying and prioritizing excessive risk. This revolves around granular monitoring and management of cloud-native applications.

This type of overarching approach to securing modern networks can iterate from legacy security technologies, such as VM or endpoint detection and response (EDR), or it can extend from newer services, such as software composition analysis (SCA), cloud workload protection platforms (CWPP) and cloud infrastructure entitlements management (CIEM).

And now Microsoft has set out to prove that it makes good sense to come at it from the operating system level. That said, the Gartner report acknowledges that CNAPP is in a very early stage and cautions that no single vendor is best-of-breed in every capability.

New level of collaboration

It may be early, but CNAPP is demonstrating that it does a few things very well: reducing complexity, for one. There’s a huge need for this. Some 80 percent of respondents to Palo Alto Networks’ 2023 State of Cloud-Native Security Report expressed the need for a centralized security solution, with 76 percent reporting that using multiple security tools has created blind spots that make it difficult to prioritize and mitigate risk.


“Stitching together disparate security tools often results in security blind spots,” says Ory Segal, CTO of Prisma Cloud, Palo Alto’s CNAPP offering. “Attempting to triage security issues reported from multiple security systems, used by different teams, is close to impossible.”

One Palo Alto customer, a well-known global multimedia organization, recently replaced several tools with Prisma Cloud, which then swiftly detected a significant number of malicious bots abusing an API search function in one of their internet-exposed cloud workloads, Segal told me.

“Once they were aware of the abuse, they enabled bot protection on the platform and saw a dramatic decrease in daily operational costs — from thousands of dollars a day to $50 a day,” he says.


A notable intangible benefit of CNAPP is that it eases the burden on stretched-thin security teams and creates space for more productive dialogues between security analysts, software developers and IT services. This is leading to a new level of collaboration that’s making a notable difference day-to-day for companies embracing CNAPP, says Doug Dooley, CTO at Data Theorem.

At present, security analysts and software developers tussle over shifting code audits to the left, as early as possible in the software development cycle, while IT staff separately focuses on wrangling configuration settings of cloud-hosted IT infrastructure, a piecemeal approach to security. “So this idea of artifact scanning, cloud configuration hardening, and runtime protection, particularly in production, those three programs needed to merge together,” Dooley says. “And that’s what CNAPP, when it works, does really well.”

CNAPP’s emergence happens to align with another trend gaining steam. As part of getting a better handle on their use of cloud-hosted IT infrastructure, some enterprises are reverting to running certain workloads back home — in an on-premises data center, observes Michiel De Lepper, Global Enablement Manager at Runecast. This “back-migration,” he says, is happening because certain workloads are proving to be too costly to run in the cloud, namely resource-intensive AI modeling.


“The IT industry is always evolving and essentially that means ever-increasing complexities because you’ve got disparate environments that you somehow need to cohesively manage,” De Lepper says.

According to Gartner, CNAPP’s superpower is that it can trump complexity by ingesting telemetry, at a deep level, across all key security systems. Advanced data analytics can then be brought to bear setting in motion automated enforcement of smart policies and automated detection and response to live attacks.
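In spirit, that looks something like merging findings from separate scanners into one normalized queue that policy can act on. A toy sketch with invented field names, not Gartner's or any vendor's schema:

```python
# Hypothetical normalized findings from three different tool categories.
findings = [
    {"src": "sca", "asset": "payments-svc", "issue": "vulnerable library", "severity": 7.5},
    {"src": "cspm", "asset": "payments-svc", "issue": "open security group", "severity": 6.0},
    {"src": "runtime", "asset": "payments-svc", "issue": "crypto-miner process", "severity": 9.0},
]

# Correlate by asset: co-occurring signals on one asset outrank isolated ones.
by_asset = {}
for f in findings:
    by_asset.setdefault(f["asset"], []).append(f)

for asset, items in by_asset.items():
    combined = max(i["severity"] for i in items) + 0.5 * (len(items) - 1)
    action = "auto-isolate and page on-call" if combined >= 9.5 else "open a triage ticket"
    print(asset, round(combined, 1), "->", action)
```

Seeing all three signals on one asset is what turns three shrugged-off alerts into one urgent, automated response.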

Runecast, for instance, takes a proactive approach to risk-based vulnerability management, configuration management, container security, compliance auditing, remediation and reporting. This helps with compliance, at one level, but also continually improves a company’s overall security posture, De Lepper told me.

“It’s no longer about creating shields,” De Lepper says. “Instead, we’re helping our customers plug all the gaps we know that the bad guys can use.”

Synergistic integration

I heard very similar messaging from all the CNAPP solution providers I’ve reviewed for this article. Indeed, all of them are designed to consolidate some mix of security capabilities into a single platform tuned to prioritize and act upon cloud-native risks and, by extension, exposures in related infrastructure, whether it be in the public cloud, hybrid cloud or on premises.

The suppliers argue that this leads first and foremost to enhanced visibility, not just of individual components but, much more crucially, of all the communications between systems – especially connections happening ephemerally in runtime and in the API realm. This is a very positive development for security analysts, software developers and IT staff, who desperately need a more unified toolset to help them collectively visualize risk and make the highest use of this greater visibility.

CNAPP suppliers are starting to help these three groups lower the cost of compliance and remediate security vulnerabilities much more effectively. Gartner’s Winckless cautions that some vendors may not supply true integration, nor provide a robust feedback loop. “As with many other platforms, it’s important to look for these integrations to provide synergy and not to buy simply a collection of tools that are, at best, loosely interconnected from a single vendor in the hopes of gaining advantage,” he says.

Moving forward, CNAPP seems poised to arise as a core security component of modern business networks.



To tap the full potential of massively interconnected, fully interoperable digital systems we must solve privacy and cybersecurity, to be sure.

Related: Using ‘Big Data’ to improve health and well-being

But there’s yet another towering technology mountain to climb: we must also overcome the limitations of Moore’s Law.

After nearly six decades, we’ve reached the end of Moore’s Law, which states that the number of transistors on a silicon-based semiconductor chip doubles approximately every 18 months. In short, the mighty integrated circuit is maxed out.
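Stated as a formula, using the 18-month doubling period cited above, where N_0 is today's transistor count and t is elapsed time in years:

```latex
N(t) = N_0 \cdot 2^{t/1.5}
```

Ten doublings, 15 years under this rule, multiply the count roughly a thousandfold; that is the curve that has now flattened.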

Last spring, I attended NTT Research’s Upgrade 2023 conference in San Francisco and heard presentations by scientists and innovators working on what’s coming next.

I learned how a who’s who list of big tech companies, academic institutions and government agencies are hustling to, in essence, revive Moore’s Law and this time around direct it at optical technology.

I had a wide-ranging conversation with NTT Research President & CEO Kazu Gomi about an ambitious initiative called the Innovative Optical and Wireless Network (IOWN) that aims to develop next-generation networks and computing. IOWN is all about supporting increased bandwidth, capacity and energy efficiency.

What really struck me was that IOWN also seeks to foster an “affluent and diverse” global society. For a full drill down on our discussion, please watch the accompanying videocast. Here are my takeaways.

What’s next: Internet of Everything

The world of the near future holds the promise of climate-restoring cities, autonomous transportation systems, incredible breakthroughs in healthcare and many more amazing services that could greatly benefit everyone on the planet.

However, the laws of physics dictate that silicon semiconductor chips simply won’t be able to support the massive data ingestion – and the colossal data crunching – that the Internet of Everything demands.

Fortunately, optical circuits are well suited to the task at hand. The Internet of Everything requires distributing billions more data capture sensors far and wide to form sprawling, interoperable digital shrouds overlapping one another. Each sensor in each shroud must be uniquely smart and use next to zero energy.

Working in concert, these sensor shrouds will very precisely and very securely move vast amounts of useful data very quickly to and fro – in traffic grids, utilities, communication systems, buildings and our homes.

“Optical technology can enable us to control energy consumption so we can support increasing capacity and increasing bandwidth,” Gomi summarizes.

At NTT Research in Sunnyvale, Calif., scientists are working on basic research to develop optical technology that can overcome current challenges. Their work focuses on creating smaller laser oscillators, which produce the light necessary for optical circuits. Smaller oscillators create shorter pulses that can increase bandwidth exponentially.

The business case for optical

One of the key benefits of optical circuits, Gomi emphasized, is their lower energy consumption compared to traditional circuits. This is particularly important for AI engines, which currently require large GPU clusters that use integrated circuit chips and consume vast amounts of energy.

Optical circuits have the potential to replace these GPUs, offering faster computation and drastically reduced energy consumption, he says.

Energy-efficient AI technology would make it possible to move computation to sensors at the network edge where intelligent analytics can be done in much quicker response times, consuming much less energy.

NTT executives and scientists speak often about how advanced optical technology can benefit society as a whole. It’s notable that the IOWN mission statement actually calls for fostering a rich global society, one that’s tolerant of diversity and respectful of individual privacy.

I asked Gomi about the business case for this. He argues that if drastic changes are not made to shift to optical technology, carbon footprint issues will become a significant concern. By embracing optical technology, industries can grow, and society can benefit from the development of smarter infrastructure.

Deploying AI ethically

Gomi also acknowledged the need to strike a balance between humans and AI and to consider the ethics of AI. The conversation around AI’s potential impact on society, culture, and economics is just beginning, he says, but it’s essential to ensure that AI is implemented responsibly to avoid unintended consequences.

“AI right now can be undisciplined and has the potential to behave badly,” Gomi told me. “Bad behavior is something that must be corrected and we need to do something to discipline AI, as needed, when needed.”

You just don’t hear that kind of perspective very much from Amazon, Microsoft or Google, and certainly not from Facebook or Twitter.

In preparing to attend Upgrade 2023, I ran across a transcript of a lecture introducing IOWN delivered in 2019 by Jun Sawada, former CEO of NTT, the parent company of NTT Research.

Sawada begins by pointing out Japan’s history as a supplier of silver, pearls, sapphires and cinnabar. He draws a comparison between Europe and Japan during the Industrial Revolution (1750-1850), noting the opposing perspectives of centralization vs. decentralization.


He suggests that Japan’s Edo city, with its population of one million, represented a recycling-oriented eco-metropolis, while European cities focused on centralization and energy-driven growth. Moving on to an assessment of modern society, Sawada posits that the divisions between nations we see today result from conflicts between socialism and capitalism.

Today, he observes, the flood of information, coupled with AI-driven filtering, has led to divisiveness based on biased preferences. He advocates reconciling the economic expansion of modern European societies with Edo’s recycling mindset — and developing a global society that recognizes diverse values.

Sawada’s larger point is that IOWN holds the potential to reset our communication systems with the intention of driving towards a much greater global good. IOWN quietly continues to gain traction. How far can it take us?

I’ll keep watch and keep reporting.
