In 2023, victims reported nearly 900,000 cybercrime complaints to the FBI. Altogether, losses eclipsed $12.5 billion — a significant 22% increase from the losses in 2022.

Related: Closing the resiliency gap

Unsurprisingly, experts predict this trend will only continue to grow.

While any business is a potential target for hackers, critical infrastructure organizations — including defense, healthcare, energy, utilities, and financial services companies — are perhaps most at risk due to their financial resources. According to the U.S. House Committee on Homeland Security, attacks on critical infrastructure organizations increased 30% in 2023.

With cyberattacks becoming more frequent — and the associated impacts becoming increasingly difficult to absorb — it’s more important than ever for critical infrastructure organizations to invest in cyber resilience. As these organizations work to fortify their ability to prepare for, respond to, and recover from cyberattacks while maintaining critical operations, we recommend four key ingredients that can help them safeguard their operations against evolving cyber threats.

I-Cross-functional collaboration

Cyber resilience isn’t possible when teams operate in silos. In fact, 59% of government leaders report that their inability to synthesize data across people, operations, and finances weakens organizational agility. To bolster cyber resilience, organizations must break down these silos by fostering cross-departmental collaboration and making it as seamless as possible. Achieving this requires strategic investment in a triad of technologies:

•A customized, secure collaboration platform

•A project management tool like Asana, Trello, or Jira

•A knowledge-sharing solution like Confluence or Notion

Once these three foundational tools are in place, organizations should deploy the final piece of the puzzle: a dashboarding or reporting tool. These technologies can help IT leaders pinpoint any silos that exist and start figuring out how to break them down.

II-AI and automation

In today’s threat landscape, rapid detection and response to cyberattacks is essential. To build resilience, critical infrastructure organizations must invest in AI and automation to identify anomalies and potential threats faster than humans can.

AI leverages predictive analytics to analyze vast amounts of data in real time, identifying patterns and predicting attacks before they occur. It also scans for vulnerabilities and proactively applies patches to secure critical infrastructure. Automation helps security teams contain threats faster by piping alerts into active incident response spaces (e.g., a dedicated channel in a collaboration platform), reducing context switching and improving focus.
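To make the alert-routing idea concrete, here is a minimal sketch, assuming a simple metric stream (invented failed-login counts) and a stand-in `post` function in place of a real collaboration-platform webhook: a rolling statistical check flags anomalies and pipes each one into an incident space.

```python
import statistics

def find_anomalies(counts, window=10, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
        if abs(counts[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

def route_alert(index, value, post=print):
    """Pipe the alert into an active incident-response space;
    `post` stands in for a collaboration-platform webhook call."""
    post(f"ALERT: anomalous metric value {value} at sample {index}")

failed_logins = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 60]  # sudden spike at the end
for i in find_anomalies(failed_logins):
    route_alert(i, failed_logins[i])
```

A real deployment would compute these statistics over streaming telemetry and post into a dedicated channel, but the shape of the logic (detect, then route into the response space) is the same.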

Teams can also use checklist-based automations that trigger predefined incident response workflows, ensuring that incident response requirements are met while reducing human error, minimizing damage, and increasing team accountability.
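As an illustration of checklist-based automation, the sketch below (the step names and incident fields are invented for illustration) runs a predefined response checklist in order and records an audit log of which requirements were met:

```python
# A predefined incident-response checklist: each step is a named
# action. Running the checklist executes every step in order and
# records the outcome, so nothing is skipped under pressure.
RESPONSE_CHECKLIST = [
    ("isolate_host", lambda incident: f"isolated {incident['host']}"),
    ("revoke_sessions", lambda incident: f"revoked sessions for {incident['user']}"),
    ("notify_oncall", lambda incident: f"paged on-call about {incident['id']}"),
]

def run_checklist(incident, checklist=RESPONSE_CHECKLIST):
    """Execute each predefined step, returning an audit log that
    shows which requirements were met (and which failed)."""
    log = []
    for name, action in checklist:
        try:
            log.append((name, "done", action(incident)))
        except Exception as exc:  # a failed step is recorded, not silently lost
            log.append((name, "failed", str(exc)))
    return log

audit = run_checklist({"id": "INC-42", "host": "db-01", "user": "alice"})
```

The audit log is what drives accountability: every step either completed or failed visibly, which is exactly the property a manual response under stress tends to lose.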

III-A security-first mindset

Most organizations understand security’s importance but often treat it as an afterthought. To strengthen cyber resilience, organizations must adopt a security-first mindset, baking security into everything they do.

Too often, security teams are siloed from the rest of the organization; they’re roped in at the end when they should be fully integrated from the start.

Truly resilient organizations treat security as a shared responsibility, ensuring it’s part of every decision, project, and process. By encouraging collaboration between security teams and other business units, organizations can proactively identify risks, address vulnerabilities, and build cultures where security is prioritized at every level.

This shift not only minimizes potential threats but also empowers teams to protect critical assets together.

IV-Monitor and continuously learn


No matter how resilient your organization is today, you can always improve.

To bolster cyber resilience, organizations must embrace continuous learning. Post-mortems and after-action reports are particularly valuable. By analyzing incidents, identifying what went wrong, and understanding how to prevent similar issues in the future, organizations can turn setbacks into teaching moments.

For critical infrastructure organizations in particular, it’s not a question of “if” but “when” an attack will occur. When incidents happen, organizations must learn from their successes and failures in order to improve decision-making moving forward and minimize damages.

Keeping pace

Cyber resilience is a continuous journey. As cyber threats evolve, critical infrastructure organizations must constantly adapt, learn, and improve their defenses. Failure to keep pace can have disastrous impacts–not only financially, but on society’s well-being.

Resilience isn’t about achieving perfect security; it’s about withstanding attacks, recovering quickly, and emerging stronger—all while protecting mission-critical operations.

By deploying the right tools, embracing a security-first mindset, and committing to continuous improvement, critical infrastructure organizations can stay prepared — a boon to their bottom lines and the people they serve.

About the essayist: Corey Hulen is CEO and co-founder of Mattermost Federal, Inc., which supplies a collaboration platform for mission-critical work serving national security, government, and critical infrastructure enterprises, from the U.S. Department of Defense, to global tech giants, to utilities, banks and other vital services.

The post GUEST ESSAY: Four essential strategies to bolster cyber resilience in critical infrastructure first appeared on The Last Watchdog.

The rise of AI co-pilots is exposing a critical security gap: sensitive data sprawl and excessive access permissions.

Related: Weaponizing Microsoft’s co-pilot

Until now, lackluster enterprise search capabilities kept many security risks in check—employees simply couldn’t find much of the data they were authorized to access.

But Microsoft Copilot changes the game, turbocharging enterprise search and surfacing sensitive information that organizations didn’t realize was exposed.

Many assume Copilot won’t share data externally and will respect existing user permissions, leading to a false sense of security. But the real problem isn’t whether Copilot stays within its lane—it’s that the lane is far too wide. If employees already have excessive access, Copilot simply makes that exposure more visible.

Patchwork fixes fall short

This reality is hitting hard. A recent Gartner survey found that 40% of IT managers have delayed Copilot deployments due to security concerns. I’ve spoken with numerous CIOs and CISOs who say these issues are directly impacting rollout plans at major enterprises.


Microsoft’s response? Instead of pushing organizations toward a true “least privilege” model, it suggests running limited Copilot trials to see what data gets exposed. That’s a band-aid solution, not a fix.

Copilot isn’t the problem—it just amplifies an existing one. The real issue is the outdated, over-permissioned access models that have plagued enterprises for years.

Over-provisioned access

The risks of excessive access are nothing new. Identity-related issues have become the leading driver of security breaches in recent years. But many organizations still lack modern tools to manage access effectively.

Consider this: most organizations can’t answer basic questions about their own data security, including:

•Who has access to what?

•Where did they get it?

•How are they using it?

•Should they even have it?

The problem stems from legacy IAM systems and manual, piecemeal processes—entirely inadequate for today’s decentralized cloud, SaaS sprawl, and AI-driven environments.

AI’s promise vs. risk

AI thrives on data, but that same data introduces risk. One of the biggest threats isn’t AI itself—it’s the over-provisioned access policies that leave organizations vulnerable. Microsoft’s own data shows that 95% of granted permissions go unused. That’s the opposite of least privilege.
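An audit of that gap, granted permissions versus permissions actually exercised, is the starting point for least privilege. The sketch below uses invented grant and access-log data purely for illustration:

```python
def unused_permissions(granted, access_log):
    """Compare granted permissions against observed use to find
    candidates for revocation under a least-privilege model."""
    used = {(entry["user"], entry["permission"]) for entry in access_log}
    return sorted(grant for grant in granted if grant not in used)

# Hypothetical grants and an access log showing what was actually used.
granted = {
    ("alice", "read:payroll"),
    ("alice", "write:payroll"),
    ("bob", "read:payroll"),
}
access_log = [{"user": "alice", "permission": "read:payroll"}]

stale = unused_permissions(granted, access_log)
# stale -> [('alice', 'write:payroll'), ('bob', 'read:payroll')]
unused_ratio = len(stale) / len(granted)  # the "95% unused" figure is this ratio at scale
```

Real IAM tooling works from entitlement exports and audit trails rather than in-memory sets, but the comparison itself, grants minus observed use, is the core of any right-sizing effort.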

Efforts to classify and restrict sensitive data help, but they don’t address the underlying issue: employees having more access than they need in the first place.

Despite these risks, businesses are rapidly adopting AI, with privacy and security top of mind for leadership. Yet, without a fundamental shift in access management, organizations will continue exposing themselves to unnecessary threats.

Securing AI going forward

It’s time for organizations to move beyond the “check-the-box” approach to access security. Implementing a true least privilege model—where employees only have access to the data they actually need—isn’t optional anymore. It’s a necessity.

Modern IAM solutions must provide visibility, intelligence, and automation to restructure permissions and monitor AI-driven activity. Without these foundational steps, security risks will only grow alongside AI’s expanding capabilities.

The choice is clear: either organizations take control of access security now, or AI will expose its weaknesses for them.

About the essayist: Jim Alkove is co-founder and CEO of Oleria. He led security at Salesforce, Microsoft, and Google Nest, advises startups like Aembit and Snyk, and holds 50 U.S. patents. He earned an electrical engineering degree from Purdue University.

The post GUEST ESSAY: How AI co-pilots boost the risk of data leakage — making ‘least privilege’ a must first appeared on The Last Watchdog.

President Biden’s detailed executive order relating to cybersecurity is great to see.

Biden’s order reflects the importance of cybersecurity at the highest levels – it is an issue of national security and should be treated as such.

One of the big themes coming out of the order is the need to implement the right controls, and being able to provide evidence. Section two really underscores the need for secure software development.

If it is followed through, software publishers will need to open their kimonos to show they have the right controls in place and that these are working effectively.

It is also interesting to see in section seven that NIST will be issuing guidance on “minimum cybersecurity practices”, considering common cybersecurity practices and security controls.


Moving forward, we can expect to see even greater emphasis not just on encouraging companies to implement controls, but on providing evidence of such. However, many companies will struggle here.

IT infrastructures and ecosystems have become incredibly complex. Most large organizations do not even have visibility of what assets they have, let alone the status of their security controls across those assets.

This isn’t due to a lack of effort or care from cybersecurity professionals. The challenge lies in the fact that most large organizations rely on 50+ cybersecurity tools to protect their fast-moving IT environments.

These tools operate in silos, disconnected from one another and informed by incomplete configuration management databases (CMDBs). As we move into an era of ‘trust, but verify’, organizations will be under increasing pressure not only to outline what controls they have, but to demonstrate their effectiveness.

Most large organizations already possess the data they need to understand their assets, controls coverage, and controls effectiveness, but it’s scattered and inaccessible. This data must be transformed into actionable, trusted intel, enabling security leaders to identify gaps, enforce accountability, and ensure stakeholders meet agreed-upon standards of controls.

About the essayist: Jonathan Gill is CEO at Panaseer, which supplies a continuous controls monitoring solution.

The post GUEST ESSAY: President Biden’s cybersecurity executive order is an issue of national security first appeared on The Last Watchdog.

In the modern world of software development, code quality is becoming a critical factor that determines a project’s success. Errors in code can entail severe consequences.

Related: The convergence of network, application security

For example, vulnerabilities in banking applications can lead to financial data leaks, and errors in medical systems can threaten the health of patients. Such incidents not only harm users but also undermine trust in technology in general, and pose reputational risks to companies. In the global economy where every mistake can cost millions, it is important to identify and fix problems early in the development process.

Code analysis is the process of detecting errors, flaws, and security defects in software. It can be performed manually or automatically. As far as manual analysis is concerned, this basically means the classic code review method. Code review aims to find errors and produce recommendations for code improvement, and it also contributes to the education of new programmers.

Finding potential vulnerabilities is another important aspect of code analysis. Hackers can exploit some vulnerabilities to gain unauthorized access to data or systems. With the growing threat of cyber attacks, data security is becoming a priority for many companies. Therefore, regular code checks help protect information in advance and minimize risks.


As with methods of detecting errors in code, vulnerability detection methods range from manual testing to automated solutions. However, the manual approach is often insufficient, especially in large and complex projects. Therefore, the automated search for potential vulnerabilities becomes inevitable.

One of the ways to analyze code automatically is to use static analyzers. A static code analyzer is a tool that examines source code for errors and potential vulnerabilities without executing it. The analyzer helps developers detect problems even before the code is run. This reduces the cost of fixes and prevents many negative consequences. This process is similar to an editor checking a text for typos and grammar errors before publication.

A static analysis tool can be integrated into development processes, allowing you to run the analysis automatically with each code change. This ensures that developers receive immediate feedback after making changes to the code. This approach helps maintain high-quality standards and minimize the likelihood of errors.
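To illustrate the principle (this is a toy example, not how a commercial analyzer such as PVS-Studio works internally), here is a minimal static check that inspects Python source for dangerous calls without ever executing it:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # patterns we want flagged before runtime

def analyze(source):
    """A toy static analyzer: walk the abstract syntax tree of `source`
    without executing it and report dangerous calls with line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, f"call to {node.func.id}()"))
    return findings

snippet = "x = 1\nresult = eval(user_input)\n"
print(analyze(snippet))  # [(2, 'call to eval()')]
```

A check like this can run in a pre-commit hook or CI job on every change, which is exactly the immediate-feedback loop described above; production analyzers apply hundreds of far deeper rules, but the execute-nothing, inspect-everything model is the same.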

Static analyzers not only detect errors but also provide developers with detailed reports and documentation with recommendations on how to eliminate the flaws. They can be used for training and improving the programming skills of a team, as developers can study the causes of errors and avoid them in the future. This approach to learning contributes to a high-quality-code culture within the team.

Static analyzers enable you to allocate more resources for solving business problems. Errors found in the early development stages require less time and effort to fix than those found later. This not only saves developers’ resources but also reduces financial risks for the company. Timely bug fixing prevents possible losses from releasing a low-quality product.

Our team develops the PVS-Studio SAST solution and has extensive experience in helping companies implement static analysis into their development process. For readers, we’re offering a promo code, #thelastwatchdog, for a 30-day trial version of the static analyzer. This will allow you to test the tool on your projects and decide whether static analysis meets the needs of your business. Also, you can always contact us if you have any questions on static analysis. I hope we can assist you in improving your software development processes.

About the essayist: Andrey Karpov is a co-founder of the PVS-Studio project. He was a CTO for a long time and took part in the development of the C++ analyzer core. Now Andrey is engaged in team management, employee training, and DevRel activities.

The post GUEST ESSAY: The key role static code analyzers play in detecting coding errors, eliminating flaws first appeared on The Last Watchdog.

Application Programming Interfaces (APIs) have become the backbone of modern enterprises, facilitating seamless communication between both internal systems and external partners.

Related: Biden-Harris administration opens Supply Chain Resilience Center

As organizations increasingly rely on APIs, the number of APIs in use has dramatically increased. Since attackers follow the attack surface, this growth in API usage has not gone unnoticed. The concentration of critical business logic and sensitive data flowing through APIs makes them an attractive target for malicious actors aiming to exploit vulnerabilities for financial gain, data theft, or service disruption.

Focused on API security, Wallarm’s API ThreatStats report gathers all the available data on API-related cybersecurity incidents and vulnerabilities for analysis. Additionally, the report identifies and tracks the trends that impact organizations.

Q3 API security incidents

Not surprisingly, Q3 2024 saw an increased number of API-related cybersecurity incidents. APIs continue to be at the heart of some of the largest and most impactful breaches we’re seeing. In the last quarter, Deutsche Telekom topped the list by exposing 252 million users due to unauthenticated API access. Other key incidents included:

•Hotjar and Business Insider exposed 80 million users due to client-side API issues (cross-site scripting and OAuth mismanagement).

•Fractal exposed the sensitive personal information of 6,300 customers due to an insecure API script.

•ExploreTalent’s authorization issues in a misconfigured API disclosed 11.4 million user records.

•Metro Pacific Tollways Corporation (MPTC) suffered an API leak affecting nearly 1 million records, including sensitive API logs.

These incidents are telling because they span multiple industries. API security issues aren’t limited to technology companies or any specific sector. APIs are used across various industries, and therefore, the API security incidents impact all industries, from telecom to tollways.

In terms of root causes, these incidents show that authentication and authorization continue to be problematic for APIs. The systems designed to protect the data behind these APIs are consistently and successfully under attack.

Finally, it’s notable that many of these incidents were driven by client-side API vulnerabilities. The OWASP API Top 10 is an industry-standard list of API related issues, focusing on server-side security. Attackers appear to be taking advantage of the blind spot represented by client-side issues like cross-site scripting.

Q3 API security trends

Wallarm’s analysis of API-related vulnerabilities provides valuable insight into the most important trends for API security. Q3 saw the largest number of API-related vulnerabilities since we began this analysis at the beginning of 2022: 469 vulnerabilities were analyzed for Q3 2024, compared to 388 in the previous quarter, a 21% increase. In the first edition of this report, for Q1 2022, there were 48.

The scale of the problem continues to grow. Notably, 45% of these issues scored a 7.5 on the Common Vulnerability Scoring System (CVSS), indicating that API vulnerabilities skew towards higher risk overall. Not only is the number of vulnerabilities increasing, but they are also bringing increased risk to organizations.

Additionally, the analysis breaks down the vulnerabilities based on the affected type of software, with enterprise software from vendors like Oracle, VMware, and Cisco topping the list at 39.6%. DevOps tools took the second spot at a close 36.2%. API-related vulnerabilities most heavily impact enterprise organizations doing their own development.

Key takeaways

The key takeaways for the API ThreatStats report differ, depending on your role.

CISOs should focus more on strategy than execution. Based on the Q3 analysis, comprehensive API discovery and robust authentication controls should figure prominently in their strategic objectives. These initiatives are crucial, as unknown and poorly secured APIs can pose major vulnerabilities.


CISOs shouldn’t overlook client-side API vulnerabilities, which are often ignored but have been shown to be exploited by attackers. While it seems like AI is everywhere, CISOs shouldn’t ignore the connection between APIs and AI in their strategic plans. These two technologies will grow together.

API Architects don’t have dramatically different priorities, but they need to focus on practical, implementable solutions as part of API architecture. Ensuring robust authentication across all APIs, for example, is paramount, as authentication is foundational for API security.

Grasping connections

Architects also need to translate some of those strategic directions down to the technical level. Implementing detailed input validation and output encoding to prevent injection attacks and data leaks will help remove API security risk. Finally, API architects who are implementing AI are best positioned to see the tight connections and build security in from the ground up.

Security practitioners shouldn’t be left out, as they are generally the executors of the CISO’s strategic plans. Alignment here is key. Regular, comprehensive security assessments to identify and address vulnerabilities must be conducted proactively.

Monitoring and securing client-side applications should align with the CISO objectives. These practitioners should also stay informed about emerging threats and CVEs, keeping the CISO and the organization updated as the API threat landscape continuously evolves.

API security is a cross-functional responsibility. These recommendations are aligned, but must be applied at multiple levels within the organization. As noted, the API threat landscape continues to grow and organizations – from the CISO down – must be prepared.

About the essayist:  Ivan Novikov is the Chief Executive Officer of Wallarm, which supplies a unified, automated API security solution that works with any platform, any cloud, multi-cloud, cloud-native, hybrid and on-premise environment. 

The post Guest Essay: API security-related exposures rose steeply across all industries in Q3 2024 first appeared on The Last Watchdog.

Ever since the massive National Public Data (NPD) breach was disclosed a few weeks ago, news sources have reported an increased interest in online credit bureaus, and there has been an apparent upswing in onboarding of new subscribers.

Related: Class-action lawsuits pile up in wake of NPD hack

So what’s the connection? NPD reported the exposure of over 2.7 billion records. The breach began when a third-party malicious actor infiltrated NPD’s systems in December 2023.

The data began leaking in April 2024, and by summer, it was being sold on the dark web for $3.5 million. The stolen information included full names, Social Security numbers, mailing addresses, phone numbers, and email addresses of millions of U.S., Canadian, and British citizens.

While NPD claimed that around 1.3 million individuals were directly affected, analysts like Troy Hunt found evidence of much wider exposure, including 134 million unique email addresses and even criminal record data. Investigations are ongoing, and several class-action lawsuits have been filed, alleging that the company failed to implement sufficient security measures.

There is little doubt that high-profile breaches like this will persist. This drives public awareness of the risks associated with identity theft. As a result, many people rush to protect themselves by subscribing to services that offer credit monitoring, identity theft protection, and fraud alerts. Online credit bureaus, like Equifax, Experian, and TransUnion, often see an uptick in new users after breaches because consumers realize the potential risks to their financial well-being and identity.

The growing threat of cybercrime, including ransomware attacks and large-scale data leaks, is also pushing individuals to take more control of their personal data. Credit monitoring services provide ongoing tracking of credit reports for suspicious activity, and some even offer insurance for identity theft-related losses. As breaches become more frequent, credit protection services become a more attractive option for those seeking peace of mind and financial security.

What’s more, some credit bureaus have started offering more comprehensive packages that include dark web monitoring, fraud detection, and restoration services, which are enticing consumers to subscribe to these services at a higher rate.

Devaluing SSNs

This breach had such wide implications, it caused millions of consumers and thousands of organizations to look more closely at how to protect themselves, their identities and sensitive data. The sad reality is we have been desensitized by these constant breaches.


NPD certainly could have done many things better, but there is one thing that is on us. Perhaps the time has come to stop using our social security numbers. It is the simplest and least expensive solution that would have a highly positive impact on overall security. Today, we use the same SSN across dental clinics, car dealerships, and mortgage applications. It is no stretch to say it is guaranteed to be compromised eventually.

Rather, we should treat the SSN as just another piece of personally identifiable information (PII), like an email address – confidential, but not sensitive information that unlocks your bank accounts. Governments could create a digital identity at birth to replace the SSN in its current use. That identity would be tied to specific vendors. As an example, you would have two tokens – one for NPD and another for your bank – and after such a breach, the NPD token would be revoked so NPD cannot use your data, but everything would work fine at the bank.
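The per-vendor token idea can be sketched with standard cryptographic primitives; the vendor names and root secret below are purely illustrative, and a real scheme would involve far more (key custody, rotation, audit), but the core property is visible: revoking one vendor's token leaves every other vendor unaffected.

```python
import hashlib
import hmac

def vendor_token(root_secret: bytes, vendor_id: str) -> str:
    """Derive a vendor-specific identifier. The root secret never leaves
    the issuing authority, and tokens for different vendors don't match."""
    return hmac.new(root_secret, vendor_id.encode(), hashlib.sha256).hexdigest()

revoked = set()  # revocation list held by the issuing authority

def is_valid(token: str) -> bool:
    return token not in revoked

root = b"citizen-root-secret"          # held only by the issuing authority
npd_token = vendor_token(root, "npd")
bank_token = vendor_token(root, "bank")

revoked.add(npd_token)                 # after the breach, revoke only NPD's token
assert not is_valid(npd_token)         # NPD can no longer use the identity
assert is_valid(bank_token)            # the bank is unaffected
```

Contrast this with the SSN, which is effectively one token shared with every vendor: a single leak compromises all of them at once.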

The NPD breach serves as a stark reminder of the critical importance of data security in today’s digital world, particularly in regulated industries such as financial services and healthcare. As more personal data is collected, stored, and shared online, providers and their organizations must take proactive steps to safeguard this information from cyberattacks.

Trust principles

Given the complex cybersecurity environment, with data breaches unfortunately happening at a record pace, organizations need to continue to build and establish trust with everyone from individuals to partners, employees and those in the supply chain.  All organizations with access to personally identifiable information (PII) should adhere to essential identity trust principles that include:

•Advanced and strict fraud prevention: Doing so will not only help prevent threat actors from creating fake accounts to impersonate legitimate users, but will also make it much more difficult for them to gain access in the first place.

•Adherence to compliance frameworks: Compliance frameworks are designed to protect stakeholders against misuse of any kind – data included. Most industries have strict regulations in place, and in many cases, organizations will be subject to fines if regulations are violated.

•Trust: Users must know that their data is safe with the entities they interact with – this provides the confidence to share information in the first place.

Many affected individuals were unaware of the breach or even the fact that NPD had collected their data in the first place. NPD’s practice of scraping data elements from non-public sources without consent raises serious ethical and legal concerns. This brings up the issue of how our governmental and private institutions handle PII. Even when strict compliance frameworks achieve their goals, they are not enough to put the necessary restrictions on the usage of this type of data. Again, should the SSN be the key identity point?

When there is a breach of this magnitude that involves SSNs, there is a scramble for individuals to protect themselves through efforts such as:

•Freezing consumer credit reports: Contacting the major credit bureaus (Equifax, Experian, and TransUnion) to prevent new credit accounts from being opened without consent.

•Accessing free weekly credit reports: Gaining access to free weekly credit reports to monitor any suspicious activity.

In the case of NPD, the hackers targeted a data broker whose role is to aggregate information from many data sources. Initial reports indicated the company’s apparent security missteps increased the impact.

This raises the question: did NPD have too much data? Did they understand the data they had, and why was it not properly protected?

If an organization falls victim to a data breach, it would be in a better position to respond if it held less sensitive data and better-quality data – and without the SSN as the key identifier. As it was, in the case of NPD the leaks came in spurts, covering several types of data, much of it erroneous. This may go against a data broker’s interests, but the first lesson is to reduce the amount of PII held and remove redundant, obsolete, and trivial (ROT) data. It would be safer and more effective to handle the minimal amount of data they are allowed to possess.

About the essayist: Ambuj Kumar is co-founder and CEO of Simbian, which supplies AI agents for cybersecurity.

 

The post GUEST ESSAY: Massive NPD breach tells us its high time to replace SSNs as an authenticator first appeared on The Last Watchdog.

Passwords have been the cornerstone of basic cybersecurity hygiene for decades.

Related: Passwordless workplace long way off

However, as users engage with more applications across multiple devices, the digital security landscape is shifting from passwords and password managers towards including passwordless authentication, such as multi-factor authentication (MFA), biometrics, and, as of late, passkeys.

But as secure and user-friendly as these authentication methods are, cybercriminals are already busily sidestepping all forms of authentication – passwords, MFA, and passkeys – to sometimes devastating effect.

Passwordless workarounds

Without a doubt, passwordless authentication is a significant improvement over traditional passwords and effectively addresses the persistent risk of easy to guess passwords and password reuse. Most passkeys available to consumers leverage unique biometric authentication data and cryptographically secure means to authenticate users when they access websites and applications.

This new authentication technique is gaining traction, especially since the FIDO Alliance has advocated for its implementation over the last year. Moreover, leading tech companies like Google, Microsoft, and Apple have developed robust frameworks to integrate this system of authentication.

Yet history reminds us that cyber threats evolve alongside our defenses. As we move towards a passwordless world, bad actors are finding new avenues to exploit, including simply working around passwordless authentication with session hijacking attacks and other forms of next-generation account takeover – and the tradeoff is significant.

The most alarming threat to users and businesses today, bar none, is malware. Criminals increasingly use infostealer malware and other low-cost and highly effective malware-as-a-service tools to exfiltrate valid identity data needed for authentication, like session cookies.

The role of infostealers

Hilligoss

Infostealers pose a significant challenge for websites and servers that validate user identities. Armed with an anti-detect browser and a valid cookie, bad actors can mimic a trusted device or user, easily sidestep authentication methods, and seamlessly blend in without raising any red flags. Once the session is hijacked, criminals can access a user’s accounts, and masquerade as the user to perpetrate additional cyber incidents such as fraud and ransomware.

And this attack method is on the rise.  In 2023, infostealer malware use tripled, with 61% of breaches attributable to this threat. SpyCloud researchers highlighted how malware infections are a major player in identity exposures in the recent 2024 Identity Exposure Report.

Most infostealer malware is non-persistent: infiltration and extraction take only a matter of seconds and leave scarcely a sign on the device. The threat the stolen data poses to users and organizations, however, is far more persistent. A valid session cookie will remain usable until it expires or a proactive security team invalidates it, and some cookies can last for months or years. As long as cookie data remains valid, it can be sold and traded multiple times and used to perpetrate different attacks.
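The defensive corollary is that a stolen cookie is only dangerous while the server still honors it. Here is a minimal sketch of server-side session validity and proactive invalidation, using a hypothetical in-memory store rather than any particular framework's API:

```python
import time

# Hypothetical in-memory session store: token -> expiry timestamp.
sessions = {}

def create_session(token: str, ttl_seconds: int) -> None:
    """Register a session token with an expiry time."""
    sessions[token] = time.time() + ttl_seconds

def is_valid(token: str) -> bool:
    """A token is valid only if it exists and has not expired."""
    expiry = sessions.get(token)
    return expiry is not None and time.time() < expiry

def invalidate(token: str) -> None:
    """Proactively revoke a session, e.g. after a compromise report."""
    sessions.pop(token, None)

create_session("abc123", ttl_seconds=3600)
print(is_valid("abc123"))   # True while unexpired
invalidate("abc123")
print(is_valid("abc123"))   # False once revoked
```

The shorter the TTL and the faster the invalidation path, the smaller the window in which an exfiltrated cookie is worth anything to a criminal.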

Lateral exposures

Criminals are interested in the data, but even more so in the level of access that data can grant. So beyond cookies, they are also accessing keychains, local files, and single sign-on logins, and escalating privileges, instigating a wide range of actions from a single entry point, whether within a browser or on a device.

The use of single sign-on (SSO) only exacerbates the problem, as a successful breach can potentially grant unauthorized access to multiple linked accounts and services across multiple business and personal devices.

Case in point: In January 2023, the continuous integration and delivery platform CircleCI announced it had experienced a data breach caused by infostealer malware deployed to an engineer’s laptop. The malware stole a valid, two-factor-backed SSO session cookie, impersonated the employee, and escalated access to a subset of the company’s production systems, potentially exposing encrypted customer data.

Security practitioners often fail to recognize the extensive scope of the session hijacking issue or take steps to mitigate it. Even when teams have visibility into stolen session cookies, our research has found that 39% fail to terminate them.

Even with short session timeouts, MFA, and passkeys in place, security gaps remain. This is particularly true for third parties using unmanaged or under-managed devices, over which security teams may have little visibility or control.

Additional strategies

Passwordless authentication remains an important part of any layered security strategy, but because it can be sidestepped with stolen cookies and session hijacking, it is not a silver bullet against cyberattacks.

Additional strategies, such as monitoring for compromised web sessions, invalidating stolen cookies, and promptly resetting exposed user credentials, are critical. This means quickly and accurately determining when any component of an employee, contractor, vendor, or customer identity is compromised, then moving fast to remediate and negate the value of the stolen data. This goes a step beyond the traditional playbook of cleaning and re-imaging a machine: the data that may still be circulating in the criminal underground must also be remediated and nullified.
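Tied together, that remediation flow might look like the following sketch; the stores and function names here are hypothetical, not any vendor's API:

```python
# Hypothetical remediation step: when any component of an identity is
# flagged as compromised, revoke every active session for that identity
# and force a credential reset.
active_sessions = {}       # session_id -> user_id
must_reset_password = set()

def remediate_identity(user_id: str) -> int:
    """Invalidate all sessions for a user and require a new credential.
    Returns the number of sessions revoked."""
    exposed = [sid for sid, uid in active_sessions.items() if uid == user_id]
    for sid in exposed:
        del active_sessions[sid]
    must_reset_password.add(user_id)
    return len(exposed)

active_sessions.update({"s1": "alice", "s2": "alice", "s3": "bob"})
print(remediate_identity("alice"))       # 2 sessions revoked
print("alice" in must_reset_password)    # True: reset forced
```

The key point is that remediation targets the identity, not just the machine: every session tied to the compromised identity is nullified, so stolen cookies circulating underground lose their value.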

As criminals step up their game, failing to make this shift could leave organizations vulnerable to a wide array of next-generation attack methods. And with passkeys and other passwordless authentication methods soaring in popularity, time is of the essence.

About the essayist: Trevor Hilligoss served nine years in the U.S. Army and has an extensive background in federal law enforcement, tracking threat actors for both the DoD and FBI. He is a member of the Joint Ransomware Task Force and serves in an advisory capacity for multiple cybersecurity-focused non-profits. He currently serves as the Vice President of SpyCloud Labs at SpyCloud.

The post GUEST ESSAY: How cybercriminals are using ‘infostealers’ to sidestep passwordless authentication first appeared on The Last Watchdog.

AI has the potential to revolutionize industries and improve lives, but only if we can trust it to operate securely and ethically.

Related: The key to the GenAI revolution

By prioritizing security and responsibility in AI development, we can harness its power for good and create a safer, more unbiased future.

Developing secure AI systems is essential because artificial intelligence is a transformative technology whose capabilities and societal influence continue to expand. Initiatives focused on trustworthy AI recognize the profound impacts this technology can have on individuals and society, and are committed to steering its development and application toward responsible and positive outcomes.

Security considerations

Securing artificial intelligence (AI) models is essential due to their increasing prevalence and criticality across various industries. They are used in healthcare, finance, transportation, and education, significantly impacting society. Consequently, ensuring the security of these models has become a top priority to prevent potential risks and threats.

•Data security. Securing training data is crucial for protecting AI models. Encrypting data during transmission helps prevent unauthorized access, and storing training data in encrypted containers or secure databases adds a further layer of security.

Data masking can safeguard sensitive data, even during breaches. Regular backups and a disaster recovery plan are essential to minimize data loss and ensure the security and integrity of training data, safeguarding AI models from potential risks and threats.
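As one illustration of masking, sensitive fields can be replaced with one-way digests before records ever reach logs or training pipelines. A minimal sketch, with hypothetical field names:

```python
import hashlib

SENSITIVE_FIELDS = {"email", "ssn"}  # hypothetical field names

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a truncated one-way digest so
    records can still be joined or deduplicated without exposing the
    raw values, even if the masked dataset is breached."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

row = {"name": "A. Donor", "email": "a@example.com"}
print(mask_record(row)["name"])  # non-sensitive fields pass through
```

In practice a salted or keyed digest would be preferred to resist dictionary attacks; the unsalted hash above is only meant to show the shape of the technique.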

•Model security. Model encryption should be employed to protect against unauthorized access, tampering, or reverse engineering. Watermarking or digital fingerprints can help track AI models and detect unauthorized use.

Digital signatures ensure the integrity and authenticity of models, confirming they have not been altered. Implementing model versioning is crucial for tracking updates and preventing unauthorized changes.
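The fingerprinting and signature ideas can be illustrated with a keyed digest over the serialized model; production systems would typically use proper asymmetric signatures and a managed key, so the key and byte values below are purely illustrative:

```python
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-secret-key"  # in practice, from a key vault

def fingerprint_model(model_bytes: bytes) -> str:
    """Compute a keyed digest that changes if the model is altered."""
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, expected: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(fingerprint_model(model_bytes), expected)

weights = b"\x00\x01\x02"          # stand-in for serialized weights
tag = fingerprint_model(weights)
print(verify_model(weights, tag))          # True for the untampered model
print(verify_model(weights + b"!", tag))   # False once tampered
```

Recording such a fingerprint per released version also gives you model versioning for free: any unauthorized change produces a digest that matches no recorded release.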


Additionally, regular testing and validation are necessary to ensure models function correctly and are free of security vulnerabilities. These measures collectively enhance the security of AI models, protecting them from potential risks. Attention to detail in these areas is vital:

•Infrastructure security. Protecting hardware components like GPUs and TPUs used in training and deploying AI models is crucial. Software should be kept current with the latest security patches, and secure coding practices should be followed.

Implementing robust network security protocols, including firewalls and intrusion detection systems, is necessary to block unauthorized access. Cloud security is critical since many AI models are trained and deployed on cloud-based platforms.

Additionally, an effective incident response plan is essential for quickly addressing security incidents and mitigating the impact of breaches. Together, these measures ensure the infrastructure’s security and protect against potential risks and threats.

•Access controls. It is crucial to tightly control access to AI models, data, and infrastructure to prevent security incidents. Role-based access controls should limit access based on user roles and privileges, alongside robust authentication and authorization mechanisms.

Following the principle of least privilege access is vital, granting users only necessary access. Monitoring user activity helps detect and respond to potential security incidents.
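A deny-by-default, role-based permission check is simple to sketch; the roles and permission names below are hypothetical:

```python
# Hypothetical role-to-permission mapping illustrating least privilege:
# each role is granted only the access it needs, nothing more.
ROLE_PERMISSIONS = {
    "data-scientist": {"read:training-data", "run:experiments"},
    "ml-engineer": {"read:training-data", "deploy:model"},
    "auditor": {"read:audit-logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and ungranted permissions fail."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "read:audit-logs"))  # True: explicitly granted
print(is_allowed("auditor", "deploy:model"))     # False: never granted
```

The design choice worth noting is the default: access checks should fail closed, so a misconfigured or unknown role yields no access rather than full access.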

•Secure development lifecycle. Building secure AI systems requires a systematic approach. By integrating security into every stage of AI development, organizations can ensure the confidentiality, integrity, and availability of their AI systems and data. You can build a secure AI system by following the steps below.

•Secure design. The secure design stage is foundational to the secure AI development lifecycle. It involves defining security requirements and threat models, conducting security risk assessments and architecture reviews, and implementing secure data management and privacy controls.

This stage ensures security is integrated into the AI system from the beginning, minimizing the risk of security breaches and vulnerabilities.

•Development. During the development stage, developers apply secure coding practices, conduct regular security testing and vulnerability assessments, utilize secure libraries and dependencies, and establish authentication, authorization, and access controls. This stage prioritizes security in the development of the AI system and addresses potential vulnerabilities early on.

•Deployment. Ensuring secure deployment configurations and settings is crucial during the deployment stage. Thorough security testing and vulnerability assessments are conducted beforehand. Utilizing secure deployment mechanisms and infrastructure is essential for securely deploying the AI system. Implementing robust monitoring and logging controls also mitigates potential security risks.

•Operation and maintenance. Once your AI system is operational, it should undergo continuous security monitoring, including regular updates, security assessments, and risk evaluations. Incident response and disaster recovery plans should also be in place to maintain security and address potential incidents.

Developing secure AI systems requires a systematic approach that integrates security into every stage of AI development. Implementing robust security measures and ethical considerations builds trust in AI solutions, ensuring they are secure, reliable, and resilient. This approach enables AI to be a powerful tool for positive change.

About the essayist: Harish Mandadi is the founder and CEO of AiFA Labs, which supplies comprehensive enterprise GenAI platforms for text, imagery and data patterns.

The post GUEST ESSAY: Taking a systematic approach to achieving secured, ethical AI model development first appeared on The Last Watchdog.

The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF), a free and widely respected guidance document for reducing cybersecurity risk.

Related: More background on CSF

However, it’s important to note that most of the framework core has remained the same. Here are the core components the security community knows:

•Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. This is the newest addition; it was implied before but is now explicitly illustrated to touch every aspect of the framework. It seeks to establish and monitor your company’s cybersecurity risk management strategy, expectations, and policy.

•Identify (ID): Entails cultivating a comprehensive organizational comprehension of managing cybersecurity risks to systems, assets, data, and capabilities.

•Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

•Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

•Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

•Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.

Noteworthy updates

The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

Emphasis is placed on the framework’s expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make it easier for a wide variety of organizations to implement the CSF 2.0, NIST has developed quick-start guides customized for various audiences, along with case studies showcasing successful implementations and a searchable catalog of references, all aimed at facilitating adoption.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, facilitating easier integration into their cybersecurity practices.


The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents – facilitating comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, facilitating communication across all levels of an organization.

NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF’s international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST’s collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

About the essayist: Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant.

Charities and nonprofits are particularly vulnerable to cybersecurity threats, primarily because they maintain personal and financial data, which are highly valuable to criminals.

Related: Hackers target UK charities

Here are six tips for establishing robust nonprofit cybersecurity measures to protect sensitive donor information and build a resilient organization.

•Assess risks. Creating a solid cybersecurity foundation begins with understanding the organization’s risks. Many nonprofits are exposed to potential daily threats and don’t even know it. A recent study found only 27% of charities undertook risk assessments in 2023 and only 11% said they reviewed risks posed by suppliers. These worrying statistics underscore the need to be more proactive in preventing security breaches.

•Keep software updated. Outdated software and operating systems are known risk factors in cybersecurity. Keeping these systems up to date and installing the latest security patches can help minimize the frequency and severity of data breaches among organizations. Investing in top-notch firewalls is also essential, as they serve as the first line of defense against external threats.

•Strengthen authentication. Nonprofits can bolster their network security by insisting on strong login credentials. This means using longer passwords of at least 16 characters, as experts recommend, composed of a random mix of uppercase and lowercase letters, numbers, and symbols. Next, implement multi-factor authentication to make gaining access even more difficult for hackers.

•Train staff regularly. A robust security plan is only as good as its weakest link, and in most organizations that exposure comes from employees. Roughly 95% of cybersecurity incidents begin with a staff member clicking on a malicious link, usually in an email. A solid cybersecurity culture requires regular training on the latest best practices so people know what to look for and what to do.

•Get board involvement. Effective nonprofit cybersecurity starts at the top. Just as it’s common practice to task board members with budget reviews for fraud prevention, organizations can appoint trustees to oversee cybersecurity explicitly. Board involvement can cut through red tape and implement improved safeguards for donor information and funds.

•Conduct internal reviews. In a 2023 survey, 30% of CISOs named insider threats one of the biggest cybersecurity threats of the year. The risk factor is higher among nonprofits because they store data about high-net-worth donors. A disgruntled employee or a person with malicious intentions could gain unauthorized access to these records and demand payments from patrons, knowing full well they can afford it.
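The password guidance in the list above can be made concrete with a small sketch using Python's standard `secrets` module, guaranteeing at least one character from each class:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one uppercase
    letter, lowercase letter, digit, and symbol."""
    pools = [string.ascii_uppercase, string.ascii_lowercase,
             string.digits, string.punctuation]
    # One character from each class, then fill the rest from all classes.
    chars = [secrets.choice(pool) for pool in pools]
    alphabet = "".join(pools)
    chars += [secrets.choice(alphabet) for _ in range(length - len(pools))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(len(generate_password()))  # 16
```

`secrets` draws from the operating system's cryptographically secure randomness source, which is what distinguishes it from the `random` module for credential generation.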

Charity exposures

Threat actors continue to explore new methods to steal information. The usual attack vectors include:

•Data theft: Charities are rich in valuable data, whether in their email lists or donor databases. Hackers sell the information or use it themselves for financial gain.

•Ransomware: This attack involves criminals holding a network and its precious data hostage until the enterprise pays the demanded amount.

•Social engineering: These attacks exploit human error to gain unauthorized access to organizational systems. Lack of proper staff training is the biggest culprit in this case.

•Malware: Hackers deploy malicious software designed to cause significant disruptions and compromise data integrity.


If any of these attacks proves successful, the consequences for nonprofits are often severe and far-reaching. In the immediate, there’s the loss of funds or sensitive information. There’s also the risk of financial penalties for breaching data protection laws. Beyond financial and reputational loss, the ripple effects become more evident with a decline in donor confidence.

Cybersecurity is a must for charities. Cyberattacks have become an increasing concern, so charities and nonprofits must commit to safeguarding private data as part of their success. By adopting proactive measures, they can stay on top of cybersecurity trends and foster enduring relationships with donors.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is the Features Editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.